Bacula Enterprise Edition Documentation


New Features in Older Bacula Enterprise Versions

This chapter presents some of the new features that have been added to the older Enterprise versions of Bacula. These features are available only with a Bacula Systems subscription.

Bacula Enterprise 8.8

Cloud Backup

A major problem of Cloud backup is that data transmission to and from the Cloud is very slow compared to traditional backup to disk or tape. The Bacula Cloud drivers provide a means to quickly finish the backups and then to transfer the data from the local cache to the Cloud in the background. This is done by first splitting the data Volumes into small parts that are cached locally, and then uploading those parts to the Cloud storage service in the background, either while the job continues to run or after the backup job has terminated. Once the parts are written to the Cloud, they may either be left in the local cache for quick restores or they can be removed (truncate cache).

Cloud Volume Architecture

The picture (here) shows two Volumes (Volume0001 and Volume0002) with their parts in the cache. Below the cache, one can see that Volume0002 has been uploaded or synchronized with the Cloud.

Note: Normal Bacula disk Volumes are implemented as standard files that reside in the user-defined Archive Directory. On the other hand, Bacula Cloud Volumes are directories that reside in the user-defined Archive Directory. The directory contains the cloud Volume parts, implemented as numbered files (part.1, part.2, ...).

Cloud Restore

During a restore, if the needed parts are available in the local cache, they will immediately be used. Otherwise, they will be downloaded from cloud storage as needed. The restore starts with parts already in the local cache but will wait as needed for any part that must be downloaded. The download proceeds while the restore is running.

With most cloud providers uploads are free of charge, but downloads of data from the cloud are billed. By using the local cache and multiple small parts, Bacula can be configured to substantially reduce download costs.

The Maximum File Size Device directive is valid within the Storage Daemon's cloud device configuration and defines the granularity of a restore chunk. In order to minimize the number of volume parts to download during a restore (in particular when restoring single files), it is useful to set the Maximum File Size to a value smaller than or equal to the configured Maximum Part Size.
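
For illustration, a hypothetical sketch of the relevant Device settings (the sizes are arbitrary and should be tuned to your environment):

  Device {
    ...
    Device Type = Cloud
    Maximum Part Size = 10000000    # 10 MB parts uploaded to the cloud
    Maximum File Size = 5000000     # restore granularity, <= Maximum Part Size
  }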

Compatibility

Since a Cloud Volume contains the same data as an ordinary Bacula Volume, all existing types of Bacula data may be stored in the cloud - that is, client data encryption, client-side compression, and plugin usage are all available. In fact, all existing Bacula functionality, with the exception of deduplication, is compatible with the Bacula Cloud drivers.

Deduplication and the Cloud

At the current time, Bacula Global Endpoint Backup does not support writing to the cloud because cloud storage would be too slow to support large hashed and indexed containers of deduplication data.

Virtual Autochangers and Disk Autochangers

Bacula Virtual Autochangers are compatible with the Bacula Cloud drivers. However, if you use a third party disk autochanger script such as Vchanger, unless or until it is modified to handle Volume directories, it may not be compatible with Bacula Cloud drivers.

Security

All data that is sent to and received from the cloud uses the HTTPS protocol by default, so data is encrypted in transit. However, data that resides in the cloud is not encrypted by default. If extra security of the backed-up data is required, Bacula's PKI data encryption feature should be used during the backup.
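
As a minimal sketch, client-side PKI encryption is enabled in the FileDaemon resource of bacula-fd.conf (the key file paths are illustrative):

  FileDaemon {
    ...
    PKI Signatures = Yes
    PKI Encryption = Yes
    PKI Keypair = "/opt/bacula/etc/my-fd.pem"        # public/private keypair
    PKI Master Key = "/opt/bacula/etc/master.cert"   # optional master public key
  }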

New Commands, Resource, and Directives for Cloud

To support Cloud storage devices, some new bconsole commands, new Storage Daemon directives, and a new Cloud resource that is referenced in the Storage Daemon's Device resource are available as of Bacula Enterprise 8.8.

Cache and Pruning

The cache is treated much like normal disk-based backup storage, so when configuring Cloud the administrator should take care to set Archive Device in the Device resource to a directory that would also be suitable for storing backup data. Obviously, unless the truncate/prune cache commands are used, the Archive Device will continue to fill.

The cache retention can be controlled per Volume with the Cache Retention attribute. The default value is 0, meaning that pruning of the cache is disabled.

The Cache Retention value for a volume can be modified with the update command, or configured via the Pool directive Cache Retention for newly created volumes.
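
For example, assuming your bconsole accepts the cacheretention keyword on the update volume command (otherwise, use the interactive update menu):

  * update volume=Volume0001 cacheretention="30 days"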

New Cloud Bacula Console Commands

  • truncate cache
  • upload
  • cloud The new cloud Bacula Console command allows inspecting and manipulating cloud volumes in different ways. The options are the following:
    • None. If you specify no arguments to the command, Bacula Console will prompt with:
            Cloud choice: 
            1: List Cloud Volumes in the Cloud
            2: Upload a Volume to the Cloud
            3: Prune the Cloud Cache
            4: Truncate a Volume Cache
            5: Done
            Select action to perform on Cloud (1-5):
      
      The different choices should be rather obvious.

    • truncate This command will attempt to truncate the local cache for the specified Volume. Bacula will prompt you for the information needed to determine the Volume name or names. To avoid the prompts, the following additional command line options may be specified:
      • Storage=xxx
      • Volume=xxx
      • AllPools
      • AllFromPool
      • Pool=xxx
      • MediaType=xxx
      • Drive=xxx
      • Slots=nnn
    • prune This command will attempt to prune the local cache for the specified Volume. Bacula will respect the Cache Retention volume attribute to determine if the cache can be truncated or not. Only parts that are uploaded to the cloud will be deleted from the cache. Bacula will prompt you for the information needed to determine the Volume name or names. To avoid the prompts, the following additional command line options may be specified:
      • Storage=xxx
      • Volume=xxx
      • AllPools
      • AllFromPool
      • Pool=xxx
      • MediaType=xxx
      • Drive=xxx
      • Slots=nnn
    • upload This command will attempt to upload the specified Volumes (see the example after this list). It will prompt for the information needed to determine the Volume name or names. To avoid the prompts, any of the following additional command line options can be specified:
      • Storage=xxx
      • Volume=xxx
      • AllPools
      • AllFromPool
      • Pool=xxx
      • MediaType=xxx
      • Drive=xxx
      • Slots=nnn
    • list This command will list volumes stored in the Cloud. If a volume name is specified, the command will list all parts for the given volume. To avoid the prompts, the operator may specify any of the following additional command line options:
      • Storage=xxx
      • Volume=xxx
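
For example, to upload a given Volume without being prompted (the resource and Volume names are illustrative):

  * cloud upload storage=CloudStorage volume=Volume0001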

Cloud Additions to the DIR Pool Resource

In bacula-dir.conf Pool resources, the directive Cache Retention can be specified. It is effective only for cloud-backed volumes and is ignored for volumes stored on any other storage device.
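
For example, a sketch of a cloud Pool (the Pool name is illustrative):

  Pool {
    Name = CloudDefault
    Pool Type = Backup
    Cache Retention = 30 days
  }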

Cloud Additions to the SD Device Resource

A Device resource configured in the bacula-sd.conf file can use the Cloud keyword in the Device Type directive, together with the two directives Maximum Part Size and Cloud.

New Cloud SD Device Directives

Device Type
The Device Type has been extended to include the new keyword Cloud to specify that the device supports cloud Volumes. Example:
    Device Type = Cloud
Cloud
The new Cloud directive references a Cloud Resource. As with other Bacula resource references, the name of the Cloud resource is used as the value. Example:
    Cloud = S3Cloud
Maximum Part Size
This directive allows specification of the maximum size for each part of any volume written by the current device. Smaller part sizes reduce restore costs but add a small overhead to handle multiple parts. The maximum number of parts permitted per Cloud Volume is 524,288. The maximum size of any given part is approximately 17.5TB.

Example Cloud Device Specification

An example of a Cloud Device Resource might be:

Device {
  Name = CloudStorage
  Device Type = Cloud
  Cloud = S3Cloud
  Archive Device = /opt/bacula/backups
  Maximum Part Size = 10000000
  Media Type = File
  LabelMedia = yes
  Random Access = Yes;
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

As can be seen above, the Cloud directive in the Device resource contains the name (S3Cloud), which references the Cloud resource that is shown below.

Note also that the Archive Device is specified in the same manner as used for a File device, i.e. by indicating a directory name. However, in place of containing regular files as Volumes, the archive device for the Cloud drivers contains the local cache, which consists of one directory per Volume, and these directories contain the parts associated with the particular Volume. So with the above Device resource, and the two cached Volumes shown in figure (here) above, the following layout on disk would result:

  /opt/bacula/backups
      /opt/bacula/backups/Volume0001
          /opt/bacula/backups/Volume0001/part.1
          /opt/bacula/backups/Volume0001/part.2
          /opt/bacula/backups/Volume0001/part.3
          /opt/bacula/backups/Volume0001/part.4
      /opt/bacula/backups/Volume0002
          /opt/bacula/backups/Volume0002/part.1
          /opt/bacula/backups/Volume0002/part.2
          /opt/bacula/backups/Volume0002/part.3

The Cloud Resource

The Cloud resource has a number of directives that may be specified, as illustrated in the following examples:

For the default US East location:

Cloud {
  Name = S3Cloud
  Driver = "S3"
  HostName = "s3.amazonaws.com"
  BucketName = "BaculaVolumes"
  AccessKey = "BZIXAIS39DP9YNER5DFZ"
  SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0"
  Protocol = HTTPS
  URIStyle = VirtualHost
  Truncate Cache = No
  Upload = EachPart
  Region = "us-east-1"
  Maximum Upload Bandwidth = 5MB/s
}

For a central Europe location:

Cloud { 
  Name = S3Cloud 
  Driver = "S3" 
  HostName = "s3-eu-central-1.amazonaws.com" 
  BucketName = "BaculaVolumes" 
  AccessKey = "BZIXAIS39DP9YNER5DFZ" 
  SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0" 
  Protocol = HTTPS 
  UriStyle = VirtualHost 
  Truncate Cache = No 
  Upload = EachPart 
  Region = "eu-central-1" 
  Maximum Upload Bandwidth = 4MB/s 
}

For the Amazon Cloud, refer to http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region to get a complete list of regions and corresponding endpoints, and use them respectively as the Region and HostName directives.

Or, as in the following example for a CEPH S3 interface:

  Cloud {
    Name = CEPH_S3
    Driver = "S3"
    HostName = ceph.mydomain.lan
    BucketName = "CEPHBucket"
    AccessKey = "xxxXXXxxxx"
    SecretKey = "xxheeg7iTe0Gaexee7aedie4aWohfuewohxx0"
    Protocol = HTTPS
    Upload = EachPart
    UriStyle = Path            # Must be set for CEPH
  }

For Azure:

 
Cloud {
  Name = MyCloud 
  Driver = "Azure" 
  HostName = "MyCloud" #not used but needs to be specified 
  BucketName = "baculaAzureContainerName" 
  AccessKey = "baculaaccess" 
  SecretKey = "/Csw1SECRETUmZkfQ==" 
  Protocol = HTTPS 
  UriStyle = Path 
}

The directives of the above Cloud resource for the S3 driver are defined as follows:

Name = <Device-Name>
The name of the Cloud resource. This is the logical Cloud name, and may be any string up to 127 characters in length. Shown as S3Cloud above.

Description = <Text>
The description is used for display purposes as is the case with all resources.

Driver = <Driver-Name>

This defines which driver to use. The Cloud drivers currently implemented are S3 (also used with the CEPH S3 interface above) and Azure. There is also a File driver, which is used mostly for testing.

Host Name = <Name>
This directive specifies the hostname to be used in the URL. Each Cloud service provider has a different and unique hostname. The maximum size is 255 characters and may contain a TCP port specification.

Bucket Name = <Name>

This directive specifies the bucket name that you wish to use on the Cloud service. This name is normally a unique name that identifies where you want to place your Cloud Volume parts. With Amazon S3, the bucket must be created previously on the Cloud service. With Azure Storage, it is generally referred to as a Container, and it can be created automatically by Bacula when it does not exist. The maximum bucket name size is 255 characters.

Access Key = <String>

The access key is your unique user identifier given to you by your cloud service provider.

Secret Key = <String>
The secret key is the security key that was given to you by your cloud service provider. It is equivalent to a password.

Protocol = <HTTP | HTTPS>

The protocol defines the communications protocol to use with the cloud service provider. The two protocols currently supported are: HTTPS and HTTP. The default is HTTPS.

Uri Style = <VirtualHost | Path>

This directive specifies the URI style to use to communicate with the cloud service provider. The two Uri Styles currently supported are: VirtualHost and Path. The default is VirtualHost.

Truncate Cache = <truncate-kw>

This directive specifies when Bacula should automatically remove (truncate) the local cache parts. Local cache parts can only be removed if they have been uploaded to the cloud. The currently implemented values are:

No
Do not remove cache. With this option you must manually delete the cache parts with a bconsole truncate cache command, or do so with an Admin Job that runs a truncate cache command. This is the default.
AfterUpload
Each part will be removed just after it is uploaded. Note: if this option is specified, all restores will require a download from the Cloud. (Not yet implemented.)
AtEndOfJob
With this option, at the end of the Job, every part that has been uploaded to the Cloud will be removed (truncated). (Not yet implemented.)

Upload = <upload-kw>

This directive specifies when local cache parts will be uploaded to the Cloud. The options are:

No
Do not upload cache parts. With this option you must manually upload the cache parts with a Bacula Console upload command, or do so with an Admin Job that runs an upload command (see the sketch after this list). This is the default.
EachPart
With this option, each part will be uploaded when it is complete i.e. when the next part is created or at the end of the Job.
AtEndOfJob
With this option, all parts that have not been previously uploaded will be uploaded at the end of the Job. (Not yet implemented.)
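
As a sketch of the Admin Job approach mentioned above (the resource names are illustrative), a scheduled Admin Job can issue the upload command through a RunScript Console directive:

  Job {
    Name = CloudUploadAdmin
    Type = Admin
    Client = my-fd
    FileSet = Dummy            # required syntactically, not used by Admin jobs
    Storage = CloudStorage
    Pool = Default
    Messages = Standard
    RunScript {
      RunsWhen = Before
      RunsOnClient = no
      Console = "upload storage=CloudStorage allpools"
    }
  }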

Maximum Concurrent Uploads = <number>
This directive sets the maximum number of uploads that may run concurrently for this Cloud resource. The default is 3.

Maximum Concurrent Downloads = <number>
This directive sets the maximum number of downloads that may run concurrently for this Cloud resource. The default is 3.

Maximum Upload Bandwidth = <speed>

The default is unlimited, but by using this directive, you may limit the upload bandwidth used globally by all devices referencing this Cloud resource.

Maximum Download Bandwidth = <speed>

The default is unlimited, but by using this directive, you may limit the download bandwidth used globally by all devices referencing this Cloud resource.

Region = <String>
The Cloud resource can be configured to use a specific endpoint within a region. This directive is required for AWS-V4 regions. ex: Region = "eu-central-1"

BlobEndpoint = <String>
This directive can be used to specify a custom URL for Azure Blob storage (see https://docs.microsoft.com/en-us/azure/storage/blobs/storage-custom-domain-name).

EndpointSuffix = <String>
Use this directive to specify a custom URL suffix for Azure. ex: EndpointSuffix = "core.chinacloudapi.cn"

File Driver for the Cloud

As mentioned above, one may specify the keyword File on the Driver directive of the Cloud resource. Instead of writing to the Cloud, Bacula will create a Cloud Volume but write it to disk. The rest of this section applies to the Cloud resource directives when the File driver is specified.

The following Cloud directives are ignored: Bucket Name, Access Key, Secret Key, Protocol, URI Style. The directives Truncate Cache and Upload work on the local cache in the same manner as they do for the S3 driver.

The main difference to note is that the Host Name directive specifies the destination directory for the Cloud Volume files, and this Host Name must be different from the Archive Device name, or there will be a conflict between the local cache (in the Archive Device directory) and the destination Cloud Volumes (in the Host Name directory).

As noted above, the File driver is mostly used for testing purposes, and we do not particularly recommend using it. However, if you have a particularly slow backup device you might want to stage your backup data into an SSD or disk using the local cache feature of the Cloud device, and have your Volumes transferred in the background to a slow File device.
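
A minimal File-driver Cloud resource might look like the following (a sketch; the destination directory is illustrative and must differ from the device's Archive Device):

  Cloud {
    Name = FileCloud
    Driver = "File"
    Host Name = "/opt/bacula/cloud-volumes"   # destination directory, not a network host
  }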

Progressive Virtual Full

Prior to Bacula Enterprise 8.8.0, Perpetual Virtual Full backups were implemented with a Perl script that needed to be run regularly. Version 8.8.0 adds a new Job directive named Backups To Keep, which permits implementing Progressive Virtual Fulls entirely within Bacula itself.

To use the Progressive Virtual Full feature, the Backups To Keep directive is added to a Job resource. The value specified for the directive indicates the number of backup jobs that should not be merged into the Virtual Full. The default is zero, which behaves the same way the prior pvf script worked.

Backups To Keep Directive

The new BackupsToKeep directive is specified in the Job Resource and has the form:

  Backups To Keep = 30

where the value (30 in the figure (here)) is the number of backups to retain. When this directive is present during a Virtual Full (it is ignored for any other Job type), Bacula will check whether the latest Full backup has more subsequent backups than the value specified. In the above example, the Job would simply terminate unless there is a Full backup followed by at least 31 backups of either Differential or Incremental level.

Assuming that the latest Full backup is followed by 32 Incremental backups, a Virtual Full will be run that consolidates the Full with the first two Incrementals that were run after the Full backup. The result is a Full backup followed by 30 Incremental ones. The Job Resource in bacula-dir.conf to accomplish this would be:

  Job {
    Name = "VFull"
    Type = Backup
    Level = VirtualFull
    Client = "my-fd"
    File Set = "FullSet"
    Accurate = Yes
    Backups To Keep = 30
  }

Delete Consolidated Jobs

The additional directive Delete Consolidated Jobs expects a <yes|no> value. If set to yes, it causes any old Job that is consolidated during a Virtual Full to be deleted. In the example above, the Full and the two Incrementals consolidated into the new Full backup would be deleted if this directive were set to yes. The default value is no.
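
For example, extending the VFull Job shown above:

  Job {
    Name = "VFull"
    Type = Backup
    Level = VirtualFull
    Client = "my-fd"
    File Set = "FullSet"
    Accurate = Yes
    Backups To Keep = 30
    Delete Consolidated Jobs = yes
  }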

Virtual Full Compatibility

Virtual Full as well as Progressive Virtual Full backups work with any standard backup Job including Jobs that use the Global Endpoint Deduplication.

However, it should be noted that Virtual Full jobs are not compatible with Windows backups using VSS writers (mostly plugins), nor are they compatible with a number of non-Windows Bacula Systems plugins. Please contact the Bacula Systems Support team for more details on Virtual Full compatibility.

TapeAlert Enhancements

There are some significant enhancements to the TapeAlert feature of Bacula. Several directives are used slightly differently, and there is a minor compatibility problem with the old TapeAlert implementation.

What is New

First, the Alert Command directive needs to be added in the Device resource that calls the new tapealert script that is installed in the scripts directory (normally: /opt/bacula/scripts):

  Device {
    Name = ...
    Archive Device = /dev/nst0
    Alert Command = "/opt/bacula/scripts/tapealert %l" 
    Control Device = /dev/sg1 # must be SCSI ctl for Archive Device
    ...
  }

The Control Device directive in the Storage Daemon's configuration was previously used only for the SAN Shared Storage feature. With Bacula version 8.8, it is also used for the TapeAlert command to permit Bacula to detect tape alerts on a specific device (normally only tape devices).

Once the above mentioned two directives (Alert Command and Control Device) are in place in all Device resources, Bacula will check for tape alerts at two points:

  • After the Drive is used and it becomes idle.
  • After each read or write error on the drive.

At each of the above times, Bacula will call the new tapealert script, which uses the tapeinfo program. The tapeinfo utility is part of the sg3-utils package (apt) or the sg3_utils package (rpm). Then, for each tape alert that Bacula finds for that drive, it will emit a Job message that is either INFO, WARNING, or FATAL, depending on the designation in the Tape Alert specification published by the T10 technical committee (SCSI Storage Interfaces, https://www.t10.org). For the specification, please see: http://www.t10.org/ftp/t10/document.02/02-142r0.pdf

As a somewhat extreme example, if tape alerts 3, 5, and 39 are set, you will get the following output in your backup job:

  17-Nov 13:37 rufus-sd JobId 1: Error: block.c:287
  Write error at 0:17 on device "tape"
  (/home/kern/bacula/k/regress/working/ach/drive0)
  Vol=TestVolume001.  ERR=Input/output error.
  
  17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
  Volume="TestVolume001" alert=3: ERR=The operation has stopped because
  an error has occurred while reading or writing data which the drive
  cannot correct.  The drive had a hard read or write error
  
  17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
  Volume="TestVolume001" alert=5: ERR=The tape is damaged or the drive
  is faulty.  Call the tape drive supplier helpline.  The drive can no
  longer read data from the tape
  
  17-Nov 13:37 rufus-sd JobId 1: Warning: Disabled Device "tape"
  (/home/kern/bacula/k/regress/working/ach/drive0) due to tape alert=39.
  
  17-Nov 13:37 rufus-sd JobId 1: Warning: Alert: Volume="TestVolume001"
  alert=39: ERR=The tape drive may have a fault.  Check for availability
  of diagnostic information and run extended diagnostics if applicable.
  The drive may have had a failure which may be identified by stored
  diagnostic information or by running extended diagnostics (eg Send
  Diagnostic).  Check the tape drive users manual for instructions on
  running extended diagnostic tests and retrieving diagnostic data.

Without the tape alert feature enabled, you would only get the first error message above, which is the error Bacula received. Notice also, in this case the alert number 5 is a critical error, which causes two things to happen: First, the tape drive is disabled, and second, the Job is failed.

If you attempt to run another Job using the Device that has been disabled, you will get a message similar to the following:

17-Nov 15:08 rufus-sd JobId 2: Warning: 
     Device "tape" requested by DIR is disabled.

and the Job may be failed if no other usable drive can be found.

Once the problem with the tape drive has been corrected, you can clear the tape alerts and re-enable the device with a Bacula Console command such as the following:

  enable Storage=Tape

Note, when you enable the device, the list of prior tape alerts for that drive will be discarded.

Since it is possible to miss tape alerts, Bacula maintains a temporary list of the last 8 alerts, and each time Bacula calls the tapealert script, it will keep up to 10 alert status codes. Normally there will only be one or two alert errors for each call to the tapealert script.

Once a drive has one or more tape alerts, they can be inspected by using the Bacula Console status command as follows:

status storage=Tape
which produces the following output:
Device Vtape is "tape" (/home/kern/bacula/k/regress/working/ach/drive0)
mounted with:
    Volume:      TestVolume001
    Pool:        Default
    Media type:  tape
    Device is disabled. User command.
    Total Bytes Read=0 Blocks Read=1 Bytes/block=0
    Positioned at File=1 Block=0
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       alert=Hard Error
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       alert=Read Failure
    Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       alert=Diagnostics Required
If you want to see the long message associated with each of the alerts, simply set the debug level to 10 or more and re-issue the status command:
setdebug storage=Tape level=10
status storage=Tape
    ...
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
      flags=0x0 alert=The operation has stopped because an error has occurred
       while reading or writing data which the drive cannot correct. The drive had
       a hard read or write error
    Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
       flags=0x0 alert=The tape is damaged or the drive is faulty. Call the tape
       drive supplier helpline.  The drive can no longer read data from the tape
    Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" flags=0x1
       alert=The tape drive may have a fault. Check for availability of diagnostic
       information and run extended diagnostics if applicable.   The drive may
       have had a failure which may be identified by stored diagnostic information
       or by running extended diagnostics (eg Send Diagnostic). Check the tape
       drive users manual for instructions on running extended diagnostic tests
       and retrieving diagnostic data.
    ...
The next time you enable the Device by either using Bacula Console or you restart the Storage Daemon, all the saved alert messages will be discarded.

Handling of Alerts

Tape Alerts numbered 7, 8, 13, 14, 20, 22, 52, 53, and 54 will cause Bacula to disable the current Volume.

Tape Alerts numbered 14, 20, 29, 30, 31, 38, and 39 will cause Bacula to disable the drive.

Please note certain tape alerts such as 14 have multiple effects (disable the Volume and disable the drive).

Multi-Tenancy Enhancements

New BWeb Management Suite Self User Restore

The BWeb Management Suite can be configured to allow authorized users to restore their own files on their own Unix or Linux system through BWeb. More information can be found in the BWeb Management Suite user's guide.

New Console ACL Directives

By default, if a Console ACL directive is not set, Bacula will assume that the ACL list is empty. If the current Bacula configuration uses restricted Consoles and allows restore jobs, it is mandatory to configure the new directives.

Directory ACL

This directive is used to specify a list of directories that can be accessed by a restore session. Without this directive, the console cannot restore any file. Multiple directory names may be specified by separating them with commas, and/or by specifying multiple DirectoryACL directives. For example, the directive may be specified as:

  DirectoryACL = /home/bacula/, "/etc/", "/home/test/*"

With the above specification, the console can access the following files:

  • /etc/password
  • /etc/group
  • /home/bacula/.bashrc
  • /home/test/.ssh/config
  • /home/test/Desktop/Images/something.png

But not the following files or directories:

  • /etc/security/limits.conf
  • /home/bacula/.ssh/id_dsa.pub
  • /home/guest/something
  • /usr/bin/make

If a directory starts with a Windows pattern (ex: c:/), Bacula will automatically ignore the case when checking directories.

UserId ACL

This directive is used to specify a list of UIDs/GIDs whose files can be accessed during a restore session. Without this directive, the console cannot restore any file. During the restore session, the Director will compute the restore list and will exclude files and directories that cannot be accessed. Bacula uses the LStat database field to retrieve st_mode, st_uid and st_gid information for each file and compares them with the UserId ACL elements. If a parent directory doesn't have a proper catalog entry, access to this directory will be automatically granted.

UID/GID names are resolved with the getpwnam() function within the Director. The UID/GID mapping might be different from one system to another.

Windows systems are not compatible with the UserId ACL feature. The use of UserId ACL = *all* is required to restore Windows systems from a restricted Console.

Multiple UID/GID names may be specified by separating them with commas, and/or by specifying multiple UserId ACL directives. For example, the directive may be specified as:

  UserIdACL = "bacula", "100", "100:100", ":100", "bacula:bacula"

# ls -l /home
total 28
drwx------ 45 bacula bacula 12288 Oct 24 17:05 bacula
drwx------ 45 test   test   12288 Oct 24 17:05 test
drwx--x--x 45 test2  test2  12288 Oct 24 17:05 test2
drwx------  2 root   root   16384 Aug 30 14:57 backup
-rwxr--r--  1 root   root   1024  Aug 30 14:57 afile

In the example above, if the uid of the user test is 100, the following files will be accessible:

  • bacula/*
  • test/*
  • test2/*

The directory backup will not be accessible.

Restore Job Security Enhancement

The Bacula Console restore command can now accept the new jobuser= and jobgroup= parameters to restrict the restore process to a given user account. Files and directories created during the restore session will be restricted.

* restore jobuser=joe jobgroup=users

The Restore Job restriction can be used on Linux and on FreeBSD. If the restore Client OS doesn't support the needed thread-level user impersonation, the restore job will be aborted.

New Bconsole list Command Behavior

The Bacula Console list commands can now be used safely from a restricted bconsole session. The information displayed will respect the ACL configured for the Console session. For example, if a Console has access to JobA, JobB and JobC, information about JobD will not appear in the list jobs command.

Bacula Enterprise 8.6.3

New Console ACL Directives

It is now possible to configure a restricted Console to distinguish Backup and Restore jobs permissions. The Backup Client ACL can restrict backup jobs on a specific set of clients, while the Restore Client ACL can restrict restore jobs.

# cat /opt/bacula/etc/bacula-dir.conf
...

Console {
 Name = fd-cons             # Name of the FD Console
 Password = yyy
...
 ClientACL = localhost-fd           # everything allowed
 RestoreClientACL = test-fd         # restore only
 BackupClientACL = production-fd    # backup only
}

The Client ACL directive takes precedence over the Restore Client ACL and the Backup Client ACL settings. In the Console resource above, this means that the bconsole linked to the Console named fd-cons will be able to run:

  • backup and restore for localhost-fd
  • backup for production-fd
  • restore for test-fd

At restore time, jobs for client localhost-fd, test-fd and production-fd will be available.

If *all* is set for Client ACL, backup and restore will be allowed for all clients, regardless of the Restore Client ACL and Backup Client ACL settings.

Bacula Enterprise 8.6.0


Client Initiated Backup

A console program such as the new tray-monitor or bconsole can now be configured to connect to a File Daemon. There are many new features available (see the New Tray Monitor (here)), but probably the most important one is the ability for users to initiate backups of their own machine. The connection established by the FD to the Director can then be used by the Director for the backup, so not only can clients (users) initiate backups, but a File Daemon that is NATed (cannot be reached by the Director) can now be backed up without using advanced tunneling techniques.

The flow of information is shown in the picture (here)

Configuring Client Initiated Backup

In order to ensure security, there are a number of new directives that must be enabled in the new tray-monitor, the File Daemon and in the Director. A typical configuration might look like the following:

# cat /opt/bacula/etc/bacula-dir.conf
...

Console {
        Name = fd-cons             # Name of the FD Console
        Password = yyy

        # These commands are used by the tray-monitor, it is possible to restrict
        CommandACL = run, restore, wait, .status, .jobs, .clients
        CommandACL = .storages, .pools, .filesets, .defaults, .info
        
        # Adapt for your needs
        jobacl = *all*
        poolacl = *all*
        clientacl = *all*
        storageacl = *all*
        catalogacl = *all*
        filesetacl = *all*
}

# cat /opt/bacula/etc/bacula-fd.conf
...

Console {              # Console to connect the Director
        Name = fd-cons
        DIRPort = 9101
        address = localhost
        Password = "yyy"
        }

Director {
         Name = remote-cons   # Name of the tray monitor/bconsole
         Password = "xxx"     # Password of the tray monitor/bconsole
         Remote = yes         # Allow sending commands to the Console defined above
}

cat /opt/bacula/etc/bconsole-remote.conf
....

Director {
         Name = localhost-fd
         address = localhost        # Specify the FD address
         DIRport = 9102             # Specify the FD Port
         Password = "notused"
}

Console {
        Name = remote-cons         # Name used in the auth process
        Password = "xxx"
}

cat ~/.bacula-tray-monitor.conf
  
Monitor {
        Name = remote-cons
}

Client {
       Name = localhost-fd
       address = localhost     # Specify the FD address
       Port = 9102             # Specify the FD Port
       Password = "xxx"
       Remote = yes
}

A more detailed description with complete examples is available in the Tray monitor chapter of this manual.


New Tray Monitor

A new tray monitor has been added to the 8.6 release, which offers the following features:

  • Director, File and Storage Daemon status page
  • Support for the Client Initiated Backup protocol (See (here)). To use the Client Initiated Backup option from the tray monitor, the Client option Remote should be checked in the configuration (Fig. (here)).
  • Wizard to run new job (Fig. (here))
  • Display an estimation of the number of files and the size of the next backup job (Fig. (here))
  • Ability to configure the tray monitor configuration file directly from the GUI (Fig. (here))
  • Ability to monitor a component and adapt the tray monitor task bar icon if jobs are running.
  • TLS Support
  • Better network connection handling
  • Default configuration file is stored under $HOME/.bacula-tray-monitor.conf
  • Ability to schedule jobs
  • Available for Linux and Windows platforms

Scheduling Jobs via the Tray Monitor

The Tray Monitor can periodically scan a specific directory configured as Command Directory and process *.bcmd files to find jobs to run.

The format of a .bcmd command file is the following:

<component name>:<run command>
<component name>:<run command>
...

<component name> = string
<run command>    = string (bconsole command line)

For example:

localhost-fd: run job=backup-localhost-fd level=full
localhost-dir: run job=BackupCatalog

A command file should contain at least one command. The component specified in the first part of the command line should be defined in the tray monitor. Once the command file is detected by the tray monitor, a popup is displayed to the user and it is possible for the user to cancel the job.
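
In the tray monitor configuration file, the scanned directory might be declared in the Monitor resource (a sketch; the placement of the Command Directory directive is an assumption based on the description above, and the path is illustrative):

  Monitor {
    Name = remote-cons
    Command Directory = /path/to/commands
  }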

Command files can be created with tools such as cron or the task scheduler on Windows. It is possible and recommended to verify network connectivity at that time to avoid network errors:

#!/bin/sh
if ping -c 1 director > /dev/null 2>&1
then
   echo "my-dir: run job=backup" > /path/to/commands/backup.bcmd
fi

Concurrent VSS Snapshot Support

It is now possible to run multiple concurrent jobs that use VSS snapshots on the File Daemon for Microsoft Windows.

Accurate Option for Verify Volume Data Job

As of Bacula version 8.4.1, it has been possible to have a Verify Job configured with level = Data that will reread all records from a job and optionally check size and checksum of all files.

Starting with 8.6, it is now possible to use the accurate option to check catalog records at the same time. Using a Verify job with level = Data and accurate = yes can replace the level = VolumeToCatalog option.

For more information on how to setup a Verify Data job, see (here).

To run a Verify Job with the accurate option, it is possible to set the option in the Job definition or to use accurate=yes on the command line.

* run job=VerifyData jobid=10 accurate=yes

Single Item Restore Optimisation

Bacula version 8.6.0 can generate indexes stored in the catalog to speed up file access during a Single Item Restore session for VMWare or for Exchange. The index can be displayed in bconsole with the list filemedia command.

* list filemedia jobid=1

FileDaemon Saved Messages Resource Destination

It is now possible to send the list of all saved files to a Messages resource with the saved message type. It is not recommended to send this flow of information to the Director and/or the Catalog when the client FileSet is large. To avoid side effects, the all keyword doesn't include the saved message type. The saved message type should be explicitly set.

# cat /opt/bacula/etc/bacula-fd.conf
...
Messages {
         Name = Standard
         director = mydirector-dir = all, !terminate, !restored, !saved
         append = /opt/bacula/working/bacula-fd.log = all, saved, restored
}

BWeb New Features

The 8.6 release adds some new BWeb features, such as:

  • Two sets of wizards to help users to configure Copy/Migration jobs (Figures (here) and (here))
  • A wizard to run jobs (Fig. (here))
  • SSH integration in BWeb Security Center to restart components remotely (Fig. (here))
  • Global Endpoint Deduplication Overview screen (Fig. (here))

Minor Enhancements

New Bconsole .estimate Command

The new .estimate command can be used to get statistics about a job to run. The command uses the database to estimate the size and the number of files of the next job. On a PostgreSQL database, the command uses regression slope to compute values. On SQLite or MySQL, where these statistical functions are not available, the command uses a simple average estimation. The correlation number is given for each value.

*.estimate job=backup 
level=I
nbjob=0
corrbytes=0
jobbytes=0
corrfiles=0
jobfiles=0
duration=0
job=backup

*.estimate job=backup level=F
level=F
nbjob=1
corrbytes=0
jobbytes=210937774
corrfiles=0
jobfiles=2545
duration=0
job=backup

Traceback and Lockdump

After the reception of a signal by any of the Bacula daemon binaries, traceback and lockdump information are now stored in the same file.

Bacula Enterprise 8.4.10

Plugin for Microsoft SQL Server

A plugin for Microsoft SQL Server (MSSQL) is now available. The plugin uses MSSQL advanced backup and restore features (like Point In Time Recovery, Log backup, Differential backup, ...).

Job {
    Name = MSSQLJob
    Type = Backup
    Client = windows1
    FileSet = MSSQL
    Pool = 1Month
    Storage = File
    Level = Incremental
}

FileSet {
        Name = MSSQL
        Enable VSS = no
        Include {
                Options {
                        Signature = MD5
                }
                Plugin = "mssql"
        }
}

FileSet {
        Name = MSSQL2
        Enable VSS = no
        Include {
                Options {
                        Signature = MD5
                }
                Plugin = "mssql: database=production"
        }
}

Bacula Enterprise 8.4.1


Verify Volume Data

It is now possible to have a Verify Job configured with level=Data to reread all records from a job and optionally check the size and the checksum of all files.

# Verify Job definition
Job {
    Name = VerifyData
    Type = Verify
    Level = Data
    Client = 127.0.0.1-fd     # Use local file daemon
    FileSet = Dummy           # Will be adapted during the job
    Storage = File            # Should be the right one
    Messages = Standard
    Pool = Default
}

# Backup Job definition
Job {
  Name = MyBackupJob
  Type = Backup
  Client = windows1
  FileSet = MyFileSet
  Pool = 1Month
  Storage = File
}

FileSet {
  Name = MyFileSet
  Include {
    Options {
      Verify = s5
      Signature = MD5
    }
    File = /
  }
}

To run the Verify job, it is possible to use the jobid parameter of the run command.

*run job=VerifyData jobid=10
Run Verify Job
JobName:     VerifyData
Level:       Data
Client:      127.0.0.1-fd
FileSet:     Dummy
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  MyBackupJob.2015-11-11_09.41.55_03
Verify List: /opt/bacula/working/working/VerifyVol.bsr
When:        2015-11-11 09:47:38
Priority:    10
OK to run? (yes/mod/no): yes
Job queued. JobId=14

...

11-Nov 09:46 my-dir JobId 13: Bacula Enterprise 8.4.1 (13Nov15):
  Build OS:               x86_64-unknown-linux-gnu archlinux
  JobId:                  14
  Job:                    VerifyData.2015-11-11_09.46.29_03
  FileSet:                MyFileSet
  Verify Level:           Data
  Client:                 127.0.0.1-fd
  Verify JobId:           10
  Verify Job:
  Start time:             11-Nov-2015 09:46:31
  End time:               11-Nov-2015 09:46:32
  Files Expected:         1,116
  Files Examined:         1,116
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Verify differences
  SD termination status:  OK
  Termination:            Verify Differences

The current Verify Data implementation requires specifying the correct Storage resource in the Verify job. The Storage resource can be changed with the bconsole command line and with the menu.

Bconsole list jobs Command Options

The list jobs bconsole command now accepts new command line options:

  • joberrors Display jobs with JobErrors
  • jobstatus=T Display jobs with the specified status code
  • client=client-name Display jobs for a specified client
  • order=asc/desc Change the output order of the job list. The jobs are sorted by start time and JobId; the sort can use ascending (asc) or descending (desc) order, the latter being the default (see the example below).
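
For example, to list terminated jobs for one client in ascending order (the client name is illustrative):

  * list jobs jobstatus=T client=windows1 order=asc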

Minor Enhancements

New Bconsole tee all Command

The @tall command allows logging all input and output of a console session.

*@tall /tmp/log
*st dir
...
@tall

MySQL Plugin Restore Options

It is now possible to specify the database name during a restore in the Plugin Option menu. It is still possible to use the Where parameter to specify the target database name.

PostgreSQL Plugin

We added a timeout option to the PostgreSQL plugin command line that is set to 60s by default. Users may want to change this value when the PostgreSQL cluster is slow to complete SQL queries used during the backup.
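
Following the plugin command-line style used elsewhere in this manual, the option might be set in the FileSet as follows (a sketch; the exact option syntax is an assumption and should be checked against your plugin version):

  FileSet {
    Name = PgFileSet
    Include {
      Options {
        Signature = MD5
      }
      Plugin = "postgresql: timeout=120"   # assumed syntax; default is 60s
    }
  }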

Bacula Enterprise 8.4

VMWare Single File Restore

It is now possible to explore VMWare virtual machines backup jobs (Full, Incremental and Differential) made with the Bacula Enterprise vSphere plugin to restore individual files and directories. The Single Item Restore feature comes with both a console interface and a BWeb Management Suite specific interface. See the VMWare Single File Restore whitepaper for more information.

Microsoft Exchange Single MailBox Restore

It is now possible to explore Microsoft Exchange databases backups made with the Bacula Enterprise VSS plugin to restore individual mailboxes. The Single Item Restore feature comes with both a console interface and a web interface. See the Exchange Single Mailbox Restore whitepaper for more information.

Bacula Enterprise 8.2.8

New Job Edit Codes %I

In various places such as RunScripts, you now have access to %I to get the JobId of the copy or migration job started by a migrate job.

Job {
  Name = Migrate-Job
  Type = Migrate
  ...
  RunAfter = "echo New JobId is %I"
}

Bacula Enterprise 8.2.2

New Job Edit Codes %E %R

In various places such as RunScripts, you now have access to %E to get the number of non-fatal errors for the current Job and %R to get the number of bytes read from disk or from the network during a job.

Enable/Disable commands

The bconsole enable and disable commands have been extended from enabling/disabling Jobs to include Clients, Schedules, and Storage devices. Examples:

disable Job=NightlyBackup Client=Windows-fd

will disable the Job named NightlyBackup as well as the client named Windows-fd.

disable Storage=LTO-changer Drive=1

will disable the first drive in the autochanger named LTO-changer.

Please note that doing a reload command will set any values changed by the enable/disable commands back to the values in the bacula-dir.conf file.

The Client and Schedule resources in the bacula-dir.conf file now permit the directive Enabled = yes or Enabled = no.
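
For example:

  Client {
    Name = Windows-fd
    ...
    Enabled = no
  }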

Bacula Enterprise 8.2

Snapshot Management

Bacula Enterprise 8.2 is now able to handle Snapshots on Linux/Unix systems. Snapshots can be automatically created and used to back up files. It is also possible to manage Snapshots from Bacula's bconsole tool through a single interface.

Snapshot Backends

The following Snapshot backends are supported with Bacula Enterprise 8.2:

  • BTRFS
  • ZFS
  • LVM

By default, Snapshots are mounted (or directly available) under the .snapshots directory on the root filesystem. (On ZFS, the default is .zfs/snapshots.)

The Snapshot backend program is called bsnapshot and is available in the bacula-enterprise-snapshot package. In order to use the Snapshot Management feature, the package must be installed on the Client.

The bsnapshot program can be configured using the /opt/bacula/etc/bsnapshot.conf file. The following parameters can be adjusted in the configuration file:

  • trace=<file> Specify a trace file
  • debug=<num> Specify a debug level
  • sudo=<yes|no> Use sudo to run commands
  • disabled=<yes|no> Disable snapshot support
  • retry=<num> Configure the number of retries for some operations
  • snapshot_dir=<dirname> Use a custom name for the Snapshot directory. (.SNAPSHOT, .snapdir, etc.)
  • lvm_snapshot_size=<lvpath:size> Specify a custom snapshot size for a given LVM volume
  • mountopts=<devpath:options> Specify a custom mount option for a given device (available in 10.0.4)

# cat /opt/bacula/etc/bsnapshot.conf
trace=/tmp/snap.log
debug=10
lvm_snapshot_size=/dev/ubuntu-vg/root:5%
mountopts=nouuid
mountopts=/dev/ubuntu-vg/root:nouuid,nosuid

Application Quiescing

When using Snapshots, it is very important to quiesce applications that are running on the system. The simplest way to quiesce an application is to stop it. Usually, taking the Snapshot is very fast, and the downtime is only a couple of seconds. If downtime is not possible and/or the application provides a way to quiesce, a more advanced script can be used. An example is described in (here).

New Director Directives

The use of the Snapshot Engine on the FileDaemon is determined by the new Enable Snapshot FileSet directive. The default is no.

FileSet {
  Name = LinuxHome

  Enable Snapshot = yes

  Include {
    Options = { Compression = LZO }
    File = /home
  }
}

By default, Snapshots are deleted from the Client at the end of the backup. To keep Snapshots on the Client and record them in the Catalog for a determined period, it is possible to use the Snapshot Retention directive in the Client or in the Job resource. The default value is 0 seconds. If, for a given Job, both Client and Job Snapshot Retention directives are set, the Job directive will be used.

Client {
   Name = linux1
   ...

   Snapshot Retention = 5 days
}

To automatically prune Snapshots, it is possible to use the following RunScript command:

Job {
   ...
   Client = linux1
   ...
   RunScript {
      RunsOnClient = no
      Console = "prune snapshot client=%c yes"
      RunsAfter = yes
   }
}

In RunScripts, the AfterSnapshot keyword for the RunsWhen directive will allow a command to be run just after the Snapshot creation.

AfterSnapshot is a synonym for the AfterVSS keyword.

Job {
 ...
  RunScript {
    Command = "/etc/init.d/mysql start"
    RunsWhen = AfterSnapshot
    RunsOnClient = yes
  }
  RunScript {
    Command = "/etc/init.d/mysql stop"
    RunsWhen = Before
    RunsOnClient = yes
  }
}

Job Output Information

Information about Snapshots is displayed in the Job output. The list of all devices used by the Snapshot Engine is displayed, and the Job summary indicates whether Snapshots were available.

JobId 3:    Create Snapshot of /home/build
JobId 3:    Create Snapshot of /home/build/subvol
JobId 3:    Delete snapshot of /home/build
JobId 3:    Delete snapshot of /home/build/subvol
...
JobId 3: Bacula 127.0.0.1-dir 8.2.0 (23Feb15):
  Build OS:               x86_64-unknown-linux-gnu archlinux 
  JobId:                  3
  Job:                    Incremental.2015-02-24_11.20.27_08
  Backup Level:           Full
...
  Snapshot/VSS:           yes
...
  Termination:            Backup OK

New snapshot Bconsole Commands

The new snapshot command will display by default the following menu:

*snapshot
Snapshot choice:
     1: List snapshots in Catalog
     2: List snapshots on Client
     3: Prune snapshots
     4: Delete snapshot
     5: Update snapshot parameters
     6: Update catalog with Client snapshots
     7: Done
Select action to perform on Snapshot Engine (1-7):

The snapshot command can also have the following parameters:

[client=<client-name> | job=<job-name> | jobid=<jobid>]
 [delete | list | listclient | prune | sync | update]

It is also possible to use traditional list, llist, update, prune or delete commands on Snapshots.

*llist snapshot jobid=5
 snapshotid: 1
       name: NightlySave.2015-02-24_12.01.00_04
 createdate: 2015-02-24 12:01:03
     client: 127.0.0.1-fd
    fileset: Full Set
      jobid: 5
     volume: /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
     device: /home/btrfs
       type: btrfs
  retention: 30
    comment:

* snapshot listclient
Automatically selected Client: 127.0.0.1-fd
Connecting to Client 127.0.0.1-fd at 127.0.0.1:9102
Snapshot      NightlySave.2015-02-24_12.01.00_04:
  Volume:     /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
  Device:     /home
  CreateDate: 2015-02-24 12:01:03
  Type:       btrfs
  Status:     OK
  Error:

With the Update catalog with Client snapshots option (or snapshot sync), the Director contacts the FileDaemon, lists snapshots of the system and creates catalog records of the Snapshots.

*snapshot sync
Automatically selected Client: 127.0.0.1-fd
Connecting to Client 127.0.0.1-fd at 127.0.0.1:9102
Snapshot      NightlySave.2015-02-24_12.35.47_06:
  Volume:     /home/.snapshots/NightlySave.2015-02-24_12.35.47_06
  Device:     /home
  CreateDate: 2015-02-24 12:35:47
  Type:       btrfs
  Status:     OK
  Error:
Snapshot added in Catalog

*llist snapshot
 snapshotid: 13
       name: NightlySave.2015-02-24_12.35.47_06
 createdate: 2015-02-24 12:35:47
     client: 127.0.0.1-fd
    fileset:
      jobid: 0
     volume: /home/.snapshots/NightlySave.2015-02-24_12.35.47_06
     device: /home
       type: btrfs
  retention: 0
    comment:


LVM Backend Restrictions

LVM Snapshots are quite primitive compared to ZFS, BTRFS, NetApp and other systems. For example, it is not possible to use Snapshots if the Volume Group (VG) is full. The administrator must keep some free space in the VG to create Snapshots. The amount of free space required depends on the activity of the Logical Volume (LV). bsnapshot uses 10% of the LV by default. This number can be configured per LV in the bsnapshot.conf file (See (here)).

[root@system1]# vgdisplay
  --- Volume group ---
  VG Name               vg_ssd
...
  VG Size               29,81 GiB
...
  Alloc PE / Size       125 / 500,00 MiB
  Free  PE / Size       7507 / 29,32 GiB   <---- Free Space
...

It is also not advisable to leave snapshots on the LVM backend. Having multiple snapshots of the same LV on LVM will slow down the system.

Only Ext4, XFS, and EXT3 filesystems are supported with the Snapshot LVM backend. (XFS and EXT3 support is available in 8.2.7 and later.)

Debug Options

To get low level information about the Snapshot Engine, the debug tag snapshot should be used in the setdebug command.

* setdebug level=10 tags=snapshot client
* setdebug level=10 tags=snapshot dir

Global Endpoint Deduplication(TM)

Storage to Storage Copy/Migration

Copy and Migration Jobs now use the Global Endpoint Deduplication protocol if the destination Device Type is dedup.

Performance Enhancements

A new automatic Deduplication index optimization has been added to the Vacuum procedure.

Part of the Deduplication index can be locked into memory to improve performance.

Users can now configure parameters related to the size of the Deduplication index and the amount of memory that can be used to cache the index.

Hypervisor Plugins

Hyper-V VSS Plugin

Backing up and restoring Hyper-V virtual machines is supported with Full level backups using the VSS API. Use of the Global Endpoint Deduplication plugin and the bothsides FileSet option minimizes the amount of data transferred and the amount of storage used.

KVM Plugin

The KVM plugin provides the following main features:

  • File level backup
  • Automatic virtual machine discovery
  • Full, Differential, Incremental backup level support
  • The ability to handle inclusion/exclusion of files

The KVM plugin is designed to be used when the hypervisor uses local storage for virtual machine disks and libvirtd for virtual machine management.

Windows Encrypted File System (EFS) Support

The Bacula Enterprise Windows File Daemon now automatically supports files and directories that are encrypted on Windows filesystems.

BWeb Management Suite

Minor Enhancements

Copy/Migration/VirtualFull Performance Enhancements

The Copy, Migration and VirtualFull performance on large jobs with millions of files has been greatly enhanced.

Storage Daemon Reports Disk Usage

The status storage command now reports the space available on disk devices:

...
Device status:

Device file: "FileStorage" (/bacula/arch1) is not open.
    Available Space=5.762 GB
==

Device file: "FileStorage1" (/bacula/arch2) is not open.
    Available Space=5.862 GB

Bacula Enterprise 8.0

Global Endpoint DeduplicationTM

The Global Endpoint Deduplication solution minimizes network transfers and Bacula Volume size using deduplication technology.

The new Global Endpoint Deduplication Storage daemon directives are:

Device Type = Dedup
sets the Storage device for deduplication. Deduplication is performed only on disk volumes.
Dedup Directory =
this directive specifies where the deduplicated blocks will be stored. Blocks that are deduplicated will be placed in this directory rather than in the Bacula Volume, which will only contain a reference pointer to the deduplicated blocks.
Dedup Index Directory
in addition to the deduplicated blocks, when deduplication is enabled, the Storage daemon keeps an index of the deduplicated block locations. This index will be frequently consulted during the deduplication backup process, so it should be placed on the fastest device possible (e.g. an SSD).

See below for a FileSet example using the new dedup directive.

Configuration Example

In the Storage Daemon configuration file, you must define a Device with DeviceType = Dedup. It is also possible to configure where the Storage Daemon will store blocks and indexes. Blocks will be stored in the Dedup Directory; this directory is common to all Dedup devices and should have a large amount of free space. Indexes will be stored in the Dedup Index Directory; indexes have a lot of random update accesses and can benefit from SSD drives.

# from bacula-sd.conf
 Storage {
   Name = my-sd
   Working Directory = /opt/bacula/working
   Pid Directory = /opt/bacula/working

   Plugin Directory = /opt/bacula/plugins
   Dedup Directory = /opt/bacula/dedup
   Dedup Index Directory = /opt/bacula/ssd  # defaults to the Dedup Directory
 }

 Device {
    Name = DedupDisk
    Archive Device = /opt/bacula/storage
    Media Type = DedupVolume
    Label Media = yes
    Random Access = yes
    Automatic Mount = yes
    Removable Media = no
    Always Open = no

    Device Type = Dedup    # Required
 }

The Global Endpoint Deduplication Client cache system can speed up restore jobs by getting blocks from the local client disk instead of requesting them over the network. Note that if blocks are not available locally, the FileDaemon will get blocks from the Storage Daemon. This feature can be enabled with the Dedup Index Directory directive in the FileDaemon resource. When using this option, the File Daemon will have to maintain the cache during Backup jobs.

# from bacula-fd.conf
 FileDaemon {
   Name = my-fd
   Working Directory = /opt/bacula/working
   Pid Directory = /opt/bacula/working

   # Optional, Keep indexes on the client for faster restores
   Dedup Index Directory = /opt/bacula/dedupindex
 }

It is possible to configure the Global Endpoint Deduplication system in the Director with a FileSet directive called Dedup. Each FileSet Include section can specify a different deduplication behavior depending on your needs.

 FileSet {
   Name = FS_BASE

   # Send everything to the Storage Daemon as usual
   # and let the Storage Daemon do the deduplication
   Include {
     Options {
       Dedup = storage
     }
     File = /opt/bacula/etc
   }

   # Send only references and new blocks to the Storage Daemon
   Include {
     Options {
       Dedup = bothsides
     }
     File = /VirtualBox
   }

   # Do not try to dedup my encrypted directory
   Include {
     Options {
       Dedup = none
     }
     File = /encrypted
   }
 }

The FileSet Dedup directive accepts the following values:

  • storage All the deduplication work is done on the SD side if the device type is dedup (default value). This option is useful if you want to avoid the extra client-side disk space overhead that will occur with the bothsides option.
  • none Force the FD and SD to not use deduplication.
  • bothsides The deduplication work is done on both the FD and the SD. Only references and new blocks will be transferred over the network.

Storage Daemon to Storage Daemon

Bacula Enterprise version 8.0 now permits SD to SD transfer of Copy and Migration Jobs. This permits what is commonly referred to as replication or off-site transfer of Bacula backups. It occurs automatically if the source SD and destination SD of a Copy or Migration job are different. That is, SD to SD transfers need no additional configuration directives. The following picture shows how this works.
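
As an illustrative sketch only (resource names here are hypothetical), a Copy job reads from a Pool whose Storage resource points to the source SD and writes to the Pool named by Next Pool, whose Storage resource points to the destination SD:

# bacula-dir.conf sketch, hypothetical names
Pool {
  Name = LocalPool
  Pool Type = Backup
  Storage = LocalSD-File       # device on the source SD
  Next Pool = OffsitePool      # destination Pool for Copy/Migration
}

Pool {
  Name = OffsitePool
  Pool Type = Backup
  Storage = RemoteSD-File      # device on a different SD
}

Job {
  Name = CopyToOffsite
  Type = Copy
  Pool = LocalPool                   # read side
  Selection Type = PoolUncopiedJobs  # copy every job not yet copied
  Client = localhost-fd              # required by the Job syntax
  FileSet = "Full Set"
  Messages = Standard
}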

Windows Mountpoint Support

Bacula Enterprise version 8.0 is now able to detect Windows mountpoints and include volumes automatically in the VSS snapshot set. To backup all local disks on a Windows server, the following FileSet is now accepted. It deprecates the alldrives plugin.

  FileSet {
   Name = "All Drives"
   Include {
     Options {
       Signature = MD5
     }

     File = /
   }
  }

If you have mountpoints, the onefs=no option should be used as it is with Unix systems.

  FileSet {
   Name = "All Drives with mountpoints"
   Include {
     Options {
       Signature = MD5
       OneFS = no
     }
     File = C:/                # will include mountpoint C:/mounted/...
   }
  }

To exclude a mountpoint from a backup when OneFS = no, use the Exclude block as usual:

  FileSet {
   Name = "All Drives with mountpoints"
   Include {
     Options {
       Signature = MD5
       OneFS = no
     }
     File = C:/           # will include all mounted mountpoints under C:/
                          # including C:/mounted  (see Exclude below)
   }

   Exclude {
     File = C:/mounted    # will not include C:/mounted
   }
  }

SD Calls Client

If the SD Calls Client directive is set to true in a Client resource, then in any Backup, Restore, or Verify Job where that client is involved, the client will wait for the Storage daemon to contact it. By default this directive is set to false, and the Client will call the Storage daemon as it always has. This directive can be useful if your Storage daemon is behind a firewall that permits outgoing connections but not incoming connections. The picture (here) shows the communications connection paths in both cases.
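
For example, a minimal sketch of the directive in the Director's Client resource (names are hypothetical):

# bacula-dir.conf
Client {
  Name = laptop-fd
  Address = laptop.example.com
  Password = "xxx"
  SD Calls Client = yes   # the SD will initiate the data connection to this FD
}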

Data Encryption Cipher Configuration

Bacula Enterprise version 8.0 and later now allows configuration of the data encryption cipher and the digest algorithm. Previously, the cipher was forced to AES 128, but it is now possible to choose between the following ciphers:

  • AES128 (default)
  • AES192
  • AES256
  • blowfish

The digest algorithm was set to SHA1 or SHA256 depending on the local OpenSSL options. We advise you not to modify the PkiDigest default setting. Please refer to the OpenSSL documentation to understand the pros and cons of these options.

FileDaemon {
    ...
    PkiCipher = AES256
}

Minor Enhancements

New Option Letter M for Accurate Directive in FileSet

The new M option letter for the Accurate directive in the FileSet Options block was added in version 8.0.5. It allows comparing the modification time and/or creation time against the last backup timestamp. This is in contrast to the existing option letters m and/or c (mtime and ctime), which are checked against the stored catalog values; those values can vary across different machines when using the BaseJob feature.

The advantage of the new M option letter for Jobs that refer to BaseJobs is that it will instruct Bacula to backup files based on the last backup time, which is more useful because the mtime/ctime timestamps may differ on various Clients, causing files to be needlessly backed up.

  Job {
    Name = USR
    Level = Base
    FileSet = BaseFS
...
  }

  Job {
    Name = Full
    FileSet = FullFS
    Base = USR
...
  }

  FileSet {
    Name = BaseFS
    Include {
      Options {
        Signature = MD5
      }
      File = /usr
    }
  }

  FileSet {
    Name = FullFS
    Include {
      Options {
        Accurate = Ms      # check for mtime/ctime of last backup timestamp and Size
        Signature = MD5
      }
      File = /home
      File = /usr
    }
  }

.api version 2

In Bacula Enterprise version 8.0 and later, we introduced a new .api version to help external tools parse various Bacula bconsole output.

The api_opts option can use the following arguments:

C
Clear current options
tn
Use a specific time format (1 ISO format, 2 Unix Timestamp, 3 Default Bacula time format)
sn
Use a specific separator between items (new line by default).
Sn
Use a specific separator between objects (new line by default).
o
Convert all keywords to lowercase and convert all non-alphabetic (isalpha) characters to _

  .api 2 api_opts=t1s43S35
  .status dir running
==================================
jobid=10
job=AJob
...

New Debug Options

In Bacula Enterprise version 8.0 and later, we introduced a new options parameter for the setdebug bconsole command.

The following arguments to the new options parameter are available to control debug functions.

0
Clear debug flags
i
Turn off (ignore) bwrite() errors on restore on the File Daemon
d
Turn off decompression of BackupRead() streams on the File Daemon
t
Turn on timestamps in traces
T
Turn off timestamps in traces
c
Truncate the trace file if the trace file is activated
l
Turn on recording events on P() and V()
p
Turn on the display of the event ring when doing a backtrace

The following command will enable debugging for the File Daemon, truncate an existing trace file, and turn on timestamps when writing to the trace file.

* setdebug level=10 trace=1 options=ct fd

It is now possible to use a class of debug messages called tags to control the debug output of Bacula daemons.

all
Display all debug messages
bvfs
Display BVFS debug messages
sql
Display SQL related debug messages
memory
Display memory and poolmem allocation messages
scheduler
Display scheduler related debug messages

* setdebug level=10 tags=bvfs,sql,memory
* setdebug level=10 tags=!bvfs

# bacula-dir -t -d 200,bvfs,sql

The tags option is composed of a list of tags. Tags are separated by , or + or - or !. To disable a specific tag, use - or ! in front of the tag. Note that more tags are planned for future versions.

Debug tag option table:

+-----------+-----------+-------------+-------------------------------------------+
| Component | Tag       | Debug Level | Comment                                   |
+-----------+-----------+-------------+-------------------------------------------+
| director  | scheduler | 100         | information about job queue management    |
| director  | scheduler | 20          | information about resources in job queue  |
| director  | bvfs      | 10          | information about bvfs                    |
| director  | sql       | 15          | information about bvfs queries            |
| all       | memory    | 40-60       | information about smartalloc              |
+-----------+-----------+-------------+-------------------------------------------+

Bacula Enterprise 6.6.0

Communication Line Compression

Bacula Enterprise version 6.6.0 and later now includes communication line compression. It is turned on by default: if the two communicating Bacula components (DIR, FD, SD, bconsole) are both version 6.6.0 or greater, communication line compression will be enabled. If for some reason you do not want communication line compression, you may disable it with the following directive:

Comm Compression = no

This directive can appear in the following resources:

  • bacula-dir.conf: Director resource
  • bacula-fd.conf: Client (or FileDaemon) resource
  • bacula-sd.conf: Storage resource
  • bconsole.conf: Console resource
  • bat.conf: Console resource
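
For example, a minimal sketch disabling it for a single File Daemon:

# bacula-fd.conf
FileDaemon {
  Name = my-fd
  ...
  Comm Compression = no   # disable communication line compression
}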

In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is enabled. In the case that the compression is not effective, Bacula turns it off on a record by record basis.

If you are backing up data that is already compressed, the comm line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, Bacula reports None in the Job report.

Read Only Storage Devices

This version of Bacula allows you to define a Storage daemon device to be read-only. If the Read Only directive is specified and enabled, the drive can only be used for read operations. The Read Only directive can be defined in any bacula-sd.conf Device resource, and is most useful for reserving one or more drives for restores. An example is:

Read Only = yes
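
For example, a sketch of a Device resource reserved for restores (device name and path are hypothetical); combining it with AutoSelect = no keeps the drive from being selected automatically for writing:

# bacula-sd.conf
Device {
  Name = RestoreDrive
  Archive Device = /dev/nst1
  Media Type = LTO-4
  Autochanger = yes
  AutoSelect = no    # never chosen automatically for writing
  Read Only = yes    # usable only for read (restore) operations
}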

Catalog Performance Improvements

There is a new Bacula database format (schema) in this version of Bacula that eliminates the FileName table by placing the Filename into the File record of the File table. This substantially improves performance, particularly for large (1GB or greater) databases.

The update_xxx_catalog script will automatically update the Bacula database format, but you should realize that for very large databases (greater than 1GB) it may take some time. There are several options for doing the update:

  1. Shut down the database and update it.
  2. Update the database while production jobs are running.

See the Bacula Systems White Paper Migration-to-6.6 on this subject.

This database format change can provide very significant improvements in the speed of metadata insertion into the database, and in some cases (backup of large email servers) can significantly reduce the size of the database.

Plugin Restore Options

This version of Bacula permits user configuration of Plugins at restore time. For example, it is now possible to choose the datastore where your VMware image will be restored, or to choose pg_restore options directly. See specific Plugin whitepapers for more information about new restore options.

The restore options, if implemented in a plugin, will be presented to you during the initiation of a restore, either on the command line or, if available, in a GUI such as BWeb. For examples of the command line interface and the GUI interface, please see below:

*run restore jobid=11766
Run Restore job
JobName:         RestoreFiles
Bootstrap:       /tmp/regress/working/my-dir.restore.1.bsr
Where:           /tmp/regress/tmp/bacula-restores
...
Plugin Options:  *None*
OK to run? (yes/mod/no): mod
Parameters to modify:
     1: Level
...
    13: Plugin Options
Select parameter to modify (1-13): 13
Automatically selected : vsphere: host=squeeze2
Plugin Restore Options
datastore:           *None*
restore_host:        *None*
new_hostname:        *None*
Use above plugin configuration? (yes/mod/no): mod
You have the following choices:
     1: datastore (Datastore to use for restore) 
     2: restore_host (ESX host to use for restore) 
     3: new_hostname (Restore host to specified name) 
Select parameter to modify (1-3): 3
Please enter a value for new_hostname: test
Plugin Restore Options
datastore:           *None*
restore_host:        *None*
new_hostname:        test
Use above plugin configuration? (yes/mod/no): yes

Or via the BWeb restore interface (see Fig (here))

Alldrives Plugin Improvements

The alldrives plugin simplifies FileSet creation for Windows Clients by automatically generating a FileSet which includes all local drives.

The alldrives plugin now accepts a snapshot option that generates snapshots for all local Windows drives without explicitly adding them to the FileSet. It may be combined with the VSS plugin. For example:

FileSet {
 ...
  Include {
    Plugin = "vss:/@MSSQL/"
    Plugin = "alldrives: snapshot"     # should be placed after vss plugin
  }
}

New Truncate Command

We have added a new truncate command to bconsole which will truncate a volume if the volume is purged and if the volume is also marked Action On Purge = Truncate. This feature was originally added in Bacula version 5.0.1, but the mechanism for actually doing the truncate required the user to enter a complicated command such as:

purge volume action=truncate storage=File pool=Default

The above command is now simplified to be:

truncate storage=File pool=Default
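
For the truncate command to act on a volume, the volume's Pool must permit it; a minimal sketch of such a Pool resource:

# bacula-dir.conf
Pool {
  Name = Default
  Pool Type = Backup
  Volume Retention = 30 days
  Action On Purge = Truncate   # allow purged volumes to be truncated
}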

Bacula Enterprise 6.4.x

The following features were added during the 6.4.x life cycle.

SAP Plugin

The Bacula Enterprise SAP Plugin is designed to implement the official SAP Backint interface to simplify the backup and restore procedure through your traditional SAP database tools. See the SAP-Backint whitepaper for more information.

Oracle SBT Plugin

By default, the Oracle backup manager, RMAN, sends all backups to an operating system specific directory on disk. You can also configure RMAN to make backups to media such as tape using the SBT module. Bacula will act as the Media Manager, and the data will be transferred directly from RMAN to Bacula. See the Oracle Plugin whitepaper for more information.

MySQL Plugin

The MySQL plugin is designed to simplify the backup and restore of your MySQL database; the backup administrator doesn't need to know about the internals of MySQL backup techniques or how to write complex scripts. This plugin automatically backs up essential information such as configurations and user definitions. The MySQL plugin supports both dump (with support for Incremental backup) and binary backup techniques. See the MySQL Plugin whitepaper for more information.
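
As an illustrative sketch only (the exact plugin command options are covered in the whitepaper), a FileSet invoking the plugin might look like:

FileSet {
  Name = MySQLFS
  Include {
    Options {
      Signature = MD5
    }
    # minimal form assumed; options such as the dump mode are plugin-specific
    Plugin = "mysql"
  }
}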

Bacula Enterprise 6.4.0

Deduplication Optimized Volumes

This version of Bacula includes a new alternative (or additional) volume format that optimizes the placement of files so that an underlying deduplicating filesystem such as ZFS can optimally deduplicate the backup data that is written by Bacula. These are called Deduplication Optimized Volumes or Aligned Volumes for short. The details of how to use this feature and its considerations are in the Bacula Systems Deduplication Optimized Volumes whitepaper.

Migration/Copy/VirtualFull Performance Enhancements

The Bacula Storage daemon now permits multiple jobs to simultaneously read from the same disk volume which gives substantial performance enhancements when running Migration, Copy, or VirtualFull jobs that read disk volumes. Our testing shows that when running multiple simultaneous jobs, the jobs can finish up to ten times faster with this version of Bacula. This is built-in to the Storage daemon, so it happens automatically and transparently.

VirtualFull Backup Consolidation Enhancements

By default Bacula selects jobs automatically for a VirtualFull backup. However, you may want to create the virtual backup based on a particular backup (point in time) that exists.

For example, if you have the following backup Jobs in your catalog:

+-------+---------+-------+----------+----------+-----------+
| JobId | Name    | Level | JobFiles | JobBytes | JobStatus |
+-------+---------+-------+----------+----------+-----------+
| 1     | Vbackup | F     | 1754     | 50118554 | T         |
| 2     | Vbackup | I     | 1        | 4        | T         |
| 3     | Vbackup | I     | 1        | 4        | T         |
| 4     | Vbackup | D     | 2        | 8        | T         |
| 5     | Vbackup | I     | 1        | 6        | T         |
| 6     | Vbackup | I     | 10       | 60       | T         |
| 7     | Vbackup | I     | 11       | 65       | T         |
| 8     | Save    | F     | 1758     | 50118564 | T         |
+-------+---------+-------+----------+----------+-----------+

and you want to consolidate only the first 3 jobs and create a virtual backup equivalent to Job 1 + Job 2 + Job 3, you will use jobid=3 in the run command; Bacula will then select the previous Full backup, the previous Differential (if any), and all subsequent Incremental jobs.

run job=Vbackup jobid=3 level=VirtualFull

If you want to consolidate a specific job list, you must specify the exact list of jobs to merge in the run command line. For example, to consolidate the last Differential and all subsequent Incrementals, you will use jobid=4,5,6,7 or jobid=4-7 on the run command line. Because one of the Jobs in the list is a Differential backup, Bacula will set the new job level to Differential. If the list is composed of only Incremental jobs, the new job will have its level set to Incremental.

run job=Vbackup jobid=4-7 level=VirtualFull

When using this feature, Bacula will automatically discard jobs that are not related to the current Job. For example, specifying jobid=7,8, Bacula will discard JobId 8 because it is not part of the same backup Job.

We do not recommend it, but if you really want to consolidate jobs that have different names (and therefore probably different clients, filesets, etc.), you must use the alljobid= keyword instead of jobid=.

run job=Vbackup alljobid=1-3,6-8 level=VirtualFull

New Prune Expired Volume Command

In Bacula Enterprise 6.4, it is now possible to prune all volumes (from a pool, or globally) that are expired. This option can be scheduled after or before the backup of the catalog and can be combined with the Truncate On Purge option. The prune expired volume command may be used instead of the manual_prune.pl script.

* prune expired volume

* prune expired volume pool=FullPool

To schedule this option automatically, it can be added to the Catalog backup job definition.

Job {
   Name = CatalogBackup
   ...
   RunScript {
     Console = "prune expired volume yes"
     RunsWhen = Before
   }
}

Bacula Enterprise 6.2.3

New Job Edit Codes %P %C

In various places such as RunScripts, you now have access to %P to get the current Bacula process ID (PID) and %C to know if the current job is a cloned job.
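
For example, a brief sketch using the new edit codes in a RunScript command:

RunAfterJob = "/bin/echo Job=%j PID=%P Cloned=%C"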

Bacula Enterprise 6.2.0

BWeb Bacula Configuration GUI

In Bacula Enterprise version 6.2, the BWeb Management Suite integrates a Bacula configuration GUI module which is designed to help you create and modify the Bacula configuration files such as bacula-dir.conf, bacula-sd.conf, bacula-fd.conf and bconsole.conf.

The BWeb Management Suite offers a number of Wizards which support the Administrator in his daily work. The wizards provide a step by step set of required actions that graphically guide the Administrator to perform quick and easy creation and modification of configuration files.

BWeb also provides diagnostic tools that enable the Administrator to check that the Catalog Database is well configured, and that BWeb is installed properly.

The new Online help mode displays automatic help text suggestions when the user searches data types.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Performance Improvements

Bacula Enterprise 6.2 has a number of new performance improvements:

  • An improved way of storing Bacula Resources (as defined in the .conf files). This new handling permits much faster loading or reloading of the conf files, and permits larger numbers of resources.

  • Improved performance when inserting large numbers of files in the DB catalog by breaking the insertion into smaller chunks, thus allowing better sharing when running multiple simultaneous jobs.

  • Performance enhancements in BVFS by eliminating duplicate path records.

  • Performance improvement when getting Pool records.

  • Pruning performance enhancements.

Enhanced Status and Error Messages

We have enhanced the Storage daemon status output to be more readable. This is important when there are a large number of devices. In addition to formatting changes, it also includes more details on which devices are reading and writing.

A number of error messages have been enhanced to have more specific data on what went wrong.

If a file changes size while being backed up, the old and new sizes are reported.

WinBMR 3

The Windows Bare Metal Recovery (BMR) plugin enables you to do safe, reliable Disaster Recovery on Windows and allows you to get critical systems up and running again quickly. The Enterprise Edition Windows BMR is a toolkit that allows the Administrator to perform the restore of a complete operating system to the same or similar hardware without actually going through the operating system's installation procedure.

WinBMR 3 is a major rewrite of the product that supports all x86 Windows versions and technologies, especially UEFI and Secure Boot systems. The WinBMR 3 File Daemon plugin is now part of the plugins included with the Bacula File Daemon package. The rescue CD or USB key is available separately.

Miscellaneous New Features

  • Allow unlimited line lengths in .conf files (previously limited to 2000 characters).

  • Allow /dev/null in ChangerCommand to indicate a Virtual Autochanger.

  • Add a -fileprune option to the manual_prune.pl script.

  • Add a -m option to make_catalog_backup.pl to do maintenance on the catalog.

  • Safer code that cleans up the working directory when starting the daemons. It limits what files can be deleted, hence enhances security.

  • Added a new .ls command in bconsole to permit browsing a client's filesystem.

  • Fixed a number of bugs, including some obscure seg faults, and a race condition that occurred infrequently when running Copy, Migration, or Virtual Full backups.

  • Included a new vSphere library version, which will hopefully fix some of the more obscure bugs.

  • Upgraded to a newer version of Qt4 for BAT. All indications are that this will improve BAT's stability on Windows machines.

  • The Windows installers now detect and refuse to install on an OS that does not match the 32/64 bit value of the installer.

Bacula Enterprise 6.0.6

Incremental Accelerator Plugin for NetApp

The Incremental Accelerator for NetApp Plugin is designed to simplify the backup and restore procedure of your NetApp NAS hosting a huge number of files.

When using the NetApp HFC Plugin, Bacula Enterprise will query the NetApp device to get the list of all files modified since the last backup instead of having to walk through the entire filesystem. Once Bacula has the list of all files to back up, it will use a standard network share (such as NFS or CIFS) to access the files.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

PostgreSQL Plugin

The PostgreSQL plugin is designed to simplify the backup and restore procedure of your PostgreSQL cluster; the backup administrator doesn't need to learn about the internals of PostgreSQL backup techniques or write complex scripts. The plugin automatically backs up essential information such as the configuration, user definitions, and tablespaces. The PostgreSQL plugin supports both dump and PITR backup techniques.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Maximum Reload Requests

The new Director directive Maximum Reload Requests permits configuring the number of reload requests that can be done while jobs are running.

Director {
  Name = localhost-dir
  Maximum Reload Requests = 64
  ...
}

FD Storage Address

When the Director is behind a NAT in a WAN area, the Director uses an external IP address to connect to the Storage Daemon, while the File Daemon should use an internal IP address to contact the Storage Daemon.

The normal way to handle this situation is to use a canonical name such as storage-server that will be resolved on the Director side as the WAN address and on the Client side as the LAN address. It is now possible to configure this parameter using the new directive FDStorageAddress in the Storage or Client resource.

Storage {
     Name = storage1
     Address = 65.1.1.1
     FD Storage Address = 10.0.0.1
     SD Port = 9103
     ...
}

 Client {
      Name = client1
      Address = 65.1.1.2
      FD Storage Address = 10.0.0.1
      FD Port = 9102
      ...
 }

Note that using the Client FDStorageAddress directive will not allow the use of multiple Storage Daemons; all Backup or Restore requests will be sent to the specified FDStorageAddress.

Maximum Concurrent Read Jobs

This is a new directive that can be used in the bacula-dir.conf file in the Storage resource. The main purpose is to limit the number of concurrent Copy, Migration, and VirtualFull jobs so that they don't monopolize all the Storage drives causing a deadlock situation where all the drives are allocated for reading but none remain for writing. This deadlock situation can occur when running multiple simultaneous Copy, Migration, and VirtualFull jobs.

The default value is set to 0 (zero), which means there is no limit on the number of read jobs. Note, limiting the read jobs does not apply to Restore jobs, which are normally started by hand. A reasonable value for this directive is one half the number of drives that the Storage resource has, rounded down. Doing so will leave the same number of drives for writing and will generally avoid overcommitting drives and a deadlock.
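
For example, a sketch for a Storage resource backed by a ten-drive library (names are hypothetical):

# bacula-dir.conf
Storage {
  Name = TapeLibrary
  Address = sd.example.com
  Password = "xxx"
  Device = LTO-Changer
  Media Type = LTO-4
  Maximum Concurrent Jobs = 10
  Maximum Concurrent Read Jobs = 5   # half the drives, leaving five for writing
}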

Bacula Enterprise 6.0.4

VMWare vSphere VADP Plugin

The Bacula Enterprise vSphere plugin provides virtual machine bare metal recovery, while backup at the guest level simplifies data protection of critical applications.

The plugin integrates VMware's CBT technology to ensure that only blocks that have changed since the initial Full and/or the last Incremental or Differential backup are sent to the current Incremental or Differential backup stream, giving you more efficient backups and reduced network load.
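
As an illustrative sketch (the host value is hypothetical; see the vSphere plugin documentation for the full option list), a FileSet using the plugin follows the usual Plugin directive pattern:

FileSet {
  Name = VMWareFS
  Include {
    Options {
      Signature = MD5
    }
    # back up guests managed by the given host; value is illustrative
    Plugin = "vsphere: host=esx1"
  }
}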

Oracle RMAN Plugin

The Bacula Enterprise Oracle Plugin is designed to simplify the backup and restore procedure of your Oracle Database instance; the backup administrator doesn't need to learn about the internals of Oracle backup techniques or write complex scripts. The Bacula Enterprise Oracle plugin supports both dump and PITR with RMAN backup techniques.

Bacula Enterprise 6.0.2

To make Bacula function properly with multiple Autochanger definitions, in the Director's configuration, you must adapt your bacula-dir.conf Storage directives.

Each autochanger that you have defined in an Autochanger resource in the Storage daemon's bacula-sd.conf file must have a corresponding Autochanger resource defined in the Director's bacula-dir.conf file. Normally you will already have a Storage resource that points to the Storage daemon's Autochanger resource, so you only need to change the name of the Storage resource to Autochanger. In addition, the Autochanger = yes directive is not needed in the Director's Autochanger resource; since the resource name is Autochanger, the Director already knows that it represents an autochanger.

In addition to the above change (Storage to Autochanger), you must modify any additional Storage resources that correspond to devices that are part of the Autochanger device. Instead of the previous Autochanger = yes directive they should be modified to be Autochanger = xxx where you replace the xxx with the name of the Autochanger.

For example, in the bacula-dir.conf file:

Autochanger {             # New resource
  Name = Changer-1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO-Changer-1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 50
}

Storage {
  Name = Changer-1-Drive0
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive0
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1  # New directive
}

Storage {
  Name = Changer-1-Drive1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1  # New directive
}

...

Note that Storage resources Changer-1-Drive0 and Changer-1-Drive1 are not required since they make up part of an autochanger, and normally, Jobs refer only to the Autochanger resource. However, by referring to those Storage definitions in a Job, you will use only the indicated drive. This is not normally what you want to do, but it is very useful and often used for reserving a drive for restores. See the Storage daemon example .conf below and the use of AutoSelect = no.

So, in summary, the changes are:

  • Change Storage to Autochanger in the LTO4 resource.
  • Remove the Autochanger = yes from the Autochanger LTO4 resource.
  • Change the Autochanger = yes directive in each of the Storage devices that belong to the Autochanger so that it points to the Autochanger resource; for the example above, this is the directive Autochanger = LTO4.

Bacula Enterprise 6.0.0

Incomplete Jobs

If the Storage daemon experiences a disconnection from the File daemon during a backup (normally a comm line problem or possibly an FD failure), under conditions that the SD determines to be safe, it will mark the failed job as Incomplete rather than Failed. This is done only if there is sufficient valid backup data already written to the Volume. The advantage of an Incomplete job is that it can be restarted by the new bconsole restart command from the point where it left off rather than from the beginning of the job, as is the case when you cancel a job.

The stop Command

Bacula has been enhanced to provide a stop command, very similar to the cancel command, with the main difference that the Job that is stopped is marked as Incomplete so that it can be restarted later by the restart command from the point where it left off (see below). The stop command with no arguments will, like the cancel command, prompt you with the list of running jobs, allowing you to select one, which might look like the following:

*stop
Select Job:
     1: JobId=3 Job=Incremental.2012-03-26_12.04.26_07
     2: JobId=4 Job=Incremental.2012-03-26_12.04.30_08
     3: JobId=5 Job=Incremental.2012-03-26_12.04.36_09
Choose Job to stop (1-3): 2
2001 Job "Incremental.2012-03-26_12.04.30_08" marked to be stopped.
3000 JobId=4 Job="Incremental.2012-03-26_12.04.30_08" marked to be stopped.

The restart Command

The new restart command allows console users to restart a canceled, failed, or incomplete Job. For canceled and failed Jobs, the Job will restart from the beginning. For incomplete Jobs the Job will restart at the point that it was stopped either by a stop command or by some recoverable failure.

If you enter the restart command in bconsole, you will get the following prompts:

*restart
You have the following choices:
     1: Incomplete
     2: Canceled
     3: Failed
     4: All
Select termination code:  (1-4):

If you select the All option, you may see something like:

Select termination code:  (1-4): 4
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
| jobid | name        | starttime           | type | level | jobfiles | jobbytes  | jobstatus |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
|     1 | Incremental | 2012-03-26 12:15:21 | B    | F     |        0 |         0 | A         |
|     2 | Incremental | 2012-03-26 12:18:14 | B    | F     |      350 | 4,013,397 | I         |
|     3 | Incremental | 2012-03-26 12:18:30 | B    | F     |        0 |         0 | A         |
|     4 | Incremental | 2012-03-26 12:18:38 | B    | F     |      331 | 3,548,058 | I         |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
Enter the JobId list to select:

Then you may enter one or more JobIds to be restarted, which may take the form of a list of JobIds separated by commas, and/or JobId ranges such as 1-4, which indicates you want to restart JobIds 1 through 4, inclusive.

Support for Exchange Incremental Backups

The Bacula Enterprise version 6.0 VSS plugin now supports Full and Incremental backups for Exchange. We strongly recommend that you do not attempt to run Differential jobs with Exchange as it is likely to produce a situation where restores will no longer select the correct jobs, and thus the Windows Exchange VSS writer will fail when applying log files. There is a Bacula Systems Enterprise white paper that provides the details of backup and restore of Exchange 2010 with the Bacula VSS plugin.

Restores can be done while Exchange is running, but you must first unmount (dismount in Microsoft terms) any database you wish to restore and explicitly mark them to permit a restore operation (see the white paper for details).

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Support for MSSQL Block Level Backups

With the addition of block level backup support to the Bacula Enterprise VSS MSSQL component, you can now do Differential backups in addition to Full backups. Differential backups use Microsoft's partial block backup (a block differencing or deduplication that we call Delta). This partial block backup permits backing up only those blocks that have changed. Database restores can be made while the MSSQL server is running, but any databases selected for restore will be automatically taken offline by the MSSQL server during the restore process.

Incremental backups for MSSQL are not supported by Microsoft. We strongly recommend that you do not perform Incremental backups with MSSQL, as they will probably produce a situation where restores will no longer work correctly.

We are currently working on producing a white paper that will give more details of backup and restore with MSSQL. One point to note is that during a restore, you will normally not want to restore the master database. You must exclude it from the backup selections that you have made or the restore will fail.

It is possible to restore the master database, but you must first shutdown the MSSQL server, then you must perform special recovery commands. Please see Microsoft documentation on how to restore the master database.

This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Job Bandwidth Limitation

The new Job Bandwidth Limitation directive may be added to the File daemon's and/or Director's configuration to limit the bandwidth used by a Job on a Client. It can be set in the File daemon's conf file for all Jobs run in that File daemon, or it can be set for each Job in the Director's conf file. The speed is always specified in bytes per second.

For example:

FileDaemon {
  Name = localhost-fd
  Working Directory = /some/path
  Pid Directory = /some/path
  ...
  Maximum Bandwidth Per Job = 5Mb/s
}

The above example would cause any jobs running with the FileDaemon to not exceed 5 megabytes per second of throughput when sending data to the Storage Daemon. Note, the speed is always specified in bytes per second (not in bits per second), and the case (upper/lower) of the specification characters is ignored (i.e. 1MB/s = 1Mb/s).

You may specify the following speed parameter modifiers: k/s (1,024 bytes per second), kb/s (1,000 bytes per second), m/s (1,048,576 bytes per second), or mb/s (1,000,000 bytes per second).

For example:

Job {
  Name = localhost-data
  FileSet = FS_localhost
  Accurate = yes
  ...
  Maximum Bandwidth = 5Mb/s
  ...
}

The above example would cause Job localhost-data to not exceed 5MB/s of throughput when sending data from the File daemon to the Storage daemon.

A new console command, setbandwidth, permits dynamically setting the maximum throughput of a running Job or of future jobs of a Client.

* setbandwidth limit=1000 jobid=10

The number of bytes can be expressed using the modifiers mentioned above (k/s, kb/s, m/s, or mb/s).
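
The command also accepts a client keyword to apply the limit to future jobs of a given Client, for example:

* setbandwidth limit=5mb/s client=localhost-fd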


This project was funded by Bacula Systems and is available in the Bacula Enterprise Edition.

Incremental/Differential Block Level Difference Backup

The new delta Plugin is able to compute and apply signature-based file differences. It can be used to back up only the changes in big binary files such as Outlook PST files, VirtualBox/VMware images, or database files.

It supports both Incremental and Differential backups and stores its signature database in the File Daemon working directory. This plugin is available on all platforms, including 32 and 64 bit Windows.

The Accurate option should be turned on in the Job resource:

Job {
 Accurate = yes
 FileSet = DeltaFS
 ...
}

FileSet {
 Name = DeltaFS
 ...
 Include {
   # Specify one file
   Plugin = "delta:/home/eric/.VirtualBox/HardDisks/lenny-i386.vdi"
 }
}

FileSet {
 Name = DeltaFS-Include
 ...
 Include {
   Options {
      Compression = GZIP1
      Signature = MD5
      Plugin = delta
   }
   # Use the Options{} filtering and options
   File = /home/user/.VirtualBox
 }
}

Please contact Bacula Systems support to get Delta Plugin specific documentation.


This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

SAN Shared Tape Storage Plugin

The problem with backing up multiple servers at the same time to the same tape library (or autoloader) is that if both servers access the same tape drive at the same time, you will very likely get data corruption. This is where the Bacula Systems shared tape storage plugin comes into play. The plugin ensures that only one server at a time can connect to each device (tape drive) by using the SPC-3 SCSI reservation protocol. Please contact Bacula Systems support to get the SAN Shared Storage Plugin specific documentation.


This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Advanced Autochanger Usage

The new Shared Storage Director directive is a Bacula Enterprise feature that allows you to share volumes between different Storage resources. This directive should be used only if all Media Types are correctly set across all Devices.

The Shared Storage directive should be used when using the SAN Shared Storage plugin or when accessing Devices of an Autochanger directly from Director Storage resources.

When sharing volumes between different Storage resources, you will also need to use the reset-storageid script before using the update slots command. This script can be scheduled once a day in an Admin job.

 $ /opt/bacula/scripts/reset-storageid MediaType StorageName
 $ bconsole
 * update slots storage=StorageName drive=0

Please contact Bacula Systems support to get help on this advanced configuration.


This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.


The reset-storageid procedure is no longer required when using the appropriate Autochanger configuration on the Director side.

Enhancement of the NDMP Plugin

The previous NDMP Plugin 4.0 fully supported only NetApp hardware; the new NDMP Plugin should now be able to support all NAS vendors with the volume_format plugin command option.

On some NDMP devices such as Celera or Blueray, the administrator can use arbitrary volume structure names, for example:

/dev/volume_home
/rootvolume/volume_tmp
/VG/volume_var

The NDMP plugin should be aware of the structure organization in order to detect whether the administrator wants to restore to a new volume (where=/dev/vol_tmp) or into a subdirectory of the targeted volume (where=/tmp).

FileSet {
 Name = NDMPFS
 ...
 Include {
   Plugin = "ndmp:host=nasbox user=root pass=root file=/dev/vol1 volume_format=/dev/"
 }
}

Please contact Bacula Systems support to get NDMP Plugin specific documentation.


This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition.

Always Backup a File

When Accurate mode is turned on, you can decide to always back up a file by using the new A Accurate option in your FileSet. For example:

Job {
   Name = ...
   FileSet = FS_Example
   Accurate = yes
   ...
}

FileSet {
 Name = FS_Example
 Include {
   Options {
     Accurate = A
   }
   File = /file
   File = /file2
 }
 ...
}

This project was funded by Bacula Systems based on an idea of James Harper and is available with the Bacula Enterprise Edition.

Setting Accurate Mode at Runtime

You are now able to specify the Accurate mode on the run command and in the Schedule resource.

* run accurate=yes job=Test

Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 23:05
  Run = Differential accurate=yes 2nd-5th sun at 23:05
  Run = Incremental  accurate=no  mon-sat at 23:05
}

It can allow you to save memory and CPU resources on the catalog server in some cases.


These advanced tuning options are available with the Bacula Enterprise Edition.

Additions to RunScript variables

You now have access to JobBytes, JobFiles, and the Director name using %b, %F, and %D in your RunScript commands. The Client address is now available through %h.

RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h Dir=%D"


LZO Compression

LZO compression was added to the Unix File Daemon. From the user's point of view, it works like the GZIP compression (just replace compression=GZIP with compression=LZO).

For example:

Include {
   Options { compression=LZO }
   File = /home
   File = /data
}

LZO provides much faster compression and decompression speed but lower compression ratio than GZIP. It is a good option when you backup to disk. For tape, the built-in compression may be a better option.

LZO is a good alternative to GZIP1 when you don't want to slow down your backup. On a modern CPU it should be able to run almost as fast as:

  • your client can read data from disk, unless you have very fast disks like an SSD or a large/fast RAID array.
  • the data transfer between the File Daemon and the Storage Daemon, even on a 1Gb/s link.

Note that Bacula uses only one compression level, LZO1X-1.


The code for this feature was contributed by Laurent Papier.

New Tray Monitor

Since the old integrated Windows tray monitor doesn't work with recent Windows versions, we have written a new Qt Tray Monitor that is available for both Linux and Windows. In addition to all the previous features, this new version allows you to run Backups from the tray monitor menu.

To be able to run a job from the tray monitor, you need to allow specific commands in the Director monitor console:

Console {
    Name = win2003-mon
    Password = "xxx"
    CommandACL = status, .clients, .jobs, .pools, .storage, .filesets, .messages, run
    ClientACL = *all*               # you can restrict to a specific host
    CatalogACL = *all*
    JobACL = *all*
    StorageACL = *all*
    ScheduleACL = *all*
    PoolACL = *all*
    FileSetACL = *all*
    WhereACL = *all*
}


This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition and the Community Edition.

Purge Migration Job

The new Purge Migration Job directive may be added to the Migration Job definition in the Director's configuration file. When it is enabled, the Job that was migrated during a migration will be purged at the end of the migration job.

For example:

Job {
  Name = "migrate-job"
  Type = Migrate
  Level = Full
  Client = localhost-fd
  FileSet = "Full Set"
  Messages = Standard
  Storage = DiskChanger
  Pool = Default
  Selection Type = Job
  Selection Pattern = ".*Save"
...
  Purge Migration Job = yes
}


This project was submitted by Dunlap Blake; testing and documentation was funded by Bacula Systems.

Changes in the Pruning Algorithm

We rewrote the job pruning algorithm in this version. Previously, some users reported that the pruning process at the end of jobs was very long. This should no longer be the case. Now, Bacula won't automatically prune a Job if that particular Job is needed to restore data. Example:

JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
.. Other incrementals up to now

In this example, if the Job Retention defined in the Pool or in the Client resource allows Jobs with JobIds 1, 2, 3, and 4 to be pruned, Bacula will detect that JobIds 1 and 4 are essential to restore data at the current state and will prune only JobIds 2 and 3.

Important: this change affects only the automatic pruning step after a Job and the prune jobs Bacula Console command. If a volume expires after the VolumeRetention period, important jobs can still be pruned.

Ability to Verify any specified Job

You now have the ability to tell Bacula which Job it should verify, instead of it automatically verifying just the last one.

This feature can be used with the VolumeToCatalog, DiskToCatalog, and Catalog levels.

To verify a given job, just specify its jobid as an argument when starting the verify job.

*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName:     VerifyVolume
Level:       VolumeToCatalog
Client:      127.0.0.1-fd
FileSet:     Full Set
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
Priority:    10
OK to run? (yes/mod/no):


This project was funded by Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.