
New Features in 5.2.x

This chapter presents the new features that have been added to the next Community version of Bacula that is not yet released.

New Features in 5.2.2

This chapter presents the new features that have been added to the current Community version of Bacula that is now released.

Additions to RunScript variables

You can access the Director name using %D in your RunScript command.

RunAfterJob = "/bin/echo Director=%D"

New Features in 5.2.1

This chapter presents the new features that were added in the Community release version 5.2.1.

There are additional features (plugins) available in the Enterprise version that are described in another chapter. A subscription to Bacula Systems is required for the Enterprise version.

LZO Compression

LZO compression has been added to the File daemon. From the user's point of view, it works like the GZIP compression (just replace compression=GZIP with compression=LZO).

For example:

Include {
   Options { compression=LZO }
   File = /home
   File = /data
}

LZO provides much faster compression and decompression but a lower compression ratio than GZIP. It is a good option when you back up to disk. For tape, hardware compression is almost always a better option.

LZO is a good alternative to GZIP1 when you don't want to slow down your backup; on a modern CPU, LZO compression should be fast enough that it is unlikely to be the bottleneck.

Note that Bacula uses compression level LZO1X-1.


The code for this feature was contributed by Laurent Papier.

New Tray Monitor

Since the old integrated Windows tray monitor doesn't work with recent Windows versions, we have written a new Qt Tray Monitor that is available for both Linux and Windows. In addition to all the previous features, this new version allows you to run Backups from the tray monitor menu.

Figure 2.1: New tray monitor
\includegraphics[width=10cm]{tray-monitor}

Figure 2.2: Run a Job through the new tray monitor
\includegraphics[width=10cm]{tray-monitor1}

To be able to run a job from the tray monitor, you need to allow specific commands in the Director monitor console:

Console {
    Name = win2003-mon
    Password = "xxx"
    CommandACL = status, .clients, .jobs, .pools, .storage, .filesets, .messages, run
    ClientACL = *all*               # you can restrict to a specific host
    CatalogACL = *all*
    JobACL = *all*
    StorageACL = *all*
    ScheduleACL = *all*
    PoolACL = *all*
    FileSetACL = *all*
    WhereACL = *all*
}


This project was funded by Bacula Systems and is available with the Bacula Enterprise Edition and the Community Edition.

Purge Migration Job

The new Purge Migration Job directive may be added to the Migration Job definition in the Director's configuration file. When it is enabled, the Job that was migrated during a migration will be purged at the end of the migration job.

For example:

Job {
  Name = "migrate-job"
  Type = Migrate
  Level = Full
  Client = localhost-fd
  FileSet = "Full Set"
  Messages = Standard
  Storage = DiskChanger
  Pool = Default
  Selection Type = Job
  Selection Pattern = ".*Save"
...
  Purge Migration Job = yes
}


This project was submitted by Dunlap Blake; testing and documentation was funded by Bacula Systems.

Changes in Bvfs (Bacula Virtual FileSystem)

Bat now has a bRestore panel that uses Bvfs to display files and directories.

Figure 2.3: Bat Brestore Panel
\includegraphics[width=12cm]{bat-brestore}

The Bvfs module works correctly with BaseJobs, Copy and Migration jobs.


This project was funded by Bacula Systems.

General notes

Get dependent jobs from a given JobId

Bvfs allows you to query the catalog against any combination of jobs. You can combine all Jobs and all FileSets for a Client in a single session.

To get all JobId needed to restore a particular job, you can use the .bvfs_get_jobids command.

.bvfs_get_jobids jobid=num [all]

.bvfs_get_jobids jobid=10
1,2,5,10
.bvfs_get_jobids jobid=10 all
1,2,3,5,10

In this example, a normal restore will need to use JobIds 1,2,5,10 to compute a complete restore of the system.

With the all option, the Director will use all defined FileSets for this client.

Generating Bvfs cache

The .bvfs_update command computes the directory cache for jobs specified in argument, or for all jobs if unspecified.

.bvfs_update [jobid=numlist]

Example:

.bvfs_update jobid=1,2,3

You can run the cache update process in a RunScript after the catalog backup.
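For example, a minimal sketch mirroring the RunScript Console example used later in this chapter for purging volumes (the job name CatalogBackup is an assumed example):

Job {
 Name = CatalogBackup
 ...
 RunScript {
   RunsWhen=After
   RunsOnClient=No
   Console = ".bvfs_update"
 }
}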

Get all versions of a specific file

Bvfs allows you to find all versions of a specific file for a given Client with the .bvfs_version command. To avoid problems with encoding, this function uses only PathId and FilenameId. The jobid argument is mandatory but unused.

.bvfs_versions client=filedaemon pathid=num filenameid=num jobid=1
PathId FilenameId FileId JobId LStat Md5 VolName Inchanger
PathId FilenameId FileId JobId LStat Md5 VolName Inchanger
...

Example:

.bvfs_versions client=localhost-fd pathid=1 fnid=47 jobid=1
1  47  52  12  gD HRid IGk D Po Po A P BAA I A   /uPgWaxMgKZlnMti7LChyA  Vol1  1

List directories

Bvfs allows you to list directories in a specific path.

.bvfs_lsdirs pathid=num path=/apath jobid=numlist limit=num offset=num
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
...

You need to specify either pathid or path. Using path="" will list ``/'' on Unix and all drives on Windows. If FilenameId is 0, the record listed is a directory.

.bvfs_lsdirs pathid=4 jobid=1,11,12
4       0       0       0       A A A A A A A A A A A A A A     .
5       0       0       0       A A A A A A A A A A A A A A     ..
3       0       0       0       A A A A A A A A A A A A A A     regress/

In this example, to list directories present in regress/, you can use

.bvfs_lsdirs pathid=3 jobid=1,11,12
3       0       0       0       A A A A A A A A A A A A A A     .
4       0       0       0       A A A A A A A A A A A A A A     ..
2       0       0       0       A A A A A A A A A A A A A A     tmp/

List files

Bvfs allows you to list files in a specific path.

.bvfs_lsfiles pathid=num path=/apath jobid=numlist limit=num offset=num
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
PathId  FilenameId  FileId  JobId  LStat  Path
...

You need to specify either pathid or path. Using path="" will list ``/'' on Unix and all drives on Windows. If FilenameId is 0, the record listed is a directory.

.bvfs_lsfiles pathid=4 jobid=1,11,12
4       0       0       0       A A A A A A A A A A A A A A     .
5       0       0       0       A A A A A A A A A A A A A A     ..
1       0       0       0       A A A A A A A A A A A A A A     regress/

In this example, to list files present in regress/, you can use

.bvfs_lsfiles pathid=1 jobid=1,11,12
1   47   52   12    gD HRid IGk BAA I BMqcPH BMqcPE BMqe+t A     titi
1   49   53   12    gD HRid IGk BAA I BMqe/K BMqcPE BMqe+t B     toto
1   48   54   12    gD HRie IGk BAA I BMqcPH BMqcPE BMqe+3 A     tutu
1   45   55   12    gD HRid IGk BAA I BMqe/K BMqcPE BMqe+t B     ficheriro1.txt
1   46   56   12    gD HRie IGk BAA I BMqe/K BMqcPE BMqe+3 D     ficheriro2.txt

Restore set of files

Bvfs allows you to create a SQL table that contains files that you want to restore. This table can be provided to a restore command with the file option.

.bvfs_restore fileid=numlist dirid=numlist hardlink=numlist path=b2num
OK
restore file=?b2num ...

To include a directory (with dirid), Bvfs needs to run a query to select all files, which can be time-consuming.

The hardlink list is always composed of a series of two numbers (jobid, fileindex). This information can be found in the LinkFI field of the LStat packet.

The path argument represents the name of the table in which Bvfs will store its results. The table name must match b2[0-9]+ (i.e. it must start with b2 followed by digits).

Example:

.bvfs_restore fileid=1,2,3,4 hardlink=10,15,10,20 jobid=10 path=b20001
OK
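
Once the table is populated, it can be passed to the restore command through the file option. A minimal sketch (the client name and where location are assumed for illustration):

restore client=localhost-fd file=?b20001 where=/tmp/bacula-restores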

Cleanup after Restore

To drop the table used by the restore command, you can use the .bvfs_cleanup command.

.bvfs_cleanup path=b20001

Clearing the BVFS Cache

To clear the BVFS cache, you can use the .bvfs_clear_cache command.

.bvfs_clear_cache yes
OK

Changes in the Pruning Algorithm

We rewrote the job pruning algorithm in this version. Previously, some users reported that the pruning process at the end of jobs was very long. This should no longer be the case. Now, Bacula will not automatically prune a Job if that particular Job is needed to restore data. Example:

JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
.. Other incrementals up to now

In this example, if the Job Retention defined in the Pool or in the Client resource allows Jobs with JobIds 1, 2, 3 and 4 to be pruned, Bacula will detect that JobIds 1 and 4 are essential to restore data to the current state and will prune only JobIds 2 and 3.

Important: this change affects only the automatic pruning step after a Job and the prune jobs bconsole command. If a volume expires after the VolumeRetention period, important jobs can still be pruned.
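
For reference, the manual command mentioned above takes this form in bconsole (the client name is an assumed example):

prune jobs client=localhost-fd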

Ability to Verify any specified Job

You now have the ability to tell Bacula which Job it should verify, instead of it automatically verifying only the last one.

This feature can be used with VolumeToCatalog, DiskToCatalog and Catalog level.

To verify a given job, just specify its jobid as an argument when starting the Verify job.

*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName:     VerifyVolume
Level:       VolumeToCatalog
Client:      127.0.0.1-fd
FileSet:     Full Set
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
Priority:    10
OK to run? (yes/mod/no):


This project was funded by Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.

Additions to RunScript variables

You can access JobBytes and JobFiles using %b and %F in your RunScript command. The Client address is now available through %h.

RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h"

Additions to the Plugin API

The bfuncs structure has been extended to include a number of new entrypoints.

bfuncs

The bFuncs structure defines the callback entry points within Bacula that the plugin can use to register events, get Bacula values, set Bacula values, and send messages to the Job output or debug output.

The exact definition as of this writing is:

typedef struct s_baculaFuncs {
   uint32_t size;
   uint32_t version;
   bRC (*registerBaculaEvents)(bpContext *ctx, ...);
   bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
       int type, utime_t mtime, const char *fmt, ...);
   bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
       int level, const char *fmt, ...);
   void *(*baculaMalloc)(bpContext *ctx, const char *file, int line,
       size_t size);
   void (*baculaFree)(bpContext *ctx, const char *file, int line, void *mem);
   
   /* New functions follow */
   bRC (*AddExclude)(bpContext *ctx, const char *file);
   bRC (*AddInclude)(bpContext *ctx, const char *file);
   bRC (*AddIncludeOptions)(bpContext *ctx, const char *opts);
   bRC (*AddRegex)(bpContext *ctx, const char *item, int type);
   bRC (*AddWild)(bpContext *ctx, const char *item, int type);
   bRC (*checkChanges)(bpContext *ctx, struct save_pkt *sp);

} bFuncs;

AddExclude
can be called to exclude a file. The file string passed may include wildcards that will be interpreted by the fnmatch subroutine. This function can be called multiple times, and each time the file specified will be added to the list of files to be excluded. Note, this function only permits adding excludes of specific file or directory names, or files matched by the rather simple fnmatch mechanism. See below for information on doing wild-card and regex excludes.

NewPreInclude
can be called to create a new Include block. This block will be added after the current defined Include block. This function can be called multiple times, but each time, it will create a new Include section (not normally needed). This function should be called only if you want to add an entirely new Include block.

NewInclude
can be called to create a new Include block. This block will be added before any user defined Include blocks. This function can be called multiple times, but each time, it will create a new Include section (not normally needed). This function should be called only if you want to add an entirely new Include block.

AddInclude
can be called to add new files/directories to be included. They are added to the current Include block. If NewInclude has not been included, the current Include block is the last one that the user created. This function should be used only if you want to add totally new files/directories to be included in the backup.

NewOptions
adds a new Options block to the current Include in front of any other Options blocks. This permits the plugin to add exclude directives (wild-cards and regexes) in front of the user Options, and thus prevent certain files from being backed up. This can be useful if the plugin backs up files, and they should not be also backed up by the main Bacula code. This function may be called multiple times, and each time, it creates a new prepended Options block. Note: normally you want to call this entry point prior to calling AddOptions, AddRegex, or AddWild.

AddOptions
allows the plugin to set options in the current Options block, which is normally created with the NewOptions call just prior to adding Include Options. The permitted options are passed as a character string, where each character has a specific meaning as defined below:

a
always replace files (default).
e
exclude rather than include.
h
no recursion into subdirectories.
H
do not handle hard links.
i
ignore case in wildcard and regex matches.
M
compute an MD5 sum.
p
use a portable data format on Windows (not recommended).
R
backup resource forks and Finder Info.
r
read from a fifo.
S1
compute an SHA1 sum.
S2
compute an SHA256 sum.
S3
compute an SHA512 sum.
s
handle sparse files.
m
use st_mtime only for file differences.
k
restore the st_atime after accessing a file.
A
enable ACL backup.
Vxxx:
specify verify options. Must terminate with :
Cxxx:
specify accurate options. Must terminate with :
Jxxx:
specify base job Options. Must terminate with :
Pnnn:
specify integer nnn paths to strip. Must terminate with :
w
replace files only if newer.
Zn
specify gzip compression level n.
K
do not use st_atime in backup decision.
c
check if file changed during backup.
N
honor no dump flag.
X
enable backup of extended attributes.

AddRegex
adds a regex expression to the current Options block. The following options are permitted:
(a blank) regex applies to whole path and filename.
F
regex applies only to the filename (directory or path stripped).
D
regex applies only to the directory (path) part of the name.

AddWild
adds a wildcard expression to the current Options block. The following options are permitted:
(a blank) wildcard applies to whole path and filename.
F
wildcard applies only to the filename (directory or path stripped).
D
wildcard applies only to the directory (path) part of the name.

checkChanges
calls the check_changes() function in the Bacula core code, which can use the Accurate code to compare the file information passed as argument with the previous file information. The delta_seq attribute of the save_pkt will be updated, and the call will return bRC_Seen if the core code would not back up the file.

Bacula events

The list of events has been extended to include:

typedef enum {
  bEventJobStart        = 1,
  bEventJobEnd          = 2,
  bEventStartBackupJob  = 3,
  bEventEndBackupJob    = 4,
  bEventStartRestoreJob = 5,
  bEventEndRestoreJob   = 6,
  bEventStartVerifyJob  = 7,
  bEventEndVerifyJob    = 8,
  bEventBackupCommand   = 9,
  bEventRestoreCommand  = 10,
  bEventLevel           = 11,
  bEventSince           = 12,
   
  /* New events */
  bEventCancelCommand                   = 13,
  bEventVssBackupAddComponents          = 14,
  bEventVssRestoreLoadComponentMetadata = 15,
  bEventVssRestoreSetComponentsSelected = 16,
  bEventRestoreObject                   = 17,
  bEventEndFileSet                      = 18,
  bEventPluginCommand                   = 19,
  bEventVssBeforeCloseRestore           = 20,
  bEventVssPrepareSnapshot              = 21

} bEventType;

bEventCancelCommand
is called whenever the currently running Job is canceled.

bEventVssBackupAddComponents

bEventVssPrepareSnapshot
is called before creating VSS snapshots. It provides a char[27] array where the plugin can add the Windows drives that will be used during the Job. You need to add them without duplicates; the add_drive() and copy_drives() functions in fd_common.h can be used for this purpose.

ACL enhancements

The following enhancements have been made to the Bacula File daemon with regard to Access Control Lists (ACLs).


This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and is available with Bacula Enterprise Edition and Community Edition.

XATTR enhancements

The following enhancements have been made to the Bacula File daemon with regard to Extended Attributes (XATTRs).


This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and is available with Bacula Enterprise Edition and Community Edition.

Class Based Database Backend Drivers

The main Bacula Director code is independent of the SQL backend in version 5.2.0 and greater. This means that the Bacula Director can be packaged by itself, and each of the supported SQL backends can be packaged separately. It is also possible to build all the DB backends in one pass by specifying multiple database options at configure time.

./configure can be run with multiple database configure options.

   --with-sqlite3
   --with-mysql
   --with-postgresql
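
For example, to configure all three backends for a single build (other configure arguments omitted):

   ./configure --with-sqlite3 --with-mysql --with-postgresql ...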

The order of testing for databases places PostgreSQL first, which is why PostgreSQL becomes the default when more than one backend is configured (see below).

Each configured backend generates a shared library named libbaccats-<sql_backend_name>-<version>.so. A dummy catalog library named libbaccats-<version>.so is also created.

At configure time, the first detected backend is used as the so-called default backend, and at install time the dummy libbaccats-<version>.so is replaced with the default backend type.

If you configure all three backends, you get three backend libraries, and PostgreSQL is installed as the default.

When you want to switch to another database, first save any old catalog you may have; then copy one of the three backend libraries over libbaccats-<version>.so.

An actual command, depending on your Bacula version, might be:

   cp libbaccats-postgresql-5.2.2.so libbaccats-5.2.2.so

where the 5.2.2 must be replaced by the Bacula release version number.

Then you must update the default backend in the following files:

  create_bacula_database
  drop_bacula_database
  drop_bacula_tables
  grant_bacula_privileges
  make_bacula_tables
  make_catalog_backup
  update_bacula_tables

Then re-run all the above scripts. Please note that this means you will have a new, empty database; if you had a previous one, it will be lost.

All current database backend drivers for catalog information have been rewritten to use a set of multiply inherited C++ classes which abstract the database-specific internals and ensure a more stable, generic interface with the rest of the SQL code. From now on there is a strict boundary between the SQL code and the low-level database functions. This new interface should also make it easier to add a backend for a currently unsupported database. As part of the rewrite, the SQLite 2 code was removed (only SQLite 3 is now supported).

An extra bonus of the new code is that you can specify multiple backends at configure time, build them all in one compile session, and select the correct database backend at install time. This should make things a lot easier for package maintainers.


We also added cursor support for the PostgreSQL backend; this improves memory usage for large installations.


This project was implemented by Planets Communications B.V. and ELM Consultancy B.V. and Bacula Systems and is available with both the Bacula Enterprise Edition and the Community Edition.

Hash List Enhancements

The htable hash table class has been extended with extra hash functions that handle 32-bit and 64-bit hash keys in addition to char pointer hashes. The hash table initialization routines have also been enhanced to accept a hint for the number of initial pages to use for the size of the hash table; until now, the hash table always used a fixed value of 10 Mb. The private hash functions of the mountpoint entry cache have been rewritten to use the new htable class with a small memory footprint.


This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.

Release Version 5.0.3

There are no new features in version 5.0.3. This version simply fixes a number of bugs found in version 5.0.2 during the ongoing development process.

Release Version 5.0.2

There are no new features in version 5.0.2. This version simply fixes a number of bugs found in version 5.0.1 during the ongoing development process.

New Features in 5.0.1

This chapter presents the new features that are in the released Bacula version 5.0.1. This version mainly fixes a number of bugs found in version 5.0.0 during the ongoing development process.


Truncate Volume after Purge

The Pool directive ActionOnPurge=Truncate instructs Bacula to truncate the volume when it is purged with the new purge volume action command. It is useful to prevent disk-based volumes from consuming too much space.

Pool {
  Name = Default
  Action On Purge = Truncate
  ...
}

As usual, you can also set this property with the update volume command:

*update volume=xxx ActionOnPurge=Truncate
*update volume=xxx actiononpurge=None

To ask Bacula to truncate your Purged volumes, you need to use the following command in interactive mode or in a RunScript, as shown below:

*purge volume action=truncate storage=File allpools
# or by default, action=all
*purge volume action storage=File pool=Default

It is possible to specify the volume name, the media type, the pool, the storage, etc. (see help purge). Be sure that your storage device is idle when you decide to run this command.

Job {
 Name = CatalogBackup
 ...
 RunScript {
   RunsWhen=After
   RunsOnClient=No
   Console = "purge volume action=all allpools storage=File"
 }
}

Important note: This feature doesn't work as expected in version 5.0.0. Please do not use it before version 5.0.1.

Allow Higher Duplicates

This directive did not work correctly and has been deprecated (disabled) in version 5.0.1. Please remove it from your bacula-dir.conf file as it will be removed in a future release.

Cancel Lower Level Duplicates

This directive was added in Bacula version 5.0.1. It compares the level of a new backup job to old jobs of the same name, if any, and will cancel the job that has the lower level. If the levels are the same (i.e. both are Full backups), then nothing is done and the other Cancel XXX Duplicate directives will be examined.
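
For example, a minimal sketch of a Job resource using this directive (the job name and the Allow Duplicate Jobs setting are illustrative assumptions):

Job {
  Name = BackupClient1
  ...
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
}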

New Features in 5.0.0


Maximum Concurrent Jobs for Devices

Maximum Concurrent Jobs is a new Device directive in the Storage Daemon configuration that permits setting the maximum number of Jobs that can run concurrently on a specified Device. Using this directive, it is possible to have different Jobs using multiple drives, because when the Maximum Concurrent Jobs limit is reached, the Storage Daemon will start new Jobs on any other available compatible drive. This facilitates writing to multiple drives with multiple Jobs that all use the same Pool.
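
For example, a minimal sketch of a Storage Daemon Device resource (the device name is an illustrative assumption):

Device {
  Name = Drive-1
  ...
  Maximum Concurrent Jobs = 2
}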

This project was funded by Bacula Systems.

Restore from Multiple Storage Daemons

Previously, you were able to restore from multiple devices in a single Storage Daemon. Now, Bacula is able to restore from multiple Storage Daemons. For example, if your full backup runs on a Storage Daemon with an autochanger, and your incremental jobs use another Storage Daemon with lots of disks, Bacula will switch automatically from one Storage Daemon to another within the same Restore job.

You must upgrade your File Daemon to version 3.1.3 or greater to use this feature.

This project was funded by Bacula Systems with the help of Equiinet.

File Deduplication using Base Jobs

A base job is sort of like a Full save except that you will want the FileSet to contain only files that are unlikely to change in the future (i.e. a snapshot of most of your system after installing it). After the base job has been run, when you are doing a Full save, you specify one or more Base jobs to be used. All files that have been backed up in the Base job/jobs but not modified will then be excluded from the backup. During a restore, the Base jobs will be automatically pulled in where necessary.
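
A minimal sketch of the corresponding Director configuration (the job names are illustrative assumptions; see the Base Jobs chapter for the exact directives):

Job {
  Name = BaseBackup
  Level = Base
  ...
}

Job {
  Name = FullBackup
  Level = Full
  Base = BaseBackup
  ...
}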

This is something none of the competition does, as far as we know (except perhaps BackupPC, which is a Perl program that saves to disk only). It is a big win for the user; it makes Bacula stand out as offering a unique optimization that immediately saves time and money. Basically, imagine that you have 100 nearly identical Windows or Linux machines containing the OS and user files. For the OS part, a Base job will be backed up once, and rather than making 100 copies of the OS, there will be only one. If one or more of the systems have some files updated, no problem; they will be automatically restored.

See the Base Jobs chapter for more information.

This project was funded by Bacula Systems.

AllowCompression = <yes|no>

This new directive may be added to a Storage resource within the Director's configuration to allow users to selectively disable client compression for any job which writes to this storage resource.

For example:

Storage {
  Name = UltriumTape
  Address = ultrium-tape
  Password = storage_password # Password for Storage Daemon
  Device = Ultrium
  Media Type = LTO 3
  AllowCompression = No # Tape drive has hardware compression
}

The above example would cause any jobs running with the UltriumTape storage resource to run without compression from the client file daemons. This effectively overrides any compression settings defined at the FileSet level.

This feature is probably most useful if you have a tape drive which supports hardware compression. By setting the AllowCompression = No directive for your tape drive storage resource, you can avoid additional load on the file daemon and possibly speed up tape backups.

This project was funded by Collaborative Fusion, Inc.


Accurate Fileset Options

In previous versions, the accurate code used the file creation and modification times to determine if a file was modified or not. Now you can specify which attributes to use (time, size, checksum, permission, owner, group, ...), similar to the Verify options.

FileSet {
  Name = Full
  Include {
    Options {
       Accurate = mcs
       Verify   = pin5
    }
    File = /
  }
}

i compare the inodes
p compare the permission bits
n compare the number of links
u compare the user id
g compare the group id
s compare the size
a compare the access time
m compare the modification time (st_mtime)
c compare the change time (st_ctime)
d report file size decreases
5 compare the MD5 signature
1 compare the SHA1 signature

Important note: If you decide to use checksum in Accurate jobs, the File Daemon will have to read all files even if they normally would not be saved. This increases the I/O load, but also the accuracy of the deduplication. By default, Bacula will check modification/creation time and size.

This project was funded by Bacula Systems.


Tab-completion for Bconsole

If you build bconsole with readline support, you will be able to use the new auto-completion mode. This mode supports all commands, gives help inside commands, and lists resources when required. It also works in restore mode.

To use this feature, you should have the readline development package installed on your system, and use the following option with configure:

./configure --with-readline=/usr/include/readline --disable-conio ...

The new bconsole won't be able to tab-complete with older directors.

This project was funded by Bacula Systems.


Pool File and Job Retention

We added two new Pool directives, FileRetention and JobRetention, which take precedence over Client directives of the same name. They allow you to control the Catalog pruning algorithm Pool by Pool. For example, you can decide to increase retention times for an Archive or OffSite Pool.
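
For example, a minimal sketch (the pool name and retention periods are illustrative assumptions):

Pool {
  Name = OffSite
  ...
  File Retention = 1 year
  Job Retention = 2 years
}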

It seems obvious to us, but apparently not to some users, that given the definition above, the Pool File and Job Retention periods are a global override for the normal Client-based pruning, which means that when a Job is pruned, the pruning applies globally to that particular Job.

Currently, there is a bug in the implementation that causes any Pool retention periods specified to apply to all Pools for that particular Client. Thus we suggest that you avoid using these two directives until this implementation problem is corrected.


Read-only File Daemon using capabilities

This feature implements support for keeping ReadAll capabilities after the UID/GID switch, which allows the FD to keep root read access but drop write permission.

It introduces a new bacula-fd option (-k) specifying that ReadAll capabilities should be kept after the UID/GID switch.

root@localhost:~# bacula-fd -k -u nobody -g nobody

The code for this feature was contributed by our friends at AltLinux.


Bvfs API

To help developers of restore GUI interfaces, we have added new dot commands that permit browsing the catalog in a very simple way.

You can use limit=xxx and offset=yyy to limit the amount of data that will be displayed.

* .bvfs_update jobid=1,2
* .bvfs_update
* .bvfs_lsdirs path=/ jobid=1,2

This project was funded by Bacula Systems.


Testing your Tape Drive

To determine the best configuration of your tape drive, you can run the new speed command available in the btape program.

This command can have the following arguments:

file_size=n
Specify the Maximum File Size for this test (between 1 and 5GB). This counter is in GB.
nb_file=n
Specify the number of files to be written. The amount of data should be greater than your memory (file_size * nb_file).
skip_zero
This flag permits skipping the tests that use constant data.
skip_random
This flag permits skipping the tests that use random data.
skip_raw
This flag permits skipping the tests that use raw access.
skip_block
This flag permits skipping the tests that use Bacula block access.

*speed file_size=3 skip_raw
btape.c:1078 Test with zero data and bacula block structure.
btape.c:956 Begin writing 3 files of 3.221 GB with blocks of 129024 bytes.
++++++++++++++++++++++++++++++++++++++++++
btape.c:604 Wrote 1 EOF to "Drive-0" (/dev/nst0)
btape.c:406 Volume bytes=3.221 GB. Write rate = 44.128 MB/s
...
btape.c:383 Total Volume bytes=9.664 GB. Total Write rate = 43.531 MB/s

btape.c:1090 Test with random data, should give the minimum throughput.
btape.c:956 Begin writing 3 files of 3.221 GB with blocks of 129024 bytes.
+++++++++++++++++++++++++++++++++++++++++++
btape.c:604 Wrote 1 EOF to "Drive-0" (/dev/nst0)
btape.c:406 Volume bytes=3.221 GB. Write rate = 7.271 MB/s
+++++++++++++++++++++++++++++++++++++++++++
...
btape.c:383 Total Volume bytes=9.664 GB. Total Write rate = 7.365 MB/s

When using compression, the random test will give you the minimum throughput of your drive. The test using a constant string will give you the maximum speed of your hardware chain (CPU, memory, SCSI card, cable, drive, tape).

You can change the block size in the Storage Daemon configuration file.

New Block Checksum Device Directive

You may now turn off the Block Checksum (CRC32) code that Bacula uses when writing blocks to a Volume. This is done by adding:

Block Checksum = no

Doing so can reduce the Storage daemon CPU usage slightly. It will also permit Bacula to read a Volume that has corrupted data.
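
A minimal sketch of where the directive goes in the Storage Daemon configuration (the device name is an illustrative assumption):

Device {
  Name = FileStorage
  ...
  Block Checksum = no
}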

The default is yes - i.e. the checksum is computed on write and checked on read.

We do not recommend turning this off, particularly on older tape drives or for disk Volumes, where doing so may allow corrupted data to go undetected.

New Bat Features

Those new features were funded by Bacula Systems.

Media List View

By clicking on ``Media'', you can see the list of all your volumes. You will be able to filter by Pool, Media Type, Location, etc., and sort the results directly in the table. The old ``Media'' view is now known as ``Pool''.

\includegraphics[width=13cm]{bat-mediaview.eps}

Media Information View

By double-clicking on a volume (in the Media list, in the Autochanger content or in the Job information panel), you can access a detailed overview of your Volume (see Figure 2.4).

Figure 2.4: Media information
\includegraphics[width=13cm]{bat11.eps}

Job Information View

By double-clicking on a Job record (in the Job run list or in the Media information panel), you can access a detailed overview of your Job (see Figure 2.5).

Figure 2.5: Job information
\includegraphics[width=13cm]{bat12.eps}

Autochanger Content View

By double-clicking on a Storage record (in the Storage list panel), you can access a detailed overview of your Autochanger (see Figure 2.6).

Figure 2.6: Autochanger content
\includegraphics[width=13cm]{bat13.eps}

To use this feature, you need to use the latest mtx-changer script version (with the new listall and transfer commands).

Bat on Windows

We have ported bat to Windows and it is now installed by default when the installer is run. It works quite well on Win32, but has not had a lot of testing there, so your feedback would be welcome. Unfortunately, even though it is installed by default, it does not yet work on 64 bit Windows operating systems.

New Win32 Installer

The Win32 installer has been modified in several very important ways.

Win64 Installer

We have corrected a number of problems that required manual editing of the conf files. In most cases, it should now install and work. bat is by default installed in c:/Program Files/Bacula/bin32 rather than c:/Program Files/Bacula as is the case with the 32 bit Windows installer.

Linux Bare Metal Recovery USB Key

We have made a number of significant improvements in the Bare Metal Recovery USB key. Please see the README files in the rescue release for more details.

We are working on an equivalent USB key for Windows bare metal recovery, but it will take some time to develop (best estimate 3Q2010 or 4Q2010).

bconsole Timeout Option

You can now use the -u option of bconsole to set a timeout in seconds for commands. This is useful with GUI programs that use bconsole to interface to the Director.
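
For example, a minimal sketch (the timeout value and configuration file path are illustrative assumptions):

bconsole -u 30 -c /etc/bacula/bconsole.conf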


Important Changes

Truncate volume after purge

Note that the Truncate Volume after purge feature doesn't work as expected in version 5.0.0. Please don't use it before version 5.0.1.

Custom Catalog queries

If you wish to add specialized commands that list the contents of the catalog, you can do so by adding them to the query.sql file. This query.sql file is now empty by default. The file examples/sample-query.sql has a number of sample commands you might find useful.

Deprecated parts

The following items have been deprecated for a long time, and are now removed from the code.


Misc Changes
