Barman for the cloud#
Barman offers two primary methods for backing up Postgres servers to the cloud:
Creating disk volume snapshots as base backups.
This can be achieved through two different approaches:
Setting up a Barman server to store the backup metadata and the WAL files, while your backups are created as disk volume snapshots in the cloud. This is an integrated feature of Barman. If you choose this approach, please consult the cloud snapshots backups section for details.
Interacting with and managing backups directly through the command-line utilities provided by the barman cloud client package, without the need for a Barman server. The backup metadata and the WAL files are stored in cloud object storage, while your base backup is created as disk volume snapshots in the cloud.
Creating and transferring base backups to a cloud object storage.
This can also be achieved through two different approaches:
Using the utilities provided by the barman cloud client package on the Postgres host, without a Barman server. Both the base backup and the WALs are read from the local (Postgres) host and stored, along with the backup metadata, in cloud object storage.
Setting up a Barman server to take base backups and store the backup metadata and the WAL files, then using the utilities provided by the barman cloud client package as hook scripts to copy them to cloud object storage. If you choose this approach, please consult the Using barman-cloud-* scripts as hooks in barman section for details.
This section of the documentation focuses on the barman-cloud-* commands, which can be
used to manage and interact with backups without the need for a dedicated Barman
server. To start working with them, you will need to install the barman cloud client
package on the same machine as your Postgres server.
Understanding these options will help you select the right approach for your cloud backup and recovery needs, ensuring you leverage Barman’s full potential.
Barman cloud client package#
The barman cloud client package provides commands for managing cloud backups, both in object storage and as disk volume snapshots, without requiring a Barman server.
With this utility, you can:
Create and manage snapshot backups directly.
Create and transfer backups to cloud object storage.
While it lets you handle backups in cloud storage and disk volumes independently, it does not fully replicate Barman’s native capabilities: it has limitations compared to the integrated features of Barman, and some operations may behave differently.
Note
Barman supports AWS S3 (and S3 compatible object stores), Azure Blob Storage and Google Cloud Storage.
Installation#
To back up Postgres servers directly to a cloud provider, you need to install the barman cloud client package on those servers. Keep in mind that the installation process varies based on the distribution you are using.
Refer to the installation section for the installation process, and make sure to note the important information for each distribution.
Commands Reference#
Several commands are available to manage backup and recovery in the cloud using this
utility. Their exit statuses are SUCCESS (0), FAILURE (1), FAILED CONNECTION (2)
and INPUT_ERROR (3). Any other non-zero exit code is treated as FAILURE.
barman-cloud-backup#
Synopsis
barman-cloud-backup
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ { { -z | --gzip } | { -j | --bzip2 } | --snappy } ]
[ { -h | --host } HOST ]
[ { -p | --port } PORT ]
[ { -U | --user } USER ]
[ { -d | --dbname } DBNAME ]
[ { -n | --name } BACKUP_NAME ]
[ { -J | --jobs } JOBS ]
[ { -S | --max-archive-size } MAX_ARCHIVE_SIZE ]
[ --immediate-checkpoint ]
[ --min-chunk-size MIN_CHUNK_SIZE ]
[ --max-bandwidth MAX_BANDWIDTH ]
[ --snapshot-instance SNAPSHOT_INSTANCE ]
[ --snapshot-disk NAME ]
[ --snapshot-zone GCP_ZONE ]
[ --snapshot-gcp-project GCP_PROJECT ]
[ --tags TAG [ TAG ... ] ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { -e | --encryption } { AES256 | aws:kms } ]
[ --sse-kms-key-id SSE_KMS_KEY_ID ]
[ --aws-region AWS_REGION ]
[ --aws-await-snapshots-timeout AWS_AWAIT_SNAPSHOTS_TIMEOUT ]
[ --aws-snapshot-lock-mode { compliance | governance } ]
[ --aws-snapshot-lock-duration DAYS ]
[ --aws-snapshot-lock-cool-off-period HOURS ]
[ --aws-snapshot-lock-expiration-date DATETIME ]
[ { --azure-credential | --credential } { azure-cli | managed-identity | default } ]
[ --encryption-scope ENCRYPTION_SCOPE ]
[ --azure-subscription-id AZURE_SUBSCRIPTION_ID ]
[ --azure-resource-group AZURE_RESOURCE_GROUP ]
[ --gcp-project GCP_PROJECT ]
[ --kms-key-name KMS_KEY_NAME ]
[ --gcp-zone GCP_ZONE ]
DESTINATION_URL SERVER_NAME
Description
The barman-cloud-backup script is used to create a local backup of a Postgres
server and transfer it to a supported cloud provider, bypassing the Barman server. It
can also be utilized as a hook script for copying Barman backups from the Barman server
to one of the supported clouds (post_backup_retry_script).
This script requires read access to PGDATA and tablespaces, and is typically run as the
postgres user. When used on a Barman server, it requires read access to the directory
where Barman backups are stored. If --snapshot-* arguments are used and snapshots are
supported by the selected cloud provider, the backup will be performed using snapshots
of the specified disks (--snapshot-disk). The backup label and metadata will also be
uploaded to the cloud.
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Important
The cloud upload may fail if any file larger than the configured --max-archive-size
is present in the data directory or tablespaces. However, Postgres files up to
1GB are always allowed, regardless of the --max-archive-size setting.
Parameters
SERVER_NAME
    Name of the server to be backed up.
DESTINATION_URL
    URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
-z / --gzip
    gzip-compress the backup while uploading to the cloud (should not be used with Python < 3.2).
-j / --bzip2
    bzip2-compress the backup while uploading to the cloud (should not be used with Python < 3.3).
--snappy
    snappy-compress the backup while uploading to the cloud (requires the optional python-snappy library).
-h / --host
    Host or Unix socket for Postgres connection (default: libpq settings).
-p / --port
    Port for Postgres connection (default: libpq settings).
-U / --user
    User name for Postgres connection (default: libpq settings).
-d / --dbname
    Database name or conninfo string for Postgres connection (default: "postgres").
-n / --name
    A name which can be used to reference this backup in commands such as barman-cloud-restore and barman-cloud-backup-delete.
-J / --jobs
    Number of subprocesses used to upload data to cloud storage (default: 2).
-S / --max-archive-size
    Maximum size of an archive when uploading to cloud storage (default: 100GB).
--immediate-checkpoint
    Forces the initial checkpoint to be done as quickly as possible.
--min-chunk-size
    Minimum size of an individual chunk when uploading to cloud storage (default: 5MB for aws-s3, 64KB for azure-blob-storage; not applicable for google-cloud-storage).
--max-bandwidth
    The maximum amount of data to be uploaded per second when backing up to object storage (default: 0, meaning no limit).
--snapshot-instance
    Instance where the disks to be backed up as snapshots are attached.
--snapshot-disk
    Name of a disk from which snapshots should be taken.
--tags
    Tags to be added to all uploaded files in cloud storage and/or to any snapshots created, if snapshots are used.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. an INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. an INI section in the AWS credentials file). Replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (default: 60 seconds).
-e / --encryption
    The encryption algorithm used when storing the uploaded data in S3. Allowed options: AES256, aws:kms.
--sse-kms-key-id
    The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if -e / --encryption is set to aws:kms.
--aws-region
    The name of the AWS region containing the EC2 VM and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments.
--aws-await-snapshots-timeout
    The length of time in seconds to wait for snapshots to be created in AWS before timing out (default: 3600 seconds).
--aws-snapshot-lock-mode
    The lock mode for the snapshot. Only valid if --snapshot-instance and --snapshot-disk are set. Allowed options: compliance, governance.
--aws-snapshot-lock-duration
    The period of time (in days) for which the snapshot is to remain locked, ranging from 1 to 36,500. Set either the lock duration or the expiration date, not both.
--aws-snapshot-lock-cool-off-period
    An optional cooling-off period (in hours) that you can specify when you lock a snapshot in compliance mode, ranging from 1 to 72.
--aws-snapshot-lock-expiration-date
    The date and time at which the lock expires, in the format YYYY-MM-DDTHH:MM:SS.sssZ. It must be at least 1 day after the snapshot creation date and time. Set either the lock duration or the expiration date, not both.
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: azure-cli, managed-identity, default.
--encryption-scope
    The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure.
--azure-subscription-id
    The ID of the Azure subscription which owns the instance and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments.
--azure-resource-group
    The name of the Azure resource group to which the compute instance and disks defined by the --snapshot-instance and --snapshot-disk arguments belong.
Extra options for GCP cloud provider
--gcp-project
    GCP project under which disk snapshots should be stored.
--snapshot-gcp-project (deprecated)
    GCP project under which disk snapshots should be stored. Replaced by --gcp-project.
--kms-key-name
    The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS.
--gcp-zone
    Zone of the disks from which snapshots should be taken.
--snapshot-zone (deprecated)
    Zone of the disks from which snapshots should be taken. Replaced by --gcp-zone.
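To make the synopsis concrete, here is a sketch of an object-storage backup to AWS S3. The bucket path (s3://my-bucket/barman), the server name (pg-prod) and the option values are illustrative placeholders, not values mandated by Barman.

```shell
# Illustrative only: run as the postgres user on the Postgres host.
# "s3://my-bucket/barman" and "pg-prod" are placeholder values.
barman-cloud-backup \
    --cloud-provider aws-s3 \
    --snappy \
    --jobs 4 \
    --name nightly \
    s3://my-bucket/barman pg-prod
```

A snapshot backup would instead pass --snapshot-instance and --snapshot-disk (plus the provider-specific region or zone options) in place of the compression settings.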
barman-cloud-backup-delete#
Synopsis
barman-cloud-backup-delete
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ --endpoint-url ENDPOINT_URL ]
[ { -r | --retention-policy } RETENTION_POLICY ]
[ { -m | --minimum-redundancy } MINIMUM_REDUNDANCY ]
[ { -b | --backup-id } BACKUP_ID]
[ --dry-run ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { --azure-credential | --credential } { azure-cli | managed-identity | default } ]
[--batch-size DELETE_BATCH_SIZE]
SOURCE_URL SERVER_NAME
Description
The barman-cloud-backup-delete script is used to delete one or more backups created
with the barman-cloud-backup command from cloud storage and to remove the associated
WAL files.
Backups can be specified for deletion either by their backup ID
(as obtained from barman-cloud-backup-list) or by a retention policy. Retention
policies mirror those used by the Barman server, deleting all backups that are not required to
meet the specified policy. When a backup is deleted, any unused WAL files associated with
that backup are also removed.
WALs are considered unused if:
The WALs predate the begin_wal value of the oldest remaining backup.
The WALs are not required by any archival backups stored in the cloud.
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Important
Each backup deletion involves three separate requests to the cloud provider: one for
the backup files, one for the backup.info file, and one for the associated WALs.
Deleting by retention policy may result in a high volume of delete requests if a
large number of backups have accumulated in cloud storage.
Parameters
SERVER_NAME
    Name of the server that holds the backup to be deleted.
SOURCE_URL
    URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
-b / --backup-id
    ID of the backup to be deleted.
-m / --minimum-redundancy
    The minimum number of backups that should always be available.
-r / --retention-policy
    If specified, delete all backups eligible for deletion according to the supplied retention policy. Syntax: REDUNDANCY value | RECOVERY WINDOW OF value { DAYS | WEEKS | MONTHS }
--batch-size
    The maximum number of objects to be deleted in a single request to the cloud provider. If unset, the maximum allowed batch size for the specified cloud provider will be used (1000 for aws-s3, 256 for azure-blob-storage and 100 for google-cloud-storage).
--dry-run
    Find the objects which need to be deleted, but do not delete them.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. an INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. an INI section in the AWS credentials file). Replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (default: 60 seconds).
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: azure-cli, managed-identity, default.
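Because retention-policy deletion can remove many backups at once, a cautious pattern is to preview the operation first. The invocation below is illustrative; the bucket path and server name are placeholders.

```shell
# Preview which backups and WALs a 14-day recovery window would delete,
# without actually deleting anything. Names are placeholders.
barman-cloud-backup-delete \
    --cloud-provider aws-s3 \
    --retention-policy "RECOVERY WINDOW OF 14 DAYS" \
    --dry-run \
    s3://my-bucket/barman pg-prod
```

Once the dry-run output looks right, rerun the same command without --dry-run.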
barman-cloud-backup-show#
Synopsis
barman-cloud-backup-show
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { --azure-credential | --credential } { azure-cli | managed-identity | default } ]
[ --format FORMAT ]
SOURCE_URL SERVER_NAME BACKUP_ID
Description
This script displays detailed information about a specific backup created with the
barman-cloud-backup command. The output is similar to that of the barman show-backup
command (see the barman show-backup command reference), but contains less information.
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Parameters
BACKUP_ID
    The ID of the backup.
SERVER_NAME
    Name of the server that holds the backup to be displayed.
SOURCE_URL
    URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
--format
    Output format (console or json). Default: console.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. an INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. an INI section in the AWS credentials file). Replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (default: 60 seconds).
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: azure-cli, managed-identity, default.
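As an illustration, the JSON output format is handy for scripting. The bucket path, server name and backup ID below are placeholders.

```shell
# Print one backup's metadata as JSON (all identifiers are placeholders).
barman-cloud-backup-show \
    --format json \
    s3://my-bucket/barman pg-prod 20240101T000000
```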
barman-cloud-backup-list#
Synopsis
barman-cloud-backup-list
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { --azure-credential | --credential } { azure-cli | managed-identity | default } ]
[ --format FORMAT ]
SOURCE_URL SERVER_NAME
Description
This script lists backups stored in the cloud that were created using the
barman-cloud-backup command.
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Parameters
SERVER_NAME
    Name of the server whose backups are to be listed.
SOURCE_URL
    URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
--format
    Output format (console or json). Default: console.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. an INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. an INI section in the AWS credentials file). Replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (default: 60 seconds).
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: azure-cli, managed-identity, default.
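A minimal listing invocation might look like the following; the bucket path and server name are placeholders, and --format json can be added for machine-readable output.

```shell
# List all backups stored for the server (names are placeholders).
barman-cloud-backup-list s3://my-bucket/barman pg-prod
```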
barman-cloud-backup-keep#
Synopsis
barman-cloud-backup-keep
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { --azure-credential | --credential } { azure-cli | managed-identity | default } ]
[ { { -r | --release } | { -s | --status } | --target { full | standalone } } ]
SOURCE_URL SERVER_NAME BACKUP_ID
Description
Use this script to mark backups previously created with barman-cloud-backup as
archival backups. Once flagged as archival, these backups are preserved indefinitely
and are not subject to standard retention policies.
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Parameters
SERVER_NAME
    Name of the server that holds the backup to be kept.
SOURCE_URL
    URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
BACKUP_ID
    The ID of the backup to be kept.
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
-r / --release
    If specified, the command will remove the keep annotation and the backup will become eligible for deletion.
-s / --status
    Print the keep status of the backup.
--target
    Specify the recovery target for this backup. Allowed options: full, standalone.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. an INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. an INI section in the AWS credentials file). Replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (default: 60 seconds).
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: azure-cli, managed-identity, default.
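The typical keep lifecycle can be sketched as three invocations; the bucket path, server name and backup ID are placeholders.

```shell
# Mark a backup as a standalone archival backup (identifiers are placeholders),
# inspect its keep status, and later release it so retention policies apply again.
barman-cloud-backup-keep --target standalone s3://my-bucket/barman pg-prod 20240101T000000
barman-cloud-backup-keep --status s3://my-bucket/barman pg-prod 20240101T000000
barman-cloud-backup-keep --release s3://my-bucket/barman pg-prod 20240101T000000
```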
barman-cloud-check-wal-archive#
Synopsis
barman-cloud-check-wal-archive
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { --azure-credential | --credential }
{ azure-cli | managed-identity | default } ]
[ --timeline TIMELINE ]
DESTINATION_URL SERVER_NAME
Description
Verify that the WAL archive destination for a server is suitable for use with a new Postgres cluster. By default, the check will succeed if the WAL archive is empty or if the target bucket is not found. Any other conditions will result in a failure.
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Parameters
SERVER_NAME
    Name of the server that needs to be checked.
DESTINATION_URL
    URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
--timeline
    The earliest timeline whose WALs should cause the check to fail.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. an INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. an INI section in the AWS credentials file). Replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (default: 60 seconds).
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: azure-cli, managed-identity, default.
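Since the result is conveyed through the exit status, the check lends itself to a simple shell guard before provisioning a new cluster. The bucket path and server name below are placeholders.

```shell
# Gate cluster provisioning on the WAL archive check (names are placeholders).
if barman-cloud-check-wal-archive s3://my-bucket/barman pg-prod; then
    echo "WAL archive is empty or absent: safe for a new cluster"
else
    echo "WAL archive already contains WALs: do not reuse it" >&2
fi
```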
barman-cloud-restore#
Synopsis
barman-cloud-restore
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { --azure-credential | --credential } { azure-cli | managed-identity | default } ]
[ --snapshot-recovery-instance SNAPSHOT_RECOVERY_INSTANCE ]
[ --snapshot-recovery-zone GCP_ZONE ]
[ --aws-region AWS_REGION ]
[ --gcp-zone GCP_ZONE ]
[ --azure-resource-group AZURE_RESOURCE_GROUP ]
[ --tablespace NAME:LOCATION [ --tablespace NAME:LOCATION ... ] ]
[ --target-lsn LSN ]
[ --target-time TIMESTAMP ]
[ --target-tli TLI ]
SOURCE_URL SERVER_NAME BACKUP_ID RECOVERY_DESTINATION
Description
Use this script to restore a backup directly from cloud storage that was created with
the barman-cloud-backup command. Additionally, this script can prepare for recovery
from a snapshot backup by verifying that attached disks were cloned from the correct
snapshots and by downloading the backup label from object storage.
This command does not automatically prepare Postgres for recovery. You must manage any
PITR options, custom restore_command values, signal files, and required WAL files
yourself, either manually or using external tools, to ensure Postgres starts.
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Parameters
SERVER_NAME
    Name of the server that holds the backup to be restored.
SOURCE_URL
    URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
BACKUP_ID
    The ID of the backup to be restored. Use auto to have Barman automatically find the most suitable backup for the restore operation.
RECOVERY_DESTINATION
    The path to a directory for recovery.
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
--snapshot-recovery-instance
    Instance where the disks recovered from the snapshots are attached.
--tablespace
    Tablespace relocation rule.
--target-lsn
    The recovery target LSN, e.g., 3/64000000.
--target-time
    The recovery target timestamp, with or without timezone, in the format %Y-%m-%d %H:%M:%S.
--target-tli
    The recovery target timeline.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. an INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. an INI section in the AWS credentials file). Replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (default: 60 seconds).
--aws-region
    The name of the AWS region containing the EC2 VM and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments.
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: azure-cli, managed-identity, default.
--azure-resource-group
    The name of the Azure resource group to which the compute instance and disks defined by the --snapshot-instance and --snapshot-disk arguments belong.
Extra options for GCP cloud provider
--gcp-zone
    Zone containing the instance and disks for the snapshot recovery.
--snapshot-recovery-zone (deprecated)
    Zone containing the instance and disks for the snapshot recovery. Replaced by --gcp-zone.
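A basic object-storage restore can be sketched as follows; the bucket path, server name, backup ID and recovery directory are all placeholders.

```shell
# Download a backup into an empty recovery directory (identifiers are placeholders).
barman-cloud-restore \
    --cloud-provider aws-s3 \
    s3://my-bucket/barman pg-prod 20240101T000000 /var/lib/pgsql/restore

# Recovery is NOT configured automatically: restore_command, any recovery
# targets and the recovery/standby signal file must still be set up before
# starting Postgres on the restored data directory.
```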
barman-cloud-wal-archive#
Synopsis
barman-cloud-wal-archive
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ { { -z | --gzip } | { -j | --bzip2 } | --xz | --snappy | --zstd | --lz4 } ]
[ --compression-level COMPRESSION_LEVEL ]
[ --tags TAG [ TAG ... ] ]
[ --history-tags HISTORY_TAG [ HISTORY_TAG ... ] ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { -e | --encryption } ENCRYPTION ]
[ --sse-kms-key-id SSE_KMS_KEY_ID ]
[ { --azure-credential | --credential } { azure-cli | managed-identity |
default } ]
[ --encryption-scope ENCRYPTION_SCOPE ]
[ --max-block-size MAX_BLOCK_SIZE ]
[ --max-concurrency MAX_CONCURRENCY ]
[ --max-single-put-size MAX_SINGLE_PUT_SIZE ]
[ --kms-key-name KMS_KEY_NAME ]
DESTINATION_URL SERVER_NAME [ WAL_PATH ]
Description
The barman-cloud-wal-archive command is designed to be used in the
archive_command of a Postgres server to directly ship WAL files to cloud storage.
Note
If you are using Python 2 or unsupported versions of Python 3, avoid using the
compression options --gzip or --bzip2. The script cannot restore
gzip-compressed WALs on Python < 3.2 or bzip2-compressed WALs on Python < 3.3.
This script enables the direct transfer of WAL files to cloud storage, bypassing the Barman server. Additionally, it can be utilized as a hook script for WAL archiving (pre_archive_retry_script).
Note
For GCP, only authentication with the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
Parameters
SERVER_NAME
    Name of the server that will have the WALs archived.
DESTINATION_URL
    URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
WAL_PATH
    The value of the ‘%p’ keyword (according to archive_command).
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options: aws-s3, azure-blob-storage, google-cloud-storage.
-z / --gzip
    gzip-compress the WAL while uploading to the cloud.
-j / --bzip2
    bzip2-compress the WAL while uploading to the cloud.
--xz
    xz-compress the WAL while uploading to the cloud.
--snappy
    snappy-compress the WAL while uploading to the cloud (requires the python-snappy Python library to be installed).
--zstd
    zstd-compress the WAL while uploading to the cloud (requires the zstandard Python library to be installed).
--lz4
    lz4-compress the WAL while uploading to the cloud (requires the lz4 Python library to be installed).
--compression-level
    A compression level to be used by the selected compression algorithm. Valid values are integers within the supported range of the chosen algorithm, or one of the predefined labels: low, medium, and high. The range of each algorithm, as well as the level each predefined label maps to, can be found in compression_level.
--tags
    Tags to be added to archived WAL files in cloud storage.
--history-tags
    Tags to be added to archived history files in cloud storage.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. INI section in the AWS credentials file); replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds).
-e / --encryption
    The encryption algorithm used when storing the uploaded data in S3. Allowed options: AES256, aws:kms.
--sse-kms-key-id
    The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if -e / --encryption is set to aws:kms.
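As a sketch of how these options combine, an archive_command using a named AWS profile and server-side KMS encryption could look like the following (the profile name, KMS key alias, bucket, and server name are all hypothetical):

```
# postgresql.conf -- hypothetical profile, KMS key alias, bucket, and server name
archive_command = 'barman-cloud-wal-archive -P backup-profile -e aws:kms --sse-kms-key-id alias/pg-wal s3://my-backup-bucket/barman pg-main %p'
```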
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment, and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: azure-cli, managed-identity, default.
--encryption-scope
    The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure.
--max-block-size
    The chunk size to be used when uploading an object via the concurrent chunk method (default: 4MB).
--max-concurrency
    The maximum number of chunks to be uploaded concurrently (default: 1).
--max-single-put-size
    Maximum size for which the Azure client will upload an object in a single request (default: 64MB). If this is set lower than the Postgres WAL segment size after any applied compression, the concurrent chunk upload method for WAL archiving will be used.
Extra options for GCP cloud provider
--kms-key-name
    The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS.
barman-cloud-wal-restore#
Synopsis
barman-cloud-wal-restore
[ { -V | --version } ]
[ --help ]
[ { { -v | --verbose } | { -q | --quiet } } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --profile AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { --azure-credential | --credential } { azure-cli | managed-identity
| default } ]
[ --no-partial ]
SOURCE_URL SERVER_NAME WAL_NAME WAL_DEST
Description
The barman-cloud-wal-restore script functions as the restore_command for
retrieving WAL files from cloud storage and placing them directly into a Postgres
standby server, bypassing the Barman server.
This script is used to download WAL files that were previously archived with the
barman-cloud-wal-archive command. Use the --no-partial option to disable the
automatic download of .partial files.
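On a standby, the command is typically wired into recovery via restore_command; a hypothetical postgresql.conf fragment (the bucket and server name are assumptions) might be:

```
# postgresql.conf on the standby -- hypothetical bucket and server name
restore_command = 'barman-cloud-wal-restore s3://my-backup-bucket/barman pg-main %f %p'
```

Postgres substitutes %f with the name of the WAL file to fetch and %p with the destination path on the local filesystem.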
Important
On the target Postgres node, when pg_wal and the spool directory are on the
same filesystem, files are moved via renaming, which is faster than copying and
deleting. This speeds up serving WAL files significantly. If the directories are on
different filesystems, the process still involves copying and deleting, so there’s
no performance gain in that case.
Note
For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported.
Parameters
SERVER_NAME
    Name of the server that will have WALs restored.
SOURCE_URL
    URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.
WAL_NAME
    The value of the '%f' keyword (according to restore_command).
WAL_DEST
    The value of the '%p' keyword (according to restore_command).
-V / --version
    Show version and exit.
--help
    Show this help message and exit.
-v / --verbose
    Increase output verbosity (e.g., -vv is more than -v).
-q / --quiet
    Decrease output verbosity (e.g., -qq is less than -q).
-t / --test
    Test cloud connectivity and exit.
--cloud-provider
    The cloud provider to use as a storage backend. Allowed options are: aws-s3, azure-blob-storage, google-cloud-storage.
--no-partial
    Do not download partial WAL files.
Extra options for the AWS cloud provider
--endpoint-url
    Override the default S3 endpoint URL with the given one.
-P / --aws-profile
    Profile name (e.g. INI section in the AWS credentials file).
--profile (deprecated)
    Profile name (e.g. INI section in the AWS credentials file); replaced by --aws-profile.
--read-timeout
    The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds).
Extra options for the Azure cloud provider
--azure-credential / --credential
    Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment, and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: azure-cli, managed-identity, default.