# pgsync

Sync data from one Postgres database to another (like `pg_dump`/`pg_restore`). Designed for:
- speed - tables are transferred in parallel
- security - built-in methods to prevent sensitive data from ever leaving the server
- flexibility - gracefully handles schema differences, like missing columns and extra columns
- convenience - sync partial tables, groups of tables, and related records
## Installation

pgsync is a command line tool. To install, run:

```sh
gem install pgsync
```

This will give you the `pgsync` command. If installation fails, you may need to install dependencies (see the Dependencies section below).
## Setup

In your project directory, run:

```sh
pgsync --init
```

This creates `.pgsync.yml` for you to customize. We recommend checking this file into your version control (assuming it doesn’t contain sensitive information). pgsync commands can be run from this directory or any subdirectory.
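A minimal sketch of what the config might look like after customizing, assuming a remote source and a local destination (the connection strings below are placeholders, not real defaults):

```yml
# .pgsync.yml
# source database (data is copied FROM here)
from: postgres://user:password@remotehost:5432/myapp_production
# destination database (data is copied TO here)
to: postgres://localhost:5432/myapp_development
```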
## How to Use

Sync all tables:

```sh
pgsync
```

Note: pgsync assumes your schema is already set up in your `to` database. See the Schema section if that’s not the case.

Sync specific tables:

```sh
pgsync table1,table2
```

This works with wildcards as well:

```sh
pgsync "public.*"
```

Sync specific rows (existing rows are overwritten):

```sh
pgsync products "where store_id = 1"
```

You can also preserve existing rows:

```sh
pgsync products "where store_id = 1" --preserve
```

Or truncate them:

```sh
pgsync products "where store_id = 1" --truncate
```
## Exclude Tables

```sh
pgsync --exclude users
```

To always exclude, add to `.pgsync.yml`:

```yml
exclude:
  - table1
  - table2
```

For Rails, you probably want to exclude schema migrations and Active Record metadata:

```yml
exclude:
  - schema_migrations
  - ar_internal_metadata
```
## Groups

Define groups in `.pgsync.yml`:

```yml
groups:
  group1:
    - table1
    - table2
```

And run:

```sh
pgsync group1
```

You can also use groups to sync a specific record and associated records in other tables.

To get product `123` with its reviews, last 10 coupons, and store, use:

```yml
groups:
  product:
    products: "where id = {1}"
    reviews: "where product_id = {1}"
    coupons: "where product_id = {1} order by created_at desc limit 10"
    stores: "where id in (select store_id from products where id = {1})"
```

And run:

```sh
pgsync product:123
```
## Schema

Note: pgsync is designed to sync data. You should use a schema migration tool to manage schema changes. The methods in this section are provided for convenience but are not recommended.

Sync the schema before the data:

```sh
pgsync --schema-first
```

Note: This wipes out existing data.

Specify tables:

```sh
pgsync table1,table2 --schema-first
```

Or sync just the schema:

```sh
pgsync --schema-only
```

pgsync does not try to sync Postgres extensions.
## Sensitive Data

Prevent sensitive data like email addresses from leaving the remote server.

Define rules in `.pgsync.yml`:

```yml
data_rules:
  email: unique_email
  last_name: random_letter
  birthday: random_date
  users.auth_token:
    value: secret
  visits_count:
    statement: "(RANDOM() * 10)::int"
  encrypted_*: null
```

`last_name` matches all columns named `last_name`, while `users.last_name` matches only the `users` table. Wildcards are supported, and the first matching rule is applied.
The options for replacement are:

- `unique_email`
- `unique_phone`
- `unique_secret`
- `random_letter`
- `random_int`
- `random_date`
- `random_time`
- `random_ip`
- `value`
- `statement`
- `null`
- `untouched`

Rules starting with `unique_` require the table to have a primary key, and `unique_phone` requires a numeric primary key.
## Foreign Keys

Foreign keys can make it difficult to sync data. Three options are:

- Manually specify the order of tables
- Use deferrable constraints
- Disable foreign key triggers, which can silently break referential integrity

When manually specifying the order, use `--jobs 1` so tables are synced one at a time:

```sh
pgsync table1,table2,table3 --jobs 1
```

If your tables have deferrable constraints, use:

```sh
pgsync --defer-constraints
```

To disable foreign key triggers and potentially break referential integrity, use:

```sh
pgsync --disable-integrity
```
## Triggers

Disable user triggers with:

```sh
pgsync --disable-user-triggers
```

## Append-Only Tables

For extremely large, append-only tables, sync in batches:

```sh
pgsync large_table --in-batches
```

The script will resume where it left off when run again, making it great for backfills.
## Connection Security

Always make sure your connection is secure when connecting to a database over a network you don’t fully trust. Your best option is to connect over SSH or a VPN. Another option is to use `sslmode=verify-full`. If you don’t do this, your database credentials can be compromised.
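As a sketch, `sslmode` can be set as a query parameter on a standard libpq-style connection string (the host and database names here are placeholders; `verify-full` also requires a trusted root certificate, which libpq reads from `~/.postgresql/root.crt` by default):

```yml
from: postgres://user:password@remotehost:5432/myapp_production?sslmode=verify-full
```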
## Safety

To keep you from accidentally overwriting production, the destination is limited to `localhost` or `127.0.0.1` by default.

To use another host, add `to_safe: true` to your `.pgsync.yml`.
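A sketch of a config that opts into a non-local destination (the hostname and database are placeholders; double-check the destination is not production before enabling this):

```yml
to: postgres://user:password@staginghost:5432/myapp_staging
to_safe: true
```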
## Multiple Databases

To use pgsync with multiple databases, run:

```sh
pgsync --init db2
```

This creates `.pgsync-db2.yml` for you to edit. Specify a database in commands with:

```sh
pgsync --db db2
```
## Other Commands

Help:

```sh
pgsync --help
```

Version:

```sh
pgsync --version
```

List tables:

```sh
pgsync --list
```
## Scripts

Use groups when possible to take advantage of parallelism.

For Ruby scripts, you may need to do:

```ruby
Bundler.with_unbundled_env do
  system "pgsync ..."
end
```
## Dependencies

If installation fails, your system may be missing Ruby or libpq.

On Mac, run:

```sh
brew install postgresql
```

On Ubuntu, run:

```sh
sudo apt-get install ruby-dev libpq-dev build-essential
```
## Upgrading

Run:

```sh
gem install pgsync
```

To use master, run:

```sh
gem install specific_install
gem specific_install https://github.com/ankane/pgsync.git
```
## Related Projects

Also check out:

- Dexter - The automatic indexer for Postgres
- PgHero - A performance dashboard for Postgres
- pgslice - Postgres partitioning as easy as pie

## Thanks

Inspired by heroku-pg-transfer.
## Contributing

Everyone is encouraged to help improve this project. Here are a few ways you can help:

- Report bugs
- Fix bugs and submit pull requests
- Write, clarify, or fix documentation
- Suggest or add new features

To get started with development:

```sh
git clone https://github.com/ankane/pgsync.git
cd pgsync
bundle install
createdb pgsync_test1
createdb pgsync_test2
createdb pgsync_test3
bundle exec rake test
```