"pipe-streaming not supported for S3." received when path is supplied?
luma opened this issue · 5 comments
I took the basic template that astrails-safe generates and added my usernames/passwords/etc., but whenever I attempt to run it I receive the "pipe-streaming not supported for S3." error. From looking at the code it seems like this should only occur when a path isn't supplied, but I'm supplying a path for both local and s3.
Here's the error output
XXX [~]# astrails-safe -v --dry-run sites_backup.conf
command: mysqldump --defaults-extra-file=/tmp/d20090704-17510-r5v8e1 /mysqldump.17510.0 -ceKq --single-transaction --create-options e2go_store|gzip
listing files /backup/mysqldump/e2go_store/090704-1955/mysqldump-e2go_store
/usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe/s3.rb:16:in `save': pipe-streaming not supported for S3. (RuntimeError)
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe/sink.rb:8:in `process'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe/backup.rb:15:in `run'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe/backup.rb:12:in `each'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe/backup.rb:12:in `run'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe.rb:49:in `safe'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe/config/node.rb:41:in `each'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe/config/node.rb:41:in `each'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe.rb:48:in `safe'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe.rb:42:in `each'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/lib/astrails/safe.rb:42:in `safe'
  from /home/e2go/sites_backup.conf:1
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/bin/astrails-safe:50:in `load'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/bin/astrails-safe:50:in `main'
  from /usr/lib/ruby/gems/1.8/gems/astrails-safe-0.2.0/bin/astrails-safe:53
  from /usr/bin/astrails-safe:19:in `load'
  from /usr/bin/astrails-safe:19
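For context, the check at s3.rb:16 that I'm referring to looks to me roughly like the guard below (my own paraphrase to show the shape of it, not the gem's actual source):

# Paraphrased guard, not copied from astrails-safe: the S3 sink bails out
# when the backup has no resolved path, since without a local file there is
# nothing of known size to upload.
class S3Sink
  def initialize(backup)
    @backup = backup
  end

  def save
    raise "pipe-streaming not supported for S3." unless @backup.path
    # ...otherwise upload the file at @backup.path to the configured bucket...
  end
end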
Here's the configuration:
safe do
# backup file path (not including filename)
# supported substitutions:
# :kind -> backup 'engine' kind, e.g. "mysqldump" or "archive"
# :id -> backup 'id', e.g. "blog", "production", etc.
# :timestamp -> current run timestamp (same for all the backups in the same 'run')
# you can set separate :path for all backups (or once globally here)
local do
path "/backup/:kind/:id/:timestamp/"
end
## uncomment to enable uploads to Amazon S3
## Amazon S3 auth (optional)
## don't forget to add :s3 to the 'store' list
s3 do
key "XXX"
secret "XXX"
bucket "backups.luma.co.nz"
# path for uploads to S3; supports the same substitutions as :local/:path
path ":kind/:id/:timestamp/" # this is default
end
## alternative style:
# s3 :key => YOUR_S3_KEY, :secret => YOUR_S3_SECRET, :bucket => S3_BUCKET
## uncomment to enable GPG encryption.
## Note: you can use public 'key' or symmetric password but not both!
# gpg do
# # key "backup@astrails.com"
# password "astrails"
# end
## uncomment to enable backup rotation. keep only given number of latest
## backups. remove the rest
keep do
local 4 # keep 4 local backups
s3 20 # keep 20 S3 backups
end
# backup mysql databases with mysqldump
mysqldump do
# you can override any setting from parent in a child:
options "-ceKq --single-transaction --create-options"
#user "XXX"
#password "XXX"
host "127.0.0.1"
# port 3306
socket "/var/run/mysqld/mysqld.sock"
# database is a 'collection' element. it must have a hash or block parameter
# it will be 'collected' in a 'databases', with database id (1st arg) used as hash key
# the following code will create mysqldump/databases/blog and mysqldump/databases/mysql configuration 'nodes'
# backup database with default values
# database :e2go_store
# database :divvi_divvideals
# backup overriding some values
database :e2go_store do
user "e2go_store"
password "XXX"
# # you can override 'partially'
# keep :local => 3
# # keep/local is 3, and keep/s3 is 20 (from parent)
# # local override for gpg password
# gpg do
# password "custom-production-pass"
# end
# skip_tables [:logger_exceptions, :request_logs] # skip those tables during backup
end
end
# # uncomment to enable
# # backup PostgreSQL databases with pg_dump
# pgdump do
# options "-i -x -O"
#
# user "markmansour"
# # password "" - leave this out if you have ident setup
#
# # database is a 'collection' element. it must have a hash or block parameter
# # it will be 'collected' in a 'databases', with database id (1st arg) used as hash key
# database :blog
# database :production
# end
tar do
# 'archive' is a collection item, just like 'database'
# archive "git-repositories" do
# # files and directories to backup
# files "/home/git/repositories"
# end
# archive "etc-files" do
# files "/etc"
# # exclude those files/directories
# exclude "/etc/puppet/other"
# end
# archive "dot-configs" do
# files "/home/*/.[^.]*"
# end
# archive "blog" do
# files "/var/www/blog.astrails.com/"
# # specify multiple files/directories as array
# exclude ["/var/www/blog.astrails.com/log", "/var/www/blog.astrails.com/tmp"]
# end
# archive "site" do
# files "/var/www/astrails.com/"
# exclude ["/var/www/astrails.com/log", "/var/www/astrails.com/tmp"]
# end
# archive :misc do
# files [ "/backup/*.rb" ]
# end
end
end
please try it with the latest version and see if it still happens.
Updated to 0.2.2, no change.
Same problem. Even when I pass the -L flag for local it raises the same error.
thanks, that works
thanks for tracking this down :)
I was trying to reproduce it for quite some time, but I was running it for real and so didn't get the error (and I missed the --dry-run in the example above).
A little explanation:
Originally I tried to make the 'local' storage optional and allow direct streaming (using Unix pipes) to S3.
But the problem is that S3 requires knowing the file size before you start uploading,
so I had to write the backup output into a file anyway.
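To illustrate the constraint (a simplified sketch, not the gem's upload code, and with no S3 auth/signature headers): a plain S3 PUT has to send Content-Length up front, which a finished file on disk can provide but a still-running pipe cannot.

require 'net/http'
require 'uri'

# Simplified illustration: the size of a regular file is known, so the
# Content-Length header can be set before the upload starts. A pipe has no
# size until the producer finishes, which is why the dump is written to a
# local file first and only then uploaded.
def put_to_s3(local_file, url)
  uri = URI.parse(url)
  File.open(local_file, 'rb') do |f|
    req = Net::HTTP::Put.new(uri.request_uri)
    req['Content-Length'] = f.size.to_s  # knowable only for a regular file
    req.body_stream = f                  # stream the body now that the size is known
    Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  end
end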
The missing 'path' it complains about is not the path in the s3 section of the configuration (that one defaults to the local path).
So using the alternative style of S3 configuration shouldn't (and doesn't :) cause any problems.
The real problem was that 'path' on the 'backup' object was only set when the actual file was saved, so it was never set during a dry run, and the S3 check of @backup.path failed.
Fixed by assigning @backup.path in the 'local' storage class even on a dry run.
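Roughly, the shape of the fix is the following (illustrative class and method names, not the gem's actual code):

# Illustrative only: the local sink resolves and records the backup's path
# *before* checking for --dry-run, so later sinks such as S3 can rely on
# @backup.path even when no file is actually written.
class LocalSink
  def initialize(backup, dry_run = false)
    @backup  = backup
    @dry_run = dry_run
  end

  def save
    @backup.path = File.join(@backup.dir, @backup.filename)  # set even on a dry run
    return if @dry_run                                        # skip the real write
    # ...run the dump command and write its output to @backup.path...
  end
end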
There was actually a test stub for this case that hadn't been implemented yet. I've implemented it now :)
The updated 0.2.3 gem should be on GitHub in a moment.