benbjohnson/litestream

Limited support for S3-compatible storage

chumaumenze opened this issue · 7 comments

Hello @benbjohnson, first of all, thank you for your fantastic work on this project.

I am working with an S3-compatible storage service called Storj. I have tried to restore a replica and hit the error below.

Litestream config:
$ cat /etc/litestream.yml 
dbs:
  - path: ${database_filename}
    replicas:
      - type: s3
        access-key-id: ${s3_accessKeyId}
        secret-access-key: ${s3_secretAccessKey}
        region: ${s3_region}
        endpoint: ${s3_endpoint}
        bucket: ${s3_bucket}
        path: "${database_filename_backup_path}"
        force-path-style: ${s3_forcePathStyle}
        retention: 48h
        snapshot-interval: 1h
Litestream restore:
database_filename=/path/to/data.db
replica_url=s3://gateway.storjshare.io/mybucket/path/to/data.db
litestream restore -v -replica s3 -o "$database_filename" "$replica_url"
cannot fetch generations: cannot lookup bucket region: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
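The NoCredentialProviders error suggests that, when a replica URL is passed directly, the restore is not picking up the keys from the config file and falls back to the AWS credential chain. A hedged workaround sketch (assuming the LITESTREAM_ACCESS_KEY_ID / LITESTREAM_SECRET_ACCESS_KEY environment variables from the Litestream docs; all values below are placeholders):

```shell
# Placeholder credentials; substitute your real Storj gateway keys.
export LITESTREAM_ACCESS_KEY_ID="AKIAEXAMPLE"
export LITESTREAM_SECRET_ACCESS_KEY="example-secret"

database_filename=/path/to/data.db
replica_url=s3://gateway.storjshare.io/mybucket/path/to/data.db

# Retry the restore only if litestream is installed on this machine.
if command -v litestream >/dev/null 2>&1; then
  litestream restore -v -o "$database_filename" "$replica_url"
fi
```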

I think the following lines may contribute to the problem.

if c.URL != "" {
	_, host, upath, err := ParseReplicaURL(c.URL)
	if err != nil {
		return nil, err
	}
	ubucket, uregion, uendpoint, uforcePathStyle := s3.ParseHost(host)

	// Only apply URL parts to fields that have not been overridden.
	if path == "" {
		path = upath
	}
	if bucket == "" {
		bucket = ubucket
	}
	if region == "" {
		region = uregion
	}
	if endpoint == "" {
		endpoint = uendpoint
	}
	if !forcePathStyle {
		forcePathStyle = uforcePathStyle
	}
}

func ParseHost(s string) (bucket, region, endpoint string, forcePathStyle bool) {
	// Extract port if one is specified.
	host, port, err := net.SplitHostPort(s)
	if err != nil {
		host = s
	}

	// Default to path-based URLs, except with AWS S3 itself.
	forcePathStyle = true

	// Extract fields from provider-specific host formats.
	scheme := "https"
	if a := localhostRegex.FindStringSubmatch(host); a != nil {
		bucket, region = a[1], "us-east-1"
		scheme, endpoint = "http", "localhost"
	} else if a := backblazeRegex.FindStringSubmatch(host); a != nil {
		bucket, region = a[1], a[2]
		endpoint = fmt.Sprintf("s3.%s.backblazeb2.com", region)
	} else if a := filebaseRegex.FindStringSubmatch(host); a != nil {
		bucket, endpoint = a[1], "s3.filebase.com"
	} else if a := digitalOceanRegex.FindStringSubmatch(host); a != nil {
		bucket, region = a[1], a[2]
		endpoint = fmt.Sprintf("%s.digitaloceanspaces.com", region)
	} else if a := linodeRegex.FindStringSubmatch(host); a != nil {
		bucket, region = a[1], a[2]
		endpoint = fmt.Sprintf("%s.linodeobjects.com", region)
	} else {
		bucket = host
		forcePathStyle = false
	}

	// Add port back to endpoint, if available.
	if endpoint != "" && port != "" {
		endpoint = net.JoinHostPort(endpoint, port)
	}

	// Prepend scheme to endpoint.
	if endpoint != "" {
		endpoint = scheme + "://" + endpoint
	}

	return bucket, region, endpoint, forcePathStyle
}

AFAIK, forcePathStyle should be true for non-AWS S3 storage. However, the function above resets forcePathStyle to false for any S3-compatible service that does not match the localhost, Backblaze B2, Filebase, DigitalOcean, or Linode host patterns, because the final else branch treats the whole host as an AWS bucket name.

I also use Storj and ran into the same problem.

+1 for this

I have litestream working with Oracle Cloud S3-compatible storage. The key is to use an https:// endpoint, like this in my config:

dbs:
    - path: app.db
      replicas:
          - type: s3
            bucket: litestream
            path: appdb
            endpoint: https://<namespace>.compat.objectstorage.<region>.oci.customer-oci.com

Try doing the same with Storj.
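For Storj, that would look something like this sketch (untested; the gateway endpoint is taken from the replica URL earlier in this thread, and the bucket and paths are placeholders):

```yaml
dbs:
  - path: /path/to/data.db
    replicas:
      - type: s3
        bucket: mybucket
        path: path/to/data.db
        endpoint: https://gateway.storjshare.io
```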

hifi commented

The issue is valid, though: there shouldn't be an allow/deny list for third-party providers; the user should instead be able to configure forcePathStyle in all cases. The https endpoint trick seems to be the best workaround for now.
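Note that the config path already honors an explicit setting: in the override logic quoted above, a force-path-style that is set to true is never replaced by the URL-derived value. A hedged config sketch with placeholder names (the provider endpoint here is hypothetical):

```yaml
dbs:
  - path: /path/to/data.db
    replicas:
      - type: s3
        bucket: mybucket
        endpoint: https://s3.example-provider.com
        force-path-style: true  # explicit true survives the !forcePathStyle check
```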

On IDrive e2 S3-compatible storage, the following works for me (no https:// or s3:// prefix in the endpoint):

dbs:
 - path: /path/to/data
   replicas:
     - type:     s3
       region:   region_code
       endpoint: ABC.region_code.idrivee2-XX.com
       bucket:   bucket_name
       path:     /path/to/replica

litestream v0.3.13

Anyone googling for Cloudflare R2 storage for Litestream may run into the issue I kept having: Litestream seems to require a region to be set. R2's documentation covers this and effectively says to use us-east-1, which aliases to the auto region.

My configs that are working as of today, 2024-02-02:

# R2 
dbs:
  - path: /home/richsmith/workspace/<projectname>/storage/development.sqlite3
    replicas:
     - type: s3
       bucket: <projectname>-db-backup
       endpoint: https://XXXXXXXXXXXXXXXXXXXX.r2.cloudflarestorage.com
       region: us-east-1
       access-key-id: XXXXXXXXXXXXXXXXXXXXXXXX
       secret-access-key: XXXXXXXXXXXXXXXXXXXX

Hope this helps the lost googler!