cloudflare/utahfs

Implementing a non-Cloud Persistent/RemoteStorage?

prologic opened this issue · 3 comments

The README says:

Interchangeable Storage Providers. For storing data in the cloud, UtahFS uses Object Storage, which is cheap and commodified. Example providers include: AWS S3, Google Cloud Storage, Backblaze B2, and Wasabi.

I've had a look at the persistent sub-package, looking for either a "local persistent backend" or a way to implement one. Can you share some hints on how this would be done? Do I implement the Persistent or RemoteStorage interfaces, or both?

Are there plans to support, for example, Minio?
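
To make the first question concrete, the kind of backend I have in mind is roughly the following — just a sketch of the shape I'd expect, assuming the storage interface boils down to context-aware Get/Set/Delete over raw bytes (the actual method set in the persistent package may well differ):

package localstore

import (
	"context"
	"errors"
	"os"
	"path/filepath"
)

// ErrObjectNotFound is returned by Get when no object exists under a key.
var ErrObjectNotFound = errors.New("object not found")

// diskStore is a toy key-value backend that writes each object to its own
// file under a base directory. Keys are assumed to be flat names without
// path separators; the real UtahFS interface may require more than this
// (transactions, typed writes, etc.).
type diskStore struct {
	base string
}

// NewDiskStore creates the base directory and returns a store rooted there.
func NewDiskStore(base string) (*diskStore, error) {
	if err := os.MkdirAll(base, 0o755); err != nil {
		return nil, err
	}
	return &diskStore{base: base}, nil
}

// Get reads the object stored under key.
func (d *diskStore) Get(ctx context.Context, key string) ([]byte, error) {
	data, err := os.ReadFile(filepath.Join(d.base, key))
	if errors.Is(err, os.ErrNotExist) {
		return nil, ErrObjectNotFound
	}
	return data, err
}

// Set writes (or overwrites) the object stored under key.
func (d *diskStore) Set(ctx context.Context, key string, data []byte) error {
	return os.WriteFile(filepath.Join(d.base, key), data, 0o600)
}

// Delete removes the object stored under key; a missing key is a no-op.
func (d *diskStore) Delete(ctx context.Context, key string) error {
	err := os.Remove(filepath.Join(d.base, key))
	if errors.Is(err, os.ErrNotExist) {
		return nil
	}
	return err
}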

Thanks!

Yes, you can use a local database to store data. You'd configure it as:

storage-provider:
  disk-path: /path/to/db

keep-metadata: true

# password: password123
# archive: true
# oram: true

It's undocumented because it's really only meant for testing. You'd want to set up a RAID or something for reliability if you intend to use it seriously.

I haven't tried Minio, but it looks like it exposes an S3-compatible API, so it should work?

Update: I tested Minio and it is in fact compatible.

$ cat utahfs.yaml
storage-provider:
  s3-app-id: minioadmin
  s3-app-key: minioadmin
  s3-bucket: utahfs
  s3-url: http://127.0.0.1:9000
  s3-region: local
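
For anyone wanting a quick sanity check outside of UtahFS, a plain S3 client can be pointed at the same endpoint. A minimal sketch using aws-sdk-go, reusing the credentials and bucket from the config above (path-style addressing is usually required for Minio):

package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Point a standard S3 client at the local Minio endpoint from the
	// config above; path-style addressing avoids bucket-as-hostname DNS.
	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String("http://127.0.0.1:9000"),
		Region:           aws.String("local"),
		Credentials:      credentials.NewStaticCredentials("minioadmin", "minioadmin", ""),
		S3ForcePathStyle: aws.Bool(true),
	})
	if err != nil {
		panic(err)
	}
	svc := s3.New(sess)

	// Round-trip a small test object through the utahfs bucket.
	_, err = svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("utahfs"),
		Key:    aws.String("smoke-test"),
		Body:   bytes.NewReader([]byte("hello")),
	})
	if err != nil {
		panic(err)
	}
	out, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String("utahfs"),
		Key:    aws.String("smoke-test"),
	})
	if err != nil {
		panic(err)
	}
	body, err := io.ReadAll(out.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // prints "hello"
}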

Nice! Thank you for this. Could we update the docs to mention this? I actually have a ZFS file system with plenty of disk parity/redundancy, so I can run things there. But since Minio works nicely too, that's also a good option.