Design: Subscribe to bucket changes
tjholm opened this issue · 6 comments
Design a way to subscribe to changes in buckets as part of application code.
Documenting previous proposal:
```typescript
import { bucket } from '@nitric/sdk';

// create a bucket called files
const files = bucket('files');

// using pattern of "<event_type>:<file_pattern>"
// subscribe to write events for all files in the bucket
files.on('write:*', async (ctx) => {
  // do something with the event
});
```
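For illustration, the `"<event_type>:<file_pattern>"` subscription string could be parsed by splitting on the first `:`. This is a hypothetical sketch, not part of `@nitric/sdk`; `parseSubscription` and the `Subscription` shape are assumptions:

```typescript
// Hypothetical helper (not part of @nitric/sdk): split a subscription
// string like "write:images/*" into its event type and file pattern.
interface Subscription {
  eventType: 'read' | 'write' | 'delete';
  pattern: string;
}

function parseSubscription(spec: string): Subscription {
  const idx = spec.indexOf(':');
  if (idx < 0) {
    throw new Error(`invalid subscription "${spec}", expected "<event_type>:<file_pattern>"`);
  }
  const eventType = spec.slice(0, idx);
  if (eventType !== 'read' && eventType !== 'write' && eventType !== 'delete') {
    throw new Error(`unsupported event type "${eventType}"`);
  }
  // everything after the first ':' is the pattern, so patterns may themselves contain ':'
  return { eventType, pattern: spec.slice(idx + 1) };
}
```

e.g. `parseSubscription('write:*')` yields `{ eventType: 'write', pattern: '*' }`.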
qq's:
- Will we support all access permissions ('read', 'write', 'delete')?
- What format will the file pattern take?
@raksiv good questions. For permissions, I think we should just impute them from the subscription: regardless of whether a function actually has read/write access to a bucket, it can still be notified about things that have happened in it (even if it can't access the contents).
We need to do some research on patterns based on how we plan to support this on each platform. Initially I was thinking a basic glob(-ish) pattern, but we may be better off with a simple prefix match.
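To compare the two options, a minimal matcher for each might look like this. The semantics are assumptions for discussion: prefix matching is a plain `startsWith`, and the glob supports only `*` wildcards:

```typescript
// Sketch comparing the two candidate pattern semantics.

// Option 1: simple prefix match, e.g. "images/" matches "images/cat.png".
function matchesPrefix(key: string, prefix: string): boolean {
  return key.startsWith(prefix);
}

// Option 2: basic glob where '*' matches any run of characters,
// e.g. "images/*.png" matches "images/cat.png".
function matchesGlob(key: string, glob: string): boolean {
  // escape regex metacharacters (except '*'), then turn '*' into '.*'
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  const re = new RegExp(`^${escaped.replace(/\*/g, '.*')}$`);
  return re.test(key);
}
```

Prefix matching has the advantage of mapping directly onto native filters (S3's `filterPrefix`, for example), whereas globs would generally have to be evaluated in the nitric runtime after delivery.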
Ways we can support this functionality:
AWS:
- S3 Bucket notifications: https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketnotification/#aws-s3-bucketnotification
GCP:
- Storage Notifications: https://www.pulumi.com/registry/packages/gcp/api-docs/storage/notification
- Events Arcs: https://www.pulumi.com/registry/packages/gcp/api-docs/eventarc/trigger/
- Pubsub: https://cloud.google.com/storage/docs/pubsub-notifications
Azure:
- Event Grid (setting the appropriate scope)
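Across the providers above, nitric's event types would need to map onto each platform's native event names. A rough mapping, based on the documented event names for each storage service (exact choices still open). Notably, as far as I'm aware none of these services emit notifications for object *reads*, which is relevant to the 'read' question above:

```typescript
// Sketch: mapping nitric bucket event types to provider-native event names.
type BucketEventType = 'write' | 'delete';

const providerEvents: Record<string, Record<BucketEventType, string>> = {
  aws: {
    // S3 bucket notification event types
    write: 's3:ObjectCreated:*',
    delete: 's3:ObjectRemoved:*',
  },
  gcp: {
    // Cloud Storage Pub/Sub notification event types
    write: 'OBJECT_FINALIZE',
    delete: 'OBJECT_DELETE',
  },
  azure: {
    // Event Grid blob storage event types
    write: 'Microsoft.Storage.BlobCreated',
    delete: 'Microsoft.Storage.BlobDeleted',
  },
};
```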
Three things that need to be designed for this:
- SDK interfaces (core developer experience).
- Runtime interfaces (how we register workers and route events at runtime).
- Deployment interfaces (how we express these in the nitric cloud spec, for providers to deploy).
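For the runtime interface, an in-process sketch of worker registration and event routing might look like the following. All names here (`BucketEventRouter`, `BucketEvent`, etc.) are hypothetical, and prefix-match semantics are assumed:

```typescript
// Hypothetical runtime router: workers register (eventType, pattern) pairs,
// and incoming bucket events are fanned out to all matching handlers.
type EventType = 'read' | 'write' | 'delete';

interface BucketEvent {
  type: EventType;
  key: string; // object key within the bucket
}

type Handler = (event: BucketEvent) => Promise<void> | void;

class BucketEventRouter {
  private workers: { type: EventType; prefix: string; handler: Handler }[] = [];

  // register a worker, e.g. ('write', 'images/') -- prefix semantics assumed
  register(type: EventType, prefix: string, handler: Handler): void {
    this.workers.push({ type, prefix, handler });
  }

  // route an incoming event to every matching worker,
  // returning how many workers received it
  async route(event: BucketEvent): Promise<number> {
    const matches = this.workers.filter(
      (w) => w.type === event.type && event.key.startsWith(w.prefix),
    );
    await Promise.all(matches.map((w) => w.handler(event)));
    return matches.length;
  }
}
```

The deployment interface would then only need to carry the `(eventType, pattern)` pairs per bucket in the cloud spec, so each provider can translate them into the native resources listed above.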