traceloop/hub

Feature Request: Add AWS Bedrock Provider Support

Opened this issue · 11 comments

Overview

Add support for AWS Bedrock as a new provider in the Hub. This will allow users to route their LLM requests to AWS Bedrock models like Claude and Titan through our unified API interface.

Background

Currently, Hub supports OpenAI, Anthropic, Azure OpenAI and other providers (see src/providers/mod.rs). Adding AWS Bedrock support will expand the available model options and give users more flexibility in choosing their LLM providers, especially those already using AWS infrastructure.

Technical Details

New Provider Implementation

Create a new provider implementation in src/providers/bedrock/ following the existing provider pattern. A reference implementation can be found here.
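As a rough sketch (assuming the layout of the existing providers carries over unchanged), the top-level module would mostly just declare and re-export the submodules:

    // src/providers/bedrock/mod.rs — sketch only; mirror whatever the existing
    // providers (e.g. the Anthropic one) actually do.
    pub mod models;   // Bedrock-specific request/response types
    pub mod provider; // BedrockProvider implementing the Provider trait

    pub use provider::BedrockProvider;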

Required Components

  1. Create a new module structure (supporting streaming and non-streaming requests, tools/function calling, and multi-modality)

    • src/providers/bedrock/mod.rs
    • src/providers/bedrock/provider.rs
    • src/providers/bedrock/models.rs
  2. Implement the Provider trait with the following methods (a hedged sketch follows this list):

    • chat_completions
    • completions
    • embeddings
  3. Add necessary request/response models for Bedrock API in models.rs

  4. Update the provider registry to include Bedrock.
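For illustration, a skeleton of the trait implementation could look roughly like the following. The trait name, method signatures, error type, and import paths are assumptions inferred from the issue text; the real ones should be copied from src/providers/ and an existing provider such as Anthropic's.

    // src/providers/bedrock/provider.rs — hedged sketch; align types and
    // signatures with the repo's actual Provider trait before using.
    use async_trait::async_trait;

    use crate::config::models::ModelConfig;                                    // path assumed
    use crate::models::chat::{ChatCompletionRequest, ChatCompletionResponse};  // path assumed
    use crate::models::completion::{CompletionRequest, CompletionResponse};    // path assumed
    use crate::models::embeddings::{EmbeddingsRequest, EmbeddingsResponse};    // path assumed
    use crate::providers::provider::Provider;                                  // path assumed

    pub struct BedrockProvider {
        pub region: String,
        pub access_key_id: Option<String>,
        pub secret_access_key: Option<String>,
    }

    #[async_trait]
    impl Provider for BedrockProvider {
        async fn chat_completions(
            &self,
            payload: ChatCompletionRequest,
            model_config: &ModelConfig,
        ) -> Result<ChatCompletionResponse, axum::http::StatusCode> { // error type assumed
            // 1. Translate the OpenAI-style request into the payload of the target
            //    Bedrock model family (Claude, Titan, ...).
            // 2. Call the Bedrock runtime (InvokeModel / Converse).
            // 3. Map the response back, converting AWS errors to status codes.
            todo!()
        }

        async fn completions(
            &self,
            payload: CompletionRequest,
            model_config: &ModelConfig,
        ) -> Result<CompletionResponse, axum::http::StatusCode> {
            todo!()
        }

        async fn embeddings(
            &self,
            payload: EmbeddingsRequest,
            model_config: &ModelConfig,
        ) -> Result<EmbeddingsResponse, axum::http::StatusCode> {
            todo!()
        }
    }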

Configuration

  1. Update the configuration structure to support Bedrock, e.g. (a matching Rust sketch follows this list):

     providers:
       - key: bedrock
         type: bedrock
         region: "<your-aws-region>"
         access_key_id: "<your-aws-access-key-id>" # Optional if using IAM roles
         secret_access_key: "<your-aws-secret-access-key>" # Optional if using IAM roles

     models:
       - key: claude-v2
         type: anthropic.claude-v2
         provider: bedrock
       - key: titan
         type: amazon.titan-text
         provider: bedrock

  2. Update the example configuration to include Bedrock settings.
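A minimal sketch of the matching config fields, assuming the provider entries are serde-deserialized structs like the existing ones (the names here are illustrative):

    use serde::Deserialize;

    // Extra fields the `bedrock` provider entry would carry; how this plugs
    // into the existing provider-config type is up to the implementation.
    #[derive(Debug, Clone, Deserialize)]
    pub struct BedrockProviderConfig {
        pub region: String,
        // Both keys may be omitted when running with an IAM role.
        pub access_key_id: Option<String>,
        pub secret_access_key: Option<String>,
    }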

Requirements

  • Implement AWS authentication (see the sketch after this list) using either:
    • IAM role-based authentication
    • Access key/secret based authentication
  • Support all major Bedrock models:
    • Anthropic Claude models
    • Amazon Titan models
    • AI21 Jurassic models
    • Stability.ai models
  • Handle proper error mapping from AWS API responses to our status codes
  • Implement proper request/response mapping for all endpoints
  • Add comprehensive tests for the new provider
  • Update documentation to include Bedrock setup instructions
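For the authentication requirement, a hedged sketch using the official AWS SDK for Rust (aws-config, aws-credential-types, aws-sdk-bedrockruntime) could look like the following. It builds on the illustrative BedrockProviderConfig above and falls back to the default credential chain (IAM roles, environment variables, shared profiles) when no static keys are configured:

    use aws_config::{BehaviorVersion, Region};
    use aws_credential_types::Credentials;

    // Build a Bedrock runtime client from the (illustrative) provider config.
    async fn bedrock_client(cfg: &BedrockProviderConfig) -> aws_sdk_bedrockruntime::Client {
        let mut loader = aws_config::defaults(BehaviorVersion::latest())
            .region(Region::new(cfg.region.clone()));

        // Static key/secret authentication when configured; otherwise the default
        // chain resolves IAM roles, env vars, shared config files, etc.
        if let (Some(key), Some(secret)) = (&cfg.access_key_id, &cfg.secret_access_key) {
            loader = loader.credentials_provider(Credentials::new(
                key.clone(),
                secret.clone(),
                None,         // session token
                None,         // expiry
                "hub-config", // provider name label (illustrative)
            ));
        }

        let sdk_config = loader.load().await;
        aws_sdk_bedrockruntime::Client::new(&sdk_config)
    }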

Definition of Done

  • Implementation of Bedrock provider
  • Unit tests with good coverage
  • Integration tests with actual API calls (using test credentials)
  • Documentation updates
  • Example configuration
  • Successful chat completion, completion, and embedding requests
  • Error handling and proper status code mapping
  • PR review and approval

Additional Notes

  • Feel free to ask questions in the issue comments
  • We recommend creating a draft PR early to get feedback during implementation
  • Make sure to follow the existing code style and patterns
  • Don't commit any actual AWS credentials

/bounty $300

💎 $300 bounty • traceloop

Steps to solve:

  1. Start working: Comment /attempt #20 with your implementation plan
  2. Submit work: Create a pull request including /claim #20 in the PR body to claim the bounty
  3. Receive payment: 100% of the bounty is received 2-5 days post-reward. Make sure you are eligible for payouts

Thank you for contributing to traceloop/hub!


Attempt Started (GMT+0) Solution
🔴 @akhilender-bongirwar Dec 23, 2024, 6:13:21 PM WIP
🟢 @Gmin2 Dec 24, 2024, 10:42:34 AM #23
🟢 @aayushdhiman01 Dec 24, 2024, 11:32:07 AM WIP
🔴 @5war00p Dec 25, 2024, 7:31:22 AM WIP
🟢 @detunjiSamuel Dec 30, 2024, 8:07:59 PM WIP
Gmin2 commented

/attempt #20
Can I get assigned, @nirga?


💡 @Gmin2 submitted a pull request that claims the bounty. You can visit your bounty board to reward.

detunjiSamuel commented

Hey @galkleinman, I'd like to extend the EmbeddingsRequest struct in src/models/embeddings.rs to support Amazon Titan's embedding format. The current structure doesn't account for Titan-specific fields like dimensions and normalize flags. Will this be fine?

nirga commented

sounds good @detunjiSamuel

detunjiSamuel commented

Hey @nirga @galkleinman, just following up on my previous question. I just realised that including normalize and dimensions might defeat the purpose of the Hub project, since OpenAI's format doesn't include those fields in its specification.
Would it make more sense to leave those fields with default values instead, or do you believe it is best to carry on with including the new fields?
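Concretely, something along these lines is what I have in mind (existing fields shown only roughly; the serde attributes keep OpenAI-style requests unchanged):

    // src/models/embeddings.rs — sketch
    use serde::{Deserialize, Serialize};

    #[derive(Debug, Clone, Serialize, Deserialize)]
    pub struct EmbeddingsRequest {
        pub model: String,
        pub input: EmbeddingsInput, // whatever the existing input type is today
        // Titan-specific extras, defaulting to None so the OpenAI format still round-trips
        #[serde(default, skip_serializing_if = "Option::is_none")]
        pub dimensions: Option<u32>,
        #[serde(default, skip_serializing_if = "Option::is_none")]
        pub normalize: Option<bool>,
    }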

nirga commented

hmmm good point @detunjiSamuel! I think for simplicity I would just add them with default values. Can you open a separate issue for that so we can address it in the future?