Call-02 Agenda 2024-09-12
Intro
Status Checks From Last Call
- Early Communication for Releases
- SDK Release
- Error handling in selection (Gateway getting stuck sending jobs to Orchestrators)
Open Discussion
- Coordination of Explorer work
- Multiple PRs (across multiple repos) to review and merge
- Coordinating DevOps deployments and troubleshooting
- Need to get API Key for our AI Job Tester to post to Leaderboard Serverless API
- On-chain ticket distinction using the auxData field
- Live/real-time AI experiments and iterating on go-livepeer design to support these workflows
- Continue discussion on Gateway, Selection, and Orch Roles.
Wrapup & Goal-Setting
Shareout Template:
Team
Priorities
Outline your top 3 development priorities (1-3 sentences max)
Blockers
Are there any external factors preventing you from achieving your top 3 priorities? If so, please outline them (1-3 sentences max)
Discussion topics
Got a topic you’d like to cover in open discussion?
We may not be able to cover every single topic that’s raised, but we will make sure that anything we miss is added as an agenda item for the next time.
Livepeer.Cloud SPE
Priorities
Outline your top 3 development priorities (1-3 sentences max)
- Livepeer.Cloud SPE Proposal (AI Metrics & Visibility)
- Coding / Implementation - This phase is underway
- Testing and Validation - This phase is underway
- Deployment & DevOps - This phase is underway
- Livepeer.Cloud SPE - AI & Transcoding Gateway Support / Maintenance
- Updated the Gateways to 0.7.8.ai2 and AI Runner 0.2.0 (Done)
- Continue to support AI SPE releases and troubleshooting
- Planning For Upcoming Deployment
- Early October Release: Staging
- Late October Release: Production
Blockers
Are there any external factors preventing you from achieving your top 3 priorities? If so, please outline them (1-3 sentences max)
- Gateway Selection Algorithm
- Not a hard blocker, but consensus is needed, and both a short-term and a long-term fix will be required.
- Livepeer.Cloud SPE is not "blocked," but maintaining a custom fork for this is not ideal.
Discussion topics
Got a topic you’d like to cover in open discussion?
- Process to get new proposal changes "Merged" and in production
- Multiple PRs (across multiple repos) to review and merge
- Coordinating DevOps deployments and troubleshooting
- Ensure all data is backed up (Postgres DB)
- Deploying Leaderboard API to Vercel
- Verifying Database Migration
- Deploy Explorer Release
- Need to get API Key for our AI Job Tester to post to Leaderboard Serverless API
Livepeer Studio
Priorities
- Running some experiments with real-time live AI inference workflows
- Releasing AI SDK and Studio SDK with AI endpoints
Blockers
- Merging livepeer/ai-worker#191 for AI SDK setup
Discussion topics
- Live/real-time AI experiments and iterating on go-livepeer design to support these workflows
Team
Priorities
Outline your top 3 development priorities (1-3 sentences max)
- Optimizing base capacity deployment to handle supermodel load and coordinating orchestrators.
- Merging the AI remote worker integration.
- Finalizing the merges for Loras, suspense fixes, and Lipsync/Text-to-audio pipelines.
Blockers
Are there any external factors preventing you from achieving your top 3 priorities? If so, please outline them (1-3 sentences max)
- Time constraints, as always.
Discussion Topics
Got a topic you’d like to cover in open discussion?
- On-chain ticket distinction using the auxData ticket field, which is currently 64 bytes long.
AuxData info
We’ve previously discussed this in this Notion doc. The minimum data we need on-chain is 1 bit (AI vs. transcoding). To future-proof this, we could add more bytes: 4 bytes, for example, would allow for ~4.3 billion distinct values, which may be useful if we ever need to communicate capabilities. If we anticipate adding more complex data, such as text, a longer byte length could make sense. We should also keep in mind that MLOAD reads a full 32-byte word for a flat 3 gas, so we might as well use 32 bytes.
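To make the "1 bit now, more bytes later" idea concrete, here is a minimal sketch of how a job-type flag could be packed into a 32-byte auxData word. The layout (first byte = job type, remainder reserved) is hypothetical, chosen only for illustration; it is not the actual protocol encoding.

```python
# Hypothetical auxData layout: byte 0 holds the job-type flag,
# remaining bytes are reserved, padded to a full 32-byte EVM word
# (MLOAD operates on 32-byte words, so a full word costs the same to read).
JOB_TYPE_TRANSCODING = 0
JOB_TYPE_AI = 1

def encode_aux_data(job_type: int, width: int = 32) -> bytes:
    """Pack the job-type flag into the first byte; the rest stays zeroed."""
    return bytes([job_type]) + b"\x00" * (width - 1)

def decode_job_type(aux_data: bytes) -> int:
    """Recover the job-type flag from the first byte of auxData."""
    return aux_data[0]

aux = encode_aux_data(JOB_TYPE_AI)
assert len(aux) == 32
assert decode_job_type(aux) == JOB_TYPE_AI
```

Keeping the unused bytes zeroed also keeps the calldata cost low, since zero bytes are priced at 4 gas instead of 16.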
I’m curious to hear your thoughts. Although the protocol contract changes are minimal and would only take minutes, they would require a LIP submission. As we move into the public phase, I believe this feature will be essential, given the challenge of tracking Gateways based solely on known ETH addresses (see this example).
From a rough gas calculation, I do not think it will cost much to add 1-32 bytes, since the overhead is mostly calldata plus a single MLOAD (see the ETH gas table). Below is a quick estimate of the individual gas components (calldata, MLOAD, and processing) and the resulting total range for non-zero and zero bytes.
Total Estimated Gas Overhead
Using the ETH operations gas table, we get the following estimate for adding 4 bytes:
- Calldata cost: ~64 gas (4 non-zero bytes at 16 gas each).
- Memory load (MLOAD): 3 gas.
- Additional processing: ~5-10 gas.
The total estimated additional gas cost for adding 4 bytes to the contract execution, covering calldata and memory (MLOAD), is roughly 72-77 gas (~70-80 gas) for non-zero data. If the data includes zero bytes, the calldata cost drops to 4 gas per byte, bringing the total closer to the 20-30 gas range. For 32 bytes it would be something like:
- Calldata cost: ~512 gas (32 non-zero bytes at 16 gas each).
- Memory load (MLOAD): 3 gas.
- Additional processing: ~5-10 gas.
This rough estimate assumes minimal logic is performed on the extra data.
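The arithmetic above can be captured in a small calculator, using the standard EVM pricing of 16 gas per non-zero calldata byte, 4 gas per zero byte, and 3 gas per MLOAD; the flat `processing` term is an assumption standing in for the "~5-10 gas" of extra logic.

```python
# Rough gas-overhead estimate for appending extra bytes to calldata.
# EVM gas table: 16 gas per non-zero calldata byte, 4 gas per zero byte,
# 3 gas per MLOAD; `processing` approximates the minimal extra logic.
CALLDATA_NONZERO = 16
CALLDATA_ZERO = 4
MLOAD = 3

def extra_gas(nonzero_bytes: int, zero_bytes: int = 0,
              processing: int = 10) -> int:
    """Estimated additional gas for carrying extra auxData bytes."""
    calldata = nonzero_bytes * CALLDATA_NONZERO + zero_bytes * CALLDATA_ZERO
    return calldata + MLOAD + processing

# 4 non-zero bytes: 64 + 3 + 10 = 77 gas (upper end of the ~72-77 range)
assert extra_gas(4) == 77
# 32 non-zero bytes: 512 + 3 + 10 = 525 gas
assert extra_gas(32) == 525
# 4 zero bytes: 16 + 3 + 10 = 29 gas (the "20-30 gas" case)
assert extra_gas(0, zero_bytes=4) == 29
```

This confirms the estimate's main point: even the worst case (32 fully non-zero bytes) adds only a few hundred gas, which is negligible against typical ticket-redemption costs.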