aleph-im/pyaleph

Feature: Endpoint to fetch item_hashes only


Problem

For some use cases like caching and checking for new messages, fetching the whole message is too much.

Solution

Provide an endpoint for fetching messages with the exact same query parameters as the GET /messages endpoint. This would allow filters to be applied for any use case, especially fetching the latest item_hashes up to some timestamp to check for new messages.
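
To make the use case concrete, here is a minimal sketch of the caching/polling pattern this endpoint would serve: compare the latest item_hashes against a local cache to detect new messages. The helper name and the response shape (a "messages" list of objects with an "item_hash" field, as returned by GET /messages) are assumptions for illustration.

```python
def new_item_hashes(response_json, known_hashes):
    """Return item hashes from a /messages-style response not yet in the cache.

    Today a client has to download full messages just to do this comparison;
    the proposed endpoint would let it fetch only the hashes.
    """
    hashes = [msg["item_hash"] for msg in response_json["messages"]]
    return [h for h in hashes if h not in known_hashes]
```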

Two typical solutions here:

- a GraphQL API, which lets clients request only the fields they need;
- an HTTP header (e.g. X-Fields) that restricts which fields the existing endpoints return.

In either case, I would avoid adding an endpoint dedicated to this specific problem.

A GraphQL API would definitely be a smooth solution. But one should also consider that many blockchain RPCs (Ethereum, Solana) have methods entirely dedicated to fetching signatures/hashes with high performance. A dedicated method would also let us apply any optimizations to this endpoint without impacting the implementation of the original endpoints.

At the moment there's no such requirement in terms of performance/load. I think GraphQL would be great for frontend usage, but it can be a bit much to put in place if you're in a hurry. The header solution works fine: it can easily have a default implementation for all endpoints (implemented as an aiohttp middleware) and a custom implementation for /messages.json that filters fields in the DB query.
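
A sketch of that custom DB-level implementation, using stdlib sqlite3 for brevity; the table and column names are illustrative, not pyaleph's actual schema:

```python
import sqlite3


def query_messages(conn, fields=None):
    """Select only the requested columns instead of serializing full messages.

    A whitelist of known columns guards against SQL injection via X-Fields.
    """
    allowed = ["item_hash", "type", "time"]  # illustrative column whitelist
    cols = [f for f in (fields or allowed) if f in allowed] or allowed
    rows = conn.execute(f"SELECT {', '.join(cols)} FROM messages ORDER BY time DESC")
    return [dict(zip(cols, row)) for row in rows]
```

With `fields=["item_hash"]` the query never touches the message content, which is the optimization the header solution enables for /messages.json.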

Example of the middleware:

import json

from aiohttp.web import middleware

@middleware
async def x_fields_middleware(request, handler):
    resp = await handler(request)
    x_fields = request.headers.get("X-Fields")
    if x_fields and resp.content_type == "application/json":
        fields = x_fields.split(",")
        # Note that this is a dumb implementation: ideally we'd find a way to avoid
        # serializing in the handler and then deserializing the JSON response here.
        response_dict = json.loads(resp.body)
        response_dict = {k: v for k, v in response_dict.items() if k in fields}
        resp.body = json.dumps(response_dict).encode("utf-8")

    return resp
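
To illustrate, this is the filtering step the middleware performs for a request sent with "X-Fields: item_hash,time" (a sketch with made-up field values; note it only filters top-level keys, so a /messages-style response with a nested "messages" list still needs the custom handler mentioned above):

```python
import json

# Simulate the middleware's filtering on a flat JSON response body.
x_fields = "item_hash,time"
body = json.dumps({"item_hash": "abc", "time": 1.0, "content": {"big": "payload"}})

fields = x_fields.split(",")
filtered = {k: v for k, v in json.loads(body).items() if k in fields}
# filtered == {"item_hash": "abc", "time": 1.0}
```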

I still think GraphQL is worth a shot, to be discussed with @BjrInt and @hoh.