feat(blob): BlobModule v1 #3390
Comments
GetAll should not have TxHash, as it retrieves per namespace, not per transaction. I can see an argument for Get taking it, but what worries me there is that we'd be moving up a layer of abstraction, away from blobs and into transactions. Not going to block on it though |
Yeah, already removed it |
Let's also add Subscription methods to the whole namespace |
Another point to consider, which has been brought up several times by @renaynay and @walldiss and which I agree with, is to rework GetAll to use a single namespace instead of multiple ones. Managing a single namespace per query is much easier, and error handling becomes clearer. Currently, it is almost impossible to handle errors that occur as a result of multiple request failures.
// GetAll retrieves all the blobs for the given namespace at the given height.
GetAll(ctx context.Context, height uint64, ns namespace.ID) ([]Blob, error) |
I would like to clarify that propagating retrieval results, indicating which namespaces were retrieved successfully and which resulted in errors, is very messy with the existing multi-namespace API. Handling such errors is even worse on the client side, so we should prioritize reworking the API to work with only a single namespace. |
I am not sure where the v1 versioning will be applied, but note that I'd like to reserve v1 for semantically versioning the next iteration of the blob API, which will also overhaul the current implementation to use gRPC instead of JSON-RPC.
IMO, it could also be important to give users some control over retries. If we are not sure what they want, we should let them fully control the retry logic client-side. |
IIRC, we discussed gRPC coming on top of the internal API as a separate layer. This versioning applies to the internal API, while the gRPC layer can have its own independent versioning. |
If Submit guarantees tx inclusion with internal retrying, no one would want or need to handle it themselves. Btw, they can always cancel the Submit API call and start a new retry, even in the presence of internal retries, but that's suboptimal compared to internal ones. |
Implicit retries do not communicate the state of retries to the user, who might want to do something else after N retries have failed, like trying a fallback while continuing to retry, or introducing backoff. I think implicit retries should be optional and well defined. They also need to be documented, describing the retryable conditions. We can start with a minimal set of retryable conditions and add more if needed. |
I don't see an example of such an alternative, and it would be great if someone could provide one. The point is to make a reliable API method that handles mempool congestion and account sequence issues internally without leaking those details to the client. It's hard for me to imagine a world where every node user has to design their own way of dealing with our mempool implementation details.
We are already doing that. The current released version of the node has no retries, and that's causing issues and introducing the need to add them. With the Signer being integrated, we get those retries for free. |
Introducing BlobModule v1, which will support new features:
With the upcoming features like async blob submission, changes need to be made to the Blob Module API. Currently, the Blob Module relies on height to perform read operations, which is not feasible in the case of async submissions. Therefore, blob.Get should utilize txHash instead of height to identify the blob. Additionally, blob.Get will allow users to subscribe to a particular blob by waiting until the transaction is included in the block.
Some open questions are: