Lossy Channels #400
Comments
What do you mean by |
Apologies for not being clear. By "lossy channel", I mean one that discards old data when the buffer is full. This is useful for things like robotics where new sensor information is more important than old information and likely makes the old information completely worthless. The |
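The semantics described above can be sketched with a plain ring buffer that evicts the oldest entry on overflow. This is a minimal single-threaded illustration of the desired behavior, not crossbeam's API; `LossyBuffer` is a hypothetical name:

```rust
use std::collections::VecDeque;

/// Hypothetical sketch of "lossy" semantics: when full, the oldest
/// item is dropped to make room for the newest one.
struct LossyBuffer<T> {
    cap: usize,
    buf: VecDeque<T>,
}

impl<T> LossyBuffer<T> {
    fn new(cap: usize) -> Self {
        Self { cap, buf: VecDeque::with_capacity(cap) }
    }

    fn push(&mut self, item: T) {
        if self.buf.len() == self.cap {
            self.buf.pop_front(); // discard the oldest reading
        }
        self.buf.push_back(item);
    }

    fn pop(&mut self) -> Option<T> {
        self.buf.pop_front()
    }
}
```

A real channel would additionally need the usual synchronization and blocking-receive machinery around this core.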
I see. I'm not sure you really need a full channel for such a thing ‒ do you need a "buffer" for several updates, then? If I were doing something like sensor data, I'd probably have some kind of atomic storage for a single (newest) snapshot of each sensor, plus a mechanism to wake up the thread that consumes the data. The wake-up could be a single-element bounded channel with element = |
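That suggestion could be sketched as follows. A `Mutex<Option<T>>` stands in for the atomic snapshot storage, and std's `sync_channel(1)` stands in for a crossbeam `bounded(1)` wake-up channel, just to keep the sketch self-contained:

```rust
use std::sync::mpsc::{sync_channel, SyncSender};
use std::sync::Mutex;

/// Store a reading in the shared slot and (non-blockingly) wake the consumer.
fn publish(slot: &Mutex<Option<u64>>, wake: &SyncSender<()>, reading: u64) {
    // Overwrite whatever is there: only the newest snapshot matters.
    *slot.lock().unwrap() = Some(reading);
    // Err(Full) just means a wake-up is already pending, which is fine.
    let _ = wake.try_send(());
}

fn main() {
    let slot = Mutex::new(None);
    let (wake_tx, wake_rx) = sync_channel::<()>(1);

    // Two updates arrive before the consumer runs.
    publish(&slot, &wake_tx, 1);
    publish(&slot, &wake_tx, 2);

    // Consumer: one wake-up, and it observes only the newest value.
    wake_rx.recv().unwrap();
    assert_eq!(slot.lock().unwrap().take(), Some(2));
}
```

Note this drops all intermediate values; it only fits when the newest reading makes older ones worthless, as in the sensor example above.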
> You may want the last

A sizeable chunk of the robotics community, and probably most of robotics in academia, heavily utilizes lossy channels like this (see ROS). The |
@neachdainn In case the channel is full and you want to "overwrite" the oldest element in it, what if you called I wonder if that would work for you? |
That is what I'm doing now, but it requires a racy loop. |
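For reference, the workaround in question looks roughly like this, sketched here with std's `sync_channel` (crossbeam's `bounded` channel exposes the same `try_send`/`try_recv` shape). The loop is racy in the sense that another producer can refill the freed slot between the `try_recv` and the retried `try_send`:

```rust
use std::sync::mpsc::{Receiver, SyncSender, TrySendError};

/// Keep trying to send, discarding the oldest queued message on each
/// failure. Needing access to both ends of the channel on the producer
/// side is the awkward part of this workaround.
fn lossy_send<T>(tx: &SyncSender<T>, rx: &Receiver<T>, mut msg: T) {
    loop {
        match tx.try_send(msg) {
            Ok(()) => return,
            Err(TrySendError::Full(returned)) => {
                msg = returned;
                // Drop the oldest item; racy if other producers exist.
                let _ = rx.try_recv();
            }
            Err(TrySendError::Disconnected(_)) => return,
        }
    }
}
```

With a capacity-2 channel, sending 1, 2, 3 this way leaves 2 and 3 queued: the oldest message is sacrificed for the newest.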
I can see @neachdainn's point here: a common use case of channels requires separation of the sender and the receiver, where the sender is owned by the data producer and the receiver by the consumer. In this pattern, the producer usually doesn't have access to the receiver to pop one or more messages when the channel is full (short of making a receiver clone used solely for full-inbox popping, which seems wasteful), and it is also hard for the consumer to decide whether it should discard some "old" messages to make room for "newer" ones, as it can't foresee whether more messages are coming. I think a naive implementation of the |
Just to be clear: I am not asking for someone to implement this for me. I am willing to spend time implementing this, I just want to know if a PR for something like this would be (potentially) accepted or if someone is already working on a similar channel flavor. |
I don't think adding a whole new channel flavor is worth it. At best, we might introduce a new method, perhaps named

But even so, I find this a bit of a niche use case, and the problem is not difficult to work around manually. I'm wary of adding small helper methods like these: the channel interface is already complex enough, and there's always a bunch of new methods we could add. |
That would definitely cover my use case (assuming "last element" means the oldest element), and a new flavor is probably overkill.
My example might be niche, but I think the concept of a channel that drops the oldest item is not uncommon. And while it isn't difficult to work around manually, it is very inelegant. At the very least, the problem of figuring out when to break out of the
That is fair. I really like how Crossbeam and |
FWIW, I added an asynchronous variant of such a channel type to https://github.com/Matthias247/futures-intrusive, which I called StateBroadcastChannel. The motivation is more or less what @neachdainn was asking for: a component generating state updates which must be distributed to potentially more than one consumer. I had use cases for that before when working on embedded (soft) realtime systems. In my case I did not add any additional buffering to the channel. I guess if that were required, it would be more of a buffer per consumer instead of one inside the channel. |
Perhaps a

You could try to use a technique similar to graphics programming, where you would swap buffers; see https://computergraphics.stackexchange.com/questions/4550/how-double-buffers-works-in-opengl For example, let's say the consumer always wants a batch of
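A minimal single-threaded sketch of that swap idea follows. In a real program the "front" buffer would live behind an `Arc` shared between the producer and consumer threads; this only shows the `mem::swap` mechanics:

```rust
use std::mem;
use std::sync::Mutex;

fn main() {
    // "Front" buffer the consumer reads from (shared via
    // Arc<Mutex<Vec<_>>> in a multi-threaded program).
    let front = Mutex::new(Vec::new());

    // Producer fills a private "back" buffer...
    let mut back = vec![1, 2, 3];

    // ...and publishes the whole batch with a cheap pointer swap.
    mem::swap(&mut *front.lock().unwrap(), &mut back);

    // `back` now holds whatever the consumer left behind (empty here)
    // and can be reused for the next batch without reallocating.
    assert!(back.is_empty());

    // Consumer drains the front buffer.
    let batch = mem::take(&mut *front.lock().unwrap());
    assert_eq!(batch, vec![1, 2, 3]);
}
```

The swap is O(1) regardless of batch size, which is the same property that makes double buffering attractive in graphics.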
Alternatively, you could maybe do something from the producer thread that would combine buffering updates locally for the "next batch", while doing a
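That combination, buffering updates locally and only handing a batch off when the channel has room, might look like this sketch (std's `sync_channel` standing in for a crossbeam bounded channel):

```rust
use std::mem;
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Channel of whole batches, capacity 1.
    let (tx, rx) = sync_channel::<Vec<u32>>(1);
    let mut pending = Vec::new();

    for update in 0..5u32 {
        pending.push(update);
        // Try to hand off everything buffered so far.
        match tx.try_send(mem::take(&mut pending)) {
            Ok(()) => {}                                       // batch delivered
            Err(TrySendError::Full(batch)) => pending = batch, // keep buffering locally
            Err(TrySendError::Disconnected(_)) => break,
        }
    }

    // The consumer sees the first batch; later updates piled up locally
    // and would go out as one batch on the next successful try_send.
    assert_eq!(rx.try_recv().unwrap(), vec![0]);
    assert_eq!(pending, vec![1, 2, 3, 4]);
}
```

Unlike the lossy approaches above, nothing is dropped here; slow consumers just receive larger batches.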
You could still use a channel and |
What I have now works well enough for me - I opened the issue originally because I thought it would be a feature that people (including myself) would appreciate having. I've since started digging through the code and I've come to two conclusions:
Based on those conclusions, I would be fine if this issue is closed. |
We are still interested in this feature. |
This should be able to be implemented just by porting #789 to channel. |
Is there any current work for adding lossy bounded channels? Or any strong reason to not add them? If not, I might open a PR in the next few weeks with an implementation.