
Delay trait using core::time::Duration #523

Open
madsmtm opened this issue Nov 21, 2023 · 4 comments

Comments


madsmtm commented Nov 21, 2023

Is there a reason that the embedded_hal::delay::DelayUs API is not defined as follows:

```rust
use core::time::Duration;

pub trait Delay {
    fn delay(&mut self, duration: Duration);

    #[inline]
    fn delay_us(&mut self, us: u32) {
        // from_micros takes a u64, so widen the u32 argument
        self.delay(Duration::from_micros(us.into()))
    }

    #[inline]
    fn delay_ms(&mut self, ms: u32) {
        self.delay(Duration::from_millis(ms.into()))
    }
}

impl<T: Delay> Delay for &mut T { ... }
```

I haven't actually implemented this API myself, but it seems like this would be both much cleaner and much more flexible (e.g. allowing nanosecond resolution as well)?

Is the issue that core::time::Duration is too large to pass around?


madsmtm commented Nov 21, 2023

Another oddity: DelayUs from embedded-hal has a default implementation for delay_ms, while the same trait from embedded-hal-async does not have this default implementation.

@ryankurte (Contributor)

> Is the issue that core::time::Duration is too large to pass around?

IIRC this is because core::time::Duration is 64-bit and we didn't want to impose that on (what are predominantly) 32-bit targets.
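[For context — a minimal illustration, not from the thread: core::time::Duration stores a u64 second count plus a u32 nanosecond count, so even a one-microsecond delay carries 64-bit arithmetic with it.]

```rust
use core::time::Duration;

fn main() {
    // Duration is a u64 seconds count plus a u32 nanoseconds count,
    // so even a 1 µs delay drags 64-bit arithmetic along.
    let d = Duration::from_micros(1);
    assert_eq!(d.as_nanos(), 1_000);

    // At least 12 bytes of payload (u64 + u32), before any padding.
    assert!(core::mem::size_of::<Duration>() >= 12);
}
```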


madsmtm commented Nov 21, 2023

Fair enough, though perhaps it would still make sense to design the API as I've outlined above?

That way, people on 32-bit targets could still use the usual delay_ms/delay_us methods if they're concerned about 64-bit numbers (implementers could provide efficient versions for each of these), while most people could use the usual delay method (which after inlining would likely compile down to the same thing, if we assume the common case of a constant duration).
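[The point above can be sketched as follows. This is a hypothetical example, not code from the thread: the `Delay` trait is the proposal from the opening comment, and `TickTimer` is an invented implementer that overrides the u32 fast paths to stay out of 64-bit arithmetic.]

```rust
use core::time::Duration;

// Hypothetical trait from the proposal above.
pub trait Delay {
    fn delay(&mut self, duration: Duration);

    #[inline]
    fn delay_us(&mut self, us: u32) {
        self.delay(Duration::from_micros(us.into()))
    }

    #[inline]
    fn delay_ms(&mut self, ms: u32) {
        self.delay(Duration::from_millis(ms.into()))
    }
}

// An invented timer that counts elapsed time in 32-bit microsecond ticks.
struct TickTimer {
    elapsed_us: u32,
}

impl Delay for TickTimer {
    fn delay(&mut self, duration: Duration) {
        // General path: converts from Duration, touching 64-bit math.
        self.elapsed_us += duration.as_micros() as u32;
    }

    // Overridden fast path: stays entirely in 32-bit arithmetic.
    #[inline]
    fn delay_us(&mut self, us: u32) {
        self.elapsed_us += us;
    }
}
```

With a constant argument, `Duration::from_micros(10)` can typically be folded at compile time, so after inlining the `delay` path may well generate the same code as `delay_us` anyway, as the comment suggests.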

@adamgreig (Member)

> Another oddity: DelayUs from embedded-hal has a default implementation for delay_ms, while the same trait from embedded-hal-async does not have this default implementation.

Thanks for pointing this out! This was due to a limitation in async functions in traits when it was added, but it's now possible so that's been fixed (see #535).

> Fair enough, though perhaps it would still make sense to design the API as I've outlined above?

The problem is that the end users (who've picked a target) aren't the ones calling delay: it's driver authors, writing platform-agnostic code, that call it from some generic Delay provider. The end user can't choose which delay a driver called, so they will likely end up with delays based on core's Duration -- which is 12 bytes in the end (u64 seconds + u32 nanoseconds), so the computation and memory overhead could be significant.

We have however just changed (in #537) the trait to permit nanosecond resolution, although many platforms likely will round up significantly (e.g. probably to the next highest µs).

The hope is that one day we can invent a suitable Duration type that everyone is happy with, even on restricted 8/16-bit platforms; in that case we'll introduce a new Delay trait that uses it. In the meantime we have DelayNs, which provides delay_ms/delay_us/delay_ns. There is also the possibility of calling the current trait Delay (#541) and either giving up on finding a Duration, or, if we do find one, just adding it as a new method to Delay.
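[For readers unfamiliar with the trait mentioned above, this is a simplified sketch of the shape of DelayNs: one required u32 nanosecond method plus provided millisecond/microsecond conveniences. The provided-method bodies here are illustrative only; the real crate's implementations differ (e.g. they avoid the per-tick loop overhead), so consult the embedded-hal docs for the actual code.]

```rust
// Simplified sketch of the DelayNs shape, not the crate's real source.
pub trait DelayNs {
    /// Required: pause for at least `ns` nanoseconds.
    fn delay_ns(&mut self, ns: u32);

    /// Provided in terms of delay_ns. Looping keeps each call's
    /// argument small, so the u32 nanosecond count cannot overflow.
    fn delay_us(&mut self, us: u32) {
        for _ in 0..us {
            self.delay_ns(1_000);
        }
    }

    fn delay_ms(&mut self, ms: u32) {
        for _ in 0..ms {
            self.delay_us(1_000);
        }
    }
}

// Invented implementer that just records the requested time.
struct Probe {
    ns: u64,
}

impl DelayNs for Probe {
    fn delay_ns(&mut self, ns: u32) {
        self.ns += u64::from(ns);
    }
}

fn main() {
    let mut p = Probe { ns: 0 };
    p.delay_us(3);
    assert_eq!(p.ns, 3_000);
    p.delay_ms(1);
    assert_eq!(p.ns, 1_003_000);
}
```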
