Rust-based Dialect api. #109

Open · wants to merge 6 commits into main from the dialect branch
Conversation

@richardkiss (author):
No description provided.

@richardkiss force-pushed the dialect branch 2 times, most recently from 7f312d2 to 738b3b0 (August 23, 2021).
@arvidn left a comment:

I think, generally, a single interface/trait would suffice and make this API simpler. Dialect should be that trait, with two implementations: a ChiaDialect, which does not keep a lookup table for functions, and perhaps a CustomDialect (or something like that), which is configurable via a string map.
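A rough sketch of that shape (all names and signatures here are assumptions, simplified stand-ins for the real clvm_rs types):

```rust
use std::collections::HashMap;

// Stand-ins for the real clvm_rs types; assumptions for illustration only.
type NodePtr = i32;
type Response = Result<u64, String>;
type OpFn = fn(NodePtr) -> Response;

trait Dialect {
    fn quote_kw(&self) -> &[u8];
    fn apply_kw(&self) -> &[u8];
    fn op(&self, opcode: &[u8], args: NodePtr) -> Response;
}

// ChiaDialect: no lookup table; operators dispatch statically.
struct ChiaDialect;

impl Dialect for ChiaDialect {
    fn quote_kw(&self) -> &[u8] { &[1] }
    fn apply_kw(&self) -> &[u8] { &[2] }
    fn op(&self, opcode: &[u8], _args: NodePtr) -> Response {
        match opcode {
            [3] => Ok(0), // a real op_if would go here
            _ => Err("unimplemented operator".into()),
        }
    }
}

// CustomDialect: configurable at run time via a map.
struct CustomDialect {
    quote_kw: Vec<u8>,
    apply_kw: Vec<u8>,
    ops: HashMap<Vec<u8>, OpFn>,
}

impl Dialect for CustomDialect {
    fn quote_kw(&self) -> &[u8] { &self.quote_kw }
    fn apply_kw(&self) -> &[u8] { &self.apply_kw }
    fn op(&self, opcode: &[u8], args: NodePtr) -> Response {
        match self.ops.get(opcode) {
            Some(f) => f(args),
            None => Err("unimplemented operator".into()),
        }
    }
}
```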

const QUOTE_KW: [u8; 1] = [1];
const APPLY_KW: [u8; 1] = [2];

pub struct OperatorHandlerWithMode {
@arvidn:

this doesn't look like it belongs in the chia dialect. The chia dialect is known and doesn't need to be determined at run-time. It doesn't need to keep a run-time lookup-table. In fact, the run-time lookup-table seems like it belongs in the python binding, as it's a trick to allow mixing python functions with rust functions. In the rust API, there's no need for that.

@richardkiss (author):

The lowest-level run_program implementation currently expects a thing with the OperatorHandler trait, which defines the opcode-to-functionality map. Except for strict, this is pretty much the simplest implementation of that (FLookup is just an array of 256 optional function pointers).

I can move this to its own file.
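For reference, the shape being described could be sketched like this (the OpFn signature is simplified; the real one also takes an allocator, operator atom, and max cost):

```rust
// Sketch of the FLookup idea: an array of 256 optional function
// pointers, indexed directly by opcode. Signatures are assumptions
// trimmed down from the real OperatorHandler trait.
type NodePtr = i32;
type Response = Result<u64, String>;
type OpFn = fn(NodePtr) -> Response;
type FLookup = [Option<OpFn>; 256];

fn op_example(_args: NodePtr) -> Response {
    Ok(0)
}

fn dispatch(table: &FLookup, opcode: u8, args: NodePtr) -> Response {
    match table[opcode as usize] {
        Some(f) => f(args), // O(1) indexed call through a function pointer
        None => Err(format!("unimplemented operator {}", opcode)),
    }
}
```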


OperatorHandlerWithMode::new_with_hashmap(chia_opcode_mapping(), strict)
}

pub fn chia_dialect(strict: bool) -> Dialect {
@arvidn:

as far as I can tell, this is the only function that needs to be public in this file.

pub struct Dialect {
quote_kw: Vec<u8>,
apply_kw: Vec<u8>,
op_handler: Box<dyn OperatorHandler>,
@arvidn:

I don't see why this needs to be a run-time interface instead of a compile-time trait, nor why it needs to be heap allocated. In the chia dialect case this type only contains one bool of state (strict mode).

Dialect seems like a perfect candidate for being a trait rather than a concrete type.
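One hypothetical way to get both the trait and avoid the heap allocation, sketched under assumed names:

```rust
// Hypothetical sketch: if Dialect were a trait, run_program could be
// generic over it. Monomorphization then produces direct, inlinable
// calls with no Box and no vtable. All names here are assumptions.
trait Dialect {
    fn quote_kw(&self) -> u8;
    fn op(&self, opcode: u8) -> u64;
}

// The chia dialect's only state is the strict-mode flag.
struct ChiaDialect {
    strict: bool,
}

impl Dialect for ChiaDialect {
    fn quote_kw(&self) -> u8 {
        1
    }
    fn op(&self, opcode: u8) -> u64 {
        // placeholder dispatch; strict mode would reject unknown opcodes
        if self.strict && opcode > 100 { 0 } else { opcode as u64 }
    }
}

fn run_program<D: Dialect>(dialect: &D, opcode: u8) -> u64 {
    // resolved at compile time for each concrete D; no indirect call
    dialect.op(opcode)
}
```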

@richardkiss (author):

I could make a dialect trait, but we'd still need at least one concrete instance.

Changing to op_handler: &dyn OperatorHandler would mean having to keep around both a Dialect and the concrete OperatorHandler.

@arvidn:

what I'm suggesting is that you can just rename the OperatorHandler trait to Dialect and add the two quote_kw and apply_kw to it. Then the python binding can have a concrete implementation of this trait, which contains a run-time lookup table, and the chia_rs library could have its own concrete type that doesn't, but that has a static operator -> function mapping.

args: NodePtr,
max_cost: Cost,
) -> Response {
self.op_handler.op(allocator, op, args, max_cost)
@arvidn:

instead of forwarding this call, could the Dialect trait and OperatorHandler traits just be merged? They seem simple enough and related enough that it could just be one Dialect trait

@richardkiss (author):

For sure that'd be possible, but I don't immediately see that it would be any more efficient.

@arvidn:

it would primarily be simpler, because you have a single interface instead of two. As far as I can tell, you'd never mix-n-match the OperatorHandler and the Dialect anyway. It would be more efficient to allow the rust API to specify a static/compile-time operator -> function mapping. It allows the optimizer to see through those calls in the LTO step.

@richardkiss (author):

The Dialect represents the complete opcode-to-functionality map, so it really is just an OperatorHandler plus the q and a values. Making it a trait would mean we add a function to get the q and a values from the Dialect instead of just having publicly accessible fields.

I haven't benchmarked it, but I'd be surprised if the FLookup implementation (it's a `[Option<OpFn>; 256]`) were slower than the `let f = match b[0] { 3 => op_if, ... }` implementation in your PR. I'd think they both produce a look-up table, so O(1).
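The match-based variant being compared here, sketched (operator names and signatures are placeholders, not the real clvm_rs ones):

```rust
// Sketch of match-based dispatch. For a dense set of opcodes the
// compiler typically lowers this to a jump table, and the calls are
// direct, so the optimizer can see through (and inline) them.
type NodePtr = i32;
type Response = Result<u64, String>;

fn op_if(_args: NodePtr) -> Response { Ok(1) }
fn op_cons(_args: NodePtr) -> Response { Ok(2) }

fn dispatch(opcode: u8, args: NodePtr) -> Response {
    let f = match opcode {
        3 => op_if,
        4 => op_cons,
        _ => return Err(format!("unimplemented operator {}", opcode)),
    };
    f(args)
}
```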

@richardkiss (author):

I don't think the LTO would be able to do anything without further changes to code here https://github.com/Chia-Network/clvm_rs/blob/main/src/run_program.rs#L299

@arvidn:

> Making it a trait would mean we add a function to get the q and a values from the Dialect instead of just having publicly accessible fields.

I don't think you have to do it that way. Surely a trait can declare constants. These opcodes are known at compile time, so they don't need to be stored in variables. Functions would allow an implementation to keep them as run-time values, though, but a chia_rs library wouldn't need that.
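What that could look like, sketched with assumed names:

```rust
// Sketch of the associated-constant idea: the trait declares the
// keywords as compile-time constants, so a static implementation
// stores nothing at run time. Names are assumptions.
trait Dialect {
    const QUOTE_KW: u8;
    const APPLY_KW: u8;
}

struct ChiaDialect;

impl Dialect for ChiaDialect {
    const QUOTE_KW: u8 = 1;
    const APPLY_KW: u8 = 2;
}

// Generic code can read the constants without any stored state.
fn is_quote<D: Dialect>(opcode: u8) -> bool {
    opcode == D::QUOTE_KW
}
```

One trade-off worth noting: associated constants make the trait not usable as `dyn Dialect`, which fits the static-dispatch direction argued for here but rules out the boxed run-time interface.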

> I haven't benchmarked it, but I'd be surprised to see the FLookup (it's a `[Option<OpFn>; 256]`) implementation being slower than the `let f = match b[0] { 3 => op_if, ... }` implementation in your PR. I'd think they both produce a look-up table, so O(1).

I would expect that our operator functions are so slow that they dwarf the operator lookup, but once we make those faster I wouldn't be surprised if this were next. I would also expect branch-prediction benefits from static calls rather than indirect calls.

I don't see the value in requiring all this state to be kept in memory. As far as I can tell, it's more code and complexity.

@arvidn:

> I don't think the LTO would be able to do anything without further changes to code here https://github.com/Chia-Network/clvm_rs/blob/main/src/run_program.rs#L299

right, that shouldn't be a dynamic interface

src/dialect.rs — outdated; resolved.
src/lib.rs (outdated):
mod serialize;
pub mod reduction;
pub mod run_program;
pub mod serialize;
@arvidn:

it makes me a bit nervous to expose so much from the library. Is it necessary?

@richardkiss (author):

Good question. I don't know the answer to that offhand. The main issue is that we want clvm_rs to be a crate, and use it with chia_rs (and maybe other things). But since there are no users of it yet, it's hard to know what to export. Exposing API to python uses a different mechanism.

I suppose a safer way of doing it is to keep as much private as possible, and when clients need new API, be explicit and judicious about exposing it. We may end up doing new releases just to expose new API.
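A sketch of that minimal-surface approach: modules stay private and the crate root re-exports only the intended entry points (module names follow the diff above; the items inside them are placeholders):

```rust
// Sketch of a minimal public surface for a lib.rs: modules are private
// and only chosen names are re-exported for downstream crates such as
// chia_rs. The item bodies here are placeholders, not the real code.
mod reduction {
    pub type Response = Result<u64, String>;
}

mod run_program {
    use crate::reduction::Response;

    pub fn run_program(cost: u64) -> Response {
        // placeholder body standing in for the real evaluator
        Ok(cost)
    }
}

// The only names visible outside the crate:
pub use crate::reduction::Response;
pub use crate::run_program::run_program;
```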

@arvidn:

I think it's a good idea to expose as little as possible and extend it as we discover new things that need to be public. That gives us an opportunity to think through how we'd like to expose various functionality too.

@github-actions (bot):

This PR has been flagged as stale due to no activity for over 60 days. It will not be automatically closed, but it has been given a stale-pr label and should be manually reviewed.

2 participants