Consider alternate styling/linking for method argument definitions #574

Open · inexorabletash opened this issue Feb 16, 2024 · 4 comments

@inexorabletash (Contributor) commented Feb 16, 2024

Method arguments are currently detailed with this markdown (using compute() as an example):

```
<div>
    **Arguments:**
      - *graph*: an {{MLGraph}}. The compiled graph to be executed.
      - *inputs*: an {{MLNamedArrayBufferViews}}. The resources of inputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.
      - *outputs*: an {{MLNamedArrayBufferViews}}. The pre-allocated resources of required outputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.
</div>
```

which renders as: *(rendered screenshot omitted)*

There are a few other forms used in other specs that are worth considering to improve maintainability.

**Alternative 1: Make args into definitions**

A minimal change turns the argument names into definitions, which get automatically linked, helping to catch errors.

```
<div dfn-for="MLContext/compute(graph, inputs, outputs)" dfn-type="argument">
    **Arguments:**
      - <dfn>graph</dfn>: an {{MLGraph}}. The compiled graph to be executed.
      - <dfn>inputs</dfn>: an {{MLNamedArrayBufferViews}}. The resources of inputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.
      - <dfn>outputs</dfn>: an {{MLNamedArrayBufferViews}}. The pre-allocated resources of required outputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.
</div>
```

which renders as: *(rendered screenshot omitted)*
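For context, the IDL member these `<dfn>`s get linked from looks roughly like the following (paraphrased; the authoritative signature lives in the spec's MLContext IDL block). With the definitions in place, Bikeshed turns `graph`, `inputs`, and `outputs` inside that IDL block into links to them:

```
Promise<MLComputeResult> compute(MLGraph graph,
                                 MLNamedArrayBufferViews inputs,
                                 MLNamedArrayBufferViews outputs);
```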

**Alternative 2: Style as definition list**

Extending the above using a definition list, similar to what is currently done for dictionaries:

```
**Arguments:**
<dl dfn-for="MLContext/compute(graph, inputs, outputs)" dfn-type="argument">

: <dfn>graph</dfn>
:: an {{MLGraph}}. The compiled graph to be executed.

: <dfn>inputs</dfn>
:: an {{MLNamedArrayBufferViews}}. The resources of inputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.

: <dfn>outputs</dfn>
:: an {{MLNamedArrayBufferViews}}. The pre-allocated resources of required outputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.

</dl>
```

which renders as: *(rendered screenshot omitted)*

**Alternative 3: Auto-generated table**

This is a much more radical departure from what the spec has today. The spec source is extremely compact, which eases maintenance. Argument type, nullability, and optionality are filled in by Bikeshed automatically, reducing errors. On the flip side, I personally find the big table ugly.

```
<pre class=argumentdef for="MLContext/compute(graph, inputs, outputs)">
graph: The compiled graph to be executed.
inputs: The resources of inputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.
outputs: The pre-allocated resources of required outputs. Will be [=MLNamedArrayBufferViews/transfer|transferred=] if there are no validation errors.
</pre>
```

which renders as: *(rendered screenshot omitted)*


Note that none of these alternatives handles the method's return value.
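One purely illustrative possibility (a sketch only, not part of any of the alternatives above; the return type wording is paraphrased and may not match the spec exactly) is to keep a plain **Returns:** line next to the argument markup, since it is not an argument definition:

```
<div dfn-for="MLContext/compute(graph, inputs, outputs)" dfn-type="argument">
    **Arguments:**
      - <dfn>graph</dfn>: an {{MLGraph}}. The compiled graph to be executed.
      - (inputs and outputs as above)

    **Returns:** a promise resolved with an {{MLComputeResult}} once execution completes.
</div>
```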

@zolkis (Collaborator) commented Feb 19, 2024

Note that the current style was a compromise made last year to avoid inducing too much change at once.
Now it's a good time to stylistically rewrite the spec to conform as much as possible to modern web spec prose.

I'd prefer alternative 1 (closest to prose) or 3 (concise, though the main text ends up in the Description column, which is less readable in the limited space); alternative 2 is not preferred.

@fdwr (Collaborator) commented Mar 9, 2024

I like option 1, but can we have the bold weight for identifiers while keeping the font size of the current form? Otherwise the words (graph, inputs, outputs) look awkwardly smaller than the rest of the text, which doesn't help readability. Option 2 is meh; I prefer the bulleted list of 1 (though I can't articulate exactly why 🤔, maybe just because I'm more used to it). Option 3 is quite okay too (you can visually scan the field/parameter name and type really quickly), except that the nullable and optional columns squeeze the description text into narrow columns 🤏🔎👀 (same feedback as Zoltan).

@inexorabletash (Contributor, Author) commented:

> can we have the bold weight for identifiers while keeping the font size of the current form?

Yeah, it's just CSS - we can bump up the font size / override whatever's making it small.

@inexorabletash (Contributor, Author) commented:

FYI, I've experimented more with (1) locally. A few notes for posterity:

- Bumping up the font size just requires this CSS: `dfn code { font-size: 100%; }`, which IMHO looks better everywhere.
- When specifying `dfn-for` on the container you MUST include the argument names and you MUST put spaces after the commas, e.g. `dfn-for="MLContext/compute(graph, inputs, outputs)"`. If you don't, Bikeshed doesn't link the IDL to the definition, and for overloaded methods you get warnings about duplicate IDs. (See the example containers after this list.)
- When we define multiple methods in bulk (e.g. argMin/argMax, binary/logical/unary, pool & reduce), we can link at most one of the IDL instances - e.g. argMin's arguments get linked but not argMax's.
- Bikeshed doesn't provide any warnings if the IDL and `<dfn>`s diverge; basically, you can have a `<dfn>` for anything, and Bikeshed just opportunistically links the IDL to it if it can. You can visually scan for this, because the arguments inside IDL blocks would not be links, but it's not automatic. This makes option (3) even more appealing... except for the visual noise.
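To make the second bullet concrete, here are example containers (illustrative snippets based on the notes above, not copied from the spec source):

```
<!-- Links the IDL arguments to the <dfn>s: -->
<div dfn-for="MLContext/compute(graph, inputs, outputs)" dfn-type="argument">

<!-- Not linked (and overloaded methods can then produce duplicate-ID warnings): -->
<div dfn-for="MLContext/compute" dfn-type="argument">
<div dfn-for="MLContext/compute(graph,inputs,outputs)" dfn-type="argument">
```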
