
Can't add additional custom tokenizer or renderers? #1693

Closed
alystair opened this issue May 31, 2020 · 15 comments
Labels: good first issue, proposal
@alystair

What pain point are you perceiving?
I'm reviewing the Marked documentation, attempting to create a custom setup that transforms lines starting with 'notice: ' into a specifically formatted div. By my understanding I first need to add a custom named tokenizer and then a renderer based on it? Or am I going about this the wrong way?

Describe the solution you'd like
I'd like to easily add this functionality without creating a separate post-processing tool external to Marked...

@KilianKilmister

You can probably get by with a walkTokens callback. They basically act as an intermediate processing step, running on each token before the data is handed to the renderer. Their abilities are somewhat limited, but that should be plenty for things like this.
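For context, marked's `walkTokens` option (registered via `marked.use`) visits every token between lexing and rendering. Below is a minimal standalone sketch of the idea, run on a hand-written sample token rather than real lexer output, so it executes without marked installed:

```javascript
// Sketch of a walkTokens callback: marked calls it once per token after
// lexing, letting you mutate tokens in place before rendering.
const walkTokens = (token) => {
  if (token.type === 'paragraph' && token.text.startsWith('notice: ')) {
    token.text = token.text.slice('notice: '.length);
    token.isNotice = true; // custom flag a renderer override could check
  }
};

// With marked installed you would register it as:
//   marked.use({ walkTokens });
// Standalone demonstration on a hand-written sample token:
const token = { type: 'paragraph', text: 'notice: backups run at midnight' };
walkTokens(token);
console.log(token.isNotice, token.text);
```

Note that `walkTokens` can only reshape tokens the lexer already produced; it cannot introduce a new token type on its own, which is the limitation the rest of this thread runs into.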

@alystair
Author

I tried the following but it doesn't work. Does the tokenizer only substitute existing tokens? Can you not add custom ones of your own?

const tokenizer = {
	tipNotice(src) {
		const match = src.match(/\nnotice: (.*)\n/);
		if (match) {
			return {
				type: 'tipNotice',
				raw: match[0],
				text: match[1].trim()
			};
		}
		return false;
	}
};
const renderer = {
	tipNotice(text) {
		return `<div class="tip notice">${text}</div>`;
	}
};
marked.use({ tokenizer, renderer });
marked(content, { headerIds: true });

@alystair alystair changed the title from "Do I need a custom tokenizer or renderer?" to "Can't add additional custom tokenizer or renderers?" May 31, 2020
@KilianKilmister

@alystair Both the Parser and the Lexer currently have their structure hardcoded, so there is no straightforward way of altering what they hand to the Tokenizer and Renderer; those components are deliberately "dumb", as they are built for speedy processing.

And as it turns out, it is pretty much impossible to make invasive modifications short of altering the source code or resorting to metaprogramming.

Whoever wrote the core of this package can really pat themselves on the back; it is as solid as granite.

I filed an issue about that yesterday: #1695

@alystair
Author

alystair commented Jun 1, 2020

Argh... in that case I'll have to replace several built-ins like paragraph, and review the source code to expand on the current functionality. Fun.

@KilianKilmister

A little hint: fork the repo and modify the source code directly. I spent a full day trying to alter the parser and lexer from inside my own module and have nothing to show for it, as marked is stable as hell.

But the code is modern and well formatted. Navigating it is a breeze.

@UziTech
Member

UziTech commented Jun 22, 2020

This is something that could be useful. If someone wants to create a PR I would be happy to review it.

@UziTech UziTech added the good first issue and proposal labels Jun 22, 2020
@thien-do

@UziTech the PR is to add a "token" param to the renderer methods, right? So for example the "paragraph" one will receive a "token" in addition to "text"?

@UziTech
Member

UziTech commented Aug 21, 2020

No, the issue is about the ability to add additional tokenizers/renderers. For example, adding an "underline" tokenizer and renderer to extend markdown.
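As a rough illustration of what such an extension would involve, here is a standalone sketch of an "underline" tokenizer/renderer pair for a hypothetical `__text__` syntax. The function shapes mirror marked's tokenizer/renderer style, but they are not wired into marked, and the syntax itself is invented for the example:

```javascript
// Illustrative sketch only: a standalone tokenizer/renderer pair for a
// hypothetical "underline" syntax (__text__ -> <u>text</u>).
function underlineTokenizer(src) {
  const match = /^__([^_]+)__/.exec(src);
  if (match) {
    // Token shape follows marked's convention: type, raw (consumed source), text
    return { type: 'underline', raw: match[0], text: match[1] };
  }
}

function underlineRenderer(token) {
  return `<u>${token.text}</u>`;
}

const tok = underlineTokenizer('__important__ rest of line');
const html = tok ? underlineRenderer(tok) : '';
console.log(html);
```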

@UziTech
Member

UziTech commented Aug 21, 2020

Moving to renderers that take tokens instead of specific parameters is something that will have to happen on a major version bump, but I think it would help with this issue.

@alystair
Author

Any chance of this being done internally, or will you definitely be waiting for a third-party PR?

@UziTech
Member

UziTech commented Apr 22, 2021

There is already a PR to add the ability to extend the tokenizer #1872 but it hasn't been worked on in a while.

@calculuschild
Contributor

calculuschild commented Apr 22, 2021

We kind of got stuck on #1872 due to some slowdown issues, and other things had higher priority at the time so it got left behind. There was discussion of a possible different approach that might work better, but I'm not sure if that's still the route we want to end up taking. @UziTech Is that still our preferred approach? Perhaps it would be worth setting up a discussion page to determine how exactly we want extensions to work in the future with a clear structure before revisiting that PR.

@UziTech
Member

UziTech commented Apr 22, 2021

@calculuschild I think we can continue the conversation on that PR. The biggest thing is keeping marked fast for users that don't use that feature.

@alystair If you wanted to create a PR with a solution it would definitely help.

@dawgdemi

cd utils
node build.js

@UziTech
Member

UziTech commented Jun 15, 2021

This should be available in v2.1.0.

See the Custom Extensions section in the docs.
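For readers landing here later, a sketch of what the v2.1.0 custom-extension API looks like, applied to the `tipNotice` case from the top of the thread. The extension-object shape follows the Custom Extensions docs (`name`, `level`, `start`, `tokenizer`, `renderer`); the final lines exercise the tokenizer and renderer directly so the snippet runs without marked installed:

```javascript
// Sketch of a marked custom extension (API added in v2.1.0).
const tipNotice = {
  name: 'tipNotice',
  level: 'block',
  // Hint to the lexer where this token might start in the remaining source.
  start(src) { return src.indexOf('notice: '); },
  tokenizer(src) {
    const match = /^notice: (.*)(?:\n|$)/.exec(src);
    if (match) {
      return { type: 'tipNotice', raw: match[0], text: match[1].trim() };
    }
  },
  renderer(token) {
    return `<div class="tip notice">${token.text}</div>`;
  }
};

// With marked installed:
//   marked.use({ extensions: [tipNotice] });
//   marked('notice: backups run at midnight');
// Standalone check of the tokenizer/renderer pair:
const token = tipNotice.tokenizer('notice: backups run at midnight\n');
const html = token ? tipNotice.renderer(token) : '';
console.log(html);
```

Consult the current Custom Extensions docs for the exact option names on your marked version, as the API has evolved since this thread.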

@UziTech UziTech closed this as completed Jun 15, 2021
vNext automation moved this from To Do to Done Jun 15, 2021