
Some lexers lack aliases #2630

Open
8day opened this issue Jan 27, 2024 · 2 comments
Comments

8day commented Jan 27, 2024

As far as I can tell, get_lexer_by_name() is in fact get_lexer_by_alias(): if a lexer's name is not listed among its aliases, you won't be able to find that lexer through get_lexer_by_name().

At the moment, at least TextLexer and OutputLexer in lexers/special.py do not include their names in their aliases.

from pygments.lexer import Lexer
from pygments.token import Text


class TextLexer(Lexer):
    """
    "Null" lexer, doesn't highlight anything.
    """
    name = 'Text only'
    aliases = ['text']  # This should be ['text only', 'text'].
    filenames = ['*.txt']
    mimetypes = ['text/plain']
    priority = 0.01

    def get_tokens_unprocessed(self, text):
        yield 0, Text, text

    def analyse_text(text):
        return TextLexer.priority
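To illustrate the mismatch: the function and registry below are a simplified, self-contained sketch of alias-only lookup, not Pygments' actual implementation. It shows why a query using the human-readable name fails while the alias succeeds.

```python
# Hypothetical, minimal registry standing in for Pygments' lexer mapping.
LEXER_REGISTRY = [
    {"name": "Text only", "aliases": ["text"]},
    {"name": "Python", "aliases": ["python", "py", "sage"]},
]


def get_lexer_by_name(query):
    """Match the (lowercased) query against aliases only, mirroring
    the behavior described above: the display name is never consulted."""
    q = query.lower()
    for entry in LEXER_REGISTRY:
        if q in entry["aliases"]:
            return entry
    raise LookupError(f"no lexer found for {query!r}")


# Lookup by alias succeeds:
print(get_lexer_by_name("text")["name"])  # Text only

# Lookup by the display name fails, because "text only" is not an alias:
try:
    get_lexer_by_name("Text only")
except LookupError as exc:
    print(exc)
```

In real Pygments, adding 'text only' to `aliases` (as suggested in the comment above) would make both queries resolve to the same lexer.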
@jeanas (Contributor) commented Jan 27, 2024

Yes, the name get_lexer_by_name() is unfortunately misleading. However, it is not the intent that lexer.name always be part of lexer.aliases. Basically, the name is meant to be a human-readable language name (especially for listing at https://pygments.org/languages), while lexer.aliases is for lookup, e.g., in reST/Markdown code blocks.

@birkenfeld (Member) commented
Yeah, maybe the docs need a tweak.
