Tackle performance and accuracy regression of sentence tokenizer since NLTK 3.6.6 #3014

Merged
merged 5 commits on Jul 4, 2022

Commits on Jun 21, 2022

  1. 952119e
  2. bf952d6
  3. Align _match_potential_end_contexts with NLTK 3.6.5 sent_tokenize results

     Some minor inconsistencies remain, e.g. for '.!, a' (see the sketch after this commit list). Furthermore, time efficiency was improved. Lastly, some methods were supplemented with type hints.

     tomaarsen committed Jun 21, 2022
     92edd2b
  4. Remove leftover print

     tomaarsen committed Jun 21, 2022
     cf944c1
  5. 4073f2b
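
For context, a minimal sketch (not part of this PR) of how the '.!, a' edge case mentioned in commit 92edd2b can be inspected. It assumes NLTK is installed and downloads the Punkt model that sent_tokenize relies on; running it under different NLTK versions (e.g. 3.6.5 vs. this branch) and comparing the printed splits shows whether the outputs agree:

    # Inspect sent_tokenize output on the edge case noted in commit 92edd2b.
    import nltk
    from nltk.tokenize import sent_tokenize

    nltk.download("punkt")  # sentence model required by sent_tokenize

    # Print the installed NLTK version alongside the tokenization so that
    # results from different versions can be compared side by side.
    print(nltk.__version__)
    print(sent_tokenize(".!, a"))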