
Adding LEPOR - A machine translation evaluation metric. #3176

Open
ulhaqi12 opened this issue Jul 24, 2023 · 3 comments

Comments

@ulhaqi12

Hi,
Hope you are doing well. I have an idea for a potential contribution to the NLTK translation module.
Currently, nltk.translate offers BLEU, METEOR, and other evaluation metrics, but there is a newer metric, LEPOR, which has been reported to correlate better with human judgments than the metrics currently available in NLTK. I tried it recently while working on a machine translation project.
LEPOR was not available in Python, so after studying the algorithm in the original paper, I implemented the metric in Python, and it is available here. I was wondering if I could integrate it into NLTK as well, so people can use it easily. In the future, the different variants of this metric could also be implemented.

Looking forward to your response.
Thank you

BR,
Ikram
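[Editor's note] For context, the basic LEPOR score from Han et al. (2012) combines three factors: a length penalty, an n-gram position-difference penalty, and a weighted harmonic mean of precision and recall. The sketch below is a rough unigram-level reading of the paper, not the implementation linked above; the function names and the first-match word alignment are simplifications of my own.

```python
import math
from collections import Counter

def length_penalty(ref_len, hyp_len):
    # LP: penalize length mismatch in either direction, 1.0 on exact match.
    if hyp_len == ref_len:
        return 1.0
    shorter, longer = sorted((ref_len, hyp_len))
    return math.exp(1 - longer / shorter)

def ngram_position_penalty(ref, hyp):
    # NPosPenal = exp(-NPD), where NPD is the mean normalized position
    # difference of each matched hypothesis token w.r.t. the reference.
    # Simplification: align each token to its first occurrence in ref.
    npd = 0.0
    for i, tok in enumerate(hyp):
        if tok in ref:
            j = ref.index(tok)
            npd += abs((i + 1) / len(hyp) - (j + 1) / len(ref))
    return math.exp(-npd / len(hyp))

def harmonic_pr(ref, hyp, alpha=1.0, beta=1.0):
    # Weighted harmonic mean of unigram recall and precision.
    common = sum((Counter(ref) & Counter(hyp)).values())
    if common == 0:
        return 0.0
    precision = common / len(hyp)
    recall = common / len(ref)
    return (alpha + beta) / (alpha / recall + beta / precision)

def basic_lepor(ref, hyp, alpha=1.0, beta=1.0):
    # Product of the three components, as in the basic LEPOR formula.
    return (length_penalty(len(ref), len(hyp))
            * ngram_position_penalty(ref, hyp)
            * harmonic_pr(ref, hyp, alpha, beta))

ref = "the cat sat on the mat".split()
hyp = "the cat is on the mat".split()
print(round(basic_lepor(ref, hyp), 4))  # → 0.7457
```

A proper implementation would use the paper's full alignment procedure (nearest match with context) rather than first-occurrence matching, and would expose the n-gram order and weights as parameters.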

@ulhaqi12
Author

Hey @stevenbird and @tomaarsen,

It would be great if I could get the green light to open a PR.
Thank you

BR,
Ikram

@alvations
Contributor

@ulhaqi12 I'd suggest that you go ahead and create the PR, and I'll be glad to help as a reviewer, since I'm interested in seeing good metrics added to the nltk.translate module.

@ulhaqi12
Author

Sure. I will do it. Thank you.
