Add language detection to REST API #659
base: main
Conversation
Codecov Report
Base: 99.55% // Head: 99.53% // Decreases project coverage by -0.02%.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #659      +/-   ##
==========================================
- Coverage   99.55%   99.53%   -0.02%
==========================================
  Files          87       87
  Lines        6006     6047      +41
==========================================
+ Hits         5979     6019      +40
- Misses         27       28       +1
```
I see this is only a draft at the moment, but I took a glance and I think it would be better if the endpoint name had a hyphen instead of an underscore (`detect-language` instead of `detect_language`).
Looks like a good start. I gave some comments on individual code lines. In addition to those:
- black formatting should be applied (see details)
- there should be a unit test in tests/test_rest.py which exercises the detect_language method
annif/openapi/annif.yaml (outdated):

```yaml
  /detect-language:
    post:
      tags:
        - Languages detection
```
I think this should be "Language detection"
annif/openapi/annif.yaml (outdated):

```yaml
          example: A quick brown fox jumped over the lazy dog.
        candidates:
          type: array
          description: candidate languages as ISO 639-1 codes
```
Please change "ISO 639-1 codes" to "IETF BCP 47 codes", because not all supported languages are necessarily in ISO 639-1. (I noticed the same problem in Simplemma README and opened an issue there)
annif/openapi/annif.yaml (outdated):

```yaml
        description: candidate languages as ISO 639-1 codes
        items:
          type: string
          maxLength: 2
```
Should be 3, as some valid BCP 47 language tags (e.g. `enm`, `hbs`) may have 3 characters.
annif/rest.py (outdated):

```python
    scores = lang_detector(body.get("text"), tuple(body.get("candidates")))
    return {
        "results": [
            {"language": s[0] if s[0] != "unk" else None, "score": s[1]}
```
How about this instead:

```python
{"language": lang if lang != "unk" else None, "score": score} for lang, score in scores
```

I think it would be easier to understand than the numeric indexing.
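The suggested tuple unpacking can be tried on made-up scores; the values below are illustrative only, not actual detector output:

```python
# Sample (language, score) pairs as the detector might return them; the
# sentinel "unk" stands for an unknown language and is mapped to None.
scores = [("en", 0.85), ("fi", 0.10), ("unk", 0.05)]

results = [
    {"language": lang if lang != "unk" else None, "score": score}
    for lang, score in scores
]
print(results)
```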
Thanks for adding tests. A few more things:
Right now, when making a request with no candidates or unknown candidates, the endpoint returns an empty list, and when making a request with no text, it returns

I also tested making a request with all 48 possible language candidates. I had about 4 GB of free memory, which was used almost completely after making the request. The memory isn't released automatically afterwards, but it is freed if the endpoint is accessed again (simplemma is run again) or Annif is restarted. Making other requests also slows down a lot after running simplemma with all candidates.
The good news is that it's not crashing! 😁 My opinions on these cases:
- There should be unit tests to check that these are indeed the results.
- I think this is OK, but there should also be a unit test for this special case.
Great, thanks for testing! This is mostly what I suspected, although it's a surprise that accessing the endpoint again will free the memory. (Maybe this has to do with Flask running in development mode?)

This has some potential for DoS situations (intended or not), but I guess it's hard for us to avoid that given how Simplemma works. We could, however, limit the number of candidate languages per request to, say, at most 5. What do others think? @juhoinkinen?

We could also try to work with the Simplemma maintainer if we want to change the way Simplemma allocates and releases models. For example, it could be possible to ask Simplemma to release the memory immediately or after a set period like 60 seconds after use.
Limiting the number of candidate languages seems reasonable. If there is no simple way to make the limit configurable, 5 could be a good number for that.
I noticed there is an issue in the Simplemma repository about loading models into memory, which was opened just yesterday.
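A minimal sketch of how such a per-request limit could be enforced before invoking the detector; `MAX_CANDIDATES`, the function name, and the error payload shape are assumptions for illustration, not code from the PR:

```python
# Hypothetical guard: reject requests that list too many candidate languages,
# since each candidate causes Simplemma to load another language model.
MAX_CANDIDATES = 5  # assumed limit, per the discussion above


def check_candidates(candidates):
    """Return an error dict if the candidate list is too long, else None."""
    if len(candidates) > MAX_CANDIDATES:
        return {
            "status": 400,
            "detail": f"at most {MAX_CANDIDATES} candidate languages allowed",
        }
    return None


print(check_candidates(["en", "fi", "sv"]))  # within the limit
print(check_candidates(["en", "fi", "sv", "de", "fr", "it"]))  # too many
```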
Kudos, SonarCloud Quality Gate passed! 0 bugs, no coverage information.
Just started to think whether some testing could also be performed in tests/test_swagger.py. I don't remember just what additional functionality is exercised by the tests in
Background to the question of
This PR adds the ability to detect the language of a text to the REST API. The language detection uses the Simplemma Python library.

A POST method is added to the endpoint `/detect-language`. It expects the request body to include a JSON object with the text whose language is to be detected and a list of candidate languages as their ISO 639-1 codes.

The response is a JSON object where the scores range from 0 to 1 and a `null` value is used for an unknown language.

Implements #631
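The example request and response bodies appear to have been lost from this copy of the description; the sketch below reconstructs their likely shape from the text above (all concrete values, including the scores, are made up for illustration):

```python
import json

# Assumed request body: a text plus a list of candidate language codes.
request_body = {
    "text": "A quick brown fox jumped over the lazy dog.",
    "candidates": ["en", "fi", "sv"],
}

# Assumed response shape: a "results" list of {language, score} objects,
# with null (None) for an unknown language and scores between 0 and 1.
response_body = {
    "results": [
        {"language": "en", "score": 0.85},
        {"language": None, "score": 0.15},
    ]
}

print(json.dumps(request_body))
print(json.dumps(response_body))
```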