Old error in the SBCharSetProber.cpp (or .py) of the Universal Charset Detector #50
Comments
Thanks for the bug report. I actually came across this difference as well when I was working on updating a lot of our code to incorporate upstream changes. I wasn't sure whether it was necessary for our Python port, so I didn't incorporate the change. Also, @PyYoshi didn't actually make that change himself; he just wrapped uchardet-enhanced, which (as you said) is the only place that has it. Unfortunately, all of the other predictions from uchardet-enhanced seem to be much less accurate on our test set, so I wasn't sure whether I should trust that change. You can see some discussion at the bottom of issue #48.
Thanks for the link to uchardet-enhanced. The database they use for creating language models is surprisingly small. I have spent one month collecting approx. 10 MB of raw text per language :) (I have created 7 Central European language models). I have one question: how does one estimate/calculate a good "typical positive ratio"?
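On the "typical positive ratio" question: as I understand the Mozilla-style single-byte models, it is the fraction of two-character sequences in representative text of the language that fall into the most frequent ("positive") category. Below is a minimal sketch of estimating it from a training corpus. The `char_to_order` map (a hypothetical 0-based frequency-rank mapping), the 64-character sample size, and the 512-bigram positive class are all assumptions based on how the original Mozilla tables were reportedly built; this thread does not confirm the exact cutoffs.

```python
from collections import Counter


def typical_positive_ratio(text, char_to_order, sample_size=64, positive_cats=512):
    """Estimate the 'typical positive ratio' for a single-byte language model.

    Counts every two-character sequence whose characters are both among the
    `sample_size` most frequent letters, then reports the fraction of those
    sequences that rank inside the top `positive_cats` bigrams (the
    'positive' category).  All cutoffs here are assumptions, not values
    confirmed by the chardet sources.
    """
    pairs = Counter()
    prev = None
    for ch in text:
        order = char_to_order.get(ch)
        if order is not None and order < sample_size:
            if prev is not None:
                pairs[(prev, order)] += 1
            prev = order
        else:
            prev = None  # a non-frequent character breaks the sequence
    total = sum(pairs.values())
    if total == 0:
        return 0.0
    # Bigrams ranked by frequency; the top `positive_cats` form the positive class.
    positive = sum(n for _, n in pairs.most_common(positive_cats))
    return positive / total
```

With ~10 MB of cleaned text per language, as described above, this count should be stable; the ratios in shipped models sit close to 1.0, reflecting that almost all bigrams in real text of a language come from its common-sequence set.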
Let me explain what the difference is with and without the "-1": [explanation and code snippets omitted from this excerpt]
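The snippets that accompanied this comment did not survive the excerpt. As a hypothetical illustration only (not chardet's actual source), the sketch below assumes a 1-based char-to-order map (orders 1..64, as the old Mozilla tables reportedly used) and a flattened 64×64 precedence matrix; dropping the "-1" then shifts every lookup 64+1 cells too far:

```python
SAMPLE_SIZE = 64  # number of frequent characters tracked per model


def seq_category(matrix, last_order, order, with_minus_one=True):
    """Look up the category of the bigram (last_order, order) in a flattened
    SAMPLE_SIZE x SAMPLE_SIZE precedence matrix, where orders are 1-based.

    With the '-1' the flat index is correctly 0-based; without it, every
    lookup lands SAMPLE_SIZE + 1 cells too far, so the first 64 + 1 entries
    of the table are never read at all.
    """
    if with_minus_one:
        return matrix[(last_order - 1) * SAMPLE_SIZE + (order - 1)]
    return matrix[last_order * SAMPLE_SIZE + order]  # off by SAMPLE_SIZE + 1


# For the first bigram (1, 1) the corrected lookup reads flat cell 0,
# while the uncorrected one reads cell 65.
matrix = list(range(SAMPLE_SIZE * SAMPLE_SIZE))
```

If the same offset was present when the tables were generated (as suggested below), the shipped data lined up with the uncorrected index, which is why detection still worked despite the bug.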
Thanks for trying to explain the reasons why you think the "-1" belongs there. In spite of the seeming correctness of adding it, it seems to me that what is likely happening is that when the tables were originally created by Mozilla, they were created in such a fashion that this error was happening on the generation side too. When we address #48 and retrain all of the models in a more sensible fashion, we will want to address this issue, but I think we shouldn't until then.
"It seems to me that what is likely happening is that when the tables were originally created by Mozilla, they were created in such a fashion that this error was happening on the generation side too." Ok, I understand. It is a pity, but it is very simple to shift the current data to the left and add zeros to the end of the list, because the data in the list are sorted by probability. Btw, the first 64+1 values are unnecessary because they will never be indexed, and it is also very improbable that some two-character sequences will be indexed near the end of the table.
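The repair agreed on here can be sketched in one step: once the lookup gains the "-1", the existing flattened table is shifted left by 64 + 1 cells so the data lines up with the corrected indexing, and the tail is padded with zeros (the thread says "add zeros to the end"; treating zero as the filler category is otherwise an assumption):

```python
SAMPLE_SIZE = 64


def shift_table(matrix, shift=SAMPLE_SIZE + 1):
    """Shift a flattened precedence matrix left by `shift` cells and
    zero-pad the tail, so the table matches the corrected ('-1')
    indexing instead of the historical off-by-(64+1) lookup."""
    return matrix[shift:] + [0] * shift
```

The length of the table is preserved, and the value that used to sit at flat index 65 (the first cell the old lookup could reach) moves to index 0.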
Since this is a longstanding bug, we can certainly shift the data over and add the adjustment like you suggested. My wife just had a baby on the 8th, so I don't have much time to dedicate to this at the moment; if you could make a pull request that includes this fix (and shifts the data), that would be enormously helpful.
Congratulations!!!!! Of course. I've tested the models (nosetests) with the shifted data and the results were exactly the same, so we can do what you suggest.
The changes discussed here are part of #99. |
Hello,
when I was working on my new language models for Central European countries, I found an old error in the sbcharsetprober.py (or .cpp) file.
Looking around on the internet, I found only ONE developer/contributor (PyYoshi) who corrected this surprising error (he also fixed some other bugs and added many new language models).
In the code of all the forks I've found (Python, C++, ...), the "-1" is missing (the snippet attached below is shown already corrected): [snippet omitted from this excerpt]
I've spent half a day trying to understand why my new language models give very low confidence values for tested text. After adding the "-1", the confidence values are normal.
If you can, please pass this info along to other chardet developers.
Many thanks.