
Old error in the SBCharSetProber.cpp (or .py) of the Universal Charset Detector #50

Closed
ghost opened this issue Feb 5, 2015 · 8 comments

@ghost commented Feb 5, 2015

Hello,

While working on my new language models for Central European languages, I found an old error in the sbcharsetprober.py (or .cpp) file.

Looking around on the internet, I found only ONE developer/contributor (PyYoshi) whose fork corrects this error (along with some other bug fixes and many new language models).

In all the forks I've found (Python, C++, ...), the "-1" is missing. The part of the source code below is already corrected:

// Order is in [1-64] but we want 0-63 here.
order = mModel->charToOrderMap[(unsigned char)aBuf[i]] - 1;

if (order < SYMBOL_CAT_ORDER)
  mTotalChar++;
if (order < SAMPLE_SIZE)
{
  ...
}
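For the Python port, the same one-character fix applies in the order lookup. Below is a minimal, self-contained sketch of the feeding loop; the constant values mirror the C++ fragment above, but `feed` and `char_to_order_map` are illustrative names, not chardet's actual API:

```python
SAMPLE_SIZE = 64        # number of most-frequent characters in a model
SYMBOL_CAT_ORDER = 250  # orders below this count toward total_char

def feed(buf, char_to_order_map):
    """Count total letters and frequent characters in buf,
    applying the "-1" fix so orders are 0-based."""
    total_char = 0
    freq_char = 0
    for byte in buf:
        # The map stores orders in [1, 64]; subtract 1 to index from 0.
        order = char_to_order_map.get(byte, 255) - 1
        if order < SYMBOL_CAT_ORDER:
            total_char += 1
        if order < SAMPLE_SIZE:
            freq_char += 1
    return total_char, freq_char
```

Bytes absent from the map get a sentinel order (255 here, an assumption) so they count toward neither total.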

I had spent half a day trying to understand why my new language models gave very low confidence values for the tested text. After adding the "-1", the confidence values are normal.

If you can, please pass this info along to the other chardet developers.

Many thanks.

@dan-blanchard (Member)

Thanks for the bug report. I actually came across this difference as well when I was working on trying to update a lot of our code to incorporate upstream changes. I wasn't sure if it was necessary for our Python port or not, so I didn't actually incorporate the change.

Also, @PyYoshi didn't actually make that change, he just wrapped uchardet-enhanced, which (as you said) is the only place that has that change.

Unfortunately all of the other predictions from uchardet-enhanced seem to be much less accurate on our test set, so I wasn't sure if I should trust that change or not. You can see some discussion at the bottom of issue #48.

ghost closed this as completed Feb 5, 2015
ghost reopened this Feb 5, 2015
@ghost (Author) commented Feb 5, 2015

Thanks for the link to uchardet-enhanced. Their database for creating language models is amusingly small. I spent a month collecting approx. 10 MB of raw text per language :) (I have created 7 Central European language models).
It would have been very useful to have more information a month ago, when I was learning how to build a language model. I had only one relevant piece of documentation, from Mozilla, so I had to read "between the lines" :)

I have one question: how does one estimate/calculate a good "typical positive ratio"? Which expression is right?

typical_ratio = float(total_num_sequences - total_num_negative_sequences) / total_num_sequences * total_num_frequent_chars / total_num_chars

vs

typical_ratio = float(total_num_positive_sequences) / total_num_sequences * total_num_frequent_chars / total_num_chars
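For what it's worth, the two candidates differ only in what counts as "positive": everything outside the negative category (0), or only sequences in the top category (3). A sketch comparing them on made-up counts (all numbers are illustrative, not from any real model):

```python
def ratio_non_negative(total_seqs, negative_seqs, freq_chars, total_chars):
    # Variant 1: every sequence not in category 0 counts as positive.
    return (total_seqs - negative_seqs) / total_seqs * freq_chars / total_chars

def ratio_top_category(positive_seqs, total_seqs, freq_chars, total_chars):
    # Variant 2: only sequences in the top category (3) count as positive.
    return positive_seqs / total_seqs * freq_chars / total_chars

# Hypothetical corpus: 1000 sequences, 100 negative, 700 in the top
# category; 900 of 1000 characters are among the 64 frequent ones.
r1 = ratio_non_negative(1000, 100, 900, 1000)  # 0.81
r2 = ratio_top_category(700, 1000, 900, 1000)  # 0.63
```

Variant 1 is always at least as large as variant 2, since categories 1 and 2 are counted as positive there but excluded in variant 2.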

@ghost (Author) commented Feb 6, 2015

Let me explain the difference with and without the "-1":
The order (mModel->charToOrderMap[(unsigned char)aBuf[i]]) is an index into the two-character-sequence table (with values 0 = negative, 1, 2, 3 = positive occurrence), built from the SAMPLE_SIZE (= 64) most frequent characters sorted by occurrence probability in descending order. That table is a list whose first index is 0 (standard Python behavior).
If you omit the "-1", you not only cut off the most frequent two-character occurrences (exactly SAMPLE_SIZE + 1 of them) but also make the sequence table inconsistent with the charmap table.
For example, let 'e' and 'a' be the most frequent letters, with 'e' at order = 1 and 'a' at order = 2 in the charmap table. The sequence table therefore has the 'ee' "occurrence value" (0, 1, 2 or 3) in first place (index 0) and the 'ea' value in second place (index 1). If the line of source code above gives order = 1 for 'e' and order = 2 for 'a', then the expression

i = last_order * SAMPLE_SIZE + order

gives i = 64 * 1 + 2 = 66 for 'ea', which points somewhere else, not to the second item in our sequence table. If you add the "-1" to the order evaluation, you get

i = 64 * 0 + 1 = 1

which is the correct index value.
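The arithmetic above can be checked directly. A small sketch, using the 'e'/'a' orders from the example (`pair_index` is an illustrative helper, not part of chardet):

```python
SAMPLE_SIZE = 64

# 1-based charmap orders, as stored in the models: 'e' is most frequent.
char_to_order = {"e": 1, "a": 2}

def pair_index(first, second, apply_fix):
    """Index of the (first, second) pair in the flattened sequence table."""
    shift = 1 if apply_fix else 0
    last_order = char_to_order[first] - shift
    order = char_to_order[second] - shift
    return last_order * SAMPLE_SIZE + order

print(pair_index("e", "a", apply_fix=True))   # 0 * 64 + 1 = 1  (correct)
print(pair_index("e", "a", apply_fix=False))  # 1 * 64 + 2 = 66 (wrong entry)
```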

@dan-blanchard (Member)

Thanks for trying to explain the reasons why you think the -1 should be included.

In spite of the seeming correctness of adding that -1, our detection accuracy plummets if we add it. We fail 41 unit tests (instead of the 1 known failure we have right now), so I'm not very inclined to make the change.

It seems to me that what is likely happening is that when the tables were originally created by Mozilla, they were created in such a fashion that this error was happening on the generation side too.

When we address #48 and retrain all of the models in a more sensible fashion, we will want to address this issue, but I think we shouldn't until then.

@ghost (Author) commented Feb 6, 2015

"It seems to me that what is likely happening is that when the tables were originally created by Mozilla, they were created in such a fashion that this error was happening on the generation side too."

OK, I understand. It is a pity, but it is very simple to shift the current data to the left and append zeros to the end of the list, because the data in the list are sorted by probability. By the way, the first 64 + 1 values are unnecessary because they will never be indexed, and it is also very improbable that any two-character sequence will be indexed near the end of the table.
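Under that reading, the one-time data migration is just a left shift of the flattened sequence table. A sketch, assuming a 64 × 64 row-major table as discussed above (`shift_table` is an illustrative name, not a chardet function):

```python
SAMPLE_SIZE = 64

def shift_table(old_table):
    """Shift the table left by SAMPLE_SIZE + 1 entries and zero-pad,
    so data generated for 1-based orders works with 0-based indexing."""
    unused = SAMPLE_SIZE + 1  # old indices 0..64 were never read
    return old_table[unused:] + [0] * unused

# Toy table: entry value equals its old index, to make the shift visible.
old = list(range(SAMPLE_SIZE * SAMPLE_SIZE))
new = shift_table(old)
assert len(new) == len(old)
assert new[1] == 66          # 'ea' (old index 1*64+2) now sits at 0*64+1
assert new[-65:] == [0] * 65  # zero padding at the rarely-indexed tail
```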

@dan-blanchard (Member)

Since this is a longstanding bug, we can certainly shift the data over and add the adjustment like you suggested. My wife just had a baby on the 8th, so I don't have much time to dedicate to this at the moment, so if you could make a pull request that includes this fix (and shifts the data), that would be enormously helpful.

@ghost (Author) commented Feb 10, 2015

Congratulations!!!!!

Of course. I've tested the models (nosetests) with the shifted data, and the results were exactly the same, so we can do what you suggest.

@dan-blanchard (Member)

The changes discussed here are part of #99.
