Investigating a performance bug on large inputs, I found that chardet 3.0.4, using the code under "Example: Detecting encoding incrementally", simply reads the entire file (e.g. I tried with the complete works of Shakespeare lossily converted to ASCII with `recode -f utf-8..ascii`).
This is a little surprising, because if I only allow it to read the first bufferful (8 KB on my machine), then `detector.done` is `False` (as I'd expect), but becomes `True` after `detector.close()`. Shouldn't chardet have an inkling this might happen in advance?
I appreciate that it might be complex to fix this issue, so a warning in the section of the docs I mention would be nice, something like: if you're using this on large inputs you might want to limit the maximum amount of data you feed to the incremental detector.