0.8.4 vs 2.0.0a4 performance #236
I'm aware of this.
Still, it would be very difficult to compare 0.8.4 with 2.0.0a4 because of 1.
Thanks for your answers @karlcow @lepture. About 2., I think it is rather relevant: I tried on both my home laptop and two distant servers, with different documents (both long and short; I can't post them here because they contain personal data), and I used `timeit`. The fact that 0.8.4 has a single-file architecture might be one of the reasons: I have observed on different projects that a one-file structure is sometimes faster (the Bottle web framework is single-file).
See https://gregoryszorc.com/blog/2019/01/10/what-i%27ve-learned-about-optimizing-python/
About why I was saying it doesn't matter: the 3x really depends on the use case. And it also probably depends on cold start vs a cached compiled version. But we are talking about milliseconds here, so I guess it starts to matter if you are in the business of processing thousands of documents and have to render them right away. Now if we compare the document generation to, let's say, RTT, latency, etc. in the case of a webserver, we are in a very different scenario where 4ms vs 13ms doesn't matter. Improving performance is good, but not necessary everywhere. What would be interesting is to understand how it impacts your own use case; that would make it easier to find the right path of correction, or not. It's always easier to work on something concrete.
Thank you for your answer @karlcow. I am indeed aware of these profiling techniques (you're right: the initial setup / import time should not be taken into consideration, and this can be done well by using `timeit`). 4ms vs 13ms was the timing on my local (quite powerful) computer. When I do it on the distant dedicated server, on the real (bigger) documents that I need to serve, I get these numbers:
(Once again I tested carefully, on many similar-length documents; I profiled only the execution of the rendering and did not take the import/initial setup time into consideration, etc.) As I serve many requests per second on my server, I will probably use 0.8.4: it seems to work fine, so I'm happy with 0.8.4. PS: anyway
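The methodology described above (time only the rendering call, keep imports and setup out of the measurement) can be sketched with the standard-library `timeit` module. The `render` function below is a trivial stand-in so the sketch is self-contained; in a real measurement you would time `mistune.markdown(s)` instead, with `import mistune` in the setup.

```python
import timeit

def render(s):
    # Stand-in for mistune.markdown; replace with the real renderer when
    # benchmarking mistune itself.
    return "<p>" + s.replace("\n\n", "</p>\n<p>") + "</p>"

s = "Some **markdown** text.\n\n" * 200

# Only the call itself is timed; import/setup cost stays outside the
# measured statement. Taking the min of several repeats reduces noise
# from other processes.
per_call = min(timeit.repeat(lambda: render(s), repeat=5, number=100)) / 100
print(f"{per_call * 1000:.3f} ms per render")
```

With the real library the equivalent would be `timeit.repeat("mistune.markdown(s)", setup="import mistune; s = open('doc.md').read()", ...)`, so the one-time import is never counted.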
@josephernest there are some edge cases where 0.x mistune performs very badly. Are you rendering markdown text on the fly? Here is how I use mistune in Typlog:
Hi @josephernest. How about saving the parsed document instead of doing it in each request? And if you already do it, have you already thought about converting the markdown source in a queue, detached from the request?
@viniciusban Thanks for these suggestions. I'll probably try these later; for now ~25ms with mistune 0.8.4 is fine, I think I'll just keep using this version and it's perfect.
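The render-at-save-time idea suggested above might look like the following sketch: the markdown is converted once when the document is written, the HTML is stored alongside it, and each request serves the stored HTML with no parsing at all. (The dict stands in for a real database; `markdown_to_html` stands in for `mistune.markdown`.)

```python
db = {}  # stand-in for a real database table

def markdown_to_html(src):
    # Stand-in for mistune.markdown.
    return "<p>" + src + "</p>"

def save_document(doc_id, src):
    # Rendering happens once, at write time.
    db[doc_id] = {"src": src, "html": markdown_to_html(src)}

def serve_document(doc_id):
    # Per-request cost is a plain lookup; no markdown parsing.
    return db[doc_id]["html"]
```

The queue variant would simply move the `markdown_to_html` call into a background worker so even the save path never blocks on rendering.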
Could be useful: https://functiontrace.com/
Hi @josephernest, I was thinking: if anything performance-critical happens, it might be worth grinding the code through Cython and seeing how it performs. 0.8.4 seems to be functioning well in my case (and I can't see why it shouldn't), and Cython will probably give you some benefit (I expect up to 2x without any modifications to the pure-Python code, just putting it through Cython). Regretfully I haven't done any particular benchmarking, as this processing step isn't foreseeably becoming a bottleneck in the short/medium run in my case. (By the way, I've spotted that the 2.0.0 alpha is recommended in the README.) (Update: sorry, just now checking your linked article; indeed, Cython is one of the options tested there. Anyway, others, maybe even me, might be interested if you have any info about the performance implications.)
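The Cython suggestion above (compiling unmodified pure-Python code) can be sketched as a minimal build script, assuming Cython is installed (`pip install cython`); the module name `my_markdown_helpers` is hypothetical:

```python
# setup.py -- compile an unmodified pure-Python module with Cython.
# This is a build-config sketch, not mistune's actual build setup.
from setuptools import setup
from Cython.Build import cythonize

setup(
    ext_modules=cythonize("my_markdown_helpers.py", language_level="3"),
)
```

After `python setup.py build_ext --inplace`, `import my_markdown_helpers` picks up the compiled extension instead of the `.py` file; gains on unannotated code are typically modest, which is consistent with the "up to 2x" guess above.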
Here is our benchmark for v3 on my MacBook. Please check
Congrats @lepture for this project! (I'm doing a benchmark of many Markdown-to-HTML libraries, and `mistune` seems to be the best!)

I used `timeit` with the same Markdown document, and:

- with 0.8.4, `mistune.markdown(s)` took 4ms on average
- with 2.0.0a4, `mistune.markdown(s)` took 13ms on average (for the same document)

What could be the reason for 2.0.0a4 being 3x slower than version 0.8.4?
Can I use 0.8.4 for my current project; is it still stable?
PS: I also found this article, which is related to this topic: https://getnikola.com/blog/markdown-can-affect-performance.html