
search results page occasionally dumps escaped HTML for part of the page #89

Open
iisa opened this issue Jan 4, 2022 · 7 comments
Assignees
Labels
bug Something isn't working

Comments

iisa commented Jan 4, 2022

Repro steps:

  • open a Safari private (incognito) window and go to scholar.archive.org

  • search for "Beyonce"

  • search results appear as an escaped HTML string

  • after a page refresh, the elements are no longer strings and styles are applied properly

Side-by-side screenshot of the results page with the escaped HTML string and its element in the inspector:
[image]

Side-by-side inspector/view-source diff of the sidebar with the escaped HTML string:
[image]

@bnewbold bnewbold changed the title Safari: search results page returns html as string search results page occasionally dumps escaped HTML for part of the page Jan 5, 2022
@bnewbold bnewbold self-assigned this Jan 5, 2022
@bnewbold bnewbold added the bug Something isn't working label Jan 5, 2022

bnewbold commented Jan 5, 2022

I just encountered this bug myself via organic use.

My browser environment: Chromium on Debian Linux (x86_64), not incognito mode, language preference is English. The query I first encountered this on was https://scholar.archive.org/search?q=%22west+roxbury%22. I refreshed in a new tab once and got the same thing (escaped HTML). Refreshed again a minute later and got non-escaped text, so this seems to be an intermittent issue.

I did a view-source and saw output similar to what Isa posted above. I tried wget to capture the raw HTML, and also did "save HTML" from the browser; in both cases the text was not escaped (i.e., it looked correct). Likewise, when I go to devtools -> Sources and inspect the source code, the HTML looks correct. In all these cases it may be that re-fetching the page means I am seeing updated ("correct") HTML? This is confusing/confounding.

Three debugging thoughts so far:

  • random because only one backend is experiencing the problem: I looked at the haproxy panel (https://lb.fatcat.wiki/) and all recent requests have gone to a single backend (svc500), so that doesn't seem to be the issue
  • random because of multiple processes or threads in the server runtime (FastAPI running under gunicorn/uvicorn)
  • root cause is something to do with jinja2 template execution and translations: maybe the translation files are somehow out of sync? That doesn't explain the inconsistency, though

To synthesize the latter two: perhaps I did a git checkout on the server but didn't restart the FastAPI process, so some processes/threads have out-of-sync template or translation files?

The uptime of the main fatcat-scholar systemd service is: Active: active (running) since Sat 2021-12-11 17:46:44 UTC; 3 weeks 3 days ago

The currently deployed git commit is:

commit 6038bea6ad071db02110fdaeeb51b38fc19dbd54 (HEAD -> master, origin/master, origin/HEAD)
Author: Bryan Newbold <bnewbold@archive.org>
Date:   Fri Dec 10 14:26:25 2021 -0800

    semantic-ui css: fix typo in italic font path (causing 404 errors)

That doesn't seem to be it. I'm going to restart the process, and possibly the entire VM (failover to replica for a couple minutes) and try to reproduce.

@bnewbold

I redeployed and saw no change. As of today this problem seems to be happening even more frequently. I was able to reproduce it with wget, so the problem is definitely server-side, not in the browser's processing of the HTML.

@bnewbold

After deploying some small translation updates this morning, I am no longer able to reproduce the problem. I suspect the issue is related to our translation (i18n) infrastructure and its integration with jinja2 HTML templates. We use a special hack to make this work with our async web framework (FastAPI) on a per-request basis, incorporating path prefixes and Accept-Language browser headers. There might be some obscure threading/concurrency corner case breaking the jinja2 {% trans %} blocks, resulting in them not being returned as "safe" strings. An alternate hypothesis is a related issue with jinja2 macros.
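For context on the "safe strings" idea: with jinja2 autoescaping enabled, a value is emitted as-is only if it is wrapped as markupsafe Markup; a plain str gets HTML-escaped. A minimal sketch (the template and strings here are hypothetical, not from the fatcat-scholar codebase) of how a string losing its "safe" status would produce exactly the symptom in this issue:

```python
from jinja2 import Environment
from markupsafe import Markup

env = Environment(autoescape=True)
template = env.from_string("<p>{{ snippet }}</p>")

# A Markup-wrapped ("safe") string is emitted as-is.
safe = Markup("<b>Beyonce</b>")
print(template.render(snippet=safe))    # <p><b>Beyonce</b></p>

# A plain str gets escaped, which is what the broken pages looked like.
unsafe = "<b>Beyonce</b>"
print(template.render(snippet=unsafe))  # <p>&lt;b&gt;Beyonce&lt;/b&gt;</p>
```

If a concurrency bug caused a {% trans %} block's output to arrive as a plain str instead of Markup, the second behavior is what users would see.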

Without being able to reliably reproduce this right now (in any of dev, qa, or prod environments), and the issue being intermittent even when it can be reproduced, I think unfortunately this is going to have to sit on the back-burner until it crops up again. 😞


miku commented Feb 17, 2022

The above example reproduces quite reliably (I see raw HTML in about 50% of cases when reloading, in Firefox 97.0 on Linux).


bnewbold commented Aug 5, 2022

This issue has come and gone over the past couple of months. I will prioritize getting a reproducible example of this bug and getting it fixed.

@bnewbold

I pushed some changes to production just now which I hope have resolved this issue. There is a small impact on latency, which is unfortunate.


A few notes on the issue:

  • I wrote a client script to reproduce the issue by making 40+ parallel requests using an asyncio HTTP client in Python. It was surprisingly hard to get the issue to reproduce even with this script; a lot of tweaking was necessary
  • At first I thought the problem was in the jinja2 i18n extension, which builds on babel. There are reported problems with this extension in async web frameworks like starlette (which fastapi is built on; fatcat-scholar uses fastapi). This turned out to be a red herring: the work-arounds I already had in place resolved those problems, and almost all of the templated endpoints in fatcat-scholar do not actually use async/await handlers
  • I also assumed the problem was triggered when templates were rendered with multiple i18n translation engines in play. I excluded this possibility by removing the i18n extension and template invocations in a temporary branch, and could still reproduce the issue on that branch
  • having only a single language in play (all parallel requests with the same language preference) made reproduction much easier, which was a hint that the actual issue was my creation of jinja2 template environments in a python-global context. That would result in state leaking across requests when rendering templates
  • the current work-around is to re-create the jinja2 Environment on every single request. This is quite inefficient, but I have not been able to trigger/reproduce the issue with this work-around, and the impact on latency is on the order of tens of milliseconds, so I'm going to stick with it in production for now
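The per-request work-around can be sketched roughly as follows. This is an illustration, not the actual fatcat-scholar code: the template, loader, and function name are made up, and the real app wires in babel translations rather than a bare callable.

```python
from jinja2 import DictLoader, Environment

# Hypothetical in-memory template; the real app loads templates from disk.
TEMPLATES = {"search.html": "<h1>{{ _('Search') }}</h1>"}

def render_for_request(template_name: str, translate) -> str:
    # Build a fresh Environment on every request, so no template or
    # extension state is shared across concurrent requests. This trades
    # some per-request latency for isolation.
    env = Environment(loader=DictLoader(TEMPLATES), autoescape=True)
    env.globals["_"] = translate
    return env.get_template(template_name).render()

# A per-request translation callable would normally be derived from the
# path prefix / Accept-Language header; identity function as a stand-in:
print(render_for_request("search.html", lambda s: s))  # <h1>Search</h1>
```

The design trade-off is exactly the one described above: constructing an Environment is cheap enough (tens of milliseconds at most) that correctness wins over the efficiency of a shared global instance.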

This debugging process took multiple entire days. I'm pretty inclined to ditch fastapi and refactor the web service to use Flask instead.
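For reference, the parallel-request reproduction approach described above can be sketched like this. The URL, marker strings, request count, and choice of httpx as the asyncio HTTP client are illustrative assumptions, not the actual script:

```python
import asyncio

URL = "https://scholar.archive.org/search?q=%22west+roxbury%22"

def looks_escaped(body: str) -> bool:
    # Escaped markup such as "&lt;div" in the response body indicates
    # the bug; a healthy response contains literal "<div" tags instead.
    return "&lt;div" in body or "&lt;a " in body

async def main(n: int = 40) -> None:
    import httpx  # third-party asyncio HTTP client (assumed available)
    async with httpx.AsyncClient(timeout=30.0) as client:
        responses = await asyncio.gather(*(client.get(URL) for _ in range(n)))
    hits = sum(looks_escaped(r.text) for r in responses)
    print(f"{hits}/{n} responses contained escaped HTML")

# To run the check (makes live requests): asyncio.run(main())
```

Firing all requests concurrently via asyncio.gather is what makes multiple in-flight renders overlap in the server process, which is the condition the bug seems to need.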
