Sometimes the function page.waitForXPath() does not detect the DOM that already exists #5539
Comments
Same here. I think it is the same issue that happened in #4072. The behaviour I'm seeing (my browser is always running in the background; I don't close it on each request):
Seems unlikely that this is #4072, given that that issue was closed by the devs a long time ago. We saw this today: a page presumably got faster, so the DOM loaded sooner, and as a result waitForXPath never returns even though the XPath matches the existing content.
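If the root cause is the race described above (the content already exists before the wait starts), a common workaround pattern is to check for existing matches first and only wait when nothing is present yet. This is a sketch, not an official Puppeteer helper; the function name `waitForXPathSafe` is ours, and it relies only on the `page.$x` and `page.waitForXPath` APIs that existed in the Puppeteer versions discussed here:

```javascript
// Sketch of a race-tolerant wait: page.$x() returns handles for nodes that
// already exist in the DOM, so we only fall back to waitForXPath() when
// there is no match yet. The helper name is illustrative, not Puppeteer's.
async function waitForXPathSafe(page, xpath, options = {}) {
  const existing = await page.$x(xpath); // matches already in the DOM
  if (existing.length > 0) {
    return existing[0]; // element was there before we started waiting
  }
  return page.waitForXPath(xpath, options); // otherwise wait as usual
}
```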
Same problem here.
I think I'm having the same issue. It's been quite a while since it was reported; any idea on a workaround? It seems to work in non-headless mode. await browser.newPage({ context: 'context-' + Math.floor(Math.random() * 10000) })
Same problem here.
Could someone provide a repro script?
We're marking this issue as unconfirmed because it has not had recent activity and we weren't able to confirm it yet. It will be closed if no further activity occurs within the next 30 days.
We are closing this issue. If the issue still persists in the latest version of Puppeteer, please reopen the issue and update the description. We will try our best to accommodate it!
Have you fixed this? I have the same problem here (version 19.6.3)
What's this workaround based on?
Any fixes on this? |
I use Puppeteer to scrape a website, looping over and collecting a lot of pages.
It works normally at the beginning, but after collecting many pages of data, the function page.waitForXPath() no longer detects DOM that already exists and throws an error, such as: TimeoutError: waiting for XPath "//table[@Class="hq_table"]/tbody/tr[position()>1]" failed: timeout 3000ms exceeded
I don't know how to deal with it, and it affects the speed of collection.
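As a stopgap for intermittent timeouts in a long scraping loop like the one described above, one option is to wrap the wait in a retry loop. This is only a sketch: `waitWithRetry` is our own helper name, not part of Puppeteer, and the XPath in the commented usage is the one from the error message above:

```javascript
// Retry sketch: re-run an async wait a few times before giving up, so a
// one-off TimeoutError in a long scraping loop doesn't abort the whole run.
async function waitWithRetry(waitFn, retries = 3) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await waitFn(); // success: return whatever the wait resolved to
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // all attempts failed
}

// Hypothetical usage inside the scraping loop described above:
// const row = await waitWithRetry(() =>
//   page.waitForXPath('//table[@Class="hq_table"]/tbody/tr[position()>1]',
//                     { timeout: 3000 }));
```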