This repository has been archived by the owner on Jan 3, 2024. It is now read-only.

Unable to get content of page from special characters (utf-8) URL #733

Open
Creatium opened this issue Dec 18, 2023 · 1 comment

Comments

@Creatium

Hey,

I am currently in the process of switching from plain Selenium to Selenium Wire so I can use proxies for scraping.

Everything works fine and as expected, except for websites that have special characters in their addresses, like https://männimetsa.ee/. With these I actually have two issues.

  1. page_source returns empty content: <html><head></head><body></body></html>. This happens no matter what I do, with or without explicit waits. No exception is raised in my try/except block; I just get an empty body.
  2. If I use a request_interceptor, I get this error: UnicodeEncodeError: 'ascii' codec can't encode character '\xe4' in position 1: ordinal not in range(128). The error is thrown even if the interceptor does nothing but print request.path, which yields https://männimetsa.ee/.

I am not sure whether issue 1 is related to issue 2, but both only happen with URLs containing special characters, so I suspect they share the same root cause.
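The UnicodeEncodeError suggests that somewhere the raw non-ASCII hostname is being pushed through an ASCII codec. A possible workaround (my own sketch, not from this issue, and the helper name to_ascii_url is hypothetical) is to IDNA-encode the hostname to its punycode form before handing the URL to the driver, so only ASCII ever reaches the proxy layer:

```python
from urllib.parse import urlsplit, urlunsplit

def to_ascii_url(url):
    """Return an ASCII-safe version of a URL whose hostname may
    contain non-ASCII (internationalized domain name) characters."""
    parts = urlsplit(url)
    # encode('idna') applies the ToASCII (punycode) transform
    # to each label of the hostname, e.g. männimetsa -> xn--...
    host = parts.hostname.encode('idna').decode('ascii')
    netloc = host if parts.port is None else f'{host}:{parts.port}'
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))

ascii_url = to_ascii_url('https://männimetsa.ee/')
print(ascii_url)  # an https://xn--... form of the same address
```

One could then call driver.get(ascii_url) instead of passing the raw URL; browsers resolve the punycode form to the same site. This is only a guess at the failure mode, not a confirmed fix.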

If I switch back to plain Selenium, I can get the content of the website without any problems.

@Creatium
Author

For now I have moved to SeleniumBase for the proxy functionality; it has no such issues.
