Blocking fetcher thread #996

Open
Mikwiss opened this issue Sep 2, 2022 · 4 comments

Comments

@Mikwiss
Contributor

Mikwiss commented Sep 2, 2022

Hi @jnioche!

Thanks again for all your work! Now, let me describe our fetcher thread issue.

Summary

Our cluster has 6 worker nodes. We are fetching more than 3 million URLs per day with our topology. It is deployed on 16 worker slots and uses 16 fetchers, one per worker slot.
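
For context, this is roughly how such a topology is wired up; a minimal Flux sketch matching those numbers (the bolt id and layout are illustrative, not copied from our actual topology file):

# Flux sketch: one FetcherBolt instance per worker slot
config:
  topology.workers: 16

bolts:
  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 16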

OkClient.HttpProtocol

The worst issue was spotted with the OkClient.HttpProtocol. Sometimes, one of the worker nodes jumps to 100% CPU usage. For example, worker 5 in this case:

[screenshot]

On the StormCrawler dashboard, we can see the active fetcher thread count increase up to 50 (our fetcher limit):

[screenshot]

Worse, in another case, all the topologies are impacted:

[screenshot]

All fetchers are impacted, and the topology runs slowly. The only way to fix the problem is to kill and redeploy the topology. During the kill phase, the log confirms some blocked threads:

2022-05-30 06:37:06.557 o.a.s.d.w.Worker ShutdownHook-shutdownFunc [INFO] Shutting down executors
...
2022-05-30 06:37:07.028 o.a.s.e.ExecutorShutdown ShutdownHook-shutdownFunc [INFO] Shutting down executor fetcher:[30, 30]
2022-05-30 06:37:07.077 c.d.s.b.FetcherBolt Thread-21-fetcher-executor[30, 30] [ERROR] Interrupted exception caught in execute method
2022-05-30 06:37:07.077 c.d.s.b.FetcherBolt Thread-21-fetcher-executor[30, 30] [ERROR] Interrupted exception caught in execute method
2022-05-30 06:37:07.077 c.d.s.b.FetcherBolt Thread-21-fetcher-executor[30, 30] [ERROR] Interrupted exception caught in execute method
2022-05-30 06:37:07.077 c.d.s.b.FetcherBolt Thread-21-fetcher-executor[30, 30] [ERROR] Interrupted exception caught in execute method

HttpClient.HttpProtocol

We tried changing the protocol to fix this issue. Since then, the CPU has never reached 100% again. But periodically, some fetcher threads are not released.

[screenshot]

After some days, the number of those “zombie” threads increases. We redeploy the topology often (for functional updates) and, obviously, a new deployment resets the thread count.

For now, the issue is less critical than the OkClient one, but we are trying to understand it. Do you have any ideas, or have you seen a similar case?
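
For reference, the protocol switch mentioned above is controlled by the protocol implementation keys in the crawler configuration; a minimal sketch, assuming the standard StormCrawler 2.x class names:

# crawler-conf.yaml sketch: use the httpclient protocol instead of okhttp
http.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.httpclient.HttpProtocol"
https.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.httpclient.HttpProtocol"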

@sebastian-nagel
Contributor

Hi @Mikwiss, for the OKHttp protocol, see #918: OkHttp's internal connection pool implementation does not scale up to 1000 or more open connections. You might want to try tuning your pool configuration. Note: the issue with connection pooling was discovered in Nutch and the fix then ported to StormCrawler. The best pool configuration depends on how many hosts are crawled, the distribution of URLs over hosts, and the configured partitioning. Could you share more information, including which StormCrawler and Storm versions you are using? The Storm UI also provides insight into which bolts in the topology are actually the bottleneck.
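
As an illustration, the partitioning mentioned above is usually set with partition.url.mode in the crawler configuration; a minimal sketch (the value shown is only an example, not a recommendation):

# crawler-conf.yaml sketch: how URLs are partitioned across fetcher instances
# typical values are byHost, byDomain or byIP
partition.url.mode: "byHost"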

@Mikwiss
Contributor Author

Mikwiss commented Sep 2, 2022

Hi @sebastian-nagel! Thanks for the reply! I will check #918 to understand better.

Each crawler we have crawls only one host. We are currently on SC 2.2 and Storm 2.3.0. We have a task in our backlog to upgrade to SC 2.5 and Storm 2.4.0.

@jnioche modified the milestone: 2.6 on Sep 3, 2022
@jnioche
Contributor

jnioche commented Sep 3, 2022

Thanks @Mikwiss for reporting this issue, and thanks @sebastian-nagel for your comment.

Each crawler we have crawls only one host

So the 16 fetchers all deal with the same host? Did you choose that over a single Fetcher so that the tuples get distributed evenly across the Parser tasks?

What value do you have in your conf for fetcher.threads.per.queue?

@Mikwiss
Contributor Author

Mikwiss commented Jan 10, 2023

Hi!

Sorry for the delay. To reproduce the issue, we have to wait a long time.

So, this issue occurs on only one topology (with a specific target/host and 50 fetcher.threads.per.queue). Following our conversation with @jnioche, we decreased http.content.limit to 10,000,000 (instead of -1).
We still have the same issue:

  • okhttp: 100% CPU on one worker after a while
  • httpclient: some zombie threads

But it seems less severe, so we will decrease this parameter further. We will keep in touch.
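
For completeness, the two settings discussed in this thread live in the crawler configuration; a minimal sketch with the values mentioned above:

# crawler-conf.yaml sketch: values discussed in this thread
fetcher.threads.per.queue: 50    # fetch threads allowed on a single queue
http.content.limit: 10000000     # max bytes fetched per document (-1 means no limit)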
