(Likely) left-overs from previous build are breaking the build in self-hosted runners #14228
Comments
Looking.
If anything, I can help analyse some of the self-hosted problems, more likely later in the evening. So let me know on Slack if there are any unresolved issues or investigation points I can help with; happy to help with those.
@potiuk It's possible this is the cause (and I am doing much cleaning already, just not a total nuke and refresh), but in this case I think it is very unlikely to be the problem.
In this particular case, the runner ran two jobs in total (per CloudWatch Logs Insights).
Since the only previous job was "Wait for CI images", I find it hard to believe it would cause this behaviour. A Google search for the error turns up sphinx-doc/sphinx#8880, which matches the Sphinx version we switched to yesterday in 79ed11f. Not a problem with the runner. Please don't leap to conclusions.
Ah cool. Yeah, looks like it. Thanks for that. I will fix master constraints now; it also needs an upper limit on Sphinx in setup.py.
(I'm glad I took the time to upload all logs from the runner hosts to CloudWatch now. https://vector.dev/ is much more powerful than Amazon's CloudWatch agent.)
Absolutely! The more logs, the better, and a good interface is important for those!
BTW, once we get a non-failing master (which was a bit neglected due to the mayhem with CI), those kinds of updates will only happen automatically after all tests and builds pass on master, so this would not have happened :) Glad we have the self-hosted runners now; a bigger chance to get it all under control.
Yeah, master seems to be "better"/heading in the right direction now at least.
Setup.py limit: #14238
I started to experience strange errors when building documentation, and I have reason to believe this is because of left-overs from previous builds on the self-hosted machines. For example, it looks like somewhere in the generated classes there is a get_info class in use which is normally only generated (in a different directory) during provider package preparation.
I think this is a manifestation of not "cleaning" everything during the self-hosted build, as discussed before. I think the only way to avoid such errors is to always start from a completely "clean" state when the runner starts building; otherwise it opens up a Pandora's box of similar problems (imagine, for example, .pyc files compiled for one Python version being used by another).
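The .pyc hazard above can be sketched as a small pre-job cleanup step. This is an illustrative snippet, not the actual Airflow CI scripts; the workspace path is a throwaway temp directory standing in for the runner's checkout:

```shell
# Hypothetical pre-job cleanup of stale Python bytecode in a workspace.
# A temp dir simulates the runner's checkout; real scripts would use the
# actual work directory.
WORKDIR="$(mktemp -d)"
mkdir -p "$WORKDIR/pkg/__pycache__"
touch "$WORKDIR/pkg/__pycache__/mod.cpython-36.pyc"  # simulated left-over
touch "$WORKDIR/pkg/legacy.pyc"                      # simulated left-over

# Remove bytecode caches so a job running a different Python version
# cannot pick up stale compiled files
find "$WORKDIR" -name '__pycache__' -type d -prune -exec rm -rf {} +
find "$WORKDIR" -name '*.pyc' -delete
```

The `-prune` keeps `find` from descending into directories it is about to delete; source files are untouched.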
I think the only reasonable way to approach it is to do what every other CI system does: hard-clean everything (docker cache, sources, log directories) just before a new job starts. While it might seem a waste of time (and cache), since we always have to start from scratch, it avoids a multitude of problems and the developer time lost investigating issues that should not be on our radar (and that have no easy fix either). Our builds are prepared to quickly restore the cache as needed: they can bring the CI images from the registry (which we use as cache) very quickly; in most cases pulling the necessary images takes about a minute, so caching them locally is not needed.
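The "hard clean" idea can be sketched with `git clean`, which is how many CI systems wipe a reused checkout. This simulates a workspace in a throwaway directory (on a real runner the same step would also prune the docker cache and re-pull the CI images from the registry; those commands are omitted here):

```shell
# Minimal sketch of hard-cleaning a reused workspace before a job starts.
# Simulated in a temp dir; not the actual Airflow CI scripts.
WS="$(mktemp -d)"
cd "$WS"
git init -q .
echo "source" > app.py
git add app.py
git -c user.email=ci@example.com -c user.name=ci commit -qm init

# Simulate left-overs from a previous build
mkdir -p logs __pycache__
echo "old" > logs/build.log
echo "stale" > __pycache__/app.cpython-36.pyc

# Hard clean: remove every untracked and ignored file and directory,
# leaving only tracked sources
git clean -fxdq
```

After the clean, only `app.py` remains; everything a previous job generated is gone.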
Cleaning also helps keep the environment sane: if we clean up everything (True Clean State (TM)) before the build, there will be no "growing" logs and other artifacts accumulating as the machine is re-used for several jobs.
We need to clean up everything before the run because there are many reasons why a job might not stop cleanly (a cancelled job, temporary network failures and the like). And since we have everything in tmpfs, it should be as easy as simply removing and recreating the tmpfs volume.
Example failed job where I suspect the problem is non-clean state. I ran the doc build locally and it completed without problems, so I suspect the "non-clean" state of the CI machine is the root cause.
https://github.com/apache/airflow/pull/14125/checks?check_run_id=1897287904#step:4:16551