
Bump jenkins.version from 2.379 to 2.380 for bom-weekly #1611

Conversation

@github-actions github-actions bot commented Nov 29, 2022

Bump jenkins.version from 2.379 to 2.380 for bom-weekly

Report

Source:
	✔ [jenkins] Get Last jenkins Weekly Version(jenkins)


Condition:
	✔ [jenkins] Test if Jenkins stable published(maven)

Target:
	✔ [jenkins] Update Jenkins version(shell)

Changelog

Jenkins changelog is available at: https://www.jenkins.io/changelog/#v2.380


Remark

This pull request was automatically created using Updatecli.

Please report any issues with this tool here

…/b... om/updatecli/update-jenkins.ps1 weekly 2.380"

Made with ❤️️ by updatecli
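
As background for what the "Update Jenkins version(shell)" target above actually changes: the bump amounts to moving the jenkins.version Maven property for the weekly BOM line from 2.379 to 2.380. The real logic lives in the truncated update-jenkins.ps1 command shown in the Remark; the sketch below is only a hypothetical shell approximation of that step, with the bom-weekly/pom.xml path assumed for illustration.

```sh
#!/bin/sh
# Hypothetical sketch only: the real update is performed by updatecli invoking
# update-jenkins.ps1 (path truncated in the Remark above). This approximates
# the "Update Jenkins version(shell)" target by bumping the jenkins.version
# Maven property for the weekly BOM line.
NEW_VERSION=2.380

# Let the versions-maven-plugin rewrite the property in place.
mvn -q versions:set-property \
    -Dproperty=jenkins.version \
    -DnewVersion="${NEW_VERSION}"

# Equivalent blunt text substitution, shown only to illustrate the effect
# (bom-weekly/pom.xml is an assumed location, not taken from this thread):
# sed -i "s|<jenkins.version>.*</jenkins.version>|<jenkins.version>${NEW_VERSION}</jenkins.version>|" bom-weekly/pom.xml
```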
@github-actions github-actions bot added the dependencies Pull requests that update a dependency file label Nov 29, 2022
@github-actions github-actions bot enabled auto-merge (squash) November 29, 2022 23:38

@MarkEWaite MarkEWaite left a comment


Not a dependabot pull request. Will need to be merged after success.


timja commented Dec 1, 2022

> Not a dependabot pull request. Will need to be merged after success.

It has auto-merge enabled and will merge itself if the checks pass.

@MarkEWaite

I can't duplicate the failure on my computer, yet it fails consistently with a test timeout in CI.


basil commented Dec 5, 2022

> I can't duplicate the failure on my computer, yet it fails consistently with a test timeout in CI.

This is a legitimate performance regression, introduced in jenkinsci/jenkins#6408 and resolved in jenkinsci/jenkins#7464. With jenkinsci/jenkins#6408 but without jenkinsci/jenkins#7464, running ReadOnlyTest#testGlobalConfiguration took me 97 seconds locally (and on CI long enough to hit the 180-second timeout and fail the build), while the same test took me only 13 seconds locally both before jenkinsci/jenkins#6408 and after jenkinsci/jenkins#7464.
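
For anyone who wants to check those timings, here is a minimal sketch, assuming a local checkout of the plugin that defines ReadOnlyTest and that its build honors a -Djenkins.version override (both assumptions of this sketch, not details taken from the thread):

```sh
# Reproduction sketch (not from the PR): time the single slow test against the
# two core versions and compare wall-clock time. Assumes the working directory
# is a checkout of the plugin that defines ReadOnlyTest and that its POM
# honors a -Djenkins.version override.
mvn test -Dtest='ReadOnlyTest#testGlobalConfiguration' -Djenkins.version=2.379
mvn test -Dtest='ReadOnlyTest#testGlobalConfiguration' -Djenkins.version=2.380
```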

@MarkEWaite

> > I can't duplicate the failure on my computer, yet it fails consistently with a test timeout in CI.
>
> This is a legitimate performance regression, introduced in jenkinsci/jenkins#6408 and resolved in jenkinsci/jenkins#7464. With jenkinsci/jenkins#6408 but without jenkinsci/jenkins#7464, running ReadOnlyTest#testGlobalConfiguration took me 97 seconds locally (and on CI long enough to hit the 180-second timeout and fail the build), while the same test took me only 13 seconds locally both before jenkinsci/jenkins#6408 and after jenkinsci/jenkins#7464.

That is a brilliant result! I didn't even consider that there might be a performance regression at the root of the timeout. Thanks very much!


basil commented Dec 6, 2022

Thank you, Mark. But can we discuss how we want to handle this situation in the future? I think an escalation to a broader audience was warranted when this update couldn't be completed within a few days, or at least an issue filed somewhere (BOM? core?). We can't afford to have core updates stuck unresolved for an extended period of time. I'm open to hearing your thoughts or discussing further.

@basil basil mentioned this pull request Dec 6, 2022

basil commented Dec 6, 2022

Retroactively filed #1628

@MarkEWaite

> But can we discuss how we want to handle this situation in the future? I think an escalation to a broader audience was warranted when this update couldn't be completed within a few days, or at least an issue filed somewhere (BOM? core?). We can't afford to have core updates stuck unresolved for an extended period of time. I'm open to hearing your thoughts or discussing further.

That's a good point. We would benefit from general guidance for handling issues in bom pull requests. It seems like there are multiple levels of issues and they may justify different responses. For example:

  1. Jenkins core upgrade not proceeding as in
    Bump jenkins.version from 2.379 to 2.380 for bom-weekly #1611,
    Bump jenkins.version from 2.378 to 2.379 for bom-weekly #1602
  2. High-use plugin upgrade not proceeding, as in
    Bump git-plugin.version from 4.14.1 to 4.14.2 in /bom-weekly #1623,
    Bump versions forensics-api, plugin-util-api, and checks-api #1513,
    Bump font-awesome-api from 6.1.2-1 to 6.2.1-1 in /bom-weekly #1601,
    Bump plugin-util-api.version from 2.17.0 to 2.20.0 in /bom-weekly #1614
  3. Experiments that were exploring a question and are not proceeding as in
    Testing Tippy.js over YUI tooltips #992
  4. Pull requests that should be closed because they are incorrect, as in
    Bump script-security from 1183.v774b_0b_0a_a_451 to 1189.vb_a_b_7c8fd5fde in /bom-2.332.x #1563,
    Bump script-security from 1183.v774b_0b_0a_a_451 to 1189.vb_a_b_7c8fd5fde in /bom-2.332.x #1559

For case 1, a core upgrade not proceeding, we could declare that if the upgrade pull request has been open for 4 days without a successful merge, we raise an issue in this repository to seek more help. I think that case 1 is the most urgent of the examples.

For case 2, a high-use plugin upgrade not proceeding, we could declare that if the upgrade pull request has been open for 10 days without a successful merge, we raise an issue with the specific plugin to seek more help.

For case 3, we could declare a policy that experiments should be closed when they are no longer serving their purpose.

For case 4, we could identify common sources of incorrect pull requests and note that maintainers close them as soon as they are detected.

How do those examples and the proposed responses seem to you? What refinements, corrections, or improvements should be considered?


basil commented Dec 6, 2022

Sounds like as good a place to start as any. In the long term, I think it would be good to have a clear assignee for each failed core upgrade who is responsible for clearing the hurdle. You can retroactively consider me the assignee for the failures in 2.379 and 2.380, but I think others besides me should also assign themselves to future failed core upgrades; if they don't, then we should start asking people, to avoid having one person do it all the time.

@MarkEWaite

#1629 resolves the problems that were blocking this pull request.

@MarkEWaite MarkEWaite closed this Dec 6, 2022
auto-merge was automatically disabled December 6, 2022 19:54

Pull request was closed

@timja timja deleted the updatecli_2fbdd9525c5c49a7e14c32638dfba66ea692663a5330f0932b56bb9f61f7833b branch December 6, 2022 20:08