
Feature: Allow unpinned in non-privileged workflows #2018

Open
laurentsimon opened this issue Jul 1, 2022 · 51 comments
Labels
kind/enhancement New feature or request Stale

Comments

@laurentsimon
Contributor

laurentsimon commented Jul 1, 2022

Workflows that use no secrets and have all their permissions set to read/none don't benefit from being pinned, and pinning adds a burden for users to keep them up to date. We may want to relax the Pinned-Dependencies check, making this a "bonus" point rather than flagging it as a problem.
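For illustration, a minimal workflow of the kind this describes might look like the following sketch (the workflow itself is hypothetical):

```yaml
# Hypothetical workflow of the kind this proposal targets:
# no secrets, read-only permissions.
name: lint
on: [pull_request]
permissions: read-all
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # tag reference, not a pinned commit SHA
      - run: make lint
```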

@laurentsimon laurentsimon added the kind/enhancement New feature or request label Jul 1, 2022
@laurentsimon
Contributor Author

/cc @raghavkaul

@evverx
Contributor

evverx commented Nov 23, 2022

Workflows that have no secret and all their permissions set to read/none don't benefit from being pinned

They do, but it has nothing to do with security. systemd pins its dependencies to make the CI easier to maintain. For example, super-linter broke at some point (https://github.com/github/super-linter/issues/2128); when Dependabot tried to update the action, the update failed, and the last working version kept being used until the bug was fixed. If it hadn't been pinned, the part of the CI relying on super-linter would simply have been broken.
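For reference, the usual pinning pattern keeps updates manageable by combining a commit SHA with a version comment that update bots understand (the SHA and tag below are placeholders, not a real super-linter release):

```yaml
# Pinned to an exact commit; the trailing comment lets Dependabot or
# Renovate map the SHA back to a release tag when proposing updates.
- uses: github/super-linter@aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa # v4.9.2
```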

Having said that, I agree that unpinned read-only actions shouldn't be flagged by analyzers focusing primarily on security. It seems to me that promotional PRs like the one discussed at ossf/scorecard-action#1017, triggered semi-automatically by overzealous "security" tools fixing imaginary "security" issues, could be eliminated by addressing this issue. Can it be bumped somehow?

@laurentsimon
Contributor Author

I think we'll try to tackle this in Q1. It's on our radar.


@laurentsimon
Contributor Author

@diogoteles08 is working on it afaik


@diogoteles08
Contributor

Hi there! Yes, I'll be working on this issue. Expected by the end of Q1.

@hartwork

hartwork commented Jun 5, 2023

I would like to emphasize that even for read-only actions, a gone-malicious action could:

  • modify build artifacts
  • leak CI secrets (if secrets are used)
  • release/deploy/publish malicious code (if secrets are used)

So if Scorecard de-penalizes read-only actions, it closes its eyes to all of these scenarios. Is that a good idea?
I would personally expect to at least require that the action not use any secrets (and to check for that) before it's considered safe without pinning, just my 2 cents.

Am I missing something in this picture?

CC @hugovk @joycebrum

PS: https://github.com/step-security/harden-runner tries to go the exact opposite way, in case someone is interested.

@evverx
Contributor

evverx commented Jun 5, 2023

modify build artifacts

I wonder how it can do this if it doesn't have contents: write? Or do you mean actions that upload stuff to, say, PyPI?

leak CI secrets

They can leak secrets, but I don't think pinning stuff to hashes helps much here, given that under the hood a lot of actions use a bunch of unpinned modules nobody has ever reviewed. PRs where Dependabot updates stuff aren't reviewed either.

release/deploy/publish malicious code

They can do that too and pinning stuff doesn't help here either.

https://github.com/step-security/harden-runner tries to go the exact opposite way, in case someone is interested

I took a look at it systemd/systemd#25205 (comment) so personally I'm not interested in that.

All in all, I think that instead of that security theater, Scorecard should ideally actually audit/scan actions. It was discussed in #3022 (comment) and #2991 (comment).

@hartwork

hartwork commented Jun 5, 2023

@evverx pinning makes sure that the same action code will be run every time, enabling a review-once-run-multiple-times kind of workflow. That's the problem pinning solves: it makes changes visible and controlled, nothing more. That should answer 99% of your question. By build artifact I meant files uploaded via the actions/upload-artifact action, which (unlike PyPI) does not require any additional secrets.
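To make the difference concrete (the SHA below is a placeholder):

```yaml
- uses: actions/checkout@v4  # mutable: the v4 tag can be re-pointed to new code
- uses: actions/checkout@0000000000000000000000000000000000000000  # immutable: exact commit, review once
```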

@evverx
Contributor

evverx commented Jun 5, 2023

That should answer 99% of your question

I didn't ask any question regarding pinning stuff. I know what it is and what it's supposed to accomplish.

By build artifact I meant files uploaded via action actions/upload-artifact, which (unlike PyPI) does not require any additional secrets

It's a well-known GitHub bug: google/clusterfuzzlite#84 (comment). It should be fixed by GitHub.

@hartwork

hartwork commented Jun 5, 2023

That should answer 99% of your question

I didn't ask any question regarding pinning stuff. I know what it is and what it's supposed to accomplish.

@evverx if the use of pinning is clear, what exactly is your motive and role in this discussion then?

By build artifact I meant files uploaded via action actions/upload-artifact, which (unlike PyPI) does not require any additional secrets

It's a well-known GitHub bug: google/clusterfuzzlite#84 (comment). It should be fixed by GitHub.

That's arguable, but an interesting point.

@evverx
Contributor

evverx commented Jun 5, 2023

what exactly is your motive and role in this discussion then?

#2018 (comment)

That's arguable, but an interesting point.

I'm not sure why stuff is arguable when GitHub bugs and shortcomings come up :-)

@hartwork

hartwork commented Jun 5, 2023

what exactly is your motive and role in this discussion then?

#2018 (comment)

@evverx either you acknowledge the security aspect of pinning even for read-only actions or you don't. If you do, continuing to penalize is in your interest, which is what I was arguing for. Excellent.

@evverx
Contributor

evverx commented Jun 5, 2023

either you acknowledge the security aspect of pinning for even read-only actions or you don't

I can acknowledge that but it doesn't mean that false positives shouldn't be fixed. And this issue is about false positives.

If you do, to keep penalizing is in your interest

It isn't. I'm not interested in receiving bogus promotional "security" PRs fixing imaginary "security" issues. This issue is supposed to address false positives.

@hartwork

hartwork commented Jun 5, 2023

@evverx I see. I'm not sure I mind false positives in this context, I support more hardening. Your point is clear then, thanks.

@evverx
Contributor

evverx commented Jun 5, 2023

I support more hardening

FWIW I do too. The problem is that even though I would love to pin CIFuzz and ClusterFuzzLite, they can't be pinned. Another way to address this would be #1907, but that would waste maintainers' time, wouldn't be visible outside of repositories, and wouldn't prevent PRs trying to fix them.

Anyway, the reason I'm annoyed is that, thanks to all that automation, I somehow end up receiving bogus PRs, bogus CVEs generated automatically based on OSV (with inaccurate commit ranges), and so on; all of that can be avoided by addressing false positives.

@diogoteles08
Contributor

Hi!

@evverx I understand your complaints about false positives and annoying notifications on GitHub, and we are taking them seriously! As you said, this current issue intends to reduce false positives.

About the idea of giving Scorecard some kind of feature to scan/audit actions: I also think it's a good idea and it's on my radar, but it's still a bit far from Scorecard's current approach and might take longer to become feasible. For now I believe it's more reasonable to focus on short-term fixes that avoid false positives and enhance the UX of Scorecard for maintainers.

@diogoteles08
Contributor

diogoteles08 commented Jun 6, 2023

Thanks for the input, @hartwork.

To answer your main complaints: for this issue we are already considering that unpinned actions would be allowed (i.e. not cause a score decrease at the Pinned-Dependencies check) only if the action:

  1. Is not called with any secret
  2. Is not called with any "dangerous permission"

But I also believe that hash-pinning actions could bring security benefits even when used with read-only permissions. I tried to compile my concerns as follows:

  1. As I raised on the issue Should security-events: read be considered a dangerous permission? #3131, security-events: read can be dangerous. So we might want to define it as a "dangerous permission" and keep punishing unpinned actions that use this permission.
  2. As @hartwork pointed out, actions with read permissions can still modify build artifacts released with actions/upload-artifact. For this I'd like to hear other opinions, because I don't know if those are ever used for "publishing" or anything critical.
  3. Contributing to the general idea of hash-pinning being a good practice, because it also prevents bugs or malfunctions not directly related to security, but that could eventually be abused. One can also argue that leaving an action unpinned can be dangerous in case someone later changes the permissions to write and doesn't remember to pin the action (Scorecard should point that out, though).

Given those points, we could debate if we want to keep a light score reduction even for the "allowed cases" of unpinned actions discussed in this issue. Maybe something like a 0.1 score reduction for each occurrence? With this setup, a repository that prefers to keep "safe actions" unpinned won't have a 10/10, but should still get something around 9/10, which is already a pretty great score.

Would like to hear @maintainers input on this. @laurentsimon @raghavkaul

@joycebrum
Contributor

I really like this idea of "punishing, but not much", because it would also differentiate a "safe workflow unpinned" from a "potentially dangerous workflow unpinned".

Good idea @diogoteles08 !

@evverx
Contributor

evverx commented Jun 6, 2023

enhance the UX of Scorecard for Maintainers

I agree that it's much more important. The CLOMonitor folks and the dashboards they built are the only reason why I didn't throw out the scorecard action. (Though it's still weird to me that it was somehow considered totally acceptable to just dump JSON.)

this current issue intends to reduce False Positives

I completely forgot that at least in the systemd repository CIFuzz is no longer read-only: systemd/systemd#27501 so I guess at least there it's too late :-) My opinion on the security tab and the ability to read it can be found at google/oss-fuzz#10090 (comment). That same argument is more or less applicable to the 90-day OSS-Fuzz disclosure but I guess it somehow makes people feel secure for some reason.

Anyway since I'm subscribed to some OSS-Fuzz issues it seems to me that libexpat is in the process of integrating CIFuzz. My journey with pinning it ended in google/clusterfuzzlite#95 (where I gave up). If libexpat could somehow convince OSS-Fuzz to make its actions actually meaningfully pinnable it would be great. (though personally I don't think it's worth it).

@spencerschrock
Contributor

  • modify build artifacts

Am I interpreting your scenario right?

  1. Some repo has a workflow which saves a workflow artifact via actions/upload-artifact
  2. Another step later in the workflow is compromised and now calls actions/upload-artifact to modify and overwrite the artifact from 1?

@hartwork

hartwork commented Jun 6, 2023

@spencerschrock what I had in mind was more like:

  1. actions/upload-artifact is used near the end of the workflow
  2. a gone-malicious action earlier in the workflow modifies files in some way so that what is uploaded later will be malicious in some way also
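Sketched as a workflow (the build action name is made up), the scenario looks like this:

```yaml
# The job only has read permissions, yet the uploaded artifact can
# still be poisoned by a compromised step earlier in the workflow.
permissions: read-all
steps:
  - uses: actions/checkout@v4
  - uses: some-org/build-action@v1    # if compromised, can tamper with ./dist
  - uses: actions/upload-artifact@v3  # uploads whatever ended up in ./dist
    with:
      name: build
      path: dist/
```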

@pnacht
Contributor

pnacht commented Jun 7, 2023

@pnacht what I meant is that when producing release archives on developer machines there is a risk that the resulting archives include files from your development machine that were not intended to be in the release archives at all. I know of at least two real cases where that happened. One way to prevent that scenario altogether is to produce release archives in a clean environment, and CI is a candidate for that in theory, depending on your threat model.

Oh, I'm a big proponent of publishing via CI over laptops. Couldn't agree more! I just misunderstood you before; I thought you were talking about using upload-artifact in CI as being better than using softprops/action-gh-release to avoid leaking info.


@diogoteles08
Contributor

So we're agreeing that actions/upload-artifact is not completely safe, even though it can be used with read-only permissions, right?

Getting back to the original discussion of the issue, one straightforward idea is that we could add this as another requirement to have an "allowed unpinned action". So the unpinned actions would be allowed (i.e. not cause a score decrease at the Pinned-Dependencies check) only if the action:

  1. Is not called with any secret
  2. Is not called with any "dangerous permission"
  3. Does not call upload-artifact (also because the uploaded artifacts could be used by jobs with write permissions in the same workflow)

WDYT?

@hartwork

hartwork commented Jun 7, 2023

@diogoteles08 I think when starting to look for use of https://github.com/actions/upload-artifact in particular, the question is how special that action really is. If I used a fork of that action that works and does something similar, it would evade detection and still be affected, I suppose? (Maybe I'm biased; I think pinning everything is the best way to go. I seem to mind the pull request and commit noise that can go along with that less than others do.)

@evverx
Contributor

evverx commented Jun 7, 2023

@diogoteles08 it seems to me that it would be better to assume that all actions (regardless of whether they are read-only or not) can be used to produce important things (regardless of whether that's a good idea or not), so I'm inclined to say that Scorecard should probably ask projects to pin stuff there too. I think false positives can be suppressed using something like #1907.

@diogoteles08
Contributor

Yes, that's sad but true.

@diogoteles08
Contributor

The best we could do is penalize the "allowed actions" I specified here less: not the 0.1 decrease I was talking about, but something bigger than that, while still smaller than the reduction for an unpinned action with secrets or dangerous permissions.
But even this is debatable. The current Pinned-Dependencies scoring is not simple, so it wouldn't be simple to define this "smaller" reduction.

@evverx
Contributor

evverx commented Jun 8, 2023

penalise less the "allowed actions"

@diogoteles08 dunno. I think it would just complicate things and I don't think it's worth it. Maybe it would be better if overzealous "security" tools/scanners/analyzers using scorecard were smart enough to deal with that.

I'm not sure what else I can add there so I'm going to focus on that SBOM nonsense. Luckily it hasn't been integrated yet :-) (Don't get me wrong I think SBOMs make sense but "let's push them upstream without any clear rationale and use cases and penalize projects if they don't comply" nonsense is what concerns me).

@evverx
Contributor

evverx commented Jun 8, 2023

I think when starting to look for use of https://github.com/actions/upload-artifact in particular, the question would be how special that action really is

@hartwork I missed that comment. Just to clarify I didn't mean to say that there is anything inherently wrong with that action. It does what it's supposed to do. Your use case is safe because somehow I'm pretty sure you can't be tricked into downloading bogus artifacts when you do it manually. Even those nightly builds can be distributed via artifacts properly if they are signed with something random PRs don't have access to (it isn't clear where consumers would verify those signatures :-) but that's another matter). What I was trying to say is that it's a bit harder to protect artifacts because it's easier to mess with them than with actual releases. Then again, the GitHub setting preventing CI from running on PRs submitted by first-time contributors kind of mitigates that too (and that explains why when it was introduced a lot of weird accounts started fixing typos in upstream projects for no apparent reason :-)).




@spencerschrock
Contributor

This was brought up in the sync yesterday, and the discussion centered on the optics of false positives: namely, that if Scorecard can't detect whether workflow artifacts are being used in a dangerous way, then it shouldn't judge them. This is of course a consumer preference, and some people/use-cases have different tolerances for false positives.

The general consensus was to avoid judging possible misuse of workflow artifacts (although perhaps emitting a log statement). This would keep the determination of a non-privileged workflow to:

  1. No privileged permissions
  2. No use of secrets

@evverx
Contributor

evverx commented Jun 17, 2023

no privileged permissions

Unfortunately scorecard can't detect that reliably because the "read-only" GitHub setting is out of reach. It can still trigger unwanted PRs like step-security/secure-repo#462 (comment)

(Thanks to the link @diogoteles08 posted now I know that that stuff is prompted by bug bounties: https://crowdfunding.lfx.linuxfoundation.org/initiative/049febeb-075e-4cbc-8d98-c5ef62483388/financial. It would be nice by the way if PRs indirectly sponsored by Google/OpenSSF disclosed that).

Use of secrets

If secrets are used to keep, say, "junk" tokens used to push stuff to third-party CI services, I'm not sure that should affect repositories. For example, back in the day there was a fuzzing service that used tokens to authenticate jobs sent from GitHub. It was known that those tokens could be dumped by any PR, could be abused to overflow the queue there, and so on, but that service was willing to accept the risk. I'm not sure how often tokens like that are used, but my guess is that the idea is to keep track of secrets used to publish stuff consumed by people, and I'm not sure that can be detected automatically.

@github-actions

This issue is stale because it has been open for 60 days with no activity.

@diogoteles08
Contributor

Is it common for workflow artifacts created with upload-artifact to be used in security-critical processes? Are there projects that use upload-artifact to create their release artifacts (and then manually transfer them to the release, I suppose)?

I think I've found an example of this:
https://github.com/PyTables/PyTables/blob/e37ff3983d4df38761d235fb244e688a5f62f848/.github/workflows/wheels.yml

IIUC, every time a new release is made, this workflow is responsible for building the Python wheels of the release and uploading them with upload-artifact.

@diogoteles08 diogoteles08 removed their assignment Nov 14, 2023

@jsoref
Contributor

jsoref commented Jan 28, 2024

FWIW, actions/upload-artifact@v4 (an API break from v3) generates sealed, immutable artifacts: they can't be changed without something that has permission first deleting the artifact.
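In other words, with v4 a plain upload is effectively write-once (sketch):

```yaml
# With v4 the artifact is sealed once uploaded; a second upload under
# the same name fails unless the existing artifact is deleted first,
# which requires a token with permission to do so.
- uses: actions/upload-artifact@v4
  with:
    name: release-build
    path: dist/
```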


This issue has been marked stale because it has been open for 60 days with no activity.


8 participants