
Monitor JNA's security posture with the Scorecard Action #1583

Open
pnacht opened this issue Dec 28, 2023 · 5 comments · May be fixed by #1584

Comments

@pnacht
Contributor

pnacht commented Dec 28, 2023

Hey, it's Pedro (see #1567) and I'm back with another security suggestion.

I detected that issue (and my colleague Joyce detected #1508) by using Scorecard, a tool that scans projects for potential improvements to their security posture.

It is also available as the Scorecard Action, which continuously monitors the repository and adds security suggestions directly to the project's Security Panel. In doing so, it can let you know if something accidentally lowers the project's security posture.

I'll send a PR along with this issue that adds the Action, so you can take a look.
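For a sense of what that involves, the workflow is small; here's a sketch based on the ossf/scorecard-action README (the PR itself will be the authoritative version):

```yaml
# .github/workflows/scorecard.yml -- sketch based on the
# ossf/scorecard-action README; the actual PR is authoritative.
name: Scorecard analysis
on:
  schedule:
    - cron: '30 1 * * 6'   # weekly scan
  push:
    branches: [master]

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # upload findings to the Security Panel
      id-token: write         # needed to publish results
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2.3.1  # pin to a commit SHA in practice
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```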

@matthiasblaesing
Member

Don't take this wrong, but this sounds like: "Hey! Great idea here, you get monitoring for free and get more work to do."

JNA was and is driven by volunteers. If there is a group that wants to work on security and such, great, let's add this; if not, no thanks. If you want to improve security: provide fixes, public resources that can be used by the project (preferably not tied to a single person), and recipes for how to build the native libraries.

@dbwiddis
Contributor

Adding a monitoring tool is great if you plan to be the one to take action on it! This looks like a cool tool, and I might use it on my downstream JNA-based project, where I'm the one who takes such actions. I had previously looked into the OSSF Scorecard, and my day-job repo uses the score as one of many code quality tools.

Here, it will just generate issues.

If you're interested in being "the one" to monitor these, a good idea is to fork JNA to your own repo and set up this action there. Monitor it for a while and take action on the fixes here. I've done that before using various static analysis tools... it's a great way to improve the codebase without adding work to the maintainers.

@dbwiddis
Contributor

For the record, you can see the current "score" at this URL:
https://securityscorecards.dev/viewer/?uri=github.com/java-native-access/jna

I scanned the list of things lowering the score and found the usual list of automated things that don't understand this repo:

  • We have 9/10 for license because we don't have "an FSF or OSI" license. We do; we offer downstream users their choice of two different FSF or OSI licenses. But the boilerplate doesn't match, so we got dinged.
  • We don't have branch protection on master by choice. We strongly limit who can commit there and have use cases for committing outside of the PR flow.
  • We are dinged for generating a lot of binary artifacts, but that's part of the model of how this repo works.
  • We are dinged on code review when the merger is the same person as the committer and the merge doesn't have an official "review". This is normal on a project with very few active maintainers.
  • We don't use static code analysis... although I have occasionally run such tools on the repo.
  • We don't use fuzzing in our testing, mostly because we're just a pass-through to native APIs.

I'm sure there's probably something actionable somewhere on that list, but the score isn't really representative of the project.

@pnacht
Contributor Author

pnacht commented Dec 29, 2023

Hey Daniel, thanks for the quick feedback!

Just a few comments:

Adding a monitoring tool is great if you plan to be the one to take action on it! This looks like a cool tool, and I might use it on my downstream JNA-based project, where I'm the one who takes such actions. I had previously looked into the OSSF Scorecard, and my day-job repo uses the score as one of many code quality tools.

Here, it will just generate issues.

If you're interested in being "the one" to monitor these, a good idea is to fork JNA to your own repo and set up this action there. Monitor it for a while and take action on the fixes here. I've done that before using various static analysis tools... it's a great way to improve the codebase without adding work to the maintainers.

Unfortunately, there are quite a few things that can't really be handled by an external contributor running Scorecard on a fork. These are things that require changing repo settings, which can only be done by the project's maintainers (changing default workflow tokens to read-only, branch protection, etc.). Indeed, I've only suggested the Scorecard Action now because I've mostly run out of things I can meaningfully contribute to the project via PR (my colleague Joyce also suggested the adoption of a security policy in #1512, but it didn't get much traction with the maintainers).

Also, the Action can help notify maintainers if something causes their score to decrease. For example, my colleague Joyce improved the project's security by ensuring ci.yml runs with a read-only token. However, should the project eventually create a separate workflow and forget to configure a read-only token, the Action will alert them to it.
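(For context, that protection is a single top-level block in the workflow file. A generic sketch, not JNA's actual ci.yml:)

```yaml
# Top-level permissions block: every job gets a read-only GITHUB_TOKEN
# unless it explicitly requests more.
permissions:
  contents: read
```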

And if there are notifications that aren't relevant to the project (as you've mentioned is the case for some checks), the maintainer can simply dismiss them and they won't be notified about them any more.

For the record, you can see the current "score" at this URL: https://securityscorecards.dev/viewer/?uri=github.com/java-native-access/jna

Indeed. I just didn't share this link before because the script that calculates those scores skips a few checks that can only be run manually or with the scorecard-action: CI-Tests (9/10), Contributors (10/10), and Dependency-Update-Tool (10/10). With those checks included, your score goes up to 5.6/10.

  • We have 9/10 for license because we don't have "an FSF or OSI" license. We do; we offer downstream users their choice of two different FSF or OSI licenses. But the boilerplate doesn't match, so we got dinged.

Yeah, that's a bummer that can't really be fixed on Scorecard's end, since it uses GitHub's API, which doesn't detect the licenses.

  • We don't have branch protection on master by choice. We strongly limit who can commit there and have use cases for committing outside of the PR flow.

That's reasonable. You could, however, forbid force-pushes and deletions, which would give users confidence that the ground hasn't shifted beneath them. This would give you a 3/10 without changing your usual workflow.

And while it won't improve your score without requiring PRs, you may consider requiring that certain workflows pass before a PR can be merged. This would help ensure your master branch is stable.

  • We are dinged for generating a lot of binary artifacts, but that's part of the model of how this repo works.

Understood. My understanding is that these binaries are generated and stored in the repo for release. The suggested model would be to only generate them during the release process and then discard them rather than keeping them in version control. If this would take too long, it could be streamlined with an automated release workflow, for example (see the sketch below).
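As a purely illustrative sketch (the action versions and the `ant dist` target here are my assumptions, not a review of JNA's actual Ant build), such a workflow could look like:

```yaml
# Hypothetical release workflow: build artifacts at tag time instead of
# committing them to the repo. Action versions and the `ant dist` target
# are assumptions for illustration only.
name: Release
on:
  push:
    tags: ['*']
permissions:
  contents: write  # needed to attach files to the GitHub release
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - run: ant dist   # build the jars/native binaries at release time
      - uses: softprops/action-gh-release@v2
        with:
          files: dist/*.jar   # upload to the release instead of the repo
```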

  • We are dinged on code review when the merger is the same person as the committer and the merge doesn't have an official "review". This is normal on a project with very few active maintainers.
  • We don't use fuzzing in our testing, mostly because we're just a pass-through to native APIs.

Fair!

  • We don't use static code analysis... although I have occasionally run such tools on the repo.

Out of curiosity, which tools have you used in the past?

@dbwiddis
Contributor

dbwiddis commented Jan 8, 2024

Out of curiosity, which tools have you used in the past?

I don't recall the full list, as I mostly did one-offs. Generally the same ones I've used on my own project. Not all are still up and running; some have been incorporated into GitHub's own scanning. I've mostly reduced mine down to just Sonar checks now. Many others were variants on the usual PMD-based checks: Codacy, LGTM, Coverity, CodeQL, Scrutinizer... anything I could earn a badge implementing, basically.
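Of those, CodeQL is the easiest to wire up these days since GitHub hosts it natively. A minimal workflow (just a sketch; not something this repo has committed) would look roughly like:

```yaml
# Minimal CodeQL workflow -- sketch only, not committed to this repo.
name: CodeQL
on:
  push:
    branches: [master]
  pull_request:
permissions:
  contents: read
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # required to upload scan results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: java
      - uses: github/codeql-action/autobuild@v3  # builds the Java sources
      - uses: github/codeql-action/analyze@v3
```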
