
Suggest registering for OpenSSF Best Practices badge #1901

Open
mspi21 opened this issue Apr 17, 2024 · 9 comments

Comments


mspi21 commented Apr 17, 2024

Checklist

  • I've searched the issue tracker for similar requests. I've searched for the keywords OpenSSF, best practices or badge.

Is your feature request related to a problem? Please describe.

When developers are tasked with choosing a cryptographic library for their needs, they may want to ensure that a particular library is trustworthy and secure. While there is not a standard method to evaluate crypto libraries, there are some ways that a library can hint at its reliability. One of them is the OpenSSF Best Practices badge, which aims to certify good practices in OSS development. One example of a project which has obtained this badge is the ring library, which you are of course familiar with; another example is OpenSSL.

From what I can tell, the rustls project should meet most (if not all) of the criteria without any change, which is the main reason I am proposing to adopt this badge. It may not seem important, but there is scientific research (see https://dl.acm.org/doi/abs/10.1145/3180155.3180209) suggesting that displaying such badges correlates, for example, with more frequent PRs containing tests (apart from the obvious benefit of signaling trustworthiness to potential users).

Describe the solution you'd like
I would like the maintainers to consider if registering for the OpenSSF badge linked above is a good use of their time.

Describe alternatives you've considered
I have not considered any alternatives, as I'm not aware of any.

Additional context
This issue is motivated by my work on my bachelor's thesis, which aims to explore methods of evaluating the trustworthiness, security and usability of cryptographic libraries. If you're interested, I can provide you with a summary of the results when it is finished. The background of the OpenSSF (formerly CII) badge is explained in The Impact of a Major Security Event on an Open Source Project: The Case of OpenSSL.

cpu (Member) commented Apr 17, 2024

👋 Hi @mspi21, thanks for opening an issue. This sounds like an interesting/worthwhile idea.

Would you be interested in trying to make a checklist of the requirements and checking off the ones you think we already meet? That would be a nice way for someone to help cut down on the amount of work that would be involved in applying. If you don't have the time/interest to do that I will try myself when time permits.

mspi21 (Author) commented Apr 17, 2024

Hi, thanks for your fast reply!

That does sound reasonable, since I already have a good idea about the different requirements. At the moment, I am primarily focusing on finishing my thesis, but once I have some spare time on my hands, I'll definitely do my best to help out. :)

cpu (Member) commented Apr 17, 2024

At the moment, I am primarily focusing on finishing my thesis

Understood :-) Best of luck finishing that up.

mspi21 (Author) commented May 27, 2024

Hello 👋, since I've had some time, I have tried to summarize the requirements for the "passing" level of the badge (there are also "silver" and "gold" levels, each requiring the previous level to be achieved first).

In most cases, I have filled out the "evidence" or required information indicating that the criterion is met (based on the responses from other projects), but in some cases I was not entirely sure — I'm sure other community members or the maintainer team will be able to provide insight on those.

Here goes:

OpenSSF Best Practices Badge Criteria ('passing' level)

Basics

Basic project website content

FLOSS license

  • The software produced by the project MUST be released as FLOSS.
    • The license is available in the LICENSE file in the repository: https://github.com/rustls/rustls/blob/main/LICENSE
    • The library is distributed under Apache License version 2.0, MIT license and ISC license, all of which are OSI approved FLOSS licenses.
  • It is SUGGESTED that any required license(s) for the software produced by the project be approved by the Open Source Initiative (OSI).
    • The Apache License version 2.0, MIT license and ISC license are all approved by the OSI.
  • The project MUST post the license(s) of its results in a standard location in their source repository. (URL required)

Documentation

  • The project MUST provide basic documentation for the software produced by the project.
  • The project MUST provide reference documentation that describes the external interface (both input and output) of the software produced by the project.

Other

  • The project sites (website, repository, and download URLs) MUST support HTTPS using TLS.
    • The project website and repo use GitHub, which supports HTTPS using TLS. There's no separate download URL (use git to download from the repo). The distribution packages can be downloaded from crates.io, which also supports HTTPS.
  • The project MUST have one or more mechanisms for discussion (including proposed changes and issues) that are searchable, allow messages and topics to be addressed by URL, enable new people to participate in some of the discussions, and do not require client-side installation of proprietary software.
    • GitHub issue tracker and pull requests support discussion. The issues are searchable, have URLs, and allow new people to participate in the discussion. No proprietary software is required to participate in the discussion.
  • The project SHOULD provide documentation in English and be able to accept bug reports and comments about code in English.
    • The documentation is in English and the project accepts bug reports and comments in English.
  • The project MUST be maintained.

Change Control

Public version-controlled source repository

  • The project MUST have a version-controlled source repository that is publicly readable and has a URL.
  • The project's source repository MUST track what changes were made, who made the changes, and when the changes were made.
    • The project uses git for version control, which tracks changes, authors, and timestamps.
  • To enable collaborative review, the project's source repository MUST include interim versions for review between releases; it MUST NOT include only final releases.
    • Interim versions are pushed to the git repository, not only final releases.
  • It is SUGGESTED that common distributed version control software be used (e.g., git) for the project's source repository.
    • The project uses git for version control.

Unique version numbering

  • The project results MUST have a unique version identifier for each release intended to be used by users.
    • The project uses semantic versioning, which provides a unique version identifier for each release.
  • It is SUGGESTED that the Semantic Versioning (SemVer) or Calendar Versioning (CalVer) version numbering format be used for releases. It is SUGGESTED that those who use CalVer include a micro level value.
    • The project uses Semantic Versioning (SemVer) for version numbering; a short illustration follows this list.
  • It is SUGGESTED that projects identify each release within their version control system. For example, it is SUGGESTED that those using git identify each release using git tags.
    • Full releases are tagged using 'git tag'.
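
Purely to illustrate the point above: SemVer identifiers are distinct and totally ordered, so users can tell releases apart and compare them mechanically. The sketch below assumes the third-party semver crate as a dependency; the version numbers are made up.

```rust
// Illustrative only: SemVer release identifiers are distinct and ordered.
// Assumes the `semver` crate; the versions shown are hypothetical.
use semver::Version;

fn main() {
    let previous = Version::parse("0.22.4").unwrap();
    let current = Version::parse("0.23.0").unwrap();
    assert_ne!(previous, current); // every release gets a unique identifier
    assert!(current > previous);   // and releases compare in a well-defined order
}
```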

Release notes

  • The project MUST provide, in each release, release notes that are a human-readable summary of major changes in that release to help users determine if they should upgrade and what the upgrade impact will be. The release notes MUST NOT be the raw output of a version control log (e.g., the "git log" command results are not release notes). Projects whose results are not intended for reuse in multiple locations (such as the software for a single website or service) AND employ continuous delivery MAY select "N/A". (URL required)
  • The release notes MUST identify every publicly known run-time vulnerability fixed in this release that already had a CVE assignment or similar when the release was created. This criterion may be marked as not applicable (N/A) if users typically cannot practically update the software themselves (e.g., as is often true for kernel updates). This criterion applies only to the project results, not to its dependencies. If there are no release notes or there have been no publicly known vulnerabilities, choose N/A.

Reporting

Bug-reporting process

  • The project MUST provide a process for users to submit bug reports (e.g., using an issue tracker or a mailing list). (URL required)
  • The project SHOULD use an issue tracker for tracking individual issues.
    • Yes, GitHub issue tracker.
  • The project MUST acknowledge a majority of bug reports submitted in the last 2-12 months (inclusive); the response need not include a fix.
    • Out of the 180 issues created in the last 12 months, 149 have been closed. Out of these, none are labeled as bugs.
  • The project SHOULD respond to a majority (>50%) of enhancement requests in the last 2-12 months (inclusive).
    • Out of the 180 issues created in the last 12 months, 149 have been closed.
  • The project MUST have a publicly available archive for reports and responses for later searching. (URL required)

Vulnerability report process

  • The project MUST publish the process for reporting vulnerabilities on the project site. (URL required)
  • If private vulnerability reports are supported, the project MUST include how to send the information in a way that is kept private. (URL required)
  • The project's initial response time for any vulnerability report received in the last 6 months MUST be less than or equal to 14 days.
    • There has been one private security vulnerability report in the last 6 months (CVE-2024-32650) - it was filed via our recommended communication channel (GitHub security report) on April 17th 2024 at 7:06pm ET. Maintainers replied on April 18th at 3:13 ET, ~8 hours later.

Quality

Working build system

  • If the software produced by the project requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code.
    • The project uses Cargo for building the library, which automatically rebuilds the software from source code.
  • It is SUGGESTED that common tools be used for building the software.
    • The project uses Cargo for building the library.
  • The project SHOULD be buildable using only FLOSS tools.
    • The project uses Cargo for building the library, which is FLOSS.

Automated test suite

  • The project MUST use at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project). The project MUST clearly show or document how to run the test suite(s) (e.g., via a continuous integration (CI) script or via documentation in files such as BUILD.md, README.md, or CONTRIBUTING.md).
  • A test suite SHOULD be invocable in a standard way for that language.
    • Yes, 'cargo test'; a minimal sketch of the standard test harness follows this list.
  • It is SUGGESTED that the test suite cover most (or ideally all) the code branches, input fields, and functionality.
    • Yes, the code coverage is above 95%.
  • It is SUGGESTED that the project implement continuous integration (where new or changed code is frequently integrated into a central code repository and automated tests are run on the result).
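
To make the "standard way" concrete, here is a minimal sketch of the built-in Rust test harness that 'cargo test' discovers and runs; the test itself is illustrative and not taken from the rustls suite.

```rust
// Minimal sketch of the standard Rust test harness invoked by `cargo test`.
// The test below is illustrative, not part of the actual rustls test suite.
#[cfg(test)]
mod tests {
    #[test]
    fn package_version_is_nonempty() {
        // CARGO_PKG_VERSION is provided by Cargo at compile time.
        assert!(!env!("CARGO_PKG_VERSION").is_empty());
    }
}
```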

New functionality testing

  • The project MUST have a general policy (formal or not) that as major new functionality is added to the software produced by the project, tests of that functionality should be added to an automated test suite. [test_policy]
    • Yes, this is described in the CONTRIBUTING.md file (https://github.com/rustls/rustls/blob/main/CONTRIBUTING.md): "Features involving additions to the public API should have (at least) API-level tests", "Protocol additions should have some coverage -- consider enabling corresponding tests in the bogo suite, or writing some ad hoc tests."
  • The project MUST have evidence that the test_policy for adding tests has been adhered to in the most recent major changes to the software produced by the project.
    • Yes, we measure every pull request's coverage. When coverage shows gaps, we will often add additional tests (example). Where it is difficult to implement tests ourselves (e.g. because of missing features), we employ other projects' tests where possible (example). We often explicitly request additional tests for external contributions (example).
  • It is SUGGESTED that this policy on adding tests (see test_policy) be documented in the instructions for change proposals.
    • Yes, the CONTRIBUTING.md file describes the policy.

Warning flags

  • The project MUST enable one or more compiler warning flags, a "safe" language mode, or use a separate "linter" tool to look for code quality errors or common simple mistakes, if there is at least one FLOSS tool that can implement this criterion in the selected language.
    • The project uses the 'forbid(unsafe_code)' attribute in the source code to ensure no unsafe code is used. We also use the nightly toolchain's cargo clippy tool for linting to ensure we're aware of new warnings/lints ASAP. A minimal sketch of this pattern follows this list.
  • The project MUST address warnings.
    • In general, warnings are addressed; in some places specific warnings are deliberately suppressed.
  • It is SUGGESTED that projects be maximally strict with warnings in the software produced by the project, where practical.
    • The settings for the warning tools are generally fairly strict.
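
As a rough sketch of the pattern described in this section (a hypothetical crate root, not the actual rustls source):

```rust
// Hypothetical lib.rs illustrating the "safe language mode" plus linter combination;
// not the actual rustls source.
#![forbid(unsafe_code)] // introducing any `unsafe` block becomes a hard compile error
#![warn(clippy::all)]   // surface clippy lints when the crate is checked with `cargo clippy`

/// Example public function: safe Rust only, as enforced by the attribute above.
pub fn add_saturating(a: u32, b: u32) -> u32 {
    a.saturating_add(b)
}
```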

Security

Secure development knowledge

  • The project MUST have at least one primary developer who knows how to design secure software. (See ‘details’ for the exact requirements.)

    • This requires understanding the following design principles, including the 8 principles from Saltzer and Schroeder:

      • economy of mechanism (keep the design as simple and small as practical, e.g., by adopting sweeping simplifications)
      • fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default)
      • complete mediation (every access that might be limited must be checked for authority and be non-bypassable)
      • open design (security mechanisms should not depend on attacker ignorance of its design, but instead on more easily protected and changed information like keys and passwords)
      • separation of privilege (ideally, access to important objects should depend on more than one condition, so that defeating one protection system won't enable complete access. E.G., multi-factor authentication, such as requiring both a password and a hardware token, is stronger than single-factor authentication)
      • least privilege (processes should operate with the least privilege necessary)
      • least common mechanism (the design should minimize the mechanisms common to more than one user and depended on by all users, e.g., directories for temporary files)
      • psychological acceptability (the human interface must be designed for ease of use - designing for "least astonishment" can help)
      • limited attack surface (the attack surface - the set of the different points where an attacker can try to enter or extract data - should be limited)
      • input validation with allowlists (inputs should typically be checked to determine if they are valid before they are accepted; this validation should use allowlists (which only accept known-good values), not denylists (which attempt to list known-bad values)).
    • A "primary developer" in a project is anyone who is familiar with the project's code base, is comfortable making changes to it, and is acknowledged as such by most other participants in the project. A primary developer would typically make a number of contributions over the past year (via code, documentation, or answering questions). Developers would typically be considered primary developers if they initiated the project (and have not left the project more than three years ago), have the option of receiving information on a private vulnerability reporting channel (if there is one), can accept commits on behalf of the project, or perform final releases of the project software. If there is only one developer, that individual is the primary developer. Many books and courses are available to help you understand how to develop more secure software and discuss design. For example, the Secure Software Development Fundamentals course is a free set of three courses that explain how to develop more secure software (it's free if you audit it; for an extra fee you can earn a certificate to prove you learned the material).

    • Ctz has industry experience in high-security & high-availability domains (HSMs), mobile and embedded security.

    • Cpu has worked on large scale security-first systems (example, Let's Encrypt's CA software), independently audited software for vulnerabilities (e.g. CVEs in Apache HTTPD, OSSEC), worked as an application security engineer at a large fintech, as a security consultant at a security assessment firm, and holds a masters in computer science earned working in an academic security lab.

  • At least one of the project's primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them.

    • See above. Both Ctz and Cpu are familiar with many classes of vulnerabilities, and remediations. See for example, the detailed work Ctz did for the Rustls manual on the topic of implementation vulns and TLS protocol vulns.

Use basic good cryptographic practices

  • The software produced by the project MUST use, by default, only cryptographic protocols and algorithms that are publicly published and reviewed by experts (if cryptographic protocols and algorithms are used).
    • The project uses/implements only publicly published and reviewed cryptographic protocols and algorithms (TLS 1.2 and 1.3).
  • If the software produced by the project is an application or library, and its primary purpose is not to implement cryptography, then it SHOULD only call on software specifically designed to implement cryptographic functions; it SHOULD NOT re-implement its own.
    • The project implements the TLS protocol, which is a cryptographic protocol. For cryptographic primitives, the project relies on dedicated cryptography providers like ring or AWS libcrypto.
  • All functionality in the software produced by the project that depends on cryptography MUST be implementable using FLOSS.
    • All required functionality is implemented using FLOSS, including cryptography.
  • The security mechanisms within the software produced by the project MUST use default keylengths that at least meet the NIST minimum requirements through the year 2030 (as stated in 2012). It MUST be possible to configure the software so that smaller keylengths are completely disabled.
    • Rustls explicitly lists many non-features related to weak configurations. It is not possible to enable weak key lengths in any configuration (a configuration sketch follows this list).
  • The default security mechanisms within the software produced by the project MUST NOT depend on broken cryptographic algorithms (e.g., MD4, MD5, single DES, RC4, Dual_EC_DRBG), or use cipher modes that are inappropriate to the context, unless they are necessary to implement an interoperable protocol (where the protocol implemented is the most recent version of that standard broadly supported by the network ecosystem, that ecosystem requires the use of such an algorithm or mode, and that ecosystem does not offer any more secure alternative). The documentation MUST describe any relevant security risks and any known mitigations if these broken algorithms or modes are necessary for an interoperable protocol.
    • The project does not use broken cryptographic algorithms or inappropriate cipher suites. This is stated in the documentation.
  • The default security mechanisms within the software produced by the project SHOULD NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH).
    • There are no known serious weaknesses in the cryptographic protocols or algorithms used/implemented by the project.
  • The security mechanisms within the software produced by the project SHOULD implement perfect forward secrecy for key agreement protocols so a session key derived from a set of long-term keys cannot be compromised if one of the long-term keys is compromised in the future.
    • The project implements TLS 1.2 and 1.3. TLS 1.3 provides perfect forward secrecy by design. For TLS 1.2, Rustls does not support RSA key exchange, so all supported ciphersuites also offer PFS.
  • If the software produced by the project causes the storing of passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., Argon2id, Bcrypt, Scrypt, or PBKDF2). See also OWASP Password Storage Cheat Sheet.
    • N/A: The project does not store passwords.
  • The security mechanisms within the software produced by the project MUST generate all cryptographic keys and nonces using a cryptographically secure random number generator, and MUST NOT do so using generators that are cryptographically insecure.
    • Random number generation is left to the cryptography provider, which uses a cryptographically secure random number generator.
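
As a hedged sketch of what the evidence above means in practice, the snippet below builds a client configuration using a rustls 0.23-style builder API with a default crypto provider feature enabled. It is an illustration under those assumptions, not a definitive rendering of the project's recommended setup.

```rust
// Sketch only: assumes a rustls 0.23-style API with a default crypto provider enabled.
use rustls::{ClientConfig, RootCertStore};

fn safe_default_client_config(roots: RootCertStore) -> ClientConfig {
    // The builder only exposes TLS 1.2/1.3 with PFS ciphersuites and modern key sizes;
    // there is no knob here to re-enable RSA key exchange or short keys.
    ClientConfig::builder()
        .with_root_certificates(roots)
        .with_no_client_auth()
}
```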

Secured delivery against man-in-the-middle (MITM) attacks

  • The project MUST use a delivery mechanism that counters MITM attacks. Using https or ssh+scp is acceptable.
    • The project uses HTTPS for the project website and repository. The distribution packages can be downloaded from crates.io, which also supports HTTPS.
  • A cryptographic hash (e.g., a sha1sum) MUST NOT be retrieved over http and used without checking for a cryptographic signature.
    • The project does not allow retrieving cryptographic hashes over HTTP without checking for a cryptographic signature.

Publicly known vulnerabilities fixed

  • There MUST be no unpatched vulnerabilities of medium or higher severity that have been publicly known for more than 60 days.
    • There are no unpatched vulnerabilities in Rustls or its dependencies that the maintainers are aware of.
  • Projects SHOULD fix all critical vulnerabilities rapidly after they are reported.
    • The only vulnerability in rustls in the past year was patched within 3 days of the initial report. CVE-2024-32650 was reported April 17th at ~7pm EST. The fix was published April 19th at ~11:30am EST.

Other security issues

  • The public repositories MUST NOT leak a valid private credential (e.g., a working password or private key) that is intended to limit public access.
    • No valid private credentials are leaked.

Analysis

Static code analysis

  • At least one static code analysis tool (beyond compiler warnings and "safe" language modes) MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language.
    • We do not presently use any static code analysis tools beyond our linter, fuzzers, and code coverage.
  • It is SUGGESTED that at least one of the static analysis tools used for the static_analysis criterion include rules or approaches to look for common vulnerabilities in the analyzed language or environment.
    • We do not presently use any static code analysis tools beyond our linter, fuzzers, and code coverage.
  • All medium and higher severity exploitable vulnerabilities discovered with static code analysis MUST be fixed in a timely way after they are confirmed.
    • All discovered vulnerabilities were fixed in a timely manner.
  • It is SUGGESTED that static source code analysis occur on every commit or at least daily.
    • We do not presently use any static code analysis tools beyond our linter, fuzzers, and code coverage.

Dynamic code analysis

  • It is SUGGESTED that at least one dynamic analysis tool be applied to any proposed major production release of the software before its release.
    • The project uses fuzzing and GitHub Actions to test the library.
  • It is SUGGESTED that if the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A).
    • N/A: The project is written in Rust, which is a memory-safe language. The project uses fuzzing despite this.
  • It is SUGGESTED that the project use a configuration for at least some dynamic analysis (such as testing or fuzzing) which enables many assertions. In many cases these assertions should not be enabled in production builds.
    • We include assertions in our code that are only present in debug builds, and run tests against these debug builds (see the sketch after this list).
  • All medium and higher severity exploitable vulnerabilities discovered with dynamic code analysis MUST be fixed in a timely way after they are confirmed.
    • All discovered vulnerabilities were fixed in a timely manner.
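
For the assertion point above, here is a small illustration of the debug-only assertion pattern; the function and the length bound are hypothetical, not taken from the rustls codebase.

```rust
// Illustrative only: `debug_assert!` is checked in debug builds (and therefore under
// `cargo test`), but compiled out of release builds.
fn append_record(buf: &mut Vec<u8>, record: &[u8]) {
    debug_assert!(record.len() <= 16_384, "record exceeds the assumed maximum size");
    buf.extend_from_slice(record);
}

fn main() {
    let mut buf = Vec::new();
    append_record(&mut buf, b"hello");
    assert_eq!(buf, b"hello".to_vec());
}
```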

cpu (Member) commented May 27, 2024

Thank you! I will review this shortly and see if I can fill in any of the TODOs/gaps. I appreciate you digging in.

cpu (Member) commented May 27, 2024

@mspi21 This was very helpful, thanks again. I think 99% of your answers are spot on. It's validating to see that Rustls has already implemented the majority of these best practices.

I went through and made some small edits where you had TODOs. Quick summary:

Edits:

  • "Release notes" section
    • don't ref CHANGELOG.md, just github releases.
    • CVE sections: there's one more from recently.
  • "Automated test suite"
    • ref our unit tests, bogo in addition to daily tests
  • "Warning flags"
    • also cite aggressive clippy usage
  • TLS 1.2 can be configured to provide pfs -> we only ship PFS ciphersuites.
  • "Publicly known vulnerabilities fixed"
    • updated to ref more recent instance.

Adds:

  • "Vulnerability report process"

    • initial response times are good. Added data from most recent.
  • "New functionality testing"

    • Added some citations from PRs in my recent memory.
  • Security

    • Ctz and I both qualify. Probably djc too but I'm less familiar with his background. I added text about myself, I'll ask Ctz what he feels OK having disclosed as justification. It's an awkward requirement 😆
  • "Basic good cryptography practices"

    • updated to add more data.
  • Static code analysis

    • Yeah - I think besides coverage measurement this is a nope ATM.
  • Dynamic code analysis

    • "enables assertions" - mentioned our debug assert practices as a (weak) example.

In the future we could consider shoring up our static analysis coverage but I'm not in a rush to try and change anything in this department just to check a checkbox :-)

If the above looks good to folks I can kick off the submission.

djc (Member) commented May 27, 2024

I think clippy counts as static analysis, and we have local fuzzing setups as well as OSS-Fuzz for dynamic analysis.

I don't really have any security credentials -- just a lot of experience building Rust software including things like rustls and Quinn.

ctz (Member) commented May 31, 2024

  • Ctz has considerable security domain expertise from previous employment, education (TODO: ask Ctz what he wants to disclose here)

I hate to write things like this, but:

  • Ctz has industry experience in high-security & high-availability domains (HSMs), mobile and embedded security.

Feel free to do any and all wordsmithing as desired, and consider everything in my linkedin profile public: https://www.linkedin.com/in/joseph-birr-pixton-56149856/

cpu (Member) commented May 31, 2024

I've submitted our application.
