
Choosing Secure Open Source Packages: Part 2

By Terri Oda on May 09, 2017

Many developers don’t feel qualified to make security decisions. In many ways, that’s a perfectly healthy attitude to have: Security decisions are hard, and even folk with training make mistakes. But a healthy respect for a hard problem shouldn’t result in decisions that make a hard problem even harder to solve. Sometimes, we need to recognize that a lot of architectural decisions in a project are security decisions, whether we like it or not. We need to figure out how to make better choices. After reviewing steps one through three of how to make a really simple security risk assessment in Part 1 of this blog, let's now look at a few more ways that you can tell whether or not an open source project is secure.

Step 4: Look at the test suite

If you’re still feeling unsure about a project, another thing you can do is take a look at the project’s test suite. Even if you don’t know security, testing can serve as a rule of thumb for guessing whether a library is likely to be good.

Here are a few key questions you might ask yourself:

  • Does this test suite cover bad behaviour?
  • How comprehensive is this test suite?
  • Do all tests pass?
  • Is there continuous integration for tests?

The test suite can give you an idea about whether the developers have thought about error cases, which is often the first step towards solving security issues before they can be exploited. Testing is especially important for libraries that handle user input: parsers, input validation libraries, etc. If the project you’re looking at handles user input and doesn’t have test cases, you should be concerned.
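
For instance, here’s a hedged sketch of what “covering bad behaviour” looks like in practice. The parseEmail validator below is a toy invented for illustration; the point is the shape of the suite, not this particular API:

```typescript
import * as assert from "assert";

// A toy input validator standing in for any library that handles user input.
// (parseEmail is hypothetical; it exists only for this example.)
function parseEmail(input: string): { user: string; domain: string } {
  const match = /^([^@\s]+)@([^@\s]+\.[^@\s]+)$/.exec(input);
  if (!match) throw new Error(`invalid email: ${input}`);
  return { user: match[1], domain: match[2] };
}

// Known-good case: the behaviour most test suites cover.
assert.deepStrictEqual(parseEmail("alice@example.com"), {
  user: "alice",
  domain: "example.com",
});

// Known-bad cases: the ones that matter for security. A suite with none of
// these suggests the developers haven't thought much about hostile input.
for (const bad of ["", "no-at-sign", "a@b@c.com", "alice@", "@example.com"]) {
  assert.throws(() => parseEmail(bad), /invalid email/);
}

console.log("all tests passed");
```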

Open source projects often use tools that provide “badges,” which can give you a very quick view of whether a project has good code coverage in its test suite or whether all tests pass in the current build. You can read about the common types of code badges online.

One common example is the continuous integration badge from Travis-CI*. If tests are integrated, a passing badge indicates that none of them are failing in the current build.

Another is the code coverage badge from Codecov.io*, which indicates the percentage of the code that is covered by tests. Be aware that code that doesn’t check for errors might have 100% code coverage yet still be inadequate for security purposes; the badge is a useful metric even though it might not tell the whole story.

As a rule of thumb, here are three ratings for test suites (a code sketch of this rubric follows the list):

  • Bad-level:
    • Few or no tests, with a focus only on known-good cases.
    • No proof of execution; tests aren’t working.
  • Medium-level:
    • More tests, including many tests for known-bad cases.
    • Tests work and are executed at least sometimes.
  • Excellent-level:
    • Comprehensive test suite including both expected behaviour and known-bad cases, including potential security exploits.
    • Continuous integration or other proof that tests are run regularly and pass.
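
If you’re scoring many packages, it can help to encode that rubric. Here’s a minimal sketch; every input is a judgment call you make by reading the project’s tests and continuous integration setup, and the field names are invented for illustration:

```typescript
type TestSuiteRating = "bad" | "medium" | "excellent";

interface TestSuiteFacts {
  hasTests: boolean;
  coversKnownBadCases: boolean; // malformed input, error paths
  coversSecurityCases: boolean; // potential exploits, fuzz-found inputs
  testsRunRegularly: boolean; // continuous integration, or other proof of runs
  testsPass: boolean;
}

function rateTestSuite(f: TestSuiteFacts): TestSuiteRating {
  // No tests, or tests that don't pass, is an automatic "bad".
  if (!f.hasTests || !f.testsPass) return "bad";
  if (f.coversKnownBadCases && f.coversSecurityCases && f.testsRunRegularly) {
    return "excellent";
  }
  if (f.coversKnownBadCases) return "medium";
  return "bad"; // only known-good cases are covered
}

console.log(
  rateTestSuite({
    hasTests: true,
    coversKnownBadCases: true,
    coversSecurityCases: false,
    testsRunRegularly: true,
    testsPass: true,
  })
); // "medium"
```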

Step 5: Be aware of assumptions

When you’re rating a large number of software packages, it can be very tempting to take cognitive shortcuts to make your job easier. This section covers a few that are dangerous to use in a security setting.

Popularity does not equal security

First off, popularity doesn’t equal security. Open source proponents like to talk about many eyes making all bugs shallow, but alas, many untrained eyes don’t guarantee that you are going to find finicky security bugs.

  • “14% of npm packages* carry known vulnerabilities, and 80% of Snyk users find known vulnerabilities in their apps.” https://snyk.io

Someone probably has great academic research about this, but as a practical example, consider npm, the Node.js* package manager. As the quote above shows, 14% of npm packages carry known vulnerabilities. So if you assume that a package is secure because a billion Node.js users use it, you’re probably giving yourself a false sense of security. Popularity can be a positive indicator for security, because more users means more chances that issues will be brought to light, but it’s not enough on its own to make a good security risk assessment.
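
If you’re already using npm, newer versions ship a built-in audit command that checks a project’s dependency tree against a database of known vulnerabilities. Here’s a small sketch that shells out to it and summarizes the counts by severity (it assumes npm 6 or later is installed and that the project has a lockfile):

```typescript
import { execSync } from "child_process";

// Shape of the part of `npm audit --json` output we care about.
interface AuditReport {
  metadata?: { vulnerabilities?: Record<string, number> };
}

function knownVulnerabilities(projectDir: string): Record<string, number> {
  let raw: string;
  try {
    raw = execSync("npm audit --json", { cwd: projectDir }).toString();
  } catch (err: any) {
    // npm audit exits non-zero when vulnerabilities are found, so the
    // report often arrives via the error object's stdout.
    raw = err.stdout?.toString() ?? "{}";
  }
  const report: AuditReport = JSON.parse(raw);
  // e.g. { info: 0, low: 2, moderate: 1, high: 0, critical: 0, total: 3 }
  return report.metadata?.vulnerabilities ?? {};
}

console.log(knownVulnerabilities(process.argv[2] ?? "."));
```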

No known vulnerabilities does not equal security

Another hazardous assumption is the idea that packages without any known vulnerabilities are bulletproof. I wish that were true, but it’s not. While it’s possible that a package with no known security issues is in fact secure, that’s honestly not the most likely explanation. Here are a few more likely reasons that a package has no public security vulnerabilities:

  • Common vulnerabilities and exposures (CVEs) aren’t easy to file; sometimes no one on the team knows how.
    • This can be a sign of a lack of security expertise.
  • Sometimes it means no one’s looking for vulnerabilities.
  • Sometimes it means developers are actively rejecting or hiding issues.

Unless you have serious evidence otherwise, treat a lack of known vulnerabilities as a sign of the project’s security immaturity. That doesn’t mean the code is insecure, but it probably does mean that the team lacks experience handling public vulnerabilities.
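
One way to check whether anyone is looking is to query a public vulnerability database yourself. The sketch below asks OSV.dev about a specific npm package version (it assumes a runtime with the fetch API, such as Node.js 18 or later); remember that an empty answer means “no known vulnerabilities,” which, as argued above, is not the same thing as “secure”:

```typescript
// Query the OSV.dev vulnerability database for advisories against one
// npm package version. The package and version below are just examples.
async function osvAdvisories(name: string, version: string): Promise<unknown[]> {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      version,
      package: { name, ecosystem: "npm" },
    }),
  });
  const data: { vulns?: unknown[] } = await res.json();
  // No "vulns" key means no *known* advisories -- not proof of security.
  return data.vulns ?? [];
}

osvAdvisories("lodash", "4.17.20").then((vulns) =>
  console.log(`${vulns.length} known advisories`)
);
```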

Good packages do not guarantee good dependencies

Another assumption that comes up a lot is that choosing a great, well-supported framework means that all of the dependencies its developers have chosen will also be great and well-supported. While I also wish this were true, it’s usually not.

As we mentioned back in the very first section, open source package developers make decisions based on their own criteria, and they might not take security into account, or might prioritize code stability, performance, or code size over security. For example, if there’s no one available to do integration testing, it might make more sense to the project developers to just freeze on an older version of a library and ignore security updates entirely! These choices might be perfectly reasonable in the context of how the code is used, but they might equally provide a way for attackers to gain a foothold into a system. If you want to understand the risks of a whole system, you can’t assume too much about dependencies.
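
You can at least make the dependency surface visible. This rough sketch walks node_modules and lists every installed package with its version, so transitive dependencies get the same scrutiny as the framework you picked on purpose (npm ls produces similar output; this version is just self-contained):

```typescript
import * as fs from "fs";
import * as path from "path";

// Collect "name@version" for every package installed under a node_modules
// directory, including nested copies of conflicting versions.
function walk(dir: string, found: Set<string>): void {
  if (!fs.existsSync(dir)) return;
  for (const entry of fs.readdirSync(dir)) {
    if (entry.startsWith(".")) continue;
    const pkgDir = path.join(dir, entry);
    if (entry.startsWith("@")) {
      // Scoped packages (@scope/name) nest one directory deeper.
      walk(pkgDir, found);
      continue;
    }
    const manifest = path.join(pkgDir, "package.json");
    if (fs.existsSync(manifest)) {
      const pkg = JSON.parse(fs.readFileSync(manifest, "utf8"));
      found.add(`${pkg.name}@${pkg.version}`);
      // npm may nest conflicting versions inside a package's own node_modules.
      walk(path.join(pkgDir, "node_modules"), found);
    }
  }
}

const deps = new Set<string>();
walk(path.join(process.cwd(), "node_modules"), deps);
console.log(`${deps.size} installed packages:`);
for (const dep of [...deps].sort()) console.log(`  ${dep}`);
```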

Package managers don’t always imply high quality

Many open source software repositories are something we jokingly call “crapacopias.” They've prioritized sharing, and the developers have made it as easy as possible for anyone to share and collaborate on the code. While that is fantastic for open source development, it also means that code in well-known repositories isn't necessarily high quality code with good community support. This isn't always true: Linux distributions are typically lightly curated lists of software, and some distributions do additional security work. But more often, being in a popular software repo doesn't mean anything about the code other than, "this was freely available online."

Good security reporting does not guarantee good action

And one last assumption to note is that good security reporting doesn’t actually guarantee good action. That might seem like a weird thing to say given that we use security reporting as one of the factors in risk assessment, but it’s worth noting that different groups have different attitudes towards security. Some groups will dismiss bugs outright unless the security researcher reporting them is fairly tenacious.

Here’s an example of a security bug reported to Oracle* against VirtualBox*: the VirtualBox team originally closed the issue as “won’t fix,” but further research and a proof of concept convinced the team that this was in fact a real security problem and that it needed a public vulnerability number.

This is one of those cases where a layman might not be able to tell what is a real bug and what isn’t, but if you’re interested in using a component that has a lot of these bugs, it might well be worth bringing in a security expert to help you figure out what’s going on.

Summary

So, now you’ve done a surface look into a few parts of the package that you’re considering using as part of your software. How did it do?

Here’s a little scorecard to help you summarize issues. The grading guidelines can be tweaked to meet your needs or the expectations of your project:

First look

  • A - Mentions security audit or other proactive security activity.
  • B - No major warning signs, and code is used professionally.
  • C - No major warning signs, but not widely used or not well-supported.
  • D - Code has minor warning signs that need to be investigated in more detail.
  • F - Code has known issues, major warning signs, or is abandoned.

Contributors and activity

  • A - At least five significant, active contributors.
  • B - More than two significant, active contributors.
  • C - Only one major contributor who is active.
  • D - Project has been inactive for a year or less.
  • F - Project has been inactive for more than one year.

Security issues

  • A - Project has had previous security issues and handled them quickly and well. Bonus if they also mention doing proactive security such as fuzz testing, static analysis, or security audits.
  • B - Project has a plan for handling security issues but hasn’t had to use it much yet.
  • C - Project does not have a plan for security issues but at least has an active bug tracker and issues get resolved.
  • D - Project does not seem to resolve many open bugs.
  • F - Project has open security issues that are not in the process of being resolved.

Test suite

  • A - Project has a test suite with good coverage of positive and negative test cases set up as part of continuous integration, and test results are published for each build.
  • B - Project has a test suite with good coverage but no continuous integration.
  • C - Test suite mostly covers positive test cases; very few or no error cases.
  • D - Test suite has very low coverage or is only a few examples.
  • F - No test suite.

Packages that score high on this scale are likely taking care of security, and you might feel comfortable assuming that all you need to do is make sure you get the latest released version of their code. (And do make sure that you keep using the latest supported version if you want to take advantage of all their security work!)

Packages that score in the middle aren’t as good at taking care of security, and you might want to mitigate that in some way. You could do some of your own proactive security, keep an eye on publicly reported bugs that might affect your project, get a security expert to give you a more refined look at the package, or dedicate some time or money to help the package maintainers implement more proactive security.

Packages that score low on this scale are likely to be weak points in your security, and you should consider replacing them or taking more intense measures to mitigate security concerns.
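
If you assess packages regularly, it may be worth recording these scorecards in a consistent, machine-readable form. Here’s one possible sketch; the grade points and thresholds are assumptions you should tune to your own risk tolerance:

```typescript
type Grade = "A" | "B" | "C" | "D" | "F";

interface Scorecard {
  firstLook: Grade;
  contributorsAndActivity: Grade;
  securityIssues: Grade;
  testSuite: Grade;
}

// GPA-style points; this mapping is an assumption, not part of the article.
const points: Record<Grade, number> = { A: 4, B: 3, C: 2, D: 1, F: 0 };

function recommendation(card: Scorecard): string {
  const grades = Object.values(card) as Grade[];
  const avg = grades.reduce((sum, g) => sum + points[g], 0) / grades.length;
  // Any failing section is a likely weak point, whatever the average says.
  if (grades.includes("F") || avg < 1.5) {
    return "likely weak point: consider replacing or heavily mitigating";
  }
  if (avg >= 3) {
    return "likely taking care of security: track the latest releases";
  }
  return "middling: add your own mitigation and monitoring";
}

console.log(
  recommendation({
    firstLook: "B",
    contributorsAndActivity: "A",
    securityIssues: "C",
    testSuite: "B",
  })
); // "likely taking care of security: track the latest releases"
```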

The first time you do a “simple” risk assessment like this, it probably won’t feel very simple. But with practice you can get a quick sense of a project from reading a few pages, looking at a few sources of information, and answering a few questions. A security expert with open source knowledge can give you much more accurate risk assessments, but not everyone has access to experts, and risk assessment is a good way to narrow down your field of choices while keeping security in mind.

So remember, dependency decisions in a project are security decisions, and basic risk assessments can help you make better choices about the open source packages you use.
