Establishing "Gold Standards" for C Code Analysis
Many companies use one or more "bug finder" static analysis tools to find potential vulnerabilities in their C programs. These tools find bugs, but they provide no assurance: they cannot mathematically prove that all bugs of a certain type have been found, and they produce no evidence that every code location relevant to a specific targeted property is either safe or vulnerable. In practice, different tools are more effective at finding certain types of bugs, so it is important for users to understand the relative strengths and weaknesses of each bug finder tool they use.
Since CodeHawk can examine all code locations in a C program relevant to a targeted property (such as buffer overflow), and since it can mathematically prove, with evidence, that each location is safe or vulnerable with respect to that property, it can provide a "gold standard" analysis of a particular C program. This gold standard can then serve as a baseline for measuring how effective various bug finder tools are at finding the targeted vulnerabilities in the same program, giving the user an absolute score with which to judge the tools they are using. By establishing gold standards for the key vulnerabilities they need to identify in their software, users can judge which bug finder tool(s) best serve their needs.
Kestrel Technology can work with customers to establish gold standards for C programs relevant to their needs. Contact KT for details.
Measuring Cyber Security Assurance Level
How secure is a software system against cyber attacks? How can cyber security even be measured? The challenge in measuring cyber security assurance levels is twofold: (1) identify a specific class of vulnerabilities, and (2) assess every code location where the vulnerability may lie, avoiding false negative reports. For classes of vulnerabilities that can be defined mathematically, static analysis based on abstract interpretation provides the means for proving that each location of a potential vulnerability is, in fact, safe. If CodeHawk cannot generate a proof, it provides evidence in the form of unproven conditions. The ratio of proven-safe code locations to the total number of relevant code locations yields a measure of the cyber security assurance level. By iteratively adding checks and repairing vulnerabilities, the developer can raise the assurance level toward a goal of 100%.
Alternatively, CodeHawk can be used as a powerful productivity tool in augmenting security code review to verify the absence of security vulnerabilities. Because CodeHawk automatically proves that large segments of the code are safe with respect to a targeted vulnerability (such as buffer overflow), the security review effort is dramatically reduced. CodeHawk also produces evidence of safety for third-party review and confirmation.
Hardening C Source Code
CodeHawk can help developers increase cyber security assurance levels through an iterative process: where a potentially vulnerable code location cannot be proved safe, the unproven conditions are examined and a course of action is determined. Either the code location is modified to fix the vulnerability, or extra information is provided to CodeHawk to help it complete the safety proof. Following this process, source code can be hardened against specific kinds of cyber attacks.
Analyzing Open Source C Code
Open source software plays an increasing role in IT solutions worldwide. Before incorporating open source code into mission-critical applications, industry and government need assurance that the code will be secure against various kinds of cyber attacks. CodeHawk can measure cyber security assurance levels in open source C code for specific classes of vulnerabilities. This provides a low-cost method for choosing among alternative open source candidates by assessing their relative exposure to cyber attacks that exploit specific kinds of vulnerabilities.