Security testing has recently moved beyond the realm of network port scanning to include
probing software behavior as a critical aspect of system behavior (see the sidebar). Unfortunately, testing software security is a commonly misunderstood task.
Security testing done properly goes deeper than simple black-box probing on the presentation layer (the sort performed by so-called application security tools)—and even beyond the functional testing of security apparatus. Testers must use a risk-based approach, grounded in both the system's architectural reality and the attacker's mindset, to gauge software security adequately. By identifying risks in the system and creating tests driven by those risks, a software security tester can properly focus on areas of code in which an attack is likely to succeed.
Software security is about making software behave correctly in the presence of a malicious attack, even though in the real world, software failures usually happen spontaneously—that is, without intentional mischief. Not surprisingly, standard software testing literature is only concerned with what happens when software fails, regardless of intent. The difference between software safety and software security is therefore the presence of an intelligent adversary bent on breaking the system.
Security is always relative to the information and services being protected, the skills and resources of adversaries, and the costs of potential assurance remedies; security is an exercise in risk management. Risk analysis, especially at the design level, can help us identify potential security problems and their impact.1 Once identified and ranked, software risks can then help guide software security testing.
A vulnerability is an error that an attacker can exploit. Many types of vulnerabilities exist, and computer security researchers have created taxonomies of them.2 Security vulnerabilities in software systems range from local implementation errors (such as use of the gets() function call in C/C++), through interprocedural interface errors (such as a race condition between an access control check and a file operation), to much higher design-level mistakes (such as error handling and recovery systems that fail in an insecure fashion or object-sharing systems that mistakenly include transitive trust issues). Vulnerabilities typically fall into two categories—bugs at the implementation level and flaws at the design level.
Attackers generally don’t care whether a vulnerability is due to a flaw or a bug, although bugs tend to be easier to exploit. Because attacks are now becoming more sophisticated, the notion of which vulnerabilities actually matter is changing. Although timing attacks, including the well-known race condition, were considered exotic just a few years ago, they’re common now. Similarly, two-stage buffer overflow attacks using trampolines were once the domain of software scientists, but now appear in 0day exploits.
Design-level vulnerabilities are the hardest defect category to handle, but they’re also the most prevalent and critical. Unfortunately, ascertaining whether a program has design-level vulnerabilities requires great expertise, which makes finding such flaws not only difficult, but particularly hard to automate.
Examples of design-level problems include error handling in object-oriented systems, object sharing and trust issues, unprotected data channels (both internal and external),incorrect or missing access control mechanisms, lack of auditing/logging or incorrect logging, and ordering and timing errors (especially in multithreaded systems). These sorts of flaws almost always lead to security risk.
Risk management and security testing
Software security practitioners perform many different tasks to manage software security risks, including
creating security abuse/misuse cases;
listing normative security requirements;
performing architectural risk analysis;
building risk-based security test plans;
wielding static analysis tools;
performing security tests;
performing penetration testing in the final environment;
cleaning up after security breaches.

Three of these are particularly closely linked—architectural risk analysis, risk-based security test planning, and security testing—because a critical aspect of security testing relies on probing security risks. Last issue’s installment1 explained how to approach a software security risk analysis, the end product being a set of security-related risks ranked by business or mission impact. (Figure 1 shows where we are in our series of articles about software security’s place in the software development life cycle.)

The pithy aphorism “software security is not security software” provides an important motivator for security testing. Although security features such as cryptography, strong authentication, and access control play a critical role in software security, security itself is an emergent property of the entire system, not just the security mechanisms and features. A buffer overflow is a security problem regardless of whether it exists in a security feature or in the noncritical GUI. Thus, security testing must necessarily involve two diverse approaches:
1. testing security mechanisms to ensure that their functionality is properly implemented, and
2. performing risk-based security testing motivated by understanding and simulating the attacker’s approach.