Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to:
- analyzing the product requirements for completeness and correctness in various contexts, such as industry perspective, business perspective, feasibility and viability of implementation, usability, performance, security, and infrastructure considerations
- reviewing the product architecture and the overall design of the product
- working with product developers to improve coding techniques, design patterns, and tests that can be written as part of the code, based on techniques such as boundary conditions
- executing a program or application with the intent of examining behavior
- reviewing the deployment infrastructure and associated scripts and automation
- taking part in production activities, using monitoring and observability techniques
Software testing can provide objective, independent information about the quality of software and risk of its failure to users or sponsors.[1]
Overview
Although software testing can determine the correctness of software under the assumption of some specific hypotheses (see the hierarchy of testing difficulty below), testing cannot identify all the failures within the software.[2] Instead, it furnishes a critique or comparison of the state and behavior of the product against test oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.
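As a minimal sketch (with hypothetical function names), the following illustrates oracle-based checking: the output of the code under test is compared against two common oracles, a specification-derived expected value and a trusted reference implementation.

```python
def sort_under_test(items):
    # Code under test; stands in for any implementation being examined.
    return sorted(items)

def reference_sort(items):
    # Comparable-product oracle: a trusted implementation of the same behavior.
    out = list(items)
    out.sort()
    return out

def test_against_oracles():
    data = [3, 1, 2]
    # Specification oracle: the expected result stated directly.
    assert sort_under_test(data) == [1, 2, 3]
    # Comparable-product oracle: agreement with the reference implementation.
    assert sort_under_test(data) == reference_sort(data)

test_against_oracles()
```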
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions.[4] The scope of software testing may include the examination of code, the execution of that code in various environments and conditions, and the examination of aspects of the code: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.[5]: 41–43
Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing assists in making this assessment.
Faults and failures
Software faults occur through the following process: A programmer makes an error (mistake), which results in a fault (defect, bug) in the software source code. If this fault is executed, in certain situations the system will produce wrong results, causing a failure.[6]: 31
Not all faults will necessarily result in failures. For example, faults in dead code will never result in failures. A fault that has not revealed a failure may result in one when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software.[7] A single fault may result in a wide range of failure symptoms.
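As a minimal, hypothetical illustration of the distinction, the following sketch contains a fault (an off-by-one comparison) that produces a failure only when an input reaches the faulty boundary:

```python
def is_adult(age):
    # Fault: the (hypothetical) specification says 18 and over,
    # but the code uses a strict comparison.
    return age > 18

print(is_adult(30))  # True  -- the fault is present, but no failure occurs
print(is_adult(18))  # False -- the fault manifests as a failure only here
```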
Not all software faults are caused by coding errors. One common source of expensive defects is requirement gaps, i.e., unrecognized requirements that result in errors of omission by the program designer.[5]: 426 Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security.
Input combinations and preconditions
A fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[4]: 17–18 [8] This means that the number of faults in a software product can be very large and defects that occur infrequently are difficult to find in testing and debugging. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) — usability, scalability, performance, compatibility, and reliability — can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
Software developers cannot test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases, as sketched below.[9]
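The following is a minimal, greedy sketch of 2-way (pairwise) combinatorial test design; the parameter names and values are hypothetical, and dedicated pairwise tools typically produce tighter suites, but the idea is the same: cover every pair of values across every pair of parameters at least once.

```python
from itertools import combinations, product

# Hypothetical configuration parameters for a system under test.
parameters = {
    "browser": ["firefox", "chrome", "safari"],
    "os": ["linux", "windows", "macos"],
    "locale": ["en", "de"],
}

names = list(parameters)
# Every value pair, across every pair of parameters, that must be covered.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}

# The exhaustive test suite is the full cartesian product of all values.
candidates = [dict(zip(names, values)) for values in product(*parameters.values())]
suite = []
while uncovered:
    # Greedily pick the candidate test covering the most uncovered pairs.
    best = max(candidates, key=lambda t: sum(
        ((a, t[a]), (b, t[b])) in uncovered for a, b in combinations(names, 2)))
    suite.append(best)
    uncovered -= {((a, best[a]), (b, best[b])) for a, b in combinations(names, 2)}

print(f"{len(candidates)} exhaustive tests reduced to {len(suite)} pairwise tests")
```

Here 18 exhaustive combinations shrink to roughly half a dozen tests while still exercising every browser/OS, browser/locale, and OS/locale pairing.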
Economics
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.[10]
Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.[11]
Roles
Software testing can be done by dedicated software testers; until the 1980s, the term "software tester" was used generally, but later testing was also seen as a separate profession. Reflecting the different periods and goals of software testing,[12] different roles have been established, such as test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator. Software testing can also be performed by non-dedicated software testers.[13]
History
Glenford J. Myers initially introduced the separation of debugging from testing in 1979.[14] Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."[14]: 16 ), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Testing approach
Static, dynamic, and passive testing
There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.[15][16]
Static testing is often implicit, like proofreading; it also occurs when programming tools or text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it is applied to discrete functions or modules.[15][16] Typical techniques for this are either using stubs/drivers or execution from a debugger environment, as in the sketch below.[16]
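A minimal sketch of the stub/driver technique, with hypothetical names: the function under test depends on a collaborator that is not yet implemented, so a stub stands in for it and a small driver exercises the unit in isolation before the program is complete.

```python
def compute_total(net_amount, tax_service):
    # Function under test: depends on a collaborator that may not exist yet.
    return net_amount + tax_service.tax_for(net_amount)

class TaxServiceStub:
    # Stub: returns a canned answer instead of real (unfinished) logic.
    def tax_for(self, amount):
        return round(amount * 0.20, 2)

# Driver: a small harness that exercises the unit in isolation.
assert compute_total(100.0, TaxServiceStub()) == 120.0
print("compute_total behaves as expected with the stubbed dependency")
```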
Static testing involves verification, whereas dynamic testing also involves validation.[16]
Passive testing means verifying the system behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but instead look at system logs and traces, mining them for patterns and specific behavior in order to inform decisions.[17] This is related to offline runtime verification and log analysis.
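A minimal sketch of passive testing, with hypothetical log lines and an arbitrary threshold: rather than supplying test inputs, the tester mines existing logs for patterns that indicate a problem.

```python
import re

# Hypothetical log lines captured from a running system.
log_lines = [
    "2024-05-01 12:00:01 INFO  request handled in 120 ms",
    "2024-05-01 12:00:02 ERROR timeout talking to payment gateway",
    "2024-05-01 12:00:03 INFO  request handled in 95 ms",
    "2024-05-01 12:00:04 ERROR timeout talking to payment gateway",
]

# Mine the logs for a pattern of interest without driving the system.
errors = [line for line in log_lines if re.search(r"\bERROR\b", line)]
error_rate = len(errors) / len(log_lines)

# Decide from observed behavior alone (threshold is an assumption).
if error_rate > 0.25:
    print(f"suspicious: {error_rate:.0%} of sampled lines are errors")
    for line in errors:
        print("  ", line)
```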
Exploratory approach
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design, and test execution. Cem Kaner, who coined the term in 1984,[18]: 2 defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."[18]: 36
The "box" approach[edit]
Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing may also be applied to software testing methodology.[19][20] With the concept of grey-box testing, which develops tests from specific design elements, gaining prominence, this "arbitrary distinction" between black- and white-box testing has faded somewhat.