Introduction to Test Concepts

The Test discipline acts in many respects as a service provider to the other disciplines. Testing focuses primarily on the evaluation or assessment of product quality, realized through a number of core practices:

- Finding and documenting defects in software quality.
- Generally advising about perceived software quality.
- Proving the validity of the assumptions made in design and requirement specifications through concrete demonstration.
- Validating that the software product functions as designed.
- Validating that the requirements have been implemented appropriately.

An interesting difference between Test and the other disciplines in RUP is that Test is essentially tasked with finding and exposing weaknesses in the software product. That is interesting in that, to yield the most benefit, it necessitates a different general philosophy from that used in the Requirements, Analysis & Design, and Implementation disciplines. The somewhat subtle difference is that while those other disciplines focus on completeness, Test focuses on incompleteness. A good test effort is driven by questions such as "How could this software break?" and "In what possible situations could this software fail to work predictably?". Test challenges the assumptions, risks, and uncertainty inherent in the work of the other disciplines, and addresses those concerns through concrete demonstration and impartial evaluation. The challenge is to avoid two potential extremes: an approach that does not suitably and effectively challenge the software and expose its inherent problems and weaknesses, and an approach that is inappropriately negative or destructive. With such a negative approach, you will likely never find it possible to consider the software product of acceptable quality, and you will likely alienate the Test effort from the other disciplines.

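To make that questioning mindset concrete, the sketch below (in Java, using the JUnit framework) shows tests aimed deliberately at the ways a small routine could break, rather than at confirming that it works for friendly input. The AgeParser class and its rules are assumptions made purely for this illustration; they are not part of RUP.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical code under test: parses an age field from user input.
    // The class and its validation rules are assumptions for this sketch.
    class AgeParser {
        static int parseAge(String text) {
            int age = Integer.parseInt(text.trim());
            if (age < 0 || age > 150) {
                throw new IllegalArgumentException("age out of range: " + age);
            }
            return age;
        }
    }

    public class AgeParserTest {
        // "How could this software break?" -- probe the boundaries first.
        @Test
        public void acceptsBoundaryValues() {
            assertEquals(0, AgeParser.parseAge("0"));
            assertEquals(150, AgeParser.parseAge("150"));
        }

        // Negative test: a value just outside the valid range must be rejected.
        @Test(expected = IllegalArgumentException.class)
        public void rejectsAgeBelowRange() {
            AgeParser.parseAge("-1");
        }

        // Negative test: malformed input must not slip through silently.
        @Test(expected = NumberFormatException.class)
        public void rejectsNonNumericInput() {
            AgeParser.parseAge("forty-two");
        }
    }

The point is not the particular assertions but the orientation: each test is a concrete demonstration that probes a boundary or an assumption rather than restating the happy path.
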
Based on information presented in various surveys and essays, software testing is said to account for 30 to 50 percent of total software development costs. It is therefore perhaps surprising to note that most people believe computer software is not well tested before it is delivered. This contradiction is rooted in a few key issues.

First, testing software is enormously difficult: the number of different ways a given program can behave is, for practical purposes, unquantifiable. Second, testing is typically done without a clear methodology, so results vary from project to project and from organization to organization; success is primarily a factor of the quality and skills of the individuals involved. Third, insufficient use is made of productivity tools that would make the laborious aspects of testing manageable: in addition to the lack of automated test execution, many test efforts are conducted without tools that allow effective management of extensive Test Data and Test Results. While the flexibility of use and the complexity of software make complete testing an impossible goal, a well-conceived methodology and the use of state-of-the-art tools can help to improve the productivity and effectiveness of software testing.

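As a minimal sketch of what automated test execution over managed test data can look like, the example below (again in Java with JUnit) drives a single check from a table of inputs and expected results. The DiscountRules class and its figures are assumptions made purely for illustration.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical code under test: computes an order discount percentage.
    class DiscountRules {
        static int discountFor(int orderTotal) {
            if (orderTotal >= 1000) return 10;
            if (orderTotal >= 500)  return 5;
            return 0;
        }
    }

    public class DiscountRulesTest {
        // Test data kept in one table: each row is {order total, expected discount}.
        // In a real test effort this data would typically live outside the code,
        // managed as Test Data alongside the recorded Test Results.
        private static final int[][] CASES = {
            {0, 0}, {499, 0}, {500, 5}, {999, 5}, {1000, 10}, {250000, 10}
        };

        @Test
        public void discountMatchesExpectedForEveryCase() {
            for (int[] testCase : CASES) {
                assertEquals("order total " + testCase[0],
                             testCase[1], DiscountRules.discountFor(testCase[0]));
            }
        }
    }

Keeping the cases in one table makes it cheap to add a new case each time a defect is found, and the whole set can be re-executed mechanically on every build.
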
For "safety-critical" systems where a failure can harm people (such as air-traffic control, missile guidance, or medical delivery systems), high-quality software is essential for the success of the system. For a typical MIS system, the criticality of the system may not be as immediately obvious, but it is likely that a defect could cause the business using the software considerable expense in lost revenue or possibly in legal costs. In this "information age", with increasing demand for electronically delivered services over the Internet, many MIS systems are now considered "mission-critical"; that is, companies cannot fulfill their functions, and experience massive losses, when failures occur.

A continuous approach to quality, initiated early in the software lifecycle, can significantly lower the cost of completing and maintaining the software. It also greatly reduces the risk associated with deploying poor-quality software.

The Test discipline is related to the other disciplines as follows:
 
- The Requirements discipline captures the requirements for the software product; those requirements are one of the primary inputs for identifying what tests to perform.
- The Analysis & Design discipline determines the appropriate design for the software product; this is another important input for identifying what tests to perform.
- The Implementation discipline produces builds of the software product that are validated by the Test discipline. Within an iteration, multiple builds will be tested, typically one per test cycle.
- The Environment discipline develops and maintains supporting artifacts that are used during test, such as the Test Guidelines and the Test Environment.
- The Management discipline plans the project and the necessary work in each iteration. That work is described in an Iteration Plan, which is an important input to defining the correct evaluation mission for the test effort.
- The Configuration & Change Management discipline controls change within the project team. The test effort verifies that each change has been completed appropriately.
 
 
Copyright © 1987 - 2001 Rational Software Corporation