7/13/2012

Test Preparation & Design Process

Baseline Documents
An application is constructed and tested using a set of baseline documents. These documents are written in sequence, each derived from the one before it.

  • Business Requirement
  • Functional Specification
  • Design Specification
  • System Specification


Business Requirement
It describes the user's needs for the application. It is developed over a period of time, passing through various levels of requirements. It should also portray functionality that is technically feasible within the stipulated time frame for delivery of the application.
As this contains requirements from the user's perspective, User Acceptance Testing is based on this document.
Functional Specification
The document that describes in detail the characteristics of the product with regard to its intended capability.

The Functional Specification document describes the functional needs, the design of the flow, and user-maintained parameters. It is derived primarily from the Business Requirement document, which specifies the client's business needs. The proposed application should adhere to the specifications laid down in this document, which is used henceforth to develop further documents for software construction, validation, and verification.
 
Design Specification
The Design Specification document is prepared based on the functional specification. It contains the system architecture, table structures and program specifications. This is ideally prepared and used by the construction team. The test team should also have a detailed understanding of the design specification in order to understand the system architecture.
 
System Specification
The System Specification document is a combination of the Functional Specification and the Design Specification. It is used in the case of a small application or an enhancement to an existing application. (Exercise: case study on each document, followed by a reverse presentation.)
 
Traceability
  • BR and FS
  • FS and Test conditions 


BR and FS
The requirements specified by the users in the Business Requirement document may not be exactly translated into a functional specification. Therefore, specifications are traced between the Functional Specification and the Business Requirement on a one-to-one basis.

This helps find the gaps between the documents. These gaps are then either closed by the author of the FS or deferred after discussion.
 
Testers should understand these gaps and use them as an addendum to the FS, after getting it signed off by the author of the FS. The final form of the FS may vary from the original, as deferring or accepting a gap may have ripple effects on the application. Sometimes these ripple effects are not reflected in the FS. Addenda may therefore affect the entire system and the test case development.
 
FS and Test conditions
Test conditions built by the tester are traced to the FS to ensure full coverage of the baseline document. If gaps are found, the tester must build conditions to cover them. In this process, testers must keep in mind the rules specified in test condition writing.
 
Gap Analysis
This is the term used for finding the difference between "what it should be" and "what it is".
As explained, this is done from the Business Requirement to the FS, and from the FS to the test conditions. It then follows that the business requirements, which are the user's needs, are tested, since the Business Requirement and the test conditions are matched.
 
Simplifying the above:
A = Business Requirement
B = Functional Specification
C = Test conditions
A = B and B = C; therefore A = C.
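As a minimal sketch of how this check might be automated, assuming every requirement carries an identifier that the FS and the test conditions reference (the IDs below are hypothetical), gap analysis reduces to set comparisons:

    # Sketch of gap analysis via set comparison. The identifiers are
    # hypothetical; real projects would load them from the BR, FS and
    # test condition documents.

    br_items = {"BR-01", "BR-02", "BR-03"}   # business requirements
    fs_items = {"BR-01", "BR-02"}            # requirements covered by the FS
    tc_items = {"BR-01", "BR-02"}            # requirements covered by test conditions

    fs_gaps = br_items - fs_items   # requirements missing from the FS
    tc_gaps = fs_items - tc_items   # FS specifications with no test condition

    print("Gaps between BR and FS:", fs_gaps)                # {'BR-03'}
    print("Gaps between FS and test conditions:", tc_gaps)   # set()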
 
Another way of looking at this process is that it eliminates as many mismatches as possible at every stage, thereby giving the customer an application that satisfies their needs.
 
In the case of UAT, there is a direct translation from the Business Requirement to the test conditions, leaving less room for loss of understanding.
 
Choosing Testing Techniques
The testing technique varies based on the project and the risks involved in it.
  • It is determined by the criticality of, and risks involved with, the Application Under Test (AUT).
  • The technique used for testing is chosen based on the organizational needs of the end user and on the critical risk factors or test factors that impact the system.
  • The technique adopted also depends on the phase of testing.
  • The two factors that determine the test technique are:
  1. Test factors: the risks that need to be addressed in testing.
  2. Test phases: the phases of the systems development life cycle in which testing will occur.
  • It also depends on the time and money spent on testing.

Error Guessing
A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.
 
Error Seeding
The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.
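A common way to use seeded faults is the ratio estimate: assuming testers detect seeded and indigenous faults at roughly the same rate, the remaining fault count can be projected. A minimal sketch with hypothetical counts:

    # Classic error-seeding estimate: assume seeded and indigenous
    # faults are found at roughly the same rate. Counts are hypothetical.

    seeded_total = 20      # faults intentionally added
    seeded_found = 15      # seeded faults detected so far
    indigenous_found = 45  # real faults detected so far

    # Estimated total indigenous faults scales the found count by the
    # inverse of the seeded detection rate (15/20 = 75 %).
    estimated_total = indigenous_found * seeded_total / seeded_found
    estimated_remaining = estimated_total - indigenous_found

    print(f"Estimated total faults: {estimated_total:.0f}")          # 60
    print(f"Estimated faults remaining: {estimated_remaining:.0f}")  # 15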


Test Plan
This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:
“A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.”


Sub Types:
  • Test Plan Identifier
  • Introduction
  • Test Items
  • Features to be Tested
  • Features Not to Be Tested
  • Approach
  • Item Pass/Fail Criteria
  • Suspension Criteria and Resumption Requirements
  • Test Deliverables
  • Testing Tasks
  • Environmental Needs
  • Responsibilities
  • Staffing and Training Needs
  • Schedule
  • Risks and Contingencies
  • Approvals
This standard specifies the following test plan outline:


Test Plan Identifier
A unique identifier


Introduction
  • Summary of the items and features to be tested
  • Need for and history of each item (optional)
  • References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
  • References to lower level test plans

Test Items
  • Test items and their version
  • Characteristics of their transmittal media
  • References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
  • References to bug reports related to test items
  • Items which are specifically not going to be tested (optional)


Features to be Tested
  • All software features and combinations of features to be tested
  • References to test-design specifications associated with each feature and combination of features


Features Not to Be Tested
  • All features and significant combinations of features which will not be tested
  • The reasons these features won’t be tested


Approach
  • Overall approach to testing
  • For each major group of features or combinations of features, specify the approach
  • Specify major activities, techniques, and tools which are to be used to test the groups
  • Specify a minimum degree of comprehensiveness required
  • Identify which techniques will be used to judge comprehensiveness
  • Specify any additional completion criteria
  • Specify techniques which are to be used to trace requirements
  • Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines


Item Pass/Fail Criteria
Specify the criteria to be used to determine whether each test item has passed or failed testing


Suspension Criteria and Resumption Requirements
  • Specify criteria to be used to suspend the testing activity
  • Specify testing activities which must be redone when testing is resumed


Test Deliverables
  • Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
  • Identify test input and output data
  • Identify test tools (optional)


Testing Tasks
  • Identify tasks necessary to prepare for and perform testing
  • Identify all task interdependencies
  • Identify any special skills required


Environmental Needs
  • Specify the level of security required
  • Identify special test tools needed
  • Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
  • Identify any other testing needs
  • Identify the source for all needs which are not currently available


Responsibilities
  • Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
  • Identify groups responsible for providing the test items identified in the Test Items section
  • Identify groups responsible for providing the environmental needs identified in the Environmental Needs section


Staffing and Training Needs
  • Specify staffing needs by skill level
  • Identify training options for providing necessary skills


Schedule
  • Specify test milestones
  • Specify all item transmittal events
  • Estimate time required to do each testing task
  • Schedule all testing tasks and test milestones
  • For each testing resource, specify its periods of use


Risks and Contingencies
  • Identify the high-risk assumptions of the test plan
  • Specify contingency plans for each


Approvals
  • Specify the names and titles of all persons who must approve the plan
  • Provide space for signatures and dates



High Level Test Conditions / Scenario
It represents the possible values that can be attributed to a particular specification.
The importance of determining the conditions lies in:

  • Deciding the architecture of testing approach
  • Evolving design of the test scripts
  • Ensuring coverage
  • Understanding the maximum conditions for a specification
At this point the tester will have a fair understanding of the application and of their module. The functionality can be broken into:
  • Field level rules
  • Module level rules
  • Business rules
  • Integration rules

Sub Types:
  • Processing Logic
  • Data definition
  • Feeds Analysis


Processing logic
It may not be possible in every application to segment the specifications into the above categories; it is left to the test team to decide on the application segmentation. For the segments identified by the test team, the possible condition types that can be built are:

Positive condition
The value given for the test is chosen to comply with the condition.
Negative condition
The value given for the test is chosen not to comply with the condition.
Boundary condition
The value given for the test is chosen to assess the extreme values of the condition.
User perspective condition
The value given for the test is chosen to reflect the practical usage of the condition.
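For example, for a hypothetical deposit tenor field specified to accept 1 to 60 months, the four condition types might yield values like these (a sketch, not an exhaustive set):

    # Condition types for a hypothetical "tenor in months" field whose
    # specification allows values from 1 to 60.

    TENOR_MIN, TENOR_MAX = 1, 60

    conditions = {
        "positive": [12, 24],              # valid values that comply with the rule
        "negative": [0, 61, -5],           # values that violate the rule
        "boundary": [TENOR_MIN, TENOR_MAX, TENOR_MIN - 1, TENOR_MAX + 1],
        "user_perspective": [1, 3, 6, 12], # tenors customers typically request
    }

    for kind, values in conditions.items():
        print(f"{kind:>16}: {values}")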

In order to test the conditions and values identified, the application should be populated with data.

There are two ways of populating the data into tables of the application.

Intelligent: Data is tailor-made for every condition and value, with reference to its condition. This aids in triggering specific actions by the application. By constructing such intelligent data, a few data records will suffice for the testing process.
Example:
Business rule: if the interest to be paid is more than 8% and the tenor of the deposit exceeds one month, the system should give a warning.

To populate the interest-to-be-paid field of a deposit, we can enter 9.5478 and make the tenor two months for a particular deposit. This will trigger the warning in the application.
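A sketch of this rule and the intelligent record that triggers it (the function and field names are illustrative, not from any real system):

    # Business rule: warn when interest > 8 % and the tenor exceeds
    # one month. Names are illustrative only.

    def needs_warning(interest_rate: float, tenor_months: int) -> bool:
        return interest_rate > 8.0 and tenor_months > 1

    # Intelligent record: tailor-made to trigger the warning.
    deposit = {"interest_rate": 9.5478, "tenor_months": 2}

    assert needs_warning(deposit["interest_rate"], deposit["tenor_months"])
    print("Warning triggered as designed.")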

Unintelligent: Data is populated in mass, corresponding to the table structures. Its values are chosen at random, without reference to the conditions derived. This type of population can be used for testing the performance of the application and its behavior with random data. It will be difficult for the tester to identify his requirements from the mass data.
Example:
Using the above example, finding a suitable record with interest exceeding 8% and a tenor of more than one month is difficult.
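By contrast, a minimal sketch of unintelligent population, where the table structure is hypothetical and nothing guarantees that any record exercises the rule:

    # Unintelligent (mass, random) data for a hypothetical deposits
    # table. Whether any record triggers the warning is left to chance.

    import random

    deposits = [
        {"interest_rate": round(random.uniform(0.5, 12.0), 4),
         "tenor_months": random.randint(1, 60)}
        for _ in range(10_000)
    ]

    hits = [d for d in deposits
            if d["interest_rate"] > 8.0 and d["tenor_months"] > 1]
    print(f"Records that happen to satisfy the rule: {len(hits)}")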

Having understood the difference between intelligent and unintelligent data, and having by this point a good idea of the application, the tester should be able to design intelligent data for his test conditions.

An application may have its own hierarchy of interconnected data structures.

Feeds Analysis
Most applications are fed with inputs at periodic intervals, such as end of day or every hour. Some applications may be stand-alone, i.e., all processes happen within their own database and no external inputs of processed data are required.

In the case of applications that receive feeds from other machines, the feeds arrive in a predefined format. At the application end, these feeds are processed by local programs and populated into the respective tables.

It is therefore essential for testers to understand the data mapping between the feeds and the database tables of the application. Usually, a document is published in this regard.

The high-level data designed previously should be translated into the feed formats in order to populate the application database, along the lines of the sketch below.
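As a sketch, a local program might map one delimited feed record onto the columns of an application table like this (the feed layout and field names are hypothetical; real layouts come from the published mapping document):

    # Mapping one line of a pipe-delimited feed onto a hypothetical
    # deposits table.

    FEED_LAYOUT = ["deposit_id", "customer_id", "interest_rate", "tenor_months"]

    def parse_feed_line(line: str) -> dict:
        """Split a delimited feed record into named, typed columns."""
        row = dict(zip(FEED_LAYOUT, line.rstrip("\n").split("|")))
        row["interest_rate"] = float(row["interest_rate"])
        row["tenor_months"] = int(row["tenor_months"])
        return row

    print(parse_feed_line("D1001|C42|9.5478|2"))
    # {'deposit_id': 'D1001', 'customer_id': 'C42',
    #  'interest_rate': 9.5478, 'tenor_months': 2}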
(Exercise: data sheet format (ISO template); hands-on practice with the live application.)



Test Case
A set of inputs, execution preconditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.


Test cases are written based on the test conditions. A test case is the phrased form of a test condition, readable and understandable by all. The language used in the expected results should not be ambiguous; the results expressed should be clear and allow only one interpretation. It is advisable to use the term "should" in the expected results.


There are three headings under which a test case is written, namely:
Description: Here the details of the test on a specification or a condition are written.
Data and Pre-requirements: Here either the data for the test or the specification is mentioned. Pre-requirements for the test to be executed should also be clearly mentioned.
Expected results: The expected result of executing the instruction in the description is mentioned. In general, it should reflect in detail the result of the test execution.
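These three headings map naturally onto a simple record structure; a minimal sketch (the field contents are illustrative):

    # A test case as a simple record with the three headings above.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        description: str        # the test on a specification or condition
        data_and_prereqs: str   # test data and pre-requirements
        expected_result: str    # phrased with "should", one interpretation

    tc = TestCase(
        description="Enter the client's name on the New Deposit screen",
        data_and_prereqs="Name: John Smith; user logged in with teller role",
        expected_result="The name field should accept and display the value",
    )
    print(tc)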


While writing a test case, to make it explicit, the tester should include the following:

  • References to the rules and specifications under test, in words, with minimal technical jargon.
  • Checks on data shown by the application should refer to the table names where possible.
  • Names of fields and screens should also be explicit.


Sub Types:
  • Expected Results
  • Pre-requirements
  • Data definition
Expected Results



The outcome of executing an instruction may have a single impact or multiple impacts on the application. The resultant behavior of the application after execution is the expected result.


Sub Types:

  • Single Expected Result
  • Multiple Expected Result

Single Expected Result
Executing the instruction has a single impact on the application.


Example:
Test Case Description: Click on the hyperlink "New deposit" at the top left hand corner of the main menu screen.
Expected result: The new time deposit screen should be displayed.


Multiple Expected Result
Executing the instruction has multiple impacts on the application.


Example:
Test Case Description: Click on the hyperlink "New deposit" at the top left hand corner of the main menu screen. 
Expected result: The new time deposit screen should be displayed, and the customer contact date should be pre-filled with the system date.
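In automated form, a multiple expected result simply becomes several assertions in one test; a sketch using a hypothetical stand-in for the real UI automation call:

    # One test case, multiple expected results. The helper function is
    # a hypothetical stand-in for a real UI automation call.

    import datetime

    def open_new_deposit_screen() -> dict:
        # Stand-in for clicking the "New deposit" hyperlink.
        return {"screen": "New time deposit",
                "customer_contact_date": datetime.date.today()}

    def test_new_deposit_link():
        result = open_new_deposit_screen()
        # Expected result 1: the new screen should be displayed.
        assert result["screen"] == "New time deposit"
        # Expected result 2: the contact date should be pre-filled.
        assert result["customer_contact_date"] == datetime.date.today()

    test_new_deposit_link()
    print("Both expected results verified.")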





Pre-requirements
Test cases cannot always be executed with the application in its normal state. Below is a list of possible pre-requirements that could be attached to a test case; a sketch of automating such a pre-requirement follows the list.


  • Enable or disable external interfaces.
    Example: Reuters, a foreign exchange rate information service, whose server is to be connected to the application.
  • Time at which the test case is to be executed.
    Example: Test to be executed after 2.30 p.m. in order to trigger a warning.
  • Dates that are to be maintained (pre-dated or post-dated) in the database before testing, as it is sometimes not possible to predict the dates of testing and populate certain date fields when they are to trigger certain actions in the application.
    Example: The maturity date of a deposit should be the date of the test, so it is difficult to give the value of the maturity date while designing data or preparing test cases.
  • Deletion of certain records to trigger an action by the application.
    Example: A document availability indicator field to be made null, so as to trigger a warning from the application.
  • Change of values, if required, to trigger an action by the application.
    Example: Change the value of the interest for a deposit so as to trigger a warning by the application.
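Pre-requirements of this kind are exactly what automated test fixtures handle: put the application into the required state before the case runs and restore it afterwards. A minimal sketch (the record and field names are hypothetical):

    # Establish a pre-requirement before a test and restore the state
    # afterwards. The record and field names are hypothetical.

    record = {"deposit_id": "D1001", "doc_available": "Y"}

    def with_prerequirement(rec: dict, field: str, test) -> None:
        original = rec[field]
        rec[field] = None          # pre-requirement: make the field null
        try:
            test(rec)
        finally:
            rec[field] = original  # restore the normal state

    def test_missing_document_warning(rec: dict) -> None:
        # The application should warn when the indicator is null.
        assert rec["doc_available"] is None

    with_prerequirement(record, "doc_available", test_missing_document_warning)
    print("Pre-requirement applied and state restored:", record)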


Data definition
Data for executing the test cases should be clearly defined in the test cases. The test cases should indicate the values that will be entered into the fields, as well as the default values of those fields.
Example:
Description: Enter Client's name
Data: John Smith
(OR)
Description: Check the default value of the interest for the deposit
Data: $ 400


Where calculations are involved, the test case should indicate the calculated value in its expected results.
Example:
Description: Check the default value of the interest for the deposit
Data: $ 400
This value ($400) should be calculated using a formula specified well in advance, during data design.
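For instance, if the agreed formula were simple interest, the expected value would be computed during data design rather than read off the screen. A sketch (the formula and figures are assumptions chosen to yield $400):

    # Pre-compute an expected result from an agreed formula. Simple
    # interest is assumed here purely for illustration.

    def expected_interest(principal: float, annual_rate_pct: float,
                          tenor_months: int) -> float:
        return round(principal * (annual_rate_pct / 100) * (tenor_months / 12), 2)

    # Data design: $20,000 principal at 8 % for 3 months -> $400.
    assert expected_interest(20_000, 8.0, 3) == 400.00
    print("Expected default interest: $400.00")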