
Tuesday, May 25, 2010

MANUAL TESTING FAQ, TUTORIALS

Software Quality Assurance
1. A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
2. Quality Assurance makes sure you are doing the right things, the right way.
3. Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
4. It is used to build quality into the company's processes.
5. The specific activities depend on the customer and the project.
Quality Control
1.A set of activities designed to evaluate a developed work product.
2. QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements?
3. Quality Control makes sure the results of what you've done are what you expected.
4. It involves validation.
5. It is a measure of the quality of a product.

Why Quality?
1. To satisfy customers.
2. To meet customer expectations.
3. To attract new customers.
4. To reduce the maintenance cost.

Software Quality:
Software has quality only when it meets customer requirements, customer satisfaction and customer expectations.
Non-technical factors: cost of the product and time to market.

Software Testing
The verification of the software development process and the validation of the software build before release to the customer is called software testing.


Verification:
1.Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
2. The determination of the consistency, correctness and completeness of a program at each stage.
3. Are we building the product right?

Validation:
1.Validation typically involves actual testing and takes place after verifications are completed.
2. Determination of the correctness of the final program with respect to its requirements.
3.Are we building the right product?
Software Development Life Cycle
Initial Phase/Requirements Phase:
In this phase the requirements are gathered and business discussions take place.
People involved: BA (Business Analyst) – responsible for gathering the requirements; EM (Engagement Manager) – responsible for discussions with customers/users/clients.
QA does the requirements analysis.
They prepare a requirements document: (BDD - Business Design Document)/
(FSD – Functional Specification Document)/
(BRS – Business Requirement Specification)
This BDD/FSD/BRS document contains the functional requirements.
Analysis Phase:
Using the contents of the BDD, this phase covers:
Analysis of the requirements
Feasibility study - achievable contents, applicable situations
Technical specifications
Design specifications
Creating prototypes (a model of the project) etc.
The people involved in the analysis phase: Technical Architect / Technical Manager / System Analyst / Project Manager.
They prepare an SRS (System Requirements Specification) document.
Design Phase:
The design phase can be done in two parts:
1. High Level Design (design of the units/modules) – Tech Arch/TM/PM
2. Low Level Design (design of the sub-modules) – Technical Lead
Using UML (Unified Modeling Language) tools they design the project.
They prepare a TDD (Technical Design Document) from the functional specifications. It contains pseudo code (dummy code). QA/QM analyses the functional specifications in the TDD.
Coding Phase:
The design is developed and implemented through programming/coding. Programmers/developers are responsible for the implementation. They must follow the coding standards.
Testing Phase:
QA analysts/testers are involved in this phase and follow the STLC (Software Testing Life Cycle):
Test planning, test development, test execution, test logs, result analysis, bug tracking, reporting.
•(We study the Functional Specification Document/BDD, create test cases, and compile them into a test case document. With this document we test the modules and list all defects in a defect profile. The defect profile is assigned to the QL/developer for fixing. The loop continues until the module is defect free.)
Deployment & Support:
User manuals, support documents, training documents and maintenance of the programs.
(In all of the above phases, the responsible people generate review reports with their views on the specifications and requirements.)

Software Testing Models
Waterfall Model
Prototyping Model
Spiral Model
V Model

Waterfall Model
This model is also known as the software development life cycle (SDLC) model, linear sequential model or engineering model. This model is suitable when customer requirements are clear and complete.
Strengths
•Emphasizes completion of one phase before moving on
•Emphasises early planning, customer input, and design
•Emphasises testing as an integral part of the life cycle
•Provides quality gates at each life cycle phase
Weakness:
•Depends on capturing and freezing requirements early in the life cycle
•Depends on separating requirements from design
•Feedback is only from the testing phase to previous stages
•Not feasible in some organisations
•Emphasises products rather than processes

Prototyping Model
This model is suitable when customer requirements are not clear.
Strengths:
•Requirements can be set earlier and more reliably
•Requirements can be communicated more clearly and completely between developers and clients
•Requirements and design options can be investigated quickly and with low cost
•More requirements and design faults are caught early
Weakness:
•Requires a prototyping tool and expertise in using it – a cost for the development organisation
•The prototype may become the production system

Spiral Model
This model is suitable when customer requirements keep being enhanced.
Strengths:
•It promotes reuse of existing software in early stages of development
•Allows quality objectives to be formulated during development
•Provides preparation for eventual evolution of the software product
•Eliminates errors and unattractive alternatives early.
•It balances resource expenditure.
•Doesn’t involve separate approaches for software development and software maintenance.
•Provides a viable framework for integrated Hardware-software system development.
Weakness:
•This process needs, or is usually associated with, Rapid Application Development, which is very difficult in practice.
•The process is more difficult to manage and needs a very different approach as opposed to the waterfall model (Waterfall model has management techniques like GANTT charts to assess)

What is Testing?
1. An examination of the behavior of a program by executing it on sample data sets.
2. Testing comprises a set of activities to detect defects in a produced work product.
3.To unearth & correct defects
4.To detect defects early & to reduce cost of defect fixing
5.To avoid user detecting problems
6.To ensure that product works as users expected it to.

Why Testing?

• To unearth and correct defects.
• To detect defects early and to reduce cost of defect fixing.
• To ensure that product works as user expected it to.
• To avoid user detecting problems.

Software Testing Life Cycle
Test life cycle:
1. Prepare the test strategy.
2. Prepare the test plan.
3. Prepare the test cases.
4. Execute the test cases.
5. Analyze the results.
6. Do the regression testing.
7. Submit the bug-free build.
The Software Testing Life Cycle consists of the following (generic) phases:
1) Planning, 2) Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and Implementation and 7) Post Implementation. Each phase in the life cycle is described with its respective activities.
Planning.
Planning High Level Test plan, QA plan (quality goals), identify – reporting procedures, problem classification, acceptance criteria, databases for testing, measurement criteria (defect quantities/severity level and defect origin), project metrics and finally begin the schedule for project testing. Also, plan to maintain all test cases (manual or automated) in a database.
Analysis.
Involves activities that - develop functional validation based on Business Requirements (writing test cases basing on these details), develop test case format (time estimates and priority assignments) , develop test cycles (matrices and timelines), identify test cases to be automated (if applicable), define area of stress and performance testing, plan the test cycles required for the project and regression testing, define procedures for data maintenance (backup, restore, validation), review documentation.
Design.
Activities in the design phase - Revise test plan based on changes, revise test cycle matrices and timelines, verify that test plan and cases are in a database or requisite, continue to write test cases and add new ones based on changes, develop Risk Assessment Criteria, formalize details for Stress and Performance testing, finalize test cycles (number of test case per cycle based on time estimates per test case and priority), finalize the Test Plan, (estimate resources to support development in unit testing).
Construction (Unit Testing Phase).
Complete all plans, complete Test Cycle matrices and timelines, complete all test cases (manual), begin Stress and Performance testing, test the automated testing system and fix bugs, (support development in unit testing), run QA acceptance test suite to certify software is ready to turn over to QA.
Test Cycle(s) / Bug Fixes (Re-Testing/ System Testing Phase).
Run the test cases (front and back end), bug reporting, verification, revise/add test cases as required.
Final Testing and Implementation (Code Freeze Phase).
Execution of all front end test cases - manual and automated, execution of all back end test cases - manual and automated, execute all Stress and Performance tests, provide on-going defect tracking metrics, provide on-going complexity and design metrics, update estimates for test cases and test plans, document test cycles, regression testing, and update accordingly.
Post Implementation.
Post implementation evaluation meeting can be conducted to review entire project. Activities in this phase - Prepare final Defect Report and associated metrics, identify strategies to prevent similar problems in future project, automation team - 1) Review test cases to evaluate other cases to be automated for regression testing, 2) Clean up automated test cases and variables, and 3) Review process of integrating results from automated testing in with results from manual testing.

Bug Life Cycle
Bug Life Cycle starts with an unintentional software bug/behavior and ends when the assigned developer fixes the bug. A bug when found should be communicated and assigned to a developer that can fix it. Once fixed, the problem area should be re-tested. Also, confirmation should be made to verify if the fix did not create problems elsewhere. In most of the cases, the life cycle gets very complicated and difficult to track making it imperative to have a bug/defect tracking system in place.
Following are the different phases of a Bug Life Cycle:
Open: A bug is in Open state when a tester identifies a problem area
Accepted: The bug is then assigned to a developer for a fix. The developer then accepts if valid.
Not Accepted/Won't fix: If the developer considers the bug as low level or does not accept it as a bug, it is pushed into the Not Accepted/Won't fix state. Such bugs are assigned to the project manager, who decides whether the bug needs a fix. If it does, he assigns it back to the developer; if it doesn't, he assigns it back to the tester, who will have to close the bug.
Pending: A bug accepted by the developer may not be fixed immediately. In such cases, it can be put under Pending state.
Fixed: The programmer fixes the bug and resolves it as Fixed.
Close: The fixed bug is assigned back to the tester, who puts it in the Close state.
Re-Open: Fixed bugs can be re-opened by the testers in case the fix produces problems elsewhere.
Defect life cycle:
1. If the bug is new then open it.
2. Assign it to tester.
3. Retest it.
4. If it has still not been resolved, reassign it.
5. Retest it.
6. After resolving, close it.

The developer sends Module 1/Build 1 to the testers for testing.
The tester raises a defect, and the defect is assigned to the developer/quality lead with status New.
The developer verifies the defect; if it really is a defect he rectifies it, and then assigns it back to the tester/QL with status Fixed.
The tester retests the defects in Fixed status; if a defect is indeed fixed, he closes it.
If the fixed defect is not actually rectified, he assigns it back to the developer with status Reopen.
The developer rectifies the reopened defects and sends them back to the tester.
This process continues until the module is defect free.
In between, the developer may also give defects statuses such as Hold, As per design, or TE/QA error; a minimal sketch of this status flow follows.
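Below is a minimal sketch (not part of the original process description) of how these status transitions could be represented and checked in Python; the status names and allowed transitions are assumptions based on the flow above.

# Minimal sketch of the defect status flow described above, using
# hypothetical status names and allowed transitions.
ALLOWED_TRANSITIONS = {
    "New":    {"Fixed", "Not Accepted", "Hold", "As per design", "QA error"},
    "Fixed":  {"Closed", "Reopen"},
    "Reopen": {"Fixed"},
    "Hold":   {"Fixed", "Closed"},
}

def move_defect(current_status: str, new_status: str) -> str:
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in ALLOWED_TRANSITIONS.get(current_status, set()):
        raise ValueError(f"Illegal transition: {current_status} -> {new_status}")
    return new_status

# Example: a defect is raised, fixed, reopened, fixed again and finally closed.
status = "New"
for step in ["Fixed", "Reopen", "Fixed", "Closed"]:
    status = move_defect(status, step)
print(status)  # Closed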

Defect Severity Definitions
Five levels of severity are defined for reported problems, with Severity 1 being the most serious problem.
•Severity 1 (Critical)
Application Defect causes complete loss of service and work cannot reasonably continue. The Problem/Defect has one or more of the following characteristics:
Data Corruption. Physical or Logical data is unavailable or incorrect.
System hangs. The process hangs indefinitely or there is severe performance degradation, causing unreasonable waits for resources or response, as if the system is hanging.
System crashes repeatedly. A process fails and continues to fail after restart attempts.
Critical functionality is not available. The application cannot continue because a vital feature is inoperable.
•Severity 2 (Major)
Problem/Product Defect causes an internal (software) error or incorrect behavior, causing a severe loss of service. No customer-acceptable workaround is available; however, operations can continue in a restricted fashion. The Problem/Defect has one or more of the following characteristics:
Internal software error, causing the system to fail, but restart or recovery is possible.
Severely degraded performance due to software error.
Some important functionality is unavailable, yet the system can continue to operate in a restricted fashion.

•Severity 3 (Average)
Problem/Product Defect causes minimal loss of service. The impact of the problem/defect is minor or an inconvenience, such as a manual bypass to restore product functionality. The Problem/Defect has one or more of the following characteristics:
A software error for which there is a customer acceptable workaround.
Minimally degraded performance (<=10%) due to a software error.
Software error or incorrect behavior with minor impact to the operation of the system.
Software error requiring manual editing of configuration or script files to work around a problem.

•Severity 4 (Minor)
The Problem/Product Defect causes NO loss of functionality. The problem/defect is a minor error, incorrect behavior, or a documentation error that in no way impedes the operation of a system.

•Severity 5 (Enhancement)
The problem/lack of functionality is out of scope for the current version of the application. It is recorded in the Defect Tracking Database so as to document enhancement requests for future versions of the application.

Defect Profile

A defect profile typically records the following columns for each defect:
Defect No | Defect Description | Detected By | Date of Submission | Module Name | Version Number | Severity | Priority
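As an illustration, the columns above could be captured as a simple record structure. The sketch below is a hypothetical Python example; the field values are invented.

from dataclasses import dataclass
from datetime import date

# One row of the defect profile above; field names follow the table columns.
@dataclass
class DefectRecord:
    defect_no: str
    defect_description: str
    detected_by: str
    date_of_submission: date
    module_name: str
    version_number: str
    severity: int      # 1 (Critical) .. 5 (Enhancement), as defined above
    priority: str

bug = DefectRecord("D-101", "Login fails for valid users", "Tester A",
                   date(2010, 5, 25), "Login", "1.0.3", 1, "High")
print(bug.severity)  # 1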

Unit Testing
The most 'micro' scale of testing, used to test particular functions or code modules. Typically done by the programmer and not by testers. It requires detailed knowledge of the internal program design and code.
• Unit - smallest testable piece of software
• A unit can be compiled/ assembled/ linked/ loaded; and put under a test harness
• Unit testing is done to show that the unit does not satisfy the functional specification and/or that its implemented structure does not match the intended design structure.
• Used to verify a single program or a section of a single program; a minimal sketch follows.
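A minimal sketch of a unit test, using Python's built-in unittest module; the function under test (apply_discount) is hypothetical and only serves to show the idea of checking a single unit against its specification.

import unittest

# Hypothetical unit under test (not from the article).
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(200.0, 0), 200.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()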

Integration Testing
To verify interaction between system components.
Prerequisite: unit testing completed on all components that compose a system
Integration is a systematic approach to build the complete software structure specified in the design from unit-tested modules.
Integration testing is the testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
There are two ways integration testing is performed, called Pre-test and Pro-test.

1. Pre-test: the testing performed in the module development area is called Pre-test. Pre-test is required only if the development is done in a module development area.
2. Pro-test: the integration testing performed on the baseline is called Pro-test. The development of a release is scheduled such that it can be broken down into smaller internal releases for the customer.

Top-down
Advantages
•Useful if major flaws occur towards the top of the program
•Early skeletal programs allows demos and boosts morale

Disadvantages
•STUB modules must be produced
•Test conditions are difficult to create
•Observation of results is difficult
•Program correctness can be misleading

Bottom-up
Advantages
•Useful if major flaws occur towards the bottom of the program
•Test conditions are easier to create
•Observation of test results is easier
Disadvantages
•DRIVER modules must be produced
•No demonstrable program exists until the last module is added
•Design errors in the higher modules are not detected
A short sketch of a stub (for top-down) and a driver (for bottom-up) follows.
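The following sketch illustrates, with assumed function names, what a STUB (for top-down integration) and a DRIVER (for bottom-up integration) look like in practice.

# Top-down integration: the high-level module is real; the lower-level
# module it calls is replaced by a STUB that returns canned data.
def tax_rate_stub(region: str) -> float:
    return 0.10                      # canned answer instead of the real lookup

def compute_invoice_total(amount: float, region: str, rate_lookup=tax_rate_stub) -> float:
    return round(amount * (1 + rate_lookup(region)), 2)

print(compute_invoice_total(100.0, "EU"))   # 110.0, exercised through the stub

# Bottom-up integration: the low-level module is real, and a DRIVER
# calls it directly because the higher-level caller does not exist yet.
def real_tax_rate(region: str) -> float:
    return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

def driver():
    for region in ("EU", "US", "XX"):
        print(region, real_tax_rate(region))

driver()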
System Testing
A system is the largest component.
• System testing is aimed at revealing bugs that cannot be attributed to a single component as such, but rather to inconsistencies between components or to their planned interactions.
• Black-box type testing that is based on overall requirement specifications; covers all combined parts of a system.
• To verify and validate the behavior of the entire system against the original system objectives.
Software testing is a process that identifies the correctness, completeness, and quality of software. Below is a list of the various types of software testing:
Formal Testing: Performed by test engineers·
Informal Testing: Performed by the developers·
Manual Testing: That part of software testing that requires human input, analysis, or evaluation.
Automated Testing: Software testing that utilizes a variety of tools to automate the testing process. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tools and the software being tested to set up the test cases.·
Black box Testing: Testing software without any knowledge of the back-end of the system, structure or language of the module being tested. Black box test cases are written from a definitive source document, such as a specification or requirements document.·
White box Testing: Testing in which the software tester has knowledge of the back-end, structure and language of the software, or at least its purpose.·
Unit Testing: Unit testing is the process of testing a particular complied program, i.e., a window, a report, an interface, etc. independently as a stand-alone component/program. The types and degrees of unit tests can vary among modified and newly created programs. Unit testing is mostly performed by the programmers who are also responsible for the creation of the necessary unit test data.·
Incremental Testing: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.·
System Testing: System testing is a form of black box testing. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed.·
Integration Testing: Testing two or more modules or functions together with the intent of finding interface defects between the modules/functions.·
System Integration Testing: Testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e. defects involving distribution and back-office integration) .·
Functional Testing: Verifying that a module functions as stated in the specification and establishing confidence that a program does what it is supposed to do.·
End-to-end Testing: Similar to system testing - testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.
Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes testing basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
Regression Testing: Testing with the intent of determining if bug fixes have been successful and have not created any new problems.·
Acceptance Testing: Testing the system with the intent of confirming readiness of the product and customer acceptance. Also known as User Acceptance Testing.·
Adhoc Testing: Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed – usually done by skilled testers. Sometimes ad hoc testing is referred to as exploratory testing.
Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.·
Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.·
Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking point. The goal is to expose the weak links and to determine if the system manages to recover gracefully.·
Performance Testing: Testing with the intent of determining how efficiently a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
Usability Testing: Usability testing is testing for ‘user-friendliness’ . A way to evaluate and measure how users interact with a software product or site. Tasks are given to users and observations are made.·
Installation Testing: Testing with the intent of determining if the product is compatible with a variety of platforms and how easily it installs.·
Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.·
Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.·
Penetration Testing: Penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.·
Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.·
Exploratory Testing: Any testing in which the tester dynamically changes what they’re doing for test execution, based on information they learn as they’re executing their tests.·
Comparison Testing: Testing that compares software weaknesses and strengths to those of competitors’ products.·
Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to reaching customers. Sometimes a selected group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.·
Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large.·
Gamma Testing: Gamma testing is testing of software that has all the required features but did not go through all the in-house quality checks.
Mutation Testing: A method to determine test thoroughness by measuring the extent to which the test cases can discriminate the program from slight variants (mutants) of the program.
Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn’t fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software.·
Pilot Testing: Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Typically involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing)·
Parallel/Audit Testing: Testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.·
Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.·
Closed Box Testing: Closed box testing is the same as black box testing. A type of testing that considers only the functionality of the application.
Bottom-up Testing: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.·
Smoke Testing: A quick, shallow test of a new build, conducted before the build is delivered for complete, detailed testing.

Functional Testing
Study SRS
Identify Unit Functions
For each unit function
Take each input function
Identify Equivalence class
Form Test cases
Form Test cases for boundary values
Form Test cases for Error Guessing
Form a unit function vs. test cases cross-reference matrix
Find the coverage (a minimal sketch of these steps follows)
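A minimal sketch of these steps for a hypothetical unit function that accepts ages from 18 to 60; the equivalence classes, boundary values and error guesses below are chosen only for illustration.

# Hypothetical unit function: accepts an integer age between 18 and 60 inclusive.
def is_valid_age(age) -> bool:
    return isinstance(age, int) and 18 <= age <= 60

# Equivalence classes: one representative value per class.
equivalence_cases = {"valid (18-60)": 35, "below range": 10, "above range": 75}

# Boundary values: just inside and just outside each boundary.
boundary_cases = [17, 18, 19, 59, 60, 61]

# Error guessing: inputs experience suggests are likely to break the code.
error_guess_cases = [None, -1, 0, "25", 10**9]

for label, value in equivalence_cases.items():
    print(label, value, is_valid_age(value))
for value in boundary_cases + error_guess_cases:
    print(value, is_valid_age(value))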

Testing Terms
· Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw. In other words, if a program does not perform as intended, it is most likely a bug.
· Error: A mismatch between the program and its specification is an error in the program.
· Defect: A defect is a variance from a desired product attribute (it can be wrong, missing or extra data). It can be of two types – a defect in the product or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer and the operational system. It is often claimed that 90% of all defects are caused by process problems.
· Failure: A defect that causes an error in operation or negatively impacts a user/ customer.
· Quality Assurance: Is oriented towards preventing defects. Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews.
· Quality Control: quality control or quality engineering is a set of measures taken to ensure that defective products or services are not produced, and that the design meets performance requirements.
· Verification: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
· Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
Most common software errors
Following are the most common software errors, which aid you in software testing. This helps you to identify errors systematically and increases the efficiency and productivity of software testing. Types of errors, with examples:
User Interface Errors: Missing/Wrong Functions, Doesn't do what the user expects, Missing information, Misleading or confusing information, Wrong content in Help text, Inappropriate error messages. Performance issues - Poor responsiveness, Can't redirect output, Inappropriate use of keyboard.
Error Handling: Inadequate - protection against corrupted data, tests of user input, version control; Ignores – overflow, data comparison, Error recovery – aborting errors, recovery from hardware problems.·
Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases outside boundary.·
Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors, Incorrect conversion from one data representation to another, Wrong formula, Incorrect approximation.
· Initial and Later states: Failure to - set data item to zero, to initialize a loop-control variable, or re-initialize a pointer, to clear a string or flag, Incorrect initialization.
· Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result, Missing/wrong default, Data Type errors.
· Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit or user abort.
· Race Conditions: Assumption that one event or task finished before another begins, Resource races, Tasks starts before its prerequisites are met, Messages cross or don’t arrive in the order sent.
· Load Conditions: Required resources are not available, No available large memory area, Low priority tasks not put off, Doesn’t erase old files from mass storage, Doesn’t return unused memory.
· Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence, Misunderstood status or return code, Wrong operation or instruction codes.
· Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or program files.
· Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case, Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify fixes, Failure to provide summary report.
Test Strategy
A high-level document defining the test phases to be performed and the testing within those phases for a programme. It defines the process to be followed in each project. This sets the standards for the processes, documents, activities etc. that should be followed for each project. For example, if a product is given for testing, you should decide if it is better to use black-box testing or white-box testing and if you decide to use both, when will you apply each and to which part of the software? All these details need to be specified in the Test Strategy. Project Test Plan - a document defining the test phases to be performed and the testing within those phases for a particular project.
A Test Strategy should cover more than one project and should address the following issues: An approach to testing high risk areas first, Planning for testing, How to improve the process based on previous testing, Environments/ data used, Test management - Configuration management, Problem management, What Metrics are followed, Will the tests be automated and if so which tools will be used, What are the Testing Stages and Testing Methods, Post Testing Review process, Templates.
Test planning needs to start as soon as the project requirements are known. The first document that needs to be produced then is the Test Strategy/Testing Approach that sets the high level approach for testing and covers all the other elements mentioned above.

Test Planning
Once the approach is understood, a detailed test plan can be written. Usually, this test plan can be written in different styles. Test plans can completely differ from project to project in the same organization.
IEEE SOFTWARE TEST DOCUMENTATION Std 829-1998 - TEST PLAN
Purpose To describe the scope, approach, resources, and schedule of the testing activities. To identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with this plan.
OUTLINE: A test plan shall have the following structure:
· Test plan identifier: a unique identifier assigned to the test plan.
· Introduction: summarizes the software items and features to be tested and the need for them to be included.
· Test items: identify the test items and their transmittal media.
· Features to be tested
· Features not to be tested
· Approach
· Item pass/fail criteria
· Suspension criteria and resumption requirements
· Test deliverables
· Testing tasks
· Environmental needs
· Responsibilities
· Staffing and training needs
· Schedule
· Risks and contingencies
· Approvals
Test Plan contains
the Objective of Project and Product / Description of Project and Product
Test Scope - it gives scope of the project.
Test Environment – applications, databases, servers, network etc.
Test Objectives – what is to be tested and what is not to be tested
Testing Approach – test methods, describing the testing steps
Description of Problem Reporting
Entrance Criteria (When to start testing)
Exit Criteria (When to Stop testing)
Scheduling
Where to Automate
Identify the groups and their Responsibilities
Describing the Test Schedule
Review Reports
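As an illustration, the items above could be captured as structured data. The sketch below is hypothetical; every value is invented and would come from the actual project.

# Minimal sketch of a test plan skeleton as structured data.
test_plan = {
    "identifier": "TP-001",
    "objective": "Validate release 1.0 of the billing module",
    "scope": ["Invoice creation", "Payment posting"],
    "environment": {"app_server": "staging", "database": "billing_test"},
    "not_tested": ["Legacy report export"],
    "approach": ["Functional testing", "Regression testing"],
    "entrance_criteria": "Build passes smoke test",
    "exit_criteria": "No open Severity 1 or 2 defects",
    "schedule": {"start": "2010-06-01", "end": "2010-06-15"},
    "responsibilities": {"test_lead": "QA Lead", "execution": "Test team"},
}

for section, value in test_plan.items():
    print(f"{section}: {value}")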

Major Test Planning Tasks
Like any other process in software testing, the major tasks in test planning are to – Develop Test Strategy, Critical Success Factors, Define Test Objectives, Identify Needed Test Resources, Plan Test Environment, Define Test Procedures, Identify Functions To Be Tested, Identify Interfaces With Other Systems or Components, Write Test Scripts, Define Test Cases, Design Test Data, Build Test Matrix, Determine Test Schedules, Assemble Information, Finalize the Plan
Test Case Development
A test case is a detailed procedure that fully tests a feature or an aspect of a feature. While the test plan describes what to test, a test case describes how to perform a particular test. You need to develop test cases for each test listed in the test plan.
Test Case is a description of what to be tested, what data to be given and what actions to be done to check the actual results against the expected results.
A test case is commonly used for a specific test. A test case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites and expected outputs of the test case.
We develop Test Cases for each functionality by putting various test conditions or test scenarios.


Different groups use different types of test cases.
Unit test case
Unit test cases are used by the development team. When a developer has completed the code, he starts running unit test cases. These cases are more technical and usually contain activities such as executing and checking queries, loops, compilation etc. A unit test case mostly runs in a local developer environment. Optimal timing of writing unit test cases:
The best time to write unit test cases is during development. Although each code line is a potential defect, not every code line should be covered by a test case. Selectively choose the line(s) which can cause severe defects.
Subsystem test case
Subsystem test cases are used by both developers and testers. These test cases are necessary to perform, but some organizations skip directly to system test due to effort constraints. The test cases of subsystem test focus on the correctness of applications or outputs such as GUI windows, invoices, web sites, generation of files etc. The cases check the end results of the software. There are two main important tasks of subsystem cases:
1. Testing that back-end and front-end of the same functionality are working correctly.
2. Testing that two integrated front-end applications or two integrated back-end applications are working correctly.
As an example of task number one, the set of subsystem test case activities would be the following: testing the retrieval of the correct parameter data in a window application or a web site application, whether the data is defined dynamically in tables or even hard-coded in the code lines.
Window or web site = front-end
Dynamic tables or hard-coded code lines = back-end
To some extent, system test also performs end-output tests and integrated-application tests, but the difference is significant; see below and the system test article.
Optimal timing of writing subsystem test case:
The most efficient time to write subsystem test cases is during the code review activity. Reviewing the code as one package provides the ability to understand what the integration constraints of this code can be. It is no longer a local, isolated query running stand-alone, but rather a wide and correlated one.
System test case –
System test cases are used by testers. The cases are detailed and cover the entire system functionality; therefore the total number of cases is much bigger than for unit or subsystem tests. The effort of writing system test cases is almost the same as the effort of executing them, because the analysis of many important cases requires time and resources.
In software, several applications are integrated. Some of them are internal and others are external. A complex system will have many different interfaces; some of them are GUI applications, others receive flat files etc. An output of one application is an input of another one, and so on.
Following are four important elements of system test cases:
1. Each test case should simulate a real-life scenario.
2. The cases should be executed with priorities according to the real-life volume of each scenario.
3. Cross-application scenarios should be tested carefully.
4. Test execution should run on environments that are close to the production platform from a data and infrastructure point of view.
Client Server Testing vs Web Based Testing

Client Server Testing
1. This involves testing of the client installation, connectivity between the client and the server, and the configuration required to connect to the server, apart from the regular GUI and functional testing.
You will have a specific client version for each specific OS.
2. Client/server applications are those that run on a LAN or WAN but are not connected to the www/internet.
3. It uses a connection-oriented (TCP/IP) protocol.
Ex:
You go to your bank and ask a customer service representative for your account balance. He/she uses the banking system application to check your account balance – that is a C/S application.
4.In Client/Server testing, test phases can include :
Build Acceptance Testing
Prototype Testing
System Reliability Testing
Multiple phases of regression testing, and beta testing.



Web Based Testing
1. Here the client is your browser; you can skip the installation testing and checking the connectivity between browser and server etc., but you have the addition of testing on different types of browsers and on different platforms.
2. Web based applications are those that are connected to the www/internet. Ex: Yahoo, Google etc.
3. It uses the HTTP protocol.
Ex: You can also go to a net cafe and check your account balance – that is a web application. Compare the facilities: what the customer service executive can do from his/her system versus what you can do from the net cafe.
4. For web based testing –
Validate the HTML,
Check for broken links,
Speed – time to access the site,
Browser independence – try different browsers,
Check printed pages,
Performance sessions.
A small sketch of automating two of these checks follows.
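A small sketch, assuming the third-party requests library is available (pip install requests), of automating the broken-link and access-speed checks; the URLs are placeholders for pages of the site under test.

import time
import requests  # third-party HTTP library

pages = ["https://example.com/", "https://example.com/contact"]

for url in pages:
    start = time.time()
    try:
        response = requests.get(url, timeout=10)
        elapsed = time.time() - start
        status = "OK" if response.status_code < 400 else f"BROKEN ({response.status_code})"
        print(f"{url}: {status}, loaded in {elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"{url}: request failed - {exc}")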
Client/Server Application:
It is a 2-tier architecture.
It serves a restricted number of users.
The server processes the request given by the client.
Advantages: High performance.
Disadvantages: Hard to maintain.

Web Server:
It is at minimum a 3-tier architecture.
It serves n (any) number of users.
The browser sends the request to the web server, which processes it.
Disadvantages: Slow performance.

System Vs End-to-End Testing
System testing is black-box type testing that is based on overall requirements specifications; it covers all combined parts of a system.
System testing can be limited to functional as well as non-functional testing of the system, but when we talk about end-to-end testing, all the interfaces and sub-systems must come into the picture. All the end-to-end scenarios should be considered and executed before deployment of the system into the actual environment.

End-to-end testing similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Smoke Vs Sanity testing
Sanity Testing:
This testing will be done by the tester to check whether the new application is ready for the major testing effort.
Smoke testing :
This is just like sanity testing, but it is done by the developer before the build is released to the tester for further testing.

Testcase vs Usecase

A use case is a high-level scenario in which you specify the functionality of the application from a business perspective.
A use case describes an entire flow of interaction that the user has with the system/application. For example, a user logging into the system, searching for a flight, booking it and then logging out is a use case. There are multiple ways a user can interact with a system, and they all map to positive/negative use cases.
A test case is the implementation of the high-level scenario (use case), in which one gives a detailed, step-by-step account of the procedure to test a particular functionality of the application. Things get a lot more technical here.
Test cases are written on the basis of use cases. The test cases check whether the various functionalities that the user uses to interact with the system are working correctly or not.
Test matrix vs Test metrics
Test matrix: The tester writes the test matrix in the test specification document; it keeps track of the testing flow, testing types, test case activities etc. One common form is a requirement-to-test-case matrix, sketched below.
Test metrics: These define, on a scale with 100% as complete testing, what level of testing has been achieved by performing particular testing on the application.
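A minimal sketch of such a requirement-to-test-case matrix with invented requirement and test case IDs, including a simple coverage figure.

# Requirement-to-test-case matrix; IDs are invented for illustration.
matrix = {
    "REQ-01 Login":          ["TC-001", "TC-002"],
    "REQ-02 Password reset": ["TC-003"],
    "REQ-03 Audit log":      [],          # not yet covered
}

covered = sum(1 for cases in matrix.values() if cases)
coverage = 100.0 * covered / len(matrix)

for req, cases in matrix.items():
    print(f"{req}: {', '.join(cases) if cases else 'NO COVERAGE'}")
print(f"Requirement coverage: {coverage:.0f}%")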

Web testing Vs GUI testing
Web testing is server-side testing and GUI testing is client-side testing.
GUI testing is a part of web testing as well as of desktop testing.
In GUI testing we check the graphical user interface, that is font size, font color, links, labels etc.
Web testing targets a (usually 3-tier) architecture; here we check the performance of the application (volume, load, stress) and also do compatibility testing, user interface testing etc.

Check List for Web Site
1. Are fonts consistent within functionality?
2. Are the company display standards followed? (Logos, font size, colors, scrolling, object use)
3. Are legal requirements met?
4. Is content sequenced properly?
5. Are web-based colors used?
6. Is there appropriate use of white space?
7. Are tools provided (as needed) in order to access the information?
8. Are attachments provided in a static format?
9. Is spelling and grammar correct?
10. Are alternative presentation options available (for limited browsers or performance issues)?
Testing Methods
1.White Box Testing
Also called 'Structural Testing' or 'Glass Box Testing', it is used for testing the code keeping the system specs in mind. The inner workings are considered, and thus it is a developer's test.
· Mutation Testing: A number of mutants of the same program are created with minor changes; ideally none of their results should coincide with the result of the original program given the same test case (see the sketch after this list).
· Basis Path Testing: Testing is done based on flow graph notation, using cyclomatic complexity and graph matrices.
· Control Structure Testing: The flow of the control execution path is considered for testing. It also covers condition testing (branch testing, domain testing), data flow testing, and loop testing (simple, nested, concatenated, unstructured loops).
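A minimal sketch of the mutation idea for a hypothetical function: the mutant differs from the original by one operator, and a boundary-value test case distinguishes (kills) it.

# Original (hypothetical) unit under test.
def is_adult(age: int) -> bool:
    return age >= 18

# A mutant: the same function with one small change (>= replaced by >).
def is_adult_mutant(age: int) -> bool:
    return age > 18

# A test case that kills the mutant: original and mutant give different
# results for the boundary input 18, so the test suite can detect this change.
test_input = 18
print(is_adult(test_input), is_adult_mutant(test_input))  # True False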

2.Black Box Testing
Also called 'Functional Testing', as it concentrates on testing the functionality rather than the internal details of the code. Test cases are designed based on the task descriptions.
· Comparison Testing: Test case results are compared with the results of a test oracle.
· Graph Based Testing: Cause-and-effect graphs are generated, and cyclomatic complexity is considered in deriving the test cases.
· Boundary Value Testing: Boundary values of the equivalence classes are considered and tested, as they generally fail even where equivalence class testing passes.
· Equivalence Class Testing: Test inputs are classified into equivalence classes such that one input check validates all the input values in that class.

3. Gray Box Testing: Similar to black box testing, but the test cases, risk assessments, and test methods involved in gray box testing are developed based on knowledge of the internal data and flow structures.
Levels of Testing
1.Unit Testing.· Unit Testing is primarily carried out by the developers themselves.
· Deals functional correctness and the completeness of individual program units.
· White box testing methods are employed

2. Integration Testing.· Integration Testing: Deals with testing when several program units are integrated.
· Regression testing : A change of behavior due to modification or addition is called 'regression'. Regression testing is used to ensure that such changes have not degraded what previously worked.
· Incremental Integration Testing : Checks out for bugs which encounter when a module has been integrated to the existing.
· Smoke Testing : It is the battery of tests which checks the basic functionality of the program. If it fails, the program is not sent for further testing.

3. System Testing.
· System Testing - Deals with testing the whole program system for its intended purpose.
· Recovery testing : The system is forced to fail, and how well the system recovers from the failure is checked.
· Security Testing: Checks the capability of system to defend itself from hostile attack on programs and data.
· Load & Stress Testing: The system is tested for max load and extreme stress points are figured out.
· Performance Testing: Used to determine the processing speed.
· Installation Testing: Installation and uninstallation are checked on the target platform.

4. Acceptance Testing.
· UAT ensures that the project satisfies the customer requirements.
· Alpha Testing : It is the test done by the client at the developer’s site.
· Beta Testing : This is the test done by the end-users at the client’s site.
· Long Term Testing : Checks for faults that occur during long-term usage of the product.
· Compatibility Testing : Determines how well the product stands up to product transitions and to other environments, hardware and software.

Software Testing process

1) First, we have to analyze the user requirements by going through functional specifications, use cases and prototypes.
2) Prepare the understanding document.
3) Prepare test scenarios.
4) Review the test scenarios.
5) Prepare test cases based on test scenarios.
6) Peer review and lead review.
7) After that we will get a build from dev team for testing.
8) First do sanity testing.
9) Then execute all test cases.
10) Report the bugs.
11) Verify the bugs after bug fixing.
12) Prepare test summary report.
13) Prepare traceability matrix.
14) Finally, prepare exit notes.

Reviews:
A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.
A main goal of reviews is to find defects. Reviews are a good complement to testing to help assure quality.
The review types includes

Management Reviews: These reviews are performed by those directly responsible for the system in order to monitor progress, determine the status of plans and schedules, and confirm requirements and their system allocation. For example: ensure that deliverables are ready for management approval, resolve issues that require management's attention, identify any project bottlenecks, and keep the project in control.
Technical Reviews: Technical reviews confirm that the product conforms to specifications; adheres to regulations, standards, guidelines and plans; that changes are properly implemented; and that changes affect only those system areas identified by the change specification. Items reviewed include the software requirements specification, software design description, software test documentation, software user documentation, installation procedures, release notes etc.
Requirements Review, Design Review, Code review, etc.

Walkthroughs: A static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a document, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems.

Types of Quality Standards:

ISO Standards: International Organization for Standards
CMM Levels: Capability Maturity Model
Six Sigma

ISO Standards:
There are various types of ISO standards. They are applicable to both IT and non-IT organizations.
ISO 9001:2001 (here 9001 is the quality standard number and 2001 is the year the certification was released)
ISO 9000: (Usually initial start-up companies use this; it contains minimum guidelines to follow)
ISO 9001: (The organization has planning, production, testing, marketing and servicing capabilities)
ISO 9002: (The organization has production, testing, marketing and servicing capabilities)
ISO 9003: (The organization has only testing and QA capabilities)
ISO 9004: (The organization has R&D and continual improvement activities)

CMM
CMM: Capability Maturity Model. It was developed by the SEI (Software Engineering Institute), initiated by the US Defense Department, to help improve software development processes. CMM is applicable only to IT companies.
CMM defined five levels of process maturity:
Level 1: Initial (worship the hero)
Level 2: Repeatable (plan the work)
Level 3: Defined (work the plan)
Level 4: Managed (measure the work)
Level 5: Optimized (work the measures)
What Is ISO/QS 9000?
ISO 9000 is the common term for the family of international quality standards adopted by the International Standards Organization (ISO) that identify the fundamental requirements of a world class quality system. ISO 9001 is applicable when a company has responsibility for design, development, production, installation, and servicing of a product. ISO 9002 is applicable when a company has responsibility for production, installation, and servicing of a product. ISO 9003 is applicable when a company has responsibility for inspection and testing only of a product.
ISO 9000 has gained worldwide acceptance and ISO certification is quickly becoming the admission ticket to rapid success in the international market. The implementation of a Quality Management System, which meets the requirements of ISO 9000, is sound business practice.
QS 9000 is a quality standard adopted by the US automotive industry, which encompasses ISO requirements and also adds automotive industry specific requirements.
How Does ISO/QS Benefit Your Company?
ISO/QS 9000 is about effective management and control of key processes, efficient use of resources, and meeting the needs of all customers. This leads to substantial savings in operational cost, consistency in meeting requirements, and enhanced customer satisfaction. An effective Quality Management System also provides the baseline for further business improvement.

Capability Maturity Model® for Software (SW-CMM®)

The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

SEI CMM – 5 LEVELS
Level 1 – Initial
Level 2 – Repeatable
Level 3 – Defined
Level 4 – Managed
Level 5 – Optimizing

Capability Maturity Model® (SW-CMM®) for Software


The Capability Maturity Model for Software describes the principles and practices underlying software process maturity and is intended to help software organizations improve the maturity of their software processes in terms of an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes. The CMM is organized into five maturity levels:
1) Initial. The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.
2) Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
3) Defined. The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
4) Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
5) Optimizing. Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.
Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief.
Except for Level 1, each maturity level is decomposed into several key process areas that indicate the areas an organization should focus on to improve its software process.
The key process areas at Level 2 focus on the software project's concerns related to establishing basic project management controls. They are Requirements Management, Software Project Planning, Software Project Tracking and Oversight, Software Subcontract Management, Software Quality Assurance, and Software Configuration Management.
The key process areas at Level 3 address both project and organizational issues, as the organization establishes an infrastructure that institutionalizes effective software engineering and management processes across all projects. They are Organization Process Focus, Organization Process Definition, Training Program, Integrated Software Management, Software Product Engineering, Intergroup Coordination, and Peer Reviews.
The key process areas at Level 4 focus on establishing a quantitative understanding of both the software process and the software work products being built. They are Quantitative Process Management and Software Quality Management.
The key process areas at Level 5 cover the issues that both the organization and the projects must address to implement continual, measurable software process improvement. They are Defect Prevention, Technology Change Management, and Process Change Management.
Each key process area is described in terms of the key practices that contribute to satisfying its goals. The key practices describe the infrastructure and activities that contribute most to the effective implementation and institutionalization of the key process area.
