Testing FAQ
Testing presents an interesting anomaly for the software engineer. Earlier in the software process, the engineer attempts to build software from an abstract concept into a tangible implementation. Now comes testing. The engineer creates a series of test cases intended to “demolish” the software that has been built. In fact, testing is the one step in the software engineering process that could be viewed as destructive rather than constructive.
Software developers are by their nature constructive people. Testing requires that the developer discard preconceived notions of the correctness of the software just developed and overcome the conflict of interest that occurs when errors are uncovered.
Testing Principles
Davis suggests a set of testing principles, which have been adapted for use:
1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle applies to software testing.
4. Testing should begin “in the small” and progress toward testing “in the large”.
5. Exhaustive testing is not possible.
6. To be most effective, testing should be conducted by an independent third party.
Pareto Principle:
Simply put, the Pareto principle implies that 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
Testability
The checklist that follows provides a set of characteristics that lead to testable software.
1. Operability – The better it works, the more efficiently it can be tested.
2. Observability – What you see is what you test.
3. Controllability – The better we can control the software, the more the testing can be automated and optimized.
4. Decomposability – By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting.
5. Simplicity – The less there is to test, the more quickly we can test it.
6. Stability – The fewer the changes, the fewer the disruptions to testing.
7. Understandability – The more information we have, the smarter we will test.
Testing Methods
White Box Testing
White box testing, sometimes called glass box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, the test engineer can derive test cases that:
1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true or false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures to assure their validity.
Basis Path Testing
Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Flow Graph Notation
The Flow Graph depicts logical control flow.
Cyclomatic Complexity
Cyclomatic Complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When this metric is used in the context of the basis path testing method, the value computed for Cyclomatic Complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
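For a concrete (purely illustrative) sketch in Python, the function below has three decision points, so its Cyclomatic Complexity is 3 + 1 = 4; the basis set therefore contains four independent paths, and the four test cases shown are enough to execute every statement at least once:

def classify(score):
    # Hypothetical example: three decision points give
    # Cyclomatic Complexity = 3 + 1 = 4 independent paths.
    if score < 0:          # decision 1
        return "invalid"
    if score >= 90:        # decision 2
        return "excellent"
    if score >= 50:        # decision 3
        return "pass"
    return "fail"

# One test case per independent path in the basis set:
assert classify(-5) == "invalid"
assert classify(95) == "excellent"
assert classify(70) == "pass"
assert classify(30) == "fail"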
Black Box Testing
Black box testing focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques.
Black box testing attempts to find errors in the following categories:
1. Incorrect or missing functions.
2. Interface errors.
3. Errors in data structures or external data base access.
4. Performance errors.
5. Initialization and termination errors.
Equivalence Partitioning
Equivalence Partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
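As a brief sketch (the range and function are assumed for illustration only), suppose an input condition specifies the range 18 to 60. Guideline 1 gives one valid and two invalid equivalence classes, so one representative test case per class suffices:

def is_eligible(age):
    # Hypothetical input condition: valid range is 18..60 inclusive.
    return 18 <= age <= 60

# One representative value per equivalence class:
assert is_eligible(35) is True     # valid class: 18..60
assert is_eligible(10) is False    # invalid class: below the range
assert is_eligible(75) is False    # invalid class: above the range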
Boundary Value Analysis
Boundary Value Analysis (BVA) leads to a selection of test cases that exercise bounding values. BVA is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the edges of the class.
The guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and with values just above and just below a and b, respectively.
2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and just below the minimum and maximum are also tested.
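Continuing the hypothetical 18-to-60 range from the equivalence partitioning sketch above, BVA picks values at each bound and just above and just below it, instead of arbitrary members of the classes:

def is_eligible(age):
    # Same hypothetical predicate as in the sketch above.
    return 18 <= age <= 60

# Boundary value analysis for the range a = 18, b = 60:
for age, expected in [
    (17, False),   # just below a
    (18, True),    # a itself
    (19, True),    # just above a
    (59, True),    # just below b
    (60, True),    # b itself
    (61, False),   # just above b
]:
    assert is_eligible(age) is expected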
Inspection
A manual testing technique in which program documents (requirements, design, source code, user manuals, etc.) are examined in a very formal and disciplined manner to discover errors, violations of standards, and other problems. Checklists are a typical vehicle used in accomplishing this technique.
Walkthrough
A manual testing technique in which program logic is traced manually by a group using a small set of test cases, while the state of the program variables is manually monitored to analyze the programmer’s logic and assumptions.
Review
A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types of review include code review, design review, requirements review, etc.
Cyclomatic Complexity
The number of independent paths through a program. The Cyclomatic complexity of a program is equal to the number of decision statements plus 1.
Quality Control
The operational techniques and procedures used to achieve quality requirements.
Types of Testing
The following are the major types of testing.
1. Integration Testing.
2. System Testing.
3. Usability Testing.
4. Compatibility Testing.
5. Reliability Testing.
6. Test Automation.
7. Performance Testing.
8. Supportability Testing.
9. Security and Access Control Testing.
10. Content Management Testing.
11. API Testing.
Let us look at some basic definitions of testing.
Testing
(1) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. (2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e., bugs, and to evaluate the features of the software item. See: dynamic analysis, static analysis, software engineering.
Acceptance Testing
Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. Contrast with testing, development; testing, operational. See: testing, qualification; user acceptance testing.
Alpha [α] Testing
Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment, with the developer observing and recording errors and usage problems.
Assertion Testing
A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes. See: assertion checking, instrumentation.
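A minimal Python sketch of the idea (the function and its assertions are illustrative, not from any source): assertions about the relationship between program variables are embedded in the code and evaluated while it runs:

def average(values):
    # Assertion about an input variable, checked at run time:
    assert len(values) > 0, "precondition violated: empty input"
    result = sum(values) / len(values)
    # Assertion relating the result back to the inputs:
    assert min(values) <= result <= max(values), "postcondition violated"
    return result

print(average([2, 4, 6]))   # both assertions hold; prints 4.0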
Beta [β] Testing
(1) Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer. (2) For medical device software, such use may require an Investigational Device Exemption [IDE] or Institutional Review Board [IRB] approval.
Boundary Value Analysis (BVA)
A testing technique using input values at, just below, and just above the defined limits of an input domain, and with input values causing outputs to be at, just below, and just above the defined limits of an output domain. See: testing, stress.
Branch Testing
Testing technique to satisfy coverage criteria which require that, for each decision point, each possible branch [outcome] be executed at least once. Contrast with testing, path; testing, statement. See: branch coverage.
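A hedged sketch: the hypothetical function below has a single decision point with two possible outcomes, so two test cases achieve full branch coverage (and, in this case, full statement coverage as well):

def absolute(x):
    if x < 0:        # one decision point, two branches
        return -x    # branch taken when the condition is true
    return x         # branch taken when the condition is false

assert absolute(-3) == 3   # exercises the true branch
assert absolute(5) == 5    # exercises the false branch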
Compatibility Testing
The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems. See: different software system analysis; testing, integration; testing, interface.
Formal Testing
Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.
Functional Testing
(1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. Syn: black-box testing, input/output driven testing. Contrast with testing, structural.
Integration Testing
An orderly progression of testing in which software elements, hardware elements, or both are combined and tested to evaluate their interactions, until the entire system has been integrated.
Interface Testing
Testing conducted to evaluate whether systems or components pass data and control correctly to one another. Contrast with testing, unit; testing, system. See: testing, integration.
Invalid Case Testing
A testing technique using erroneous [invalid, abnormal, or unexpected] input values or conditions. See: equivalence class partitioning.
Mutation Testing
A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations.
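A minimal illustration with assumed code: the mutant differs from the original by a single operator, and a test case on which the two programs disagree "kills" the mutant, showing that the test case can detect that kind of fault:

def passes(score):
    return score >= 50        # original program

def passes_mutant(score):
    return score > 50         # mutation: ">=" replaced by ">"

# A boundary test case kills the mutant: the two programs disagree on it.
assert passes(50) is True
assert passes_mutant(50) is False   # the mutation is detected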
Operational Testing
Testing conducted to evaluate a system or component in its operational environment. Contrast with testing, development; testing, acceptance. See: testing, system.
Design-based Functional Testing
The application of test data derived through functional analysis, extended to include design functions as well as requirement functions. See: testing, functional.
Development Testing
Testing conducted during the development of a system or component, usually in the development environment by the developer. Contrast with testing, acceptance; testing, operational.
Exhaustive Testing
Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
Parallel Testing
Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered the standard of comparison. Syn: parallel run.
Path Testing
Testing to satisfy coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes, and one path from each class is then tested. Syn: path coverage. Contrast with testing, branch; testing, statement; branch coverage; condition coverage; decision coverage.
Performance Testing
Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.
Qualification Testing
Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: testing, acceptance; testing, system.
Regression Testing
Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.
Special Case Testing
A testing technique using input values that seem likely to cause program errors; e.g., "0", "1", NULL, empty string. See: error guessing.
Statement Testing
Testing to satisfy the criterion that each statement in a program be executed at least once during program testing. Syn: statement coverage. Contrast with testing, branch; testing, path; branch coverage; condition coverage; decision coverage; multiple condition coverage; path coverage.
Storage Testing
This is a determination of whether or not certain processing conditions use more storage [memory] than estimated.
Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Syn: testing, boundary value.
Structural Testing
(1) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, and statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic driven testing.
System Testing
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.
Test Oracle
A 'Test Oracle' is a mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases.
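A hedged sketch (all names assumed): a deliberately simple reference implementation, kept separate from the program under test, acts as the oracle that decides whether each output is correct:

import random

def program_under_test(items):
    # The implementation whose output we want to check.
    return sorted(items)

def oracle(items):
    # Independent mechanism, different from the program itself:
    # a naive selection sort used only to judge correctness.
    remaining, result = list(items), []
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)
        result.append(smallest)
    return result

for _ in range(100):
    case = [random.randint(0, 99) for _ in range(10)]
    assert program_under_test(case) == oracle(case)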
Unit Testing
(1) Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements. (2) Testing conducted to verify the implementation of the design for one software element, e.g., a unit or module, or a collection of software elements. Syn: component testing.
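A minimal sketch using Python's standard unittest module; the unit under test (add) is hypothetical:

import unittest

def add(a, b):
    # Hypothetical unit (software element) under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()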
Usability Testing
Tests designed to evaluate the machine/user interface. Are the communication device(s) designed in a manner such that information is displayed in an understandable fashion, enabling the operator to correctly interact with the system?
Valid Case Testing
A testing technique using valid [normal or expected] input values or conditions. See: equivalence class partitioning.
Volume Testing
Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.
Note: These definitions are collected from many sources, such as IEEE, ISO, and NBS.