Test Estimation - QA Process Interview Questions
Answer1:
Mercury makes some decent products. QuickTest Pro can be used for a lot of your requirements, though it can be costly and mind-numbing at times.
Answer2:
Selenium is a test tool for web applications. Selenium tests run directly in a browser, just as real users do. They run in Internet Explorer, Mozilla and Firefox on Windows, Linux, and Macintosh. No other test tool covers such a wide array of platforms.
* Browser compatibility testing. Test your application to see if it works correctly on different browsers and operating systems. The same script can run on any Selenium platform.
* System functional testing. Create regression tests to verify application functionality and user acceptance.
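A minimal sketch of driving such a browser-based check with Selenium's Python bindings; the URL and the expected page title are hypothetical placeholders, and the same script body can be pointed at different browsers.

# Minimal Selenium sketch: run the same check in two browsers.
from selenium import webdriver

for make_driver in (webdriver.Firefox, webdriver.Chrome):
    driver = make_driver()
    try:
        driver.get("https://example.com/login")   # hypothetical page under test
        assert "Login" in driver.title            # hypothetical expectation
    finally:
        driver.quit()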
Answer3:
Ruby is becoming a preferred standard for testing
Perl is also used a great deal.
Answer 1:
Are you the programmer who has to fix them, the project manager who has to supervise the programmers, the change control team that decides which areas are too high risk to impact, the stakeholder-user whose organization pays for the damage caused by the defects or the tester?
The tester does not choose which defects to fix.
The tester helps ensure that the people who do choose, make a well-informed choice.
Testers should provide data to indicate the *severity* of bugs, but the project manager or the development team do the prioritization.
When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions.
Priority depends on a wide range of factors, including code-change risk, difficulty/time to complete the change, which stakeholders are affected by the bug, the other commitments being handled by the person most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of most test groups.
Answer 2:
As testers we don't fix the defects, but we surely can prioritize them once they are detected. In our organization we assign a severity level to each defect depending on its influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:
1-critical
2-High
3-Medium
4-Low
5-Cosmetic
Developers can group all the critical ones and fix them before any other defects.
Answer3:
Priority/Severity   P1     P2     P3
S1                  P1S1   P2S1   P3S1
S2                  P1S2   P2S2   P3S2
S3                  P1S3   P2S3   P3S3
Generally, defects are classified in the grid shown above. Every organization/software has some target for fixing the bugs.
Example -
P1S1 -> 90% of the bugs reported should be fixed.
P3S3 -> 5% of the bugs reported may be fixed. Rest are taken in letter service packs or versions.
Thus the organization should decide its target and act accordingly.
Basically bug-free software is not possible.
Answer4:
Ideally, the customer should assign priorities to their requirements. They tend to resist this. On a large, multi-year project I just completed, I would often (in the absence of customer guidelines) rely on my knowledge of the application and the potential downstream impacts on the modeled business process to prioritize defects.
If the customer doesn't, then I feel the test organization should, based on risk or other similar considerations.
What is a Benchmark?
How is it linked with the SDLC
(Software Development Life Cycle)?
Or are the SDLC and a benchmark two unrelated things?
What are the components of a benchmark?
Where does a benchmark fit into software testing?
A Benchmark is a standard to measure against. If you benchmark an application, all future application changes will be tested and compared against the benchmarked application.
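A minimal sketch of that idea: measured timings from the build under test are compared against the recorded baseline (the benchmark). The operation names, numbers, and tolerance are hypothetical.

# Compare the current build's timings against the benchmarked baseline.
baseline = {"login": 1.2, "search": 0.8, "report": 3.5}   # seconds, benchmarked build
current = {"login": 1.3, "search": 1.9, "report": 3.4}    # seconds, build under test
TOLERANCE = 0.25                                          # allow a 25% regression

for operation, base_time in baseline.items():
    new_time = current[operation]
    if new_time > base_time * (1 + TOLERANCE):
        print(f"REGRESSION: {operation} went from {base_time}s to {new_time}s")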
Which things should be considered when testing
a mobile application using black box techniques?
Answer1:
Not sure how your device/server is to operate, so mold these ideas to fit your app. Some highlights are:
Range testing: Ensure that you can reconnect when leaving and returning back into range.
Port/IP/firewall testing - change ports and IPs to ensure that you can connect and disconnect. Modify the firewall to shut off the connection.
Multiple devices - make sure that a user receives his messages with other devices connected to the same IP/port. Your app should have a method to determine which device/user sent the message and return it only to that device; this should be part of the message string sent and received, unless you have conferencing capabilities within the application.
Cycle the power of the server and watch the mobile unit reconnect automatically.
Have the mobile unit send a message and then power off the unit; when powering back on and reconnecting, ensure that the message is returned to the mobile unit.
Answer2:
It is not clear which area of the mobile application you are testing. Whether it is a simple SMS application or a WAP application, you need to specify more details. If you are working with WAP, you can download simulators from the net and start testing on them.
What are the normal practices of QA
specialists from a software perspective?
These are the normal practices of
QA specialists from a software perspective:
[note: these are all QC activities, not QA activities.]
1-Design review meetings with the system analyst and, if possible, participation in requirements gathering
2-Analyzing the requirements and the design and to trace the design specification with respect to the requirements
3-Test Planning
4-Test Case Identification using different techniques (with respect to web based applications and desktop applications)
5-Test Case Writing (This part is to be assigned to the testing engineers)
6-Test Case Execution (This part is to be assigned to the testing engineers)
7-Bug Reporting (This part is to be assigned to the testing engineers)
8-Bug review and analysis so that future bugs can be prevented by designing some standards
How do you test and find the
difference between two images in the same window?
Answer1:
How are you doing your comparison? If you are doing it manually, then you should be able to see any major differences. If you are using an automated tool, then there is usually a comparison facility in the tool to do that.
Answer2:
JasPer is open-source software which can be compiled with C/C++ and has an imgcmp utility which compares JPEG files in very good detail, as long as they have the same dimensions and number of components.
Answer3:
Rational has a comparison tool that may be used. I'm sure Mercury has a similar tool.
Answer4:
The key question is whether we need a bit-by-bit exact comparison, which the current tools are good at, or an equivalency comparison. What differences between these images are not differences? Near-match comparison has been the subject of a lot of research in printer testing, including an M.Sc. thesis at Florida Tech. It's a tough problem.
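For the exact, bit-by-bit case the tools above handle well, a minimal sketch with the Pillow imaging library (file names are hypothetical); near-match or equivalency comparison would need a looser metric, as noted above.

# Exact pixel comparison of two images using Pillow's ImageChops.
from PIL import Image, ImageChops

baseline = Image.open("expected.png").convert("RGB")
actual = Image.open("actual.png").convert("RGB")

diff = ImageChops.difference(baseline, actual)
if diff.getbbox() is None:
    print("Images are pixel-identical")
else:
    print("Images differ within bounding box:", diff.getbbox())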
When do we prepare a Test Plan?
[Do you always prepare a Test Plan for every new version or release of the product?]
For four or five features at once, a single plan is fine. Write new test cases rather than new test plans. We write test plans for two very different purposes: sometimes the test plan is a product; sometimes it's a tool.
How to test a desktop system?
You will likely have to use a
programming or scripting language to interact with the service directly. You
will have more control over the raw information that way.
You will have to determine what the service is supposed to do and how it is supposed to interact with other applications and services. A data dictionary likely exists. It may not be called that however. What this document does is explain what commands the service will respond to and what sort of data should be sent. You will have to use this document to do your testing. Get close to the person or people who created the document or the service and expect them to keep you in the loop when changes take place (it doesn't help anyone if you report a defect and it's really only reflecting an expected change in the operation of the service).
Desktop applications are generally designed to run and quit. You have to be concerned with memory leaks and system usage.
How do you create a test
strategy?
The test strategy is a formal description
of how a software product will be tested. A test strategy is developed for all
levels of testing, as required. The test team analyzes the requirements, writes
the test strategy and reviews the plan with the project team. The test plan may
include test cases, conditions, the test environment, a list of related tasks,
pass/fail criteria and risk assessment.
Inputs for this process:
* A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
* A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
* Testing methodology. This is based on known standards.
* Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
* Requirements that the system can not provide, e.g. system limitations.
Outputs for this process:
* An approved and signed off test strategy document, test plan, including test cases.
* Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
How do you estimate testing
effort?
Time Estimation method for
Testing Process
Note: the following method is based on a use-case-driven specification.
Step 1: Count the number of use cases (NUC) of the system.
Step 2: Set the average number of test cases per use case (ATTC) as per the test plan.
Step 3: Estimate the total number of test cases (NTC).
Total number of test cases = Number of use cases x Avg test cases per use case
Step 4: Set the average execution time (AET) per test case (ideally 15 min; depends on your system).
Step 5: Calculate the total execution time (TET).
TET = Total number of test cases * AET
Step 6: Calculate the test case creation time (TCCT); usually we take 1.5 times TET.
TCCT = 1.5 * TET
Step 7: Calculate the time for retest case execution (RTCE), i.e. for retesting; usually we take 0.5 times TET.
RTCE = 0.5 * TET
Step 8: Set the report generation time (RGT); usually we take 0.2 times TET.
RGT = 0.2 * TET
Step 9: Set the test environment setup time (TEST); this also depends on the test plan.
Step 10: Total estimated time = TET + TCCT + RTCE + RGT + TEST + some buffer.
Example
Total number of use cases (NUC): 227
Average test cases per use case (ATTC): 10
Estimated test cases (NTC): 227 * 10 = 2270
Total execution time (TET): 2270 / 4 = 567.5 hr
Test case creation time (TCCT): 1.5 * 567.5 = 851.25 hr
Time for retesting (RTCE): 0.5 * 567.5 = 283.75 hr
Report generation (RGT): 0.2 * 567.5 = 113.5 hr
Test environment setup time (TEST): 20 hr
-------------------
Total: 1836 hr + buffer
-------------------
Here 4 means the number of test cases executed per hour, i.e. each test case takes 15 min to execute.
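A minimal sketch of the same arithmetic; the ratios follow the steps above, and the counts and per-test timing are the example's assumptions.

# Test-effort estimation following the use-case-driven steps above.
num_use_cases = 227
avg_tests_per_use_case = 10
exec_time_per_test = 0.25                      # hours (15 minutes per test case)

ntc = num_use_cases * avg_tests_per_use_case   # total number of test cases
tet = ntc * exec_time_per_test                 # total execution time
tcct = 1.5 * tet                               # test case creation time
rtce = 0.5 * tet                               # retest execution time
rgt = 0.2 * tet                                # report generation time
env_setup = 20                                 # hours, taken from the test plan

total = tet + tcct + rtce + rgt + env_setup
print(f"Estimated effort: {total:.2f} hours, plus buffer")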
Why should QA not report to
development?
Based on research from the
Quality Assurance Institute, the percentage of quality groups reporting to each
location is noted below:
50% - report to a Senior IT Manager. This is the best positioning because it gives the Quality Manager immediate access to the IT Manager to discuss and promote quality issues. When the quality manager reports elsewhere, quality issues may not be raised to the appropriate level or receive the necessary action.
25% - report to the Manager of Systems/Programming.
15% - report to the Manager of Operations.
10% - report outside the IT function.
What stage of bug fixing is the
most cost effective?
Bug prevention techniques (e.g.
inspections, peer design reviews, and walk-throughs) are more cost-effective
than bug detection.
What is the Defect Life Cycle?
Answer1:
The defect life cycle consists of the different stages a defect passes through after it is identified:
New (when the defect is identified)
Accepted (when the development team and QA team accept that it is a bug)
In Progress (when a person is working to resolve the defect)
Resolved (once the defect is resolved)
Completed (confirmed by someone who can take up the responsibility, e.g. the team lead)
Closed/Reopened (retested by the test engineer, who updates the status of the bug)
Answer2:
Defect Life Cycle is nothing but the various phases a Bug undergoes after it is raised or reported.
A general Interview answer can be given as:
1. New or Opened
2. Assigned
3. Fixed
4. Tested
5. Closed.
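A minimal sketch of enforcing such a workflow as a simple state machine; the states follow the simple cycle above, and the allowed transitions are assumptions rather than the behavior of any particular tracking tool.

# Defect life cycle as a state machine with allowed transitions.
ALLOWED = {
    "New": {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed": {"Tested"},
    "Tested": {"Closed", "Assigned"},   # reopen by assigning again
    "Closed": set(),
}

def move(current_state, new_state):
    # Refuse any transition the workflow does not allow.
    if new_state not in ALLOWED[current_state]:
        raise ValueError(f"Illegal transition {current_state} -> {new_state}")
    return new_state

state = "New"
for step in ("Assigned", "Fixed", "Tested", "Closed"):
    state = move(state, step)
print("Final state:", state)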
What is the difference between a
software bug and software defect?
"Software bug" is nonspecific;
it means an inexplicable defect, error, flaw, mistake, failure, fault, or
unwanted behavior of a computer program. Other terms, e.g. "software
defect", or "software failure", are more specific.
While the word "bug" has been a part of engineering jargon for many-many decades; many-many decades ago even Thomas Edison, the great inventor, wrote about a "bug" - today there are many who believe the word "bug" is a reference to insects that caused malfunctions in early electromechanical computers.
While the word "bug" has been a part of engineering jargon for many-many decades; many-many decades ago even Thomas Edison, the great inventor, wrote about a "bug" - today there are many who believe the word "bug" is a reference to insects that caused malfunctions in early electromechanical computers.
In software testing, the
difference between "bug" and "defect" is small, and also
depends on the end client. For some clients, bug and defect are synonymous,
while others believe bugs are subsets of defects.
Difference number one: In bug reports, the defects are easier to describe.
Difference number two: In my bug reports, it is easier to write descriptions as to how to replicate defects. In other words, defects tend to require only brief explanations.
Commonality number one: We, software test engineers, discover both bugs and defects, before bugs and defects damage the reputation of our company.
Commonality number two: We, software QA engineers, use the software much like real users would, to find both bugs and defects, to find ways to replicate both bugs and defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
Commonality number three: We, software QA engineers, do not differentiate between bugs and defects. In our reports, we include both bugs and defects that are the results of software testing.
Are developers smarter than
testers? Any suggestions about the future prospects and technicality involved in testing jobs?
Answer1:
QA & Testing are thankless jobs. In a software development company developer is a core person. As you are a fresh graduate, it would be good for you to work as a developer. From development you can always move to testing or QA or other admin/support tasks. But from Testing or QA it is little difficult to go back to development, though not impossible(as u are BE comp)
Seeing the job market, it is not possible for each & every fresher to get into development. But you can keep searching for it.
Some big company's have separate Verification & Validation groups where only testing projects are executed. Those teams have TLs, PLs who are testing experts. They earn good salary same as development people.
In technical projects the testing team does lot of technical work. You can do certifications to improve your technical skills & market value.
It all depends on your way of handling things & interpersonal, communication and leadership skills. If it is difficult for you to get a job in development or you really like testing, just go ahead. Try to achieve excellence as a testing professional. You will never have a job problem .Also you will always get onsite opportunities too!! You might have to struggle for initial few years like all other freshers.
Answer2:
QA and Testing are thankless only in some companies.
Testing is part of development. Rather than distinguish between testing and development, distinguish between testing and programming.
Programming is also thankless in some companies.
I am not suggesting that anyone should or should not go into testing. It depends on your skills and interests. Some people are better at programming and worse at testing, some better at testing and worse at programming, and some are not suited for either role. You should decide what you are good at and what fascinates you. What type of work would make you WANT to stay at work for 60-80 hours a week for a few years because it is so interesting?
I am suggesting that there are excellent testing jobs out there, but there are bad ones too (in testing and in programming, both).
I have not seen any certification in software testing that improves the technical skill of anyone. Apparently, testing certification improves a tester's market value in some markets.
Most companies mean testing when they say "QA". Or they mean Testing plus Metrics, where the metrics tasks are low-skill data collection and basic data analysis rather than thinking up and justifying measurement systems appropriate to the questions at hand. In terms of skill, salary, intellectual challenge and value to the company, testing+metrics is the same as testing. Some companies see QA more strategically, and hire more senior people into their groups. Here is a hint--if you can get a job in a group called QA with less than 5 years of experience, it's a testing group or something equivalent to it.
Answer3:
No job is inherently great or menial. As long as you like and love what you do, everything in it seems interesting.
I started as a developer and slowly moved to testing. I find testing to be more challenging and interesting. I have a solid 6 years of testing experience alone, and there are many senior people in my team who are professional testers.
Answer4:
Testing is low-skill work in many companies.
Scripted testing of the kind pushed by ISEB, ISTQB, and the other certifiers is low skill, low prestige, offers little return value to the company that pays for it, and is often pushed to offsite contracting firms because it isn't worth doing in-house. In many cases, it is just a process of "going through the motions" -- pretending to do testing (and spending a lot of money in the pretense) but without really looking for any important information and without creating any artifacts that will be useful to the project team.
The only reason to take a job doing this kind of work is to get paid for it. Doing it for too long is bad for your career.
There are much higher-skill ways to do testing. Some of them involve partial automation (writing or using programs to help you investigate the program more effectively), but automation tools are just tools. They are often used just as mind-numbingly and valueless as scripted manual testing. When you're offered this kind of position, try to find out how much judgment you will have to exercise in the analysis of the product under test and the ways that it provides value to the users and other stakeholders, in the design of tests to check that value and to check for other threats to value (security failures, performance failures, usability failures, etc.)--and how much this position will help you develop your judgment. If you will become a more skilled and more creative investigator who has a better collection of tools to investigate with, that might be interesting. If not, you will be marking time (making money but learning little) while the rest of the technical world learns new ideas and skills.
What's the difference between
priority and severity?
The word "priority" is
associated with scheduling, and the word "severity" is associated
with standards. "Priority" means something is afforded or deserves
prior attention; a precedence established by urgency or order of importance.
Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles. For example, a severe code of behavior.
The words priority and severity do come up in bug tracking. A variety of commercial, problem-tracking / management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it. The fixes are based on project priorities and severity of bugs. The severity of a problem is defined in accordance with the end client's risk assessment, and recorded in their selected tracking tool. Buggy software can severely affect schedules, which, in turn, can lead to a reassessment and renegotiation of priorities.
How to test a web based
application that has recently been modified to give support for Double Byte
Character Sets?
Answer1:
You should apply black box testing techniques (boundary analysis, equivalence partitioning).
Answer2:
Japanese and other East Asian customers are very particular about the look and feel of the UI, so please make sure there is no truncation anywhere.
One major difference between Japanese and English is that there is no concept of spaces between words in Japanese. Line breaks in English usually happen wherever there is a space. In Japanese this leads to a lot of problems with text wrapping, and if you have a table with a defined column length, you might see text appearing vertically.
On the functionality side:
1. Check the date format and number format (they should be in the native locale).
2. Check that your system accepts 2-byte numerals and characters.
3. If a field has a boundary value of 100 characters, the field should accept the same number of 2-byte characters as well (see the sketch below).
4. The application should work on a Native (Chinese, Japanese, Korean) OS as well as on an English OS with the language pack installed.
Writing a high level test plan for 2-byte support will require some knowledge of the application and its architecture.
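A minimal sketch of the boundary check in point 3; submit_field() is a hypothetical stand-in for however the application under test accepts input, and the 100-character limit is the example's assumption.

# Boundary check: a 100-character field should accept 100 double-byte characters.
def submit_field(value):
    # Hypothetical hook into the application under test; returns True if accepted.
    return len(value) <= 100

ascii_value = "A" * 100          # 100 single-byte characters
dbcs_value = "\u30c6" * 100      # 100 Japanese katakana characters (2-byte in DBCS encodings)

assert submit_field(ascii_value), "100 single-byte characters should be accepted"
assert submit_field(dbcs_value), "100 double-byte characters should also be accepted"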
How to use methods/techniques to
test the bandwidth usage of a client/server application?
Bandwidth Utilization:
Basically, in the client-server model you will be most concerned about bandwidth usage if your application is a web based one. It is certainly a concern when throughput and data transfer come into the picture.
I suggest you use RadView's WebLOAD load and stress testing tool for this.
A demo version is available. You can record the scenarios of a normal user over variable connection speeds and then run them for hours to learn about bandwidth utilisation, throughput, data transfer rate, hits per second, etc. There is a huge list of parameters which can be tested over any number of combinations.
How to Read data from the Telnet
session?
I declared:
[+] window DialogBox Putty
[ ] tag "* - PuTTY"
[ ]
[ ] // Capture the screen contents and return as a list of strings
[+] LIST OF STRING getScreenContents()
[ ]
[ ] LIST OF STRING ClipboardContents
[ ]
[ ] // open the system menu and select copy all to clipboard menu command
[ ] this.TypeKeys("<ALT-SPACE>o")
[ ]
[ ] // get the clipboard contents
[ ]
[ ] ClipboardContents = Clipboard.getText()
[ ] return ClipboardContents
I then created a function that searches the screen contents for the required data to validate. This works fine for me. Here it is to study. Hope it may help
void CheckOutPut(string sErrorMessage)
[ ]Putty.setActive ()
[ ]
[ ] // Capture screen contents
[ ] lsScreenContents = Putty.GetScreenContents ()
[ ] Sleep(1)
[ ] // Trim Screen Contents
[ ] lsScreenContents = TrimScreenContents (lsScreenContents)
[ ] Sleep(1)
[-] if (sBatchSuccess == "Yes")
[-] if (ListFind (lsScreenContents, "BUILD FAILED"))
[ ] LogError("Process should not have failed.")
[-] if (ListFind (lsScreenContents, "BUILD SUCCESSFUL"))
[ ] Print("Successful")
[ ] break
[ ] // Check to see if launcher has finished
[-] else
[-] if (ListFind (lsScreenContents, "BUILD FAILED") == 0)
[ ] LogError("Error should have failed.")
[ ] break
[-] else
[ ] // Check for Date Conversion Error
[-] if (ListFind (lsScreenContents, sErrorMessage) == 0)
[ ] LogError ("Error handle")
[ ] Print("Expected - {sErrorMessage}")
[ ] ListPrint(lsScreenContents)
[ ] break
[-] else
[ ] break
[ ]
[ ] // Raise exception if kPlatform not equal to windows or putty
[+] default
[ ] raise 1, "Unable to run console: - Please specify setting"
[ ]
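An alternative sketch: reading the session output directly with Python's standard telnetlib module instead of scraping the PuTTY window. The host, credentials, command, and prompts are hypothetical.

# Read build output straight from a telnet session instead of the PuTTY clipboard.
import telnetlib

tn = telnetlib.Telnet("build-server.example.com", 23, timeout=10)   # hypothetical host
tn.read_until(b"login: ")
tn.write(b"tester\n")
tn.read_until(b"Password: ")
tn.write(b"secret\n")
tn.write(b"run_build.sh\n")                                         # hypothetical command

output = tn.read_until(b"BUILD SUCCESSFUL", timeout=300).decode("ascii", "replace")
if "BUILD FAILED" in output:
    print("Process should not have failed.")
tn.close()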
How do you trace a fixed bug in a test
case?
Answer1:
The fixed defects can be tracked in the defect tracking tool. I think it is out of scope of a test case to maintain this.
The defect tracking tool should indicate that the problem has been fixed, and the associated test case now has a passing result.
If and when you report test results for this test cycle, you should provide this sort of information; i.e., test failed, problem report written, problem fixed, test passed, etc...
Answer2:
We use Jira (like Bugzilla) to manage our test cases as well as our bugs. When a test discovers a bug, you link the two, marking the test as "in work" and "waiting for bug X". Now, when the developer resolves the bug, you see the link to the test case and retest/close it.
After the migration is done, how do you
test the application (the front end hasn't changed, just the database)?
Answer1:
You can concentrate only on those test cases which involve DB transactions like insert, update, delete, etc.
Answer2:
Focus on the database tests, but it's important to analyze the differences between the two schemas. You can't just focus on the front end. Also, be careful to look for shortcuts that the DBAs may be taking with the schema.
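A minimal sketch of one such schema check: list each table's columns in the old and new databases and flag any drift. SQLite's PRAGMA is used as a stand-in; against SQL Server or another DBMS you would query its catalog (e.g. INFORMATION_SCHEMA). The file and table names are hypothetical.

# Compare column lists per table between the pre- and post-migration databases.
import sqlite3

def columns(conn, table):
    # PRAGMA table_info returns one row per column; index 1 is the column name.
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

old_db = sqlite3.connect("before_migration.db")
new_db = sqlite3.connect("after_migration.db")

for table in ("customers", "orders", "order_items"):
    old_cols, new_cols = columns(old_db, table), columns(new_db, table)
    if old_cols != new_cols:
        print(f"Schema drift in {table}: {old_cols} -> {new_cols}")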
What is the difference between
reliability testing and load testing?
The term, reliability testing, is
often used synonymously with load testing. Load testing is a blanket term that
is used in many different ways across the professional software testing
community. Load testing generally stops short of stress testing. During stress
testing, the load is so great that errors are the expected results, though
there is gray area in between stress testing and load testing.
Some general guidelines on what
to test for web based applications.
1. Navigation: Users move to and
from pages, click on links, click on images (thumbnails), etc. Navigation in a
WebSite should be quick and error free.
2. Server Response. How fast the WebSite host responds influences whether a user (i.e. someone on the browser) moves on or gives up.
3. Interaction & Feedback. For passive, content-only sites the only real quality issue is availability. For a WebSite that interacts with the user, the big factor is how fast and how reliable that interaction is.
4. Concurrent Users. Do multiple users interact on a WebSite? Can they get in each others' way? While WebSites often resemble client/server structures, with multiple users at multiple locations a WebSite can be much different, and much more complex, than typical client/server applications.
5. Browser Independent. Tests should be realistic, but not be dependent on a particular browser
6. No Buffering, Caching. Local caching and buffering -- often a way to improve apparent performance -- should be disabled so that timed experiments are a true measure of the Browser response time.
7. Fonts and Preferences. Most browsers support a wide range of fonts and presentation preferences
8. Object Mode. Edit fields, push buttons, radio buttons, check boxes, etc. All should be treatable in object mode, i.e. independent of the fonts and preferences.
9. Page Consistency. Is the entire page identical with a prior version? Are key parts of the text the same or different?
10. Table, Form Consistency. Are all of the parts of a table or form present? Correctly laid out? Can you confirm that selected texts are in the "right place"?
11. Page Relationships. Are all of the links on a page the same as they were before? Are there new or missing links? Are there any broken links? (A minimal link-check sketch appears after this list.)
12. Performance Consistency, Response Times. Is the response time for a user action the same as it was (within a range)?
13. Image File Size. File size should be closely examined when selecting or creating images for your site. This is particularly important when your site is directed to an audience that may not enjoy the high-bandwidth and fast connection speeds available
14. Avoid the use of HTML "frames". The problems with frames-based site designs are well documented, including: the inability to bookmark subcategories of the site, difficulty in printing frame cell content, and disabling the Web browser's "back" button as a navigation aid.
15. Security. Ensure data is encrypted before transferring sensitive information, wherever required. Test user authentication thoroughly. Ensure all back doors and test logins are disabled before going live with the web application.
16. Sessions. Ensure session validity is maintained throughout a web transaction, e.g. filling a web form that spans several web pages. Forms should retain information when using the 'back' button wherever required for user convenience. At the same time, forms need to be reset wherever security is an issue, such as password fields.
17. Error handling. Web navigation should be quick and error free. However, sometimes errors cannot be avoided. It would be a good idea to have a standard error page that handles all errors. This is cleaner than displaying the 404 page. After displaying the error page, users can then be automatically redirected to the home page or any other relevant page. At this same time, this error can also be logged and a message can be sent to notify the admin.
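A minimal sketch for point 11: crawl one page and report broken links. The start URL is hypothetical, and the widely used requests and BeautifulSoup libraries are assumed to be available.

# Report links on a page that respond with an HTTP error status.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

start = "https://example.com/"                      # hypothetical page under test
page = requests.get(start, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

for anchor in soup.find_all("a", href=True):
    url = urljoin(start, anchor["href"])
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    if status >= 400:
        print(f"Broken link: {url} -> HTTP {status}")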
What is the role of documentation
in QA?
Documentation plays a critical
role in QA. QA practices should be documented, so that they are repeatable.
Specifications, designs, business rules, inspection reports, configurations,
code changes, test plans, test cases, bug reports, user manuals should all be
documented. Ideally, there should be a system for easily finding and obtaining documents and determining which document will have a particular piece of
information. Use documentation change management, if possible.
How do you introduce a new
software QA process?
It depends on the size of the
organization and the risks involved. For large organizations with high-risk
projects, a serious management buy-in is required and a formalized QA process
is necessary. For medium size organizations with lower risk projects,
management and organizational buy-in and a slower, step-by-step process is
required. Generally speaking, QA processes should be balanced with
productivity, in order to keep any bureaucracy from getting out of hand. For
smaller groups or projects, an ad-hoc process is more appropriate. A lot
depends on team leads and managers, feedback to developers and good
communication is essential among customers, managers, developers, test
engineers and testers. Regardless of the size of the company, the greatest value
for effort is in managing requirement processes, where the goal is requirements
that are clear, complete and testable.
What is a Coverage Matrix? Or
What is a traceability matrix?
Answer1:
It is a mapping of one baselined object to another. For testers, the most common documents to be linked in this manner are a requirements document and the written test cases for that document.
In order to facilitate this, testers can add an extra column to their test cases listing the requirement being tested.
The requirements matrix is usually stored in a spreadsheet. It contains the test IDs down the left side and the requirement IDs across the top. For each test, you place a mark in the cell under the heading of each requirement it is designed to test. The goal is to find out which requirements are under-tested and which are either over-tested or so large that too many tests have to be written to test them adequately.
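A minimal sketch of the same idea in code form; the test and requirement IDs are hypothetical. Requirements with no mark fall out as untested, just as in the spreadsheet version.

# Build a requirement-coverage count from a test-to-requirement mapping.
tests = {
    "TC-001": ["REQ-1", "REQ-2"],
    "TC-002": ["REQ-2"],
    "TC-003": ["REQ-2", "REQ-4"],
}
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

coverage = {req: 0 for req in requirements}
for covered in tests.values():
    for req in covered:
        coverage[req] += 1

for req, count in coverage.items():
    print(f"{req}: {'UNTESTED' if count == 0 else str(count) + ' test(s)'}")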
Answer2:
The traceability matrix means mapping of all the work products (various design docs, testing docs) to requirements.
What are SRS and BRS, and what is
the difference between them?
Answer1:
SRS - Software Requirements Specification BRS - Business Requirements Specification
Answer2:
BRS - Business Requirements Specification
This document has to come from the client, stating the need for a particular module or project. It basically tells you why a particular request is needed; reasons have to be given. It is mostly a layperson's document. It has to be approved by the Project Manager.
SRS - Software Requirements Specification
This follows the BRS after its approval. It gives detailed functional and other details about the project: requirements, use cases, references, etc., and how each module works in detail.
Your SRS cannot start without a BRS and an approval of the same.
What is API Testing?
An API (Application Programming
Interface) is a collection of software functions and procedures, called API
calls, that can be executed by other software applications. Application developers write code that links to existing APIs to make use of their functionality. This link
is seamless and end-users of the application are generally unaware of using a
separately developed API.
During testing, a test harness (an application that links to the API and methodically exercises its functionality) is constructed to simulate the use of the API by end-user applications. The interesting problems for testers are (see the sketch after this list):
1. Ensuring that the test harness varies parameters of the API calls in ways that verify functionality and expose failures. This includes assigning common parameter values as well as exploring boundary conditions.
2. Generating interesting parameter value combinations for calls with two or more parameters.
3. Determining the context under which an API call is made. This might include setting external environment conditions (files, peripheral devices, and so forth) and also internal stored data that affect the API.
4. Sequencing API calls to vary the order in which the functionality is exercised and to make the API produce useful results from successive calls.
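As a rough illustration of points 1, 2 and 4, here is a hedged C# sketch of a tiny harness; the Divide function is a stand-in for an assumed API call, not part of any real library.

using System;

class ApiHarnessSketch
{
    // Stand-in for the API under test; in practice the harness links the real library.
    static double Divide(double numerator, double denominator) => numerator / denominator;

    static void Main()
    {
        // Common values plus boundary conditions (zero, very large, very small).
        double[] numerators = { 0, 1, -1, double.MaxValue, double.Epsilon };
        double[] denominators = { 1, -1, 0.5, double.MaxValue };

        // Exercise the combinations of the two parameters (point 2),
        // in a fixed sequence so any failure is reproducible (point 4).
        foreach (var n in numerators)
            foreach (var d in denominators)
                Console.WriteLine($"Divide({n}, {d}) = {Divide(n, d)}");
    }
}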
How to test a module (web based, developed in .NET) which would load data from a list (a text file) into the database (SQL Server)? It would touch approx 10 different tables depending on data in the list.
The job is to verify that the data which is supposed to be loaded gets loaded correctly. The list might contain 60 million records. Any suggestions? * Compare the record counts before and after the load and match them with the expected data load. * Sample records should be taken to ensure data integrity.
* Include test cases where the loaded data is visible functionally through the application. For example, if the data loads new users into the system, then the login functionality using the new user login credentials should work, etc.
Finally, regarding tools available in the market: you can be innovative in using functional automation tools like WinRunner and adding DB checkpoints, or you can write SQL to do the back-end testing. The test scenario (test case) details determine which tools/techniques to narrow in on.
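Below is a hedged C# sketch of the record-count comparison suggested above; the connection string, the dbo.Users table and the file path are placeholders, not the actual schema.

using System;
using System.IO;
using System.Linq;
using System.Data.SqlClient;

class LoadVerificationSketch
{
    static void Main()
    {
        // Placeholder file path: one record per line in the input list.
        int expected = File.ReadLines(@"C:\data\users.txt").Count();

        // Placeholder connection string and table name.
        using (var conn = new SqlConnection("Server=.;Database=TestDb;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Users", conn);
            int actual = (int)cmd.ExecuteScalar();

            Console.WriteLine(actual == expected
                ? "Record counts match."
                : $"Mismatch: file has {expected} rows, table has {actual}.");
        }
    }
}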
What are the responsibilities of
a QA engineer?
Let's say, an engineer is hired
for a small software company's QA role, and there is no QA team. Should he take
responsibility to set up a QA infrastructure/process, testing and quality of
the entire product? No, because taking this responsibility is a classic trap
that QA people get caught in. Why? Because we QA engineers cannot assure
quality. And because QA departments cannot create quality.
What we CAN do is to detect lack of quality, and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label, and tell the developers that they are responsible for the quality of their own work. The problem is, sometimes, as soon as the developers learn that there is a test department, they will slack off on their testing. We need to offer to help with quality assessment, only.
What is the role of a QA
engineer?
The QA engineer's role is as
follows: We, QA engineers, use the system much like real users would, find all
the bugs, find ways to replicate the bugs, submit bug reports to the
developers, and provide feedback to the developers, i.e. tell them if they've
achieved the desired level of quality.
What is the difference between ISO and CMM?
Answer1:
CMM is oriented towards software engineering process improvement and never speaks of customer satisfaction, whereas ISO 9001:2000 speaks of process improvements generic to all organisations and also speaks of customer satisfaction.
A2:
FYI, there are 3 popular ISO standards that are commonly used for software projects: ISO/IEC 12207, ISO/IEC 15504, and ISO 9001 (a subset of the 9000 family). For CMM, the latest version is 1.1; however, it is already considered a legacy standard and is being replaced by CMMI, whose latest version is also 1.1. For further information on CMM/CMMI, visit the following:
http://www.sei.cmu.edu/cmm/
http://www.sei.cmu.edu/cmmi/
Need to shut down network connectivity mid-transaction. How to do this programmatically via the Windows interface?
From the command line, IPCONFIG /RELEASE should do it, or do it the old-fashioned way: remove the cable from your machine. If you are using a wireless connection, it is better to use ipconfig.
How to write Test Case for
telephone ?
Answer1:
Test cases for telephone
test the "functionality" of telephone,
1. Test for presence of dial tone.
2. Dial Local number and check that receiver phone(dialled no.) rings.
3. Dial any STD number and check that intended phone number rings.
4. Dial the number of "under test" phone and check that it rings.
5. When ringing, pick it up and check that ringing stops.
6. When talking - then there should be no noise or disturbance.
7. Check that "redial" works properly.
8. Check STD lock facility works.
9. Check speed dialing facility.
10. Check for call waiting facility.
11. Check that only the caller can disconnect the call.
12. If "telephone Under test" is engaged with any caller and at this time if a third caller attempts to call the "telephone under test" then call between two other parties should not get disconnected.
13. If "telephone Under test" is engaged with any caller and at this time if a third caller attempts to call the "telephone under test" then third caller will listen to engage tone or message from exchange.
14. Check for volume(increase or decrease) of the handset.
15. Keep the handset off the base unit and attempt to call the "telephone under test"; it should not ring.
16. Check for call transfer facility.
Test the telephone itself:
1. Check for extreme temperatures (hot and cold)
2. Check for different atmospheric conditions (humidity etc..)
3. Check for extreme power conditions
4. Check for button durability
5. Check for body strength
etc...
Answer2:
My company designs and builds phone system software, so I am very familiar with phone testing. You could be dealing with an IVR system that has menu-driven logic, or you could be dealing with an auto-attendant with directory features. The basic idea is that you need to be able to define your expected results and record your actual results. The medium is different, but the same basic concepts apply. In some ways the phone is easier because it can be a more linear process than, say, a web system.
What will be tested on a static web page?
1. Testing that all links are working properly.
There are link checker programs that can help you verify whether your links are broken (a minimal sketch appears at the end of this answer).
2. Test GUI design.
3. Test spelling and grammar for contents.
4. Test page fonts are consistent.
Again, depending on the page this may not be essential, but you can suggest to the designer to use a cascading style sheet to easily maintain a consistent style across pages.
5. Title bar message testing.
6. Status bar message testing.
7. Scroll bars presence on the page.
8. Browser compatibility (IE and Netscape).
IE and Firefox. Ironically, Netscape 8 now has two modes that allow you to switch between the Gecko render engine used in Firefox and the internal IE render engine that ships with every Windows OS. It's very cool and it can save you a lot of time.
9. Changing browser options of IE from Tools --> Internet Options.
10. Changing settings on the Advanced tab.
11. Changing the font and font size for the browser.
12. Changing any privacy option from Tools --> Internet Options.
13. Check that the images are present.
14. Conformance to W3C standards with respect to tags.
That's a pretty big topic, but I can touch on it. Every HTML document should tell the browser about the DTD that it was built using. Things like
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
Each DTD version has different standards. Some allow frames others don't, etc. You will have to learn what the DTD is supposed to use and what it's not supposed to use. Only the best web designers have the various DTDs memorised. Fortunately the W3C has made a page that will validate your pages for you at http://validator.w3.org/
After your page passes through that you will get a report that lists errors and info. While most render engines will gloss over the errors and display the page "correctly", it may cause problems further down the road when editing the page. You can discuss these things with your web designer.
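Referring back to point 1, here is a minimal, hedged C# sketch of a link check using HttpClient; the list of URLs is hypothetical, and a full checker would also parse each page to harvest its links.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class LinkCheckerSketch
{
    static async Task Main()
    {
        // Hypothetical list of links harvested from the page under test.
        string[] links = { "http://validator.w3.org/", "http://example.com/missing-page" };

        using (var client = new HttpClient())
        {
            foreach (var url in links)
            {
                try
                {
                    var response = await client.GetAsync(url);
                    Console.WriteLine($"{url} -> {(int)response.StatusCode} {response.StatusCode}");
                }
                catch (HttpRequestException ex)
                {
                    Console.WriteLine($"{url} -> request failed: {ex.Message}");
                }
            }
        }
    }
}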
How to test an application in Flash?
Manually testing Flash animations is as simple as making sure that the objects do what they're supposed to do. Testing is mostly manual because Flash isn't really a programming language; most developers consider it to be a toy, so the big automation companies won't provide plug-ins for Flash objects.
If the Flash application is an e-learning application:
1. Need to know the hardware configuration, because if the animation contains heavy images or movie files it will work slowly, which is an error; every image and movie should be lightweight, as far as quality allows.
2. File naming convention.
3. Flash detection.
4. Objects should do what they are supposed to do.
5. Etc.
If the Flash application is a web application:
1. File size should be light, because most users don't have a high-speed connection; load testing is required.
2. Quality of texts, images and movies.
What kind of automated software
used to test a Web-based application with a .NET (ASP.NET and C#...also SQL
Server) framework?
Answer1:
Mercury makes some decent products. Quick Test Pro can be used for a lot of your requirements... It can be costly and mind-numbing at times though.
Answer2:
Selenium is a test tool for web applications. Selenium tests run directly in a browser, just as real users do. And they run in Internet Explorer, Mozilla and FireFox on Windows, Linux, and Macintosh. No other test tool covers such a wide array of platforms.
* Browser compatibility testing. Test your application to see if it works correctly on different browsers and operating systems. The same script can run on any Selenium platform.
* System functional testing. Create regression tests to verify application functionality and user acceptance.
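The answer above describes classic Selenium; as a hedged illustration, the sketch below uses the current Selenium WebDriver C# bindings (OpenQA.Selenium) against a hypothetical login page, so the URL and element names are assumptions.

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class SeleniumSmokeTestSketch
{
    static void Main()
    {
        // Assumes the Selenium.WebDriver NuGet package and a local ChromeDriver.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://example.com/login");        // hypothetical page
            driver.FindElement(By.Name("username")).SendKeys("testuser"); // hypothetical fields
            driver.FindElement(By.Name("password")).SendKeys("secret");
            driver.FindElement(By.Id("loginButton")).Click();

            // Crude functional check: the post-login page should mention the user.
            bool loggedIn = driver.PageSource.Contains("testuser");
            System.Console.WriteLine(loggedIn ? "Login flow OK" : "Login flow FAILED");
        }
    }
}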
Answer3:
Ruby is becoming a preferred standard for testing
Perl is also used a great deal.
What if the organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the
software industry, especially in new technology areas. There is no easy
solution in this situation, other than...
* Hire good people
* Ruthlessly prioritize quality issues and maintain focus on the customer;
* Everyone in the organization should be clear on what quality means to the customer.
How to write Nunit test cases
What Is NUnit?
NUnit is a unit-testing framework for all .NET languages. Initially ported from JUnit, the current production release, version 2.2, is the fourth major release of this xUnit-based unit testing tool for Microsoft .NET. It is written entirely in C# and has been completely redesigned to take advantage of many .NET language features, for example custom attributes and other reflection-related capabilities. NUnit brings xUnit to all .NET languages.
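A minimal NUnit fixture in C#, using a hypothetical Calculator class, shows the attribute-based style described above.

using NUnit.Framework;

// Hypothetical class under test.
public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

[TestFixture]
public class CalculatorTests
{
    [SetUp]
    public void Init()
    {
        // Runs before each test; prepare fixtures or test data here.
    }

    [Test]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        Assert.AreEqual(5, Calculator.Add(2, 3));
    }

    [Test]
    public void Add_NegativeAndPositive_ReturnsDifference()
    {
        Assert.AreEqual(-1, Calculator.Add(-3, 2));
    }
}

Run the tests with the NUnit console or GUI runner against the compiled test assembly.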
How do you conduct peer reviews?
The peer review, sometimes called
PDR, is a formal meeting, more formalized than a walk-through, and typically
consists of 3-10 people including the test lead, task lead (the author of
whatever is being reviewed) and a facilitator (to make notes). The subject of
the PDR is typically a code block, release, feature, or document. The
purpose of the PDR is to find problems and see what is missing, not to fix
anything. The result of the meeting is documented in a written report.
Attendees should prepare for PDRs by reading through documents, before the
meeting starts; most problems are found during this preparation.
Why is the PDR great? Because it is a cost-effective method of ensuring quality, because bug prevention is more cost effective than bug detection.
How do you check the security of
an application?
To check the security of an
application, one can use security/penetration testing. Security/penetration
testing is testing how well a system is protected against unauthorized
internal or external access, or willful damage. This type of testing usually
requires sophisticated testing techniques.
How to estimate product test
hours for new releases?
Answer1:
Your main task is to convince your company of the
- value of structured testing and the benefits it brings to the end product
- the risks of not testing properly: high maintenance, lots of bugs found in production (and these generally found by your customers!), loss of market reputation ("another crap product from xyz company").
Another approach might be to consider starting your test processes earlier (I am guessing from your message that you are following some kind of waterfall method) - it's a sort of 'design a little, build a little, test a little, design a little ...' approach.
Answer2:
Tell the folks making decisions to read user feedback. No time for testing = angry users who want their money back or, worse, angry clients who suddenly hire a team of lawyers.
Warn all the stakeholders early on and then send user feedback emails up the chain. Users can be brutal and they tell the truth, with comments like "YOU SUCK!!"
It may also convince them to get more support people instead of increasing testing.
Answer3:
The ratios:
3/1 Developers to QA (industry)
3/2 Developers to QA (Microsoft)
There is also a really good article called "A Better Bug Trap", published by The Economist in 2004, which is pretty telling: according to NIST, 80% of a software project goes to testing and debugging.
There is also the classic book "The Mythical Man-Month". There are a couple of pertinent passages there:
1) Back when the book was written, the percentage quoted by NIST was 50%, which means that software development has become less efficient over the last 20 years or so.
2) There is a 30% chance that a change in any line of code will break something downstream.
3) There is another article published by McKinsey Quarterly called "What high tech can learn from slow-growth industries".
When testing the password field,
what is your focus?
When testing the password field,
one needs to focus on encryption; one needs to verify that the passwords are
encrypted.
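One hedged way to automate that check, assuming a hypothetical dbo.Users table with a PasswordHash column; the names and connection string are placeholders, and the real schema may differ.

using System;
using System.Data.SqlClient;

class PasswordStorageCheckSketch
{
    static void Main()
    {
        string plainPassword = "S3cret!";   // the value typed into the password field
        using (var conn = new SqlConnection("Server=.;Database=TestDb;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand("SELECT PasswordHash FROM dbo.Users WHERE UserName = @u", conn);
            cmd.Parameters.AddWithValue("@u", "testuser");
            string stored = (string)cmd.ExecuteScalar();

            // The stored value must never equal the plaintext the user typed.
            Console.WriteLine(stored == plainPassword
                ? "FAIL: password appears to be stored in plain text"
                : "PASS: stored value differs from the plaintext (hashed/encrypted)");
        }
    }
}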
What should be tested in a banking domain application?
You would like to test:
Banking Workflows
Data Integrity issues
Security and access issues
Recovery testing
All the above needs to be tested in the expected banking environment (hardware, LAN, operating systems, domain configurations, databases).
How to test for memory leaks manually?
Answer1:
There are tools to check this; Compuware DevPartner can help you test your application for memory leaks if the application is complex. Also, select the tool depending upon the OS on which you need to check for memory leaks.
Answer2:
Tools are more effective for this: they watch to see when memory is allocated and not freed. You can use various tools manually to see if the same happens; you just won't be able to find the exact points where it happens.
In Windows you would use Task Manager or Process Explorer (freeware from Sysinternals), switch to the process view and watch the memory used. Record the baseline memory usage (BL). Run an action once and record the memory usage (BLU). Perform the same actions repeatedly, and if the memory usage has not returned to at least BLU, you have a memory leak. The trick is to wait for the computer to clean up after the transactions have finished; this should take a few seconds.
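A hedged C# sketch of the same baseline-and-repeat idea, watching this process's own working set; a real leak hunt would still use a profiler such as DevPartner, as noted in Answer1.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class MemoryBaselineSketch
{
    // The retained list simulates an application that "forgets" to release memory.
    static readonly List<byte[]> retained = new List<byte[]>();

    static void ActionUnderTest()
    {
        retained.Add(new byte[1024 * 1024]);   // allocate 1 MB and never release it
    }

    static long CurrentWorkingSet()
    {
        GC.Collect();                           // let the runtime clean up first
        GC.WaitForPendingFinalizers();
        return Process.GetCurrentProcess().WorkingSet64;
    }

    static void Main()
    {
        long baseline = CurrentWorkingSet();    // BL
        ActionUnderTest();
        long afterOne = CurrentWorkingSet();    // BLU

        for (int i = 0; i < 50; i++) ActionUnderTest();
        long afterMany = CurrentWorkingSet();

        Console.WriteLine($"BL: {baseline / 1024} KB, BLU: {afterOne / 1024} KB, after 50 more runs: {afterMany / 1024} KB");
        Console.WriteLine(afterMany > afterOne ? "Possible leak: usage keeps growing." : "No growth observed.");
    }
}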
What is the checklist for credit
card testing?
In credit card testing the following validations are considered:
1) Testing the 4-DBC (digit batch code) for its uniqueness (present on the right corner of the credit card)
2) The message formats in which the data is sent
3) LUHN testing (see the sketch after this list)
4) Network response
5) Terminal validations
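For point 3, here is a minimal C# sketch of the Luhn (mod 10) check; the sample numbers are well-known dummy test values, not real cards.

using System;
using System.Linq;

class LuhnCheckSketch
{
    // Returns true if the card number passes the Luhn (mod 10) check.
    static bool PassesLuhn(string cardNumber)
    {
        int sum = 0;
        bool doubleIt = false;                   // the rightmost digit is not doubled
        foreach (char c in cardNumber.Reverse())
        {
            if (!char.IsDigit(c)) continue;      // ignore spaces and dashes
            int digit = c - '0';
            if (doubleIt)
            {
                digit *= 2;
                if (digit > 9) digit -= 9;
            }
            sum += digit;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    static void Main()
    {
        Console.WriteLine(PassesLuhn("4111 1111 1111 1111"));  // common dummy number: True
        Console.WriteLine(PassesLuhn("4111 1111 1111 1112"));  // altered last digit: False
    }
}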
How do you test data integrity?
Data integrity is tested by the
following tests:
Verify that you can create, modify, and delete any data in tables.
Verify that sets of radio buttons represent fixed sets of values.
Verify that a blank value can be retrieved from the database.
Verify that, when a particular set of data is saved to the database, each value gets saved fully, and the truncation of strings and rounding of numeric values do not occur.
Verify that the default values are saved in the database, if the user input is not specified.
Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.
Why do we perform data integrity testing? Because we want to verify the completeness, soundness, and wholeness of the stored data. Testing should be performed on a regular basis, because important data could, can, and will change over time.
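A hedged NUnit-style sketch of the "each value gets saved fully" check; CustomerRepository is a hypothetical in-memory stand-in for the real data-access layer, which a real test would replace with actual database access.

using NUnit.Framework;
using System.Collections.Generic;

// Hypothetical stand-in for the real data-access layer.
public class CustomerRepository
{
    private readonly Dictionary<int, string> rows = new Dictionary<int, string>();
    public void Save(int id, string name) => rows[id] = name;
    public string Load(int id) => rows[id];
}

[TestFixture]
public class DataIntegrityTests
{
    [Test]
    public void SavedString_IsNotTruncated()
    {
        var repo = new CustomerRepository();
        string longName = new string('x', 255);   // boundary-length value

        repo.Save(1, longName);

        // The value read back must match exactly - no truncation, no padding.
        Assert.AreEqual(longName, repo.Load(1));
    }
}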
What is the definition of top down
design?
Top down design progresses from
simple design to detailed design. Top down design solves problems by breaking
them down into smaller, easier to solve subproblems. Top down design creates
solutions to these smaller problems, and then tests them using test drivers. In
other words, top down design starts the design process with the main module or
system, then progresses down to lower level modules and subsystems. To put it
differently, top down design looks at the whole system, and then explodes it
into subsystems, or smaller parts. A systems engineer or systems analyst
determines what the top level objectives are, and how they can be met. He then
divides the system into subsystems, i.e. breaks the whole system into logical,
manageable-size modules, and deals with them individually.
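A hedged C# illustration of the idea: the top-level module is written and exercised first with a test driver, while the lower-level module it depends on is only a stub; all names here are hypothetical.

using System;

// Lower-level module: only a stub for now, to be implemented later.
static class TaxModuleStub
{
    public static decimal CalculateTax(decimal amount) => 0m;   // placeholder behaviour
}

// Top-level module, designed and tested first against the stub.
static class InvoiceModule
{
    public static decimal TotalWithTax(decimal amount) =>
        amount + TaxModuleStub.CalculateTax(amount);
}

// Test driver exercising the top-level module.
class TopDownDriver
{
    static void Main()
    {
        Console.WriteLine(InvoiceModule.TotalWithTax(100m));   // prints 100 while the stub returns zero tax
    }
}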
When the build comes to the QA
team, what are the parameters to be taken for consideration to reject the build
upfront, without committing to testing?
Answer 1:
Agree with R&D a set of tests that if one fails you can reject the build. I usually have some build verification tests that just make sure the build is stable and the major functionality is working.
Then if one test fails you can reject the build.
Answer 2:
The only way to legitimately reject a build is if the entrance criteria have not been met. That means that the entrance criteria to the test phase have been defined and agreed upon up front. This should be standard for all builds for all products. Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in turn-over
The only way we could really reject a build without any testing would be a failure of the turn-over procedure. There may be, but shouldn't be, politics involved. The only way the test phase can proceed is for the test team to have all components required to perform successful testing. You will have to define entrance (and exit) criteria for each phase of the SDLC. This is an effort to be undertaken by the whole development team. Development's entrance criteria would include signed requirements, the HLD doc, etc. Having these criteria pre-established sets everyone up for success.
Answer 3:
The primary reason to reject a build is that it is untestable, or if the testing would be considered invalid.
For example, suppose someone gave you a "bad build" in which several of the wrong files had been loaded. Once you know it contains the wrong versions, most groups think there is no point continuing testing of that build.
Every reason for rejecting a build beyond this is reached by agreement. For example, if you set a build verification test and the program fails it, the agreement in your company might be to reject the program from testing. Some BVTs are designed to include relatively few tests, and those of core functionality. Failure of any of these tests might reflect fundamental instability. However, several test groups include a lot of additional tests, and failure of these might not be grounds for rejecting a build.
In some companies, there are firm entry criteria to testing. Many companies pay lipservice to entry criteria but start testing the code whether the entry criteria are met or not. Neither of these is right or wrong--it's the culture of the company. Be sure of your corporate culture before rejecting a build.
Answer 4:
Generally a company would have set some sort of minimum goals/criteria that a build needs to satisfy; if it satisfies them it can be accepted, else it has to be rejected.
For example:
Nil high-priority bugs
At most 2 medium-priority bugs
The sanity test (minimum acceptance and basic acceptance) should pass. The reason for the new build - say a change to a specific case - should pass. "Not able to proceed" (non-testability), or other criteria relating to the new build or the product, may also apply. If the above criteria are not met, the build could be rejected.
Agree with R&D a set of tests that if one fails you can reject the build. I usually have some build verification tests that just make sure the build is stable and the major functionality is working.
Then if one test fails you can reject the build.
Answer 2:
The only way to legitimately reject a build is if the entrance criteria have not been met. That means that the entrance criteria to the test phase have been defined and agreed upon up front. This should be standard for all builds for all products. Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in turn-over
The only way we could really reject a build without any testing, would be a failure of the turn-over procedure. There may, but shouldn't be, politics involved. The only way the test phase can proceed is for the test team to have all components required to perform successful testing. You will have to define entrance (and exit) criteria for each phase of the SDLC. This is an effort to be taken together by the whole development team. Developments entrance criteria would include signed requirements, HLD doc, etc. Having this criteria pre-established sets everyone up for success
Answer 3:
The primary reason to reject a build is that it is untestable, or if the testing would be considered invalid.
For example, suppose someone gave you a "bad build" in which several of the wrong files had been loaded. Once you know it contains the wrong versions, most groups think there is no point continuing testing of that build.
Every reason for rejecting a build beyond this is reached by agreement. For example, if you set a build verification test and the program fails it, the agreement in your company might be to reject the program from testing. Some BVTs are designed to include relatively few tests, and those of core functionality. Failure of any of these tests might reflect fundamental instability. However, several test groups include a lot of additional tests, and failure of these might not be grounds for rejecting a build.
In some companies, there are firm entry criteria to testing. Many companies pay lipservice to entry criteria but start testing the code whether the entry criteria are met or not. Neither of these is right or wrong--it's the culture of the company. Be sure of your corporate culture before rejecting a build.
Answer 4:
Generally a company would have set some sort of minimum goals/criteria that a build needs to satisfy - if it satisfies this - it can be accepted else it has to be rejected
For eg.
Nil - high priority bugs
2 - Medium Priority bugs
Sanity test or Minimum acceptance and Basic acceptance should pass The reasons for the new build - say a change to a specific case - this should pass Not able to proceed - non - testability or even some more which is in relation to the new build or the product If the above criterias don't pass then the build could be rejected.
Any recommendation for estimating how many bugs the customer will find before the gold release?
Answer 1:
If you take the total number of bugs in the application and subtract the number of bugs you found, the difference will be the maximum number of bugs the customer can find.
Seriously, I doubt you will find any sort of calculation or formula that can answer your question with much accuracy. If you could reference a previous application release, it might give you a rough idea. The best thing to do is ensure your test coverage is as good as you can make it, then hope you've found the ones the customer might find.
Remember Software testing is Risk Management!
Answer 2:
For doing estimation:
1) Find out the coverage during testing of your software and then estimate, keeping in mind the 80-20 principle.
2) You can also look at the depth of your test cases, e.g. how much unit-level testing and how much life-cycle testing you have performed (most of the bugs from customers come from real life-cycle use of the software).
3) You can also refer to the defect density from earlier releases of the same product line.
By doing these evaluations you can estimate the probability of bugs reasonably well.
Answer 3:
You can map customer issues from the previous release (if you have the same product line) to the current release; this is the best way of estimating for the gold release or migration of any product. Secondly, up to the gold release most of the issues come from various combinations of installation testing, like cross-platform, i18n issues, customization, upgrade and migration.
So these can be taken as parameters and the estimation can then be completed.
Why is back-end testing required, if we are going to check the front end?
Why do we need to do unit testing, if all the features are being tested in system testing? What extra things are tested in unit testing that cannot be tested in system testing?
Answer 1:
Assume that you're thinking client-server or web. If you test the application on the front end only, you can see whether the data appears to be stored and retrieved correctly, but you can't see whether the servers are in an error state or not. Many server processes are monitored by another process; if they crash, they are restarted. You can't see that without looking at the back end.
The data may not be stored correctly either but the front end may have cached data lying around and it will use that instead. The least you should be doing is verifying the data as stored in the database.
It is easier to test data being transferred on the boundaries and see the results of those transactions when you can set the data in a driver.
Answer2:
Back-end testing: basically the requirement for this testing depends on your project. Say your project is a ticket booking system. On the front end you are provided with an interface where you can book the ticket by giving the appropriate details (like the place to go, the time you want to travel, etc.). It will have a data storage system (database or Excel sheet, etc.), which is the back end for storing the details entered by the user.
After submitting the details, you might be given a correct acknowledgement, but in the back end the details might not be updated correctly in the database because of wrong logic in development. That would cause a major problem.
Regarding unit-level testing and system testing: unit-level testing covers the basic checks of whether the application works with the basic requirements, and is done by developers before delivering to QA. In system testing, in addition to the unit checks, you perform all the checks (all the possible integrated checks required). Basically this is carried out by the tester.
Answer 3:
Ever heard of the divide-and-conquer tactic? It is the same method applied to back-end and front-end testing.
A good back-end test will help minimize the burden of the front-end test.
Another point is that you can test the back end while developing the front end; true parallelism can be achieved.
Back-end testing has another problem which must be addressed before the front end can use it: concurrency. Building a scenario to test concurrency is a formidable task.
A complex thing is hard to test; creating such scenarios will leave you unsure which tests you have already done and which you haven't. What we need is an effective method to test our application, and the simplest method I know is divide and conquer.
Answer 4:
A wide range of errors are hard to see if you don't see the code. For example, there are many optimizations in programs that treat special cases. If you don't see the special case, you don't test the optimization. Also, a substantial portion of most programs is error handling. Most programmers anticipate more errors than most testers.
Programmers find and fix the vast majority of their own bugs. This is cheaper, because there is no communication overhead, faster because there is no delay from tester-reporter to programmer, and more effective because the programmer is likely to fix what she finds, and she is likely to know the cause of the problems she sees. Also, the rapid feedback gives the programmer information about the weaknesses in her programming that can help her write better code.
Many tests -- most boundary tests -- are done at the system level primarily because we don't trust that they were done at the unit level. They are wasteful and tedious at the system level. I'd rather see them properly done and properly automated in a suite of programmer tests.
How do you perform integration
testing?
To perform integration testing,
first, all unit testing has to be completed. Upon completion of unit testing,
integration testing begins. Integration testing is black box testing. The purpose
of integration testing is to ensure distinct components of the application
still work in accordance to customer requirements. Test cases are developed
with the express purpose of exercising the interfaces between the components.
This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable, or acceptable, based on client input.
What are the five dimensions of
the Risks?
Schedule: Unrealistic schedules, exclusion of certain activities when chalking out a schedule, etc. could be deterrents to project delivery on time. An unstable communication link can be considered a probable risk if testing is carried out from a remote location.
Client: Ambiguous requirements definition, clarifications on issues not being readily available, frequent changes to the requirements etc. could cause chaos during project execution.
Human Resources: Non-availability of sufficient resources with the skill level expected in the project; attrition of resources (appropriate training schedules must be planned so that the remaining resources' knowledge is on par with that of the resources quitting). Underestimating the training effort may have an impact on the project delivery.
System Resources: Non-availability of, or delay in procuring, critical computer resources (hardware, software tools, or software licenses) will have an adverse impact.
Quality: Compound factors like lack of resources along with a tight delivery schedule and frequent changes to requirements will have an impact on the quality of the product tested.
How to choose which defects to remove among 1,000,000 defects? (Because it would take too many resources to remove them all.)
Answer 1:
Are you the programmer who has to fix them, the project manager who has to supervise the programmers, the change control team that decides which areas are too high risk to impact, the stakeholder-user whose organization pays for the damage caused by the defects or the tester?
The tester does not choose which defects to fix.
The tester helps ensure that the people who do choose, make a well-informed choice.
Testers should provide data to indicate the *severity* of bugs, but the project manager or the development team do the prioritization.
When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions is.
Priority depends on a wide range of factors, including code-change risk, difficulty/time to complete the change, which stakeholders are affected by the bug, the other commitments being handled by the person most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of most test groups.
Answer 2:
As a tester we don't fix the defects, but we surely can prioritize them once detected. In our org we assign a severity level to defects depending upon their influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:
1-critical
2-High
3-Medium
4-Low
5-Cosmetic
Dev can group all the critical ones and take them to fix before any other defect.
Answer3:
Defects are generally classified in a grid of priority (P1, P2, P3) against severity (S1, S2, S3). Every organization / software has some target for fixing the bugs.
Example -
P1S1 -> 90% of the bugs reported should be fixed.
P3S3 -> 5% of the bugs reported may be fixed. The rest are taken in later service packs or versions.
Thus the organization should decide its target and act accordingly.
Basically bug-free software is not possible.
Answer4:
Ideally, the customer should assign priorities to their requirements. They tend to resist this. On a large, multi-year project I just completed, I would often (in the absence of customer guidelines) rely on my knowledge of the application and the potential downstream impacts on the modeled business process to prioritize defects.
If the customer doesn't, then I feel the test organization should, based on risk or other similar considerations.
Is regression testing performed
manually?
The answer to this question
depends on the initial testing approach. If the initial testing approach was
manual testing, then the regression testing is usually performed manually.
Conversely, if the initial testing approach was automated testing, then the
regression testing is usually performed by automated testing.
What’s the difference between QA
and testing?
TESTING means “Quality Control”;
and
QUALITY CONTROL measures the quality of a product; while
QUALITY ASSURANCE measures the quality of processes used to create a quality product.
How do you create a test
plan/design?
Test scenarios and/or cases are
prepared by reviewing functional requirements of the release and preparing
logical groups of functions that can be further broken into test procedures.
Test procedures define test conditions, data to be used for testing and
expected results, including database updates, file outputs, report results.
Generally speaking...
* Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
* Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
* It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.
* Test scenarios are executed through the use of test procedures or scripts.
* Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
* Test procedures or scripts include the specific data that will be used for testing the process or transaction.
* Test procedures or scripts may cover multiple test scenarios.
* Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.
* Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
* Some output data is also base-lined for future comparison. Base-lined data is used to support future application maintenance via regression testing.
* A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
* Approved Test Strategy Document.
* Test tools, or automated test tools, if applicable.
* Previously developed scripts, if applicable.
* Test documentation problems uncovered as a result of testing.
* A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code, and software complexity data.
Outputs for this process:
* Approved documents of test scenarios, test cases, test conditions, and test data.
* Reports of software design issues, given to software developers for correction.