Official Blog of Azilen

The Techie Explorations

An In-Depth Guide to Manual Testing

"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives," as William A. Foster rightly stated.

Most companies today use modern automated testing tools and the latest technology to test their solutions; however, the testing cycle is not complete without manual testing. By commonly cited estimates, 70-75% of a solution is tested manually and only about 25% is tested using automated test scripts.

That automated 25% typically covers business processes that deal with huge chunks of data verified from multiple sources.

Why Is Manual Testing More Important than Automation Testing?

Manual testing is the hidden backbone without which a solution cannot be successfully launched in the market. Here are some reasons why we say so:

  • Automated testing will detect most bugs in a software system, but it has many limitations. For example, automated tools cannot evaluate visual considerations such as gestures, image color, or font size. The User Experience and User Interface cannot be fully tested through automation; changes in these areas can only be detected and verified by manual testing, which means not all testing can be done with automated tools. Manual testing is preferable for products with a carefully engineered user experience and GUIs that change constantly.
  • As the name "automation testing" suggests, the tests are automatic: the scripts are robotic and cannot act from a real user's perspective. Manual testing, on the other hand, allows the program under development to be used as it would be upon launch. Any bugs that pop up when a user handles the program in a particular way are more likely to be caught with manual testing.
  • Situations often arise where there are runtime changes in the functionality of some modules, either as an enhancement or as a behavior change. In these cases, the timing and precision of the functionality check matter. Before automated testing can start, the tester has to set up test cases, program them into the automation tool, and then run the tests. With manual testing, the QA can test quickly and see the results immediately. Automated tests take more time to set up, which makes it hard to try out testing ideas quickly and easily.
  • An automated test case usually cannot simply be replayed as-is against a changed application; the script has to be updated before it can be run again.
  • Ad-hoc testing cannot be performed using automation.
  • Manual testing is the only option during the initial stage of an application. Once the application is stable and in the regression phase, the basic functions within it can be automated. If something is automated on an unstable build, it will surely break in the next changed builds. Moreover, it has been observed that most good defects are found through exploratory testing rather than by simply following the steps written in a test case. Test automation cannot substitute for the experience and underlying knowledge a tester uses to find good bugs in the application.
  • Negative testing can be done more rigorously via manual testing.
  • If a test fails, an automated run will only report the failure; it cannot work around the failure to continue testing other areas. Only manual testing can work that out.

The QA who tests the software drafts all the test cases and executes them manually. One major advantage of manual testing is the ease of testing customized modules against the defined requirements and the agreed input/output deliverables. It can also be executed with ease and precision, without fancy coding or special programs.


Let's look at the key artifacts with which manual testing is executed:

Test Plan

A test plan is the blueprint of the systematic approach the tester creates before testing the software. It is a detailed document that covers the following sections:

  • Objective of the software
  • Scope
  • Approach
  • Software testing effort

The test plan is very useful for thinking through the effort needed to validate the acceptability of a software product. The completed document also helps people outside the test group understand the "why" and "how" of product validation.

Test Strategy

"Specific, Practical, Justified": these three words describe a good test strategy, which defines how the company plans to deliver the application/product with proper development as well as a proper assessment of quality. The main aim of preparing the test strategy is to clearly state the important activities of, and challenges facing, the test project. The test strategy is also known as the test approach or test architecture. Basically, the test strategy describes which types of testing should be done for a project.


It involves User Acceptance Testing (UAT), functional testing, load testing, performance testing, security testing, and more.

Steps for Writing Test Cases

Let’s see some tips on how to write test cases, test case procedures and some basic test case definitions.

Analyzing requirements: This is the most important step: the QA must understand the requirements of the product/application while drafting its test cases, i.e. which steps need to be tested, which items the product/application actually requires, and what the expected results for those steps should be.

Writing test cases (designing the test): Based on the requirements, test cases should be drafted covering the high-level scenarios, so that every requirement has at least one test case.

Executing test cases (test execution): Once the test cases are drafted and the testing team receives the solution build, the cases should be executed. Development and testing should proceed in parallel. During testing, there is always a chance that the expected result of a test case differs from its actual result; in that scenario, it is an issue/bug. That issue/bug should be reported to the development team and, according to its priority, resolved by them. Each test case generally has a unique test case ID and the details listed below.

  • Purpose: the purpose of the test case.
  • Executed steps: the list of steps the QA follows while executing the test case.
  • Expected results: the results expected for the test case after execution.
  • Actual result: the actual result of the test case.
  • Status: whether the test case is Passed, Failed, Blocked, Skipped, or Cancelled after execution:
      • Pass: the actual result matches the expected result of the test case.
      • Failed: the actual result does not match the expected result, and an issue is found.
      • Blocked: the QA is unable to continue testing and execute the test case due to an issue.
      • Skipped: the test case was not executed in this testing round and does not affect any requirement of the product/application.
  • Bug ID: the ID of the issue raised if the test case fails.
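The test case fields above can be sketched as a simple record. This is an illustrative Python structure, not the format of any particular test management tool; the field and status names simply follow the list above:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Status(Enum):
    PASS = "Pass"
    FAILED = "Failed"
    BLOCKED = "Blocked"
    SKIPPED = "Skipped"
    CANCELLED = "Cancelled"

@dataclass
class TestCase:
    case_id: str                   # unique test case ID
    purpose: str                   # purpose of the test case
    steps: List[str]               # steps the QA follows during execution
    expected_result: str
    actual_result: str = ""
    status: Optional[Status] = None
    bug_id: Optional[str] = None   # filled in only when the case fails

    def record_result(self, actual: str) -> Status:
        """Set Pass/Failed by comparing the actual result to the expected one."""
        self.actual_result = actual
        self.status = Status.PASS if actual == self.expected_result else Status.FAILED
        return self.status

# Hypothetical example: executing one drafted case.
tc = TestCase("TC-1.1", "File open", ["Launch the app", "Open a supported file"],
              expected_result="File opens without error")
tc.record_result("File opens without error")   # → Status.PASS
```

A real tracker would add priority, tester name, and timestamps, but the pass/fail decision is exactly this comparison of expected against actual.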

QA Matrix (attach the requirements): A QA matrix is a sheet that tracks how many requirements are tested and how many test cases have been drafted for each. It is typically just an Excel sheet showing whether each requirement is covered or not. If a requirement that needs testing has been missed, and an issue may arise from it, the QA matrix is the place to check.

Sample of a QA matrix

Test Case: File Open

  #     Test Description                                                                      Test Cases/Samples   P/F   No. of Bugs
  N/A   Setup for [Project Name]
  1.1   Test that file types supported by the program can be opened                           1.1                  #     #
  1.2   Verify all the different ways to open a file (mouse, keyboard, and accelerator keys)  1.2                  #     #
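The coverage check a QA matrix provides can be sketched in a few lines. The requirement IDs and mapping below are invented for illustration; in practice this lives in the Excel sheet described above:

```python
# Hypothetical requirement -> test case mapping, as a QA matrix would record it.
matrix = {
    "REQ-1 Open supported file types": ["1.1"],
    "REQ-2 Open a file via mouse, keyboard, or accelerator keys": ["1.2"],
    "REQ-3 Save a file": [],   # a requirement with no drafted test case yet
}

# A requirement mapped to zero test cases is exactly the gap the matrix exposes.
uncovered = [req for req, cases in matrix.items() if not cases]
print(uncovered)   # → ['REQ-3 Save a file']
```

Scanning for empty rows like this is the whole point of the matrix: every requirement should map to at least one test case before testing is considered complete.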

Defect Life Cycle


Raise Defects

Issues can arise at any stage of the SDLC (software development life cycle), not only during testing cycles. While executing the test cases for a product/application, if the expected result of a test case does not match its actual result, an issue/defect/bug is raised against the development team. When first reported, the issue's status should be "Open/New". The development team then verifies whether the reported issue is valid and may change its status to Rejected, Fixed, or Cancelled. After verification, the issue stays in "Cancelled" or "Pending" status until it is fixed. When the bug is fixed, the development team changes the status to "Resolved" and assigns it back to the QA to verify whether it is actually solved.

  • Open Defects: issues listed in the bug tracking system that have not yet been resolved. The development team should have access to the bug tracking system, and each issue should carry an issue ID, a title, the area where it was found, and any attached image, so that open issues are easy to track.
  • Cancelled Defects: issues reported by mistake, or reported outside the requirements, are listed with cancelled status.
  • Pending Defects: the defects remaining in the tracking system with a status of pending. Pending means the defect is waiting on a decision from the project manager or business analyst before a developer addresses the problem.
  • Fixed Defects: issues resolved by the development team and awaiting verification from the QA side are categorized as fixed.
  • Closed Defects: issues fixed by the development team and verified by QA during the project life cycle are listed as closed.
  • Retest and correct defects: once issues are resolved by the development team, the test cases that initially failed should be retested and checked for any new defects.
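The status flow described above can be modelled as a small state machine. The transitions below are one reading of the life cycle in this post, not the workflow of any particular bug tracker; real tools let you configure these:

```python
# Allowed status transitions, following the defect life cycle described above.
TRANSITIONS = {
    "Open":      {"Rejected", "Cancelled", "Pending", "Resolved"},
    "Pending":   {"Resolved", "Cancelled"},
    "Resolved":  {"Closed", "Open"},  # QA verifies: close it, or reopen if not fixed
    "Rejected":  set(),
    "Cancelled": set(),
    "Closed":    set(),
}

def move(status: str, new_status: str) -> str:
    """Advance a defect to new_status, refusing jumps the life cycle forbids."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move defect from {status} to {new_status}")
    return new_status

# A defect is fixed by the developer, then verified and closed by the QA.
s = move("Open", "Resolved")
s = move(s, "Closed")
```

Encoding the transitions explicitly means an invalid jump (say, closing a rejected defect) fails loudly instead of silently corrupting the tracker's state.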

To conclude this blog on manual testing: there will always be a need for manual testing in the software industry. I hope this brief overview has given you an understanding of software manual testing as well as its importance. Share your thoughts on manual testing by commenting below.
