A Beginner’s Guide to Defects in Software Testing

In a perfect world, software development would run smoothly, and the testing and verification phases would be quick, with almost no flaws to find.

But since that is just a fantasy, testers in the real world have to deal with anomalies and malfunctions in every software application.

The quality of the software is determined by how clean, or bug-free, it is.

Thus, it is essential that testers look through complex multi-layered applications and identify, document, track, and fix the numerous bugs or defects. 

What does defect mean?

A defect is a condition that occurs in a software product when it does not meet the software requirements or expectations. 

It can be an error in the coding or logic of the program that causes it to malfunction.

Defects are irregularities or deviations from the expected results, introduced by mistakes made during the development phase.

Defects, bugs, or faults are deficiencies or imperfections that prevent the software application from performing what it is expected to do.

Defects lead to the failure of a software application.

A few of the methods adopted for preventing the introduction of bugs by programmers during development are:

  • Peer Review
  • Code Analysis
  • Programming techniques adopted
  • Software Development Methodologies

Classification

The defect classifications given below are just guidelines. The basis of classifying the defects depends on the organization and the defect tracking tool used by them.

The team members must have a pre-agreement on the defect classification to be used to avoid future conflicts.

Defects can be classified based on:

Severity

The severity of a defect reflects how seriously the defect impacts the software from a functional perspective.

The severity level gives the tester an idea of how far the software deviates from its specifications.

It can be:

  • Critical – These defects reflect crucial functionality deviations in the software application; until they are fixed, a QA tester cannot validate the application.
  • Major – These defects are found when a crucial module in the application is malfunctioning, but the rest of the system works fine. These issues need to be fixed, but the QA team can still validate the rest of the application whether or not the major defect is fixed.
  • Medium – These defects are related to issues in a single screen or single function, but they do not affect the system’s functioning as a whole. These defects do not block any functionality.
  • Low – These defects do not impact the software application’s functionality at all. They can be related to UI inconsistencies, cosmetic defects, or suggestions to improve the user’s UI experience.

Probability

The probability of a defect can be defined as the likelihood of a defect in a feature of the system.

The probability of finding a bug depends on the usage of that particular feature: a defect that most users of the feature run into has a high probability, while one that only surfaces under rare conditions has a low probability.

It can be:

  • High – If a defect is easily found by all or most of the feature users, then the defect has a high probability.
  • Medium – If a defect is found by at least 50% of the feature users, then the defect has a medium probability of occurrence.
  • Low – If very few users of the feature encounter a defect, then the defect has a low probability.
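
As a rough sketch of the buckets above (the function name and the 90% cutoff for "all or most users" are illustrative; the 50% cutoff comes from the Medium definition), defect probability could be derived from the share of feature users who encounter the defect:

```python
def classify_probability(users_affected: int, total_feature_users: int) -> str:
    """Bucket a defect's probability from the share of feature users who hit it."""
    share = users_affected / total_feature_users
    if share >= 0.9:      # all or most users encounter it
        return "High"
    if share >= 0.5:      # at least 50% of users encounter it
        return "Medium"
    return "Low"

print(classify_probability(95, 100))   # High
print(classify_probability(60, 100))   # Medium
print(classify_probability(3, 100))    # Low
```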

Priority

The importance or urgency of fixing a particular defect defines its priority.

The priority of a defect is set by a software tester and finalized by the project manager.

It can be categorized as:

  • Urgent – Immediate resolution of this category defect is required as these defects can severely affect the application and cause costly repairs if left untreated.
  • High – Immediate resolution of these defects is essential as these defects affect the application modules that are rendered useless until the defects are fixed.
  • Medium – These are not such essential defects and can be scheduled to be fixed in later releases.
  • Low – These defects may or may not be fixed once the above, more critical defects are resolved.

Phase Injected

The Phase of the software development life cycle in which the defect was first introduced or injected can also be used to classify the defects.

The Phase injected of a defect is found after proper root-cause analysis. 

The defect can be injected in any of the following phases:

  • Requirements Development
  • Detailed Design
  • High-Level Design
  • Coding
  • Deployment

Phase detected

After the Phase injected comes the Phase detected: the Phase in which the particular defect was identified.

A defect can be detected in any of the software testing phases, such as unit testing, integration testing, system testing, or user acceptance testing.

Related Module

The module in which the particular defect was detected can be used as a classification for the defect. 

The related module classification provides information about the module containing the most bugs.

Related Dimension of Quality

There are hundreds of software quality dimensions, such as accessibility, compatibility, concurrency, and many more.

The decision as to which dimension is more important and should be fixed first is subjective. It depends solely on which dimension the tester values more in that particular situation.

Requirements and specification defect

Requirement-related defects arise from a customer gap or a producer (tester or developer) gap, in which the developer or tester fails to understand the customer’s requirements.

To reduce these defects, requirement documents should be unambiguous, consistent, clear, non-redundant, and precise.

Design defects

The interactions between incompatible or incorrectly designed components and modules lead to defects in the system.

To avoid design defects, the algorithms, logic and data elements, module interface descriptions, and external software or hardware UI descriptions should be correctly designed.

Coding defects

Wrong implementation of designs, or code written in the absence of development or coding standards, gives rise to coding defects.

These defects are closely related to defects in designing classes, primarily if the pseudo-classes are used in detailed design. 

Sometimes, it can be challenging to classify a defect as a coding or design defect.

Testing defects

Wrong testing or defects in test artifacts lead to testing defects. 

These can be of three types:

  • Test-tool defect – Test tools used by testers are also capable of introducing defects into the software. Manual testing can be used to find defects caused by automated tools.
  • Test-design defect – Defects in test artifacts such as test plans, test scenarios, test data definitions, and test cases are referred to as test-design defects.
  • Test-environment defect – When the test environment, i.e., the hardware, software, testing people, and simulators, is not set up correctly, test-environment defects arise.

Status

A defect can be classified on the status or the state of the defect life cycle that the defect is currently in. 

These states can be:

  • Open
  • Closed
  • Deferred
  • Canceled

Work Product

Based on the work product or the documents or final products of a development phase, a defect can be classified as:

  • Source code – A defect found in the source code of the application.
  • SSD – A defect found in the System Study document of the software.
  • User Documentation – A defect found in the User manuals or operating manuals of the software.
  • ADS – A defect found in the Architectural Design Document of the software.
  • Test Plan or Test Case – A defect found in the test plan or test cases of the software.
  • FSD – A defect found in the Functional Specification document of the software.
  • DDS – A defect found in the Detailed Design Document of the software.

Types

A few of the basic types of defects in software development are:

Logical Defects

During the code implementation, a programmer might not understand the problem clearly or might think in the wrong way. In such cases, logical defects occur.

Logical defects are related to the software’s core and can occur if the programmer does not appropriately implement corner cases.

Performance Defects

When a system or the software application fails to deliver the desired and expected results and isn’t able to fulfill the user’s requirements, performance defects occur.

These defects also include the response of the system on varying the load of the system.

Multithreading Defects

Executing multiple tasks at the same time is called multithreading, and it can make debugging very complex.

The issues of Deadlock and Starvation in multithreading can lead to a system’s failure.

Arithmetic Defects

An excessive workload or insufficient knowledge can cause a programmer to make mistakes in arithmetic expressions and their solutions; these are known as arithmetic defects.

Code congestion can render a programmer unable to read the written code properly and can also cause arithmetic defects.

Syntax Defects

Sometimes, developers, while writing the code, make mistakes related to the code’s style or syntax. 

These mistakes can be as small as omitting a symbol.

An example of a syntax defect is the omission of a semicolon (;) while writing code in C++ or Java.

Interface defects

The interaction between the user and the software can be affected by defects. These are called interface defects.

Complicated, unclear, or platform-based interfaces can cause interface defects in the software.

Defect Metrics

Accurate estimation of the quality, cost, and effectiveness of the project being tested is fundamental.

Defect metrics help us estimate these aspects of a software application.

Defect Density Metrics

The concentration of defects within the software is referred to as defect density.

Defect Density Metrics can be defined as the percentage of the number of defects identified per requirement or test case.

Formula:

Defect Density = (No. of Defects identified / Size) * 100

Where size is the number of requirements or test cases.

Example:

Suppose in software, there are 1000 test cases, out of which 600 are passed, and 400 are failed. The defect density, in this case, will be,

Defect density = (400/1000) * 100 = 40%

Thus, the defect density is 40%. Thus, 40% of the test cases failed during execution, or only 40% of the test design could catch defects.
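
The calculation above can be sketched in a few lines of Python (the function name is illustrative):

```python
def defect_density(defects_found: int, size: int) -> float:
    """Defect density as a percentage of defects per requirement or test case."""
    return defects_found / size * 100

# The worked example above: 400 failed test cases out of 1000
print(defect_density(400, 1000))  # 40.0
```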

Total Defect Metrics

The relative assessment of total defects v/s module size and complexity of the software can explain the testing efforts’ effectiveness.

No one should believe a developer who claims “zero defects found” in any software.

Defect Distribution by Priority and Severity

We have already understood that defects are classified based on their priority and severity.

If all the defects identified are of low severity or priority, it means that the test team hasn’t tested the system properly.

Total defects give a good idea about testing efforts, and priority and severity can help with the testing and build quality.

Weighted Defect Density

The average defect severity per test case can be defined as weighted defect density.

Formula:

Weighted Defect Density = [(5 * High + 3 *Medium + 1 * Low defects) / Size] * 100

Where size is the number of test cases or requirements. 

Based on the severity, the weights 5,3 and 1 are assigned.

Cost of finding a Defect

Be it development or testing, cost plays an essential factor in any project. 

If a client is paying huge sums, they expect that the software delivered is defect-free and effective.

Formula:

Cost for finding a defect = Total money spent on testing / total defects found.

The total money can be calculated using complete resources, billing rates, and duration of testing.
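
A minimal sketch of this metric (the figures are hypothetical):

```python
def cost_per_defect(total_testing_cost: float, defects_found: int) -> float:
    """Average money spent to find one defect during testing."""
    return total_testing_cost / defects_found

# Hypothetical: $48,000 of testing effort uncovering 240 defects
print(cost_per_defect(48_000, 240))  # 200.0
```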

Defect Removal Efficiency

The testing team’s competence in identifying and removing the maximum number of defects before sending the software to the next Phase is called defect removal efficiency.

Formula:

Defect Removal Efficiency = [No. of defects found during testing / (No. of defects found during testing + No. of defects uncovered during next Phase)] * 100

Example:

Consider software with 200 defects in the Initial Phase. After fixing, retesting, and performing regression testing on the software, the User Acceptance Team tests and finds 40 more defects that could have been identified and resolved during previous testing phases. These 40 defects were leaked into the subsequent phases and were not removed from the system. Thus,

Defect removal Efficiency = [200 / (200+40)] * 100 = 83.33%

Thus, the software testing team was able to identify and fix only 83.33% of the total defects and leaked the remaining 16.67% into the subsequent phases.
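
The worked example translates directly into code (the function name is illustrative):

```python
def defect_removal_efficiency(found_in_testing: int, leaked_to_next_phase: int) -> float:
    """Share of all defects that testing caught before the next phase."""
    total = found_in_testing + leaked_to_next_phase
    return found_in_testing / total * 100

# The worked example above: 200 found in testing, 40 found later in UAT
print(round(defect_removal_efficiency(200, 40), 2))  # 83.33
```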

Defect Leakage Metrics

Contrary to defect removal, defect leakage metrics show the percentage of defects leaked from the current stage of testing to the next or subsequent Phase.

Defect leakage metrics should be minimal to improve the team’s worth.

Formula:

Defect Leakage = [No. of defect found during next phase / (No. of defects found during testing + No. of defects uncovered during next Phase)] * 100

Example:

Consider software with 200 defects in the Initial Phase. After fixing, retesting, and performing regression testing on the software, the User Acceptance Team tests and finds 40 more defects that could have been identified and resolved during previous testing phases. These 40 defects were leaked into the subsequent phases and were not removed from the system. Thus,

Defect Leakage = [40 / (200 + 40)] * 100 = 16.67%

Therefore, 16.67% of the defects were leaked into the next Phase.
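
As a sketch (the function name is illustrative), defect leakage is simply the complement of removal efficiency, so the two always sum to 100%:

```python
def defect_leakage(found_in_testing: int, leaked_to_next_phase: int) -> float:
    """Share of all defects that escaped testing into the next phase."""
    total = found_in_testing + leaked_to_next_phase
    return leaked_to_next_phase / total * 100

# The worked example above: 40 of 240 total defects escaped into UAT
print(round(defect_leakage(200, 40), 2))  # 16.67
```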

Defect Age

The defects’ lifecycle starts from identifying a new defect and ends when they are fixed or kept for the next release and documented.

Defect age is the time lapse between identification and closure of a defect.

Defect Age (in time) = Current date (closed date) – Defect detection date

Defect age can be measured in hours or days. The lower the age of the defect, the better the testing team’s responsiveness.
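
A minimal sketch of the age calculation using the standard library (the dates are hypothetical):

```python
from datetime import date

def defect_age_days(detected: date, closed: date) -> int:
    """Defect age in days: closure date minus detection date."""
    return (closed - detected).days

# Hypothetical defect detected on March 1 and closed on March 15
print(defect_age_days(date(2024, 3, 1), date(2024, 3, 15)))  # 14
```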

Classification of Defects

Not all defects are the same or require a tester’s immediate attention.

This is why it is essential to classify defects.

Defects are defined with a level of severity or priority so that the development team can understand which one to resolve first.

1. Defect Severity

In layman’s terms, Defect Severity or Bug Severity is the impact that a defect has on the software’s development or operation.

According to ISTQB, Defect severity can be defined as the degree of impact of the defect on the development or operation of a system’s component or the entire system.

It is the QA team’s responsibility to define the level of severity for each identified defect.

With efficient defect severity measures, the QA team can resolve critical defects and other severe issues in the system first, controlling the potential business impact on the end user.

Example:

A misspelled “Add to Cart” label is a non-critical defect, but the “Add to Cart” functionality not working on an e-commerce website is a critical defect.

Why is it important?

The importance of defining the levels of severity of the issue is:

  • Testing teams can determine the efficiency of the testing process.
  • The improper functioning of different aspects of the software can be easily checked.
  • Bug tracking improves when defect severity is used, which improves the quality of the software.
  • The QA team can identify and test the higher severity defects first.
  • Bug monitoring and management systems become more systematic and organized.
  • Defects can be allocated more efficiently based on the developer’s skills and experience and the defect’s level of severity.

Types of Defects based on Defect Severity

Based on Defect Severity, there are four levels of severities found in software:

(i) Critical

Represented by S1, this is the highest level of severity that can be found in software.

These defect types obstruct the software’s execution entirely and sometimes cause a complete shutdown of the system.

It is of utmost importance to remove these defects as they don’t have a workaround and disrupt the software’s further testing.

For example, a user being unable to log in and access an application even after entering the correct id and password is a critical defect.

(ii) Major

A significant defect occurs when the system is given a set of inputs but cannot provide the desired set of results. S2 represents it.

These defects might cause a system to crash but still leave a few functionalities operable, thereby leaving a tiny room to work around them for further testing.

For example, not being able to add more than one item in the cart on an e-commerce website is a significant defect but not critical as the user will still be able to shop for the one item.

(iii) Minor

A minor or moderate defect occurs when a component does not produce the expected results or does not meet its requirements but has a negligible impact on the overall system.

Represented by S3, minor defects affect only a small part of the application, causing the software to exhibit some unnatural behavior without affecting the system as a whole.

For example, after ordering an item, the item is visible under ordered items and can be tracked, but an “Item not ordered” prompt window is shown. This is a minor defect, as only the prompt is wrong.

(iv) Trivial

Trivial defects are low severity defects that only affect the software’s look and feel but do not cause any functional failure or malfunction in the system.

This S4 level of severity defect might not affect the functionality but is still a valid defect and must be fixed.

For example, a misalignment or spelling mistake on the “Terms/Conditions” page of the website is a trivial defect.

Caution before defining a level of severity

As the defect severity is an essential aspect of software testing, great caution should be exercised while defining them.

Each level of severity must be clearly defined, as ambiguity here could lead to disagreements between the development and QA teams.

2. Defect Priority

The importance or urgency of resolving a defect is determined by the priority of the defect or bug.

The project manager assigns each defect’s priority based on the user’s business needs and the severity of the defect.

The priority of a defect is subjective as it is determined after comparison with other defects.

Apart from the project manager, the priority of defects can also be defined by product owners, business analysts, and business stakeholders.

Types of Defects based on Defect Priority

There are four priority levels to classify defects:

(i) Immediate

The defects that affect the system and the business requirements severely need to be fixed immediately.

Represented by P1, these defects restrict the system from performing its core functions or make the system unstable, thereby blocking any further testing.

All critical severity defects have immediate priority unless the sequence is re-prioritized by business/stakeholders.

For example, a misspelled company name might not be a high or critical severity defect, but it is an immediate priority since it affects the business.

(ii) High

These P2 or high priority defects are the next in line to resolve once the immediate priority ones are resolved.

These defects do not meet their ‘exit criteria’ or make a component deviate from pre-defined performance measures; such measures can depend on coding or environmental factors.

The business or the customer is not directly affected by the defect, but the fix is still urgent.

For example, not being able to add products to a shopping cart belongs to the high priority category.

(iii) Medium

The defects that do not fall into the high and immediate category fall under medium priority. 

These defects must be fixed as soon as the ones above are fixed, as they can involve functionality-related issues.

Represented by P3, these defects can sometimes include trivial or cosmetic errors such as wrong error messages during failure.

These defects can also be fixed in the next releases.

For example, if an error message of “login id/password invalid” is shown even after a successful login, it is a medium priority error.

(iv) Low

These are mostly low severity defects that do not require immediate attention and can be resolved once other critical and essential defects are fixed.

Represented by P4, these defects could be typing errors, cosmetic errors, or suggestions related to enhancements for improving user experience.

Since they do not require immediate attention, they can be resolved in the future.

For example, the contact tab is not located on the home page but hidden inside another menu in the navigation bar.

Guidelines before selecting Priority and Severity

For smooth collaboration and communication between development and testing teams, some guidelines must be decided before choosing the level of severity and priority.

Some of these guidelines are:

  • It is essential to understand the concepts of severity and priority.
  • Before deciding the impact of the defect, understand the priority of the defect for the user.
  • List all the sequences related to the defect’s occurrence and operation.
  • First, isolate the defect and then determine the depth of its impact.
  • Based on complexity and verification time, determine the time required to fix each defect.
  • Determine the class of input that the defect supports.
  • Assign severity level based on the type of defect as it will also determine its priority.

Difference between Defect Severity and Priority

| Defect Severity | Defect Priority |
| --- | --- |
| It is the degree of impact of a defect on the system. | It is the order in which the developer will resolve the defect. |
| QA testers set the level of severity. | Product managers and other business stakeholders set the priority sequence. |
| The severity, once decided, does not change with time. | Defect priority can change based on comparison with other defects. |
| It is of four types: Critical, Major, Minor, Trivial. | It is of four types: Immediate, High, Medium, Low. |
| It indicates the seriousness of the defect. | It indicates the business importance of the defect and how soon it needs to be fixed. |
| Severity is based on the functionality that the defect affects. | Priority is based on the business value that is affected by the defect. |
| During SIT, defects are fixed based on severity first and then priority. | During UAT, defects are fixed based on priority. |

Combining Severity and Priority

Since priority and severity are two of the most crucial classification of defects, many organizations combine the severity and priority level to define different levels.

1. High Priority, High Severity

The critical or major business requirement defects fall under this category and need to be fixed immediately as no further testing can happen until they are fixed.

For example, after paying for products in your shopping cart on an e-commerce website, the item is still not ordered, but the payment from your account has been deducted. 

2. High Priority, Low Severity

Defects that do not affect the functional requirements but affect the business requirements or the user experience fall under this category.

For example, if the website’s fonts and alignment are not user-friendly, fewer customers will use the website, thereby affecting the business.

3. High Severity, Low Priority

These defects have a high severity in terms of functional requirements but do not affect the business requirements.

For example, an application that can only be accessed from newer versions of a particular browser has a high severity but low priority defect.

4. Low Severity, Low Priority

Spelling mistakes, font issues, or misalignment on later pages of the application that users will not frequently access belong to this category.

For example, spelling mistakes in the term and conditions page or privacy policy pages are low severity and low priority defects.
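
The four combinations above suggest a simple triage order: sort the backlog by business priority first, then by technical severity. As a sketch (the numeric rankings and the sample defects, drawn from the examples above, are illustrative):

```python
# Illustrative rankings: lower number = more urgent / more severe
PRIORITY = {"Immediate": 1, "High": 2, "Medium": 3, "Low": 4}
SEVERITY = {"Critical": 1, "Major": 2, "Minor": 3, "Trivial": 4}

defects = [
    ("Terms page typo", "Trivial", "Low"),
    ("Payment deducted, order not placed", "Critical", "Immediate"),
    ("Misspelled company name", "Trivial", "Immediate"),
    ("Works only on newer browsers", "Major", "Low"),
]

# Triage order: business priority first, then technical severity
triaged = sorted(defects, key=lambda d: (PRIORITY[d[2]], SEVERITY[d[1]]))
for title, sev, pri in triaged:
    print(f"{pri:>9} / {sev:<8} {title}")
```

With these rankings, the high-priority/high-severity payment defect comes first and the low/low typo comes last, matching the ordering described in this section.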

3. Defect Probability

The probability of a defect is the likelihood of the visibility of the defect by the user. 

Defect probability can also be called Defect Visibility, Bug Visibility or Bug Probability and is denoted by percentage (%).

The probability of a defect is determined concerning a particular feature or component but not the entire system.

Thus, a low probability defect might be present in a widely used feature, whereas a high probability defect might exist in a rarely used one.

Types of Defect Probability

(i) High

The high probability defects are those that can be easily encountered by the users. 

These defects are encountered by almost all users who access the feature.

(ii) Medium

While testing a particular feature, if a defect is encountered by more than 50% of its users, the defect is categorized as a medium probability defect.

(iii) Low 

Low probability defects are encountered by very few users and do not need to be resolved right away.

Defect Life Cycle

A defect life cycle or bug life cycle defines the set of states that a defect goes through during its entire life, from being found to being resolved, rejected, or deferred.

The bug life cycle can be understood as the growth cycle of a bug defined to make the coordination and communication between various teams easier.

The bug cycle varies depending on the organization, tools (JIRA, QC, etc.) and the type of project.

Defect States

There are various states in the defect life cycle or bug life cycle in software testing. They are as follows:

New: When the testing team comes across a mistake or error in the developed application, the software bug cycle initiates, and the bug is said to be in a “New” state.

Assigned: After the defect is found, the test lead or project manager approves the defect and assigns it to the development team by changing the defect’s status to “Assigned.”

Open: When the defect is kept under progress by the development team, it is assigned an “Open” state. The developers might start fixing the bug if it does not fall under these categories of the bug life cycle:

  • Rejected: If the developer does not consider the bug to be genuine or as a coding mistake, then the defect status is changed to “Rejected.” The defect can be rejected if it is a duplicate, non-reproducible, or not a defect at all.
  • Duplicate: If the defect is repeated or corresponds to another defect in the defect log with the same concept, then such defects are placed in a “Duplicate” state.
  • Deferred: If the defect is not of higher priority and can be postponed to be fixed in later releases, then the defect’s status in the bug cycle is changed to “Deferred.” Some of the cases in which a defect can be assigned deferred state are:
    • The defect is expected to be fixed in later versions.
    • The bug is minor and does not require immediate attention.
    • The customer might change the requirement.
    • The bug is unrelated to the current build of the software.  
  • Not a bug: If, on investigation, the reported issue does not actually affect the application, the defect is given a “Not a bug” status in the defect life cycle.

Fixed: After resolving the defect, the developing team passes on the defect to the testing team for retesting by assigning it “Fixed” status.

Pending Retest: When the testing team receives the “Fixed” state defect from the previous step of the defect life cycle, the bug is assigned “Pending retest” status.

Retest: Once the re-testing of the application has begun to check if the defect has been fixed or not, then the defect is assigned “Retest” status. After the re-testing of the defect has completed, the testing team can assign it either of the two states of the bug cycle:

Reopen: If the defect is not resolved even after re-testing, the testing team assigns “Reopen” status to the bug and sends it back to the development team.

Verified: If the defect has been entirely resolved, the defect is assigned “Verified” status implying that the defect has been resolved and verified by the QA.

Closed: Once the bug is “Fixed” by the developer and “Verified” by the tester, then the status of the defect is changed to “Closed.” The defect life cycle also comes to an end with the completion of this state.
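
The states above can be sketched as a minimal state machine. The transition table below is an illustrative subset of the flow described in this section; real trackers such as JIRA let teams configure their own workflows:

```python
# Illustrative subset of the transitions described above
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Reopen", "Verified"},
    "Reopen": {"Assigned"},
    "Verified": {"Closed"},
}

class Defect:
    def __init__(self, title: str):
        self.title = title
        self.state = "New"  # every defect starts its life cycle here

    def move_to(self, new_state: str) -> None:
        # Reject any transition the workflow does not allow
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

bug = Defect("Login fails with valid credentials")
for state in ("Assigned", "Open", "Fixed", "Pending Retest", "Retest", "Verified", "Closed"):
    bug.move_to(state)
print(bug.state)  # Closed
```

Encoding the workflow as data makes it easy to adapt the allowed transitions to an organization's own process.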

Guidelines for Defect Life Cycle

There are specific guidelines to be considered before defining the states of the defect life cycle. They are:

  • A team leader or project manager should ensure that the team members know their responsibility towards the defect and its status.
  • The status of the defect should be assigned and maintained.
  • Ensure that the entire defect life cycle is well-documented and understood by each member of your team.
  • Before changing the status of the defect, a plausible and detailed reason must be specified.

Defect Root Cause Analysis

Now that we have seen the defects and the defect life cycle classification, let’s check out the root cause analysis (RCA) of the defects to understand its root causes.

If this analysis is performed systematically, not only the current software but the entire organization will benefit.

Root Cause Analysis helps find defects early, thereby reducing the overall cost of fixing them in the long run. It can be done on design defects, testing defects, as well as product defects.

However, remember that Root Cause Analysis is not aimed at treating or resolving the defect but only at finding and diagnosing it.

By considering a case study of a refrigerator mechanic, we can understand Root cause Analysis better. A good mechanic will first find the root cause of our refrigerator’s problems and then employ appropriate methods of fixing it.

Root Cause Analysis can be considered a reverse engineering mechanism in which the chain of events goes backward, starting from the last possible action.

Process (5W1H Method)

With the help of the six questions given below, Root Cause Analysis can be performed:

What?

The first and essential step in the root cause analysis process is determining “WHAT” the problem is. 

It will be impossible to determine the root cause if we don’t know exactly what the problem is.

For example, your website users have complained that they cannot log in with the correct id and password.

When?

The next step is determining when a particular problem had occurred.

For example, users were unable to log in between 4:00 AM and 5:00 AM.

Where?

This step determines where the problem occurred, i.e., on which page of the website or the location on the software.

For example, the users could not log in on the login page, so the problem occurred on the login page.

Who?

This question identifies the person or team associated with the problem.

For example, the server administrator was responsible for server maintenance, which led to login issues with the users.

Why?

This step deals with identifying the reason behind the problem’s occurrence and the detailed analysis of its underlying root cause.

For example, the servers were down for maintenance, which interfered with the login of the users.

How?

This is the final step in the Root Cause Analysis process. It helps us ensure that the problem won’t occur in the future.

For example, ensuring that an email or SMS is sent to all the registered users whenever the system is under maintenance will help us prevent its future occurrence.

Step by Step Root Cause Analysis

It is essential to implement a structural and organized approach for root cause analysis.

The steps to be followed are:

1. Form a Root Cause Analysis team

The team should have an RCA manager who collects details related to the problem and initiates the RCA process.

Personnel from each team, i.e., Requirement, Testing, Quality, Design, Support & Maintenance, and Documentation, should be in the RCA team.

Each team will be responsible for tracing the defect and determining what went wrong in their phases with the incident report, problem evidence, etc.

2. Define

Once the incident report, problem evidence, and other documents have been collected, the RCA team will then study the problem and try to determine:

  • The sequence of events leading the user to the problem,
  • Systems involved in the problem,
  • The period of the problem,
  • Impact of the problem, and
  • The people involved in the problem.

3. Identification

After defining the problem, the RCA team can use the Fishbone or 5 Why Analysis to determine the root cause.

The RCA manager will then help set up ground rules for the smooth functioning of the brainstorming session.

4. Implementing Root Cause Corrective Action (RCCA)

With the delivery manager’s help, the delivery date and the versions requiring the fix will be defined, after which the corrective action will begin.

For example, when a support team provides the users with a fix for their problem, it will be temporary. RCCA implementation will provide a permanent fix so that the defect won’t occur in the future.

5. Implementing Root Cause Preventive Action (RCPA)

RCPA deals with preventing such similar issues in the future with an updated instruction manual, updated assessment checklist for the team, improved skill set, etc.

Advantages of Root Cause Analysis

RCA helps in:

  • Reducing rework on the same issues again and again.
  • Preventing recurring problems.
  • Reducing the cost of fixing problems.
  • Providing customers and stakeholders with software containing negligible defects.

Error vs Defect vs Failure

Testers new to the field are often confused about which term to use for an abnormality or unexpected output in a system or application.

They call these abnormalities errors, bugs, defects, or failures and use the terms interchangeably without clearly understanding what each means.

Let’s see how the errors, defects and failures differ from each other.

Terms

First, we will discuss a few of the terms that newbie testers often confuse.

Error: An error can be a mistake, misunderstanding, or misconception on the part of the developer. Developers can be programmers, software engineers, analysts, or testers. An error causes a change in the program’s functionality. For example, a variable name might be misused while writing a program, leading to an unintended loop or a syntax error.

Defect: It is the point where the application or software deviates from the customer’s requirements. It can be any trouble related to the external or internal behaviour and features of the system.

Bug: An error found by a tester is called a bug. An error that is not handled properly might get shipped to the customer, causing the program to work poorly and produce unexpected results. It might also crash the application.

Wrong: When the developed software is not implemented as per the requirements, then it is said to be wrong.

Extra: It is the unwanted or unspecified aspect of the software that was not specified in the user’s requirement. This attribute might or might not be required by the user and hence is considered a defect.

Fault: Errors can lead to faults. A fault is an incorrect step, process, or data definition in the program that causes it to misbehave, i.e., an anomaly that causes the program to perform in an unexpected, unanticipated, or unintended manner.

Missing: Opposite of extra, missing is a defect categorization when the software does not fulfil a user’s requirement. It occurs when one or more specifications from the user’s need are not implemented.

Failure: When any defect reaches the end customer, it is called a failure. It is defined as the inability of the software system or component to perform in the way it is supposed to, i.e. within specified requirements.

Introduction to error, defect and failure

The difference between an error, a defect, and a failure can be understood with the following example:

While creating attendance software, the developer forgets to add the half-day functionality. This is a mistake, or an error. It leads to a defect (also called a fault or bug) in the software, as this functionality is essential. Now, if this application is delivered to the customer, anyone in the organization who takes a half day will be marked absent. Thus, the system fails to do what it was supposed to do, and this causes a failure.
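A minimal code sketch of this attendance example (the function name and the eight-hour threshold are hypothetical) shows how the developer’s error becomes a defect in the code, which then surfaces as a failure for the user:

```python
# Sketch of the attendance example: the developer's mistake (error) leaves
# half-day handling out of the code (defect); when a real half-day is
# processed, the employee is wrongly marked absent (failure).
def mark_attendance(hours_worked: float) -> str:
    # Defect: the "half day" branch (e.g. 4 <= hours_worked < 8) was never
    # written, so anything under a full day falls through to "absent".
    if hours_worked >= 8:
        return "present"
    return "absent"

print(mark_attendance(8))  # "present" - works as expected
print(mark_attendance(4))  # "absent"  - the failure: a half day is lost
```

The error exists only in the developer’s head; the defect lives in the code; the failure is what the customer observes at runtime.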

Error

“To err is human”.

An error is a human mistake caused by a misunderstanding or misconception on the part of the developer.

A single error can cause a series of defects in the software.

It can be due to:

  • A logical mistake by the developer.
  • A design flaw by an architect.
  • Human habits like lethargy or procrastination.
  • Incorrect or insufficient information from customers.
  • Misinterpretation of requirements by a business analyst.
  • Miscommunication of requirements between the analyst and the developer.

Types of Error

  • Syntactic Error: Grammar or spelling errors that are clearly evident while testing the software GUI.
  • Error handling error: When a user interacts with a system, an error might occur. If that error is not handled correctly, the result confuses the user and is termed an error handling error.
  • Hardware error: When a hardware component used with the software is incompatible or has missing drivers, hardware errors occur.
  • User Interface error: When users interact with the system, they might face missing or incorrect functionality. These are termed user interface errors.
  • Testing error: A failure in detecting or reporting a bug during test execution is called a testing error.
  • Calculation error: Errors that occur due to incorrect formulas, mismatched data types, flawed logic, or similar causes are called calculation errors.
  • Flow Control error: When the program flows in an unexpected direction or passes control incorrectly, these are called flow control errors. They lead to infinite loops, run-time errors, etc.

Causes of errors

Some possible reasons why the errors occur in a software product are:

  • Human Error: Humans are the only intelligence we can rely on, and they are prone to errors. Being careful can avoid a few of the mistakes, but not all.
  • Miscommunication: Software is developed and tested by several people, which can lead to conflicts, improper coordination, and poor communication. Misinterpretation and misunderstanding lead to errors in the software that could otherwise have been avoided.
  • Intra-system and inter-system interfaces: The chances of errors in inter-system and intra-system interface establishment are very high.
    • Intra-system interface – The integration of different modules and features within a system or application leads to intra-system interface errors.
    • Inter-system interface – The compatibility between two different applications while working together can lead to inter-system interface errors.
  • Environment: Floods, fires, earthquakes, tsunamis, lightning and others affect the computer system and lead to errors.
  • Pressure: Working with limited or insufficient resources and unrealistic deadlines leaves developers less time to test the code themselves before passing it on to the testing team.
  • Inexperience: The allocation of tasks in a project must be according to the experience or skills of the team member. If this is not followed, it might lead to errors as the inexperienced or insufficiently skilled members won’t have proper knowledge of the task.
  • Complexity: Highly complex codes, designs, architecture or technology, can pave the way for critical bugs as a human mind can only take a certain level of complexity.
  • Unfamiliar technology: Developers and testers who do not stay updated with recent technological developments will struggle on projects based on technologies outside their knowledge domain, and thereby cause errors.

Error vs Defect vs Failure

Error
  • It is a mistake: a human action that generates an unexpected result.
  • An error causes a defect, but not all errors lead to defects. For example, a simple grammatical mistake isn’t considered a defect.

Defect
  • The product or software has a specific deficiency due to which it cannot meet the required specifications.
  • A defect leads to failure, but not all defects lead to failures. For example, a function might require certain preconditions that are not strictly necessary, so no failure occurs.

Failure
  • The component or system fails to perform the required activities within the predefined criteria.
  • Not all failures are critical. For example, the client might merely need to upgrade an outdated version of the software.

Final Words

The appearance or occurrence of one discrepancy leads to other problems, and thus it is all interrelated. 

Errors are a result of human mistakes which lead to deviation from requirements and result in defects. 

A defect may or may not be critical, and it doesn’t necessarily mean there is an error in the code.

It can be an unimplemented function defined in the requirements as well.

So, check carefully before jumping to the conclusion that the code is erroneous.

Defect Report

A defect report or bug report is a document that identifies and describes a defect detected by a tester. Defect reports can also be used to identify and avoid similar defects in the future.

Define Defect in Software Testing

A bug or a defect is the result or consequence of a coding fault that does not meet actual requirements. 

When testers execute test cases, any variation or deviation of the application from the end user’s requirements that causes incorrect or unexpected results is a defect.

Software testing defects are commonly referred to as issues, bugs, problems, or incidents.

Define Defect Report

Documentation that specifies the occurrence, status, and nature of the defect transparently so that developers can easily replicate and fix it is called a defect report.

The document contains details about the defect, such as its description, name of the tester who found it, date when the defect was found, the developer who fixed it, and more. 

Template for Defect Report

The template for defect reports may vary from one tool to another. However, a general defect report template will be as follows:

ID – Automated or manual unique identifier for the defect.
Project – Name of the project.
Product – Name of the product.
Release Version – The product’s release version.
Module – The module where the defect was detected.
Detected Build Version – The product’s build version where the defect was detected.
Summary – A clear and concise summary of the found defect.
Description – A simple and comprehensive description of the defect, avoiding repetition and complicated words.
Steps to Replicate – Step-by-step instructions for reproducing the defect.
Actual Results – The actual results obtained after following the steps.
Expected Results – The results expected after following the steps.
Attachments – Additional information like screenshots and logs.
Remarks – Additional comments related to the defect.
Defect Probability – The probability of the defect occurring.
Defect Severity – The severity of the defect.
Reported By – The name of the person who reported the defect.
Assigned To – The name of the person to whom the defect is assigned for analysis or fixing.
Status – The defect’s status.
Fixed Build Version – The product’s build version after the defect was fixed.
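As a sketch, the template above maps naturally onto a structured record. The following Python dataclass is illustrative only: the field names and the sample “Task Manager” values are assumptions, not the schema of any real tracker.

```python
from dataclasses import dataclass
from typing import List

# A minimal defect-report record mirroring the template above.
# Field names are illustrative; real trackers define their own schemas.
@dataclass
class DefectReport:
    defect_id: str
    project: str
    product: str
    release_version: str
    module: str
    detected_build_version: str
    summary: str
    description: str
    steps_to_replicate: List[str]
    actual_results: str
    expected_results: str
    severity: str = "Medium"
    status: str = "New"
    reported_by: str = ""
    assigned_to: str = ""

# Hypothetical example filled in for the login defect discussed earlier.
report = DefectReport(
    defect_id="TM-042",
    project="Task Manager",
    product="Task Manager Web",
    release_version="2.1",
    module="Login",
    detected_build_version="2.1.7",
    summary="Login fails with valid credentials during maintenance window",
    description="Users with the correct ID and password cannot log in between 4 and 5 AM.",
    steps_to_replicate=[
        "Go to Home Page",
        "Click login button",
        "Enter user ID and password",
        "Click Login",
    ],
    actual_results="Error page is shown; user stays logged out",
    expected_results="User is logged in and redirected to the dashboard",
)
print(report.status)  # "New"
```

Keeping the steps as an explicit list enforces the “Be Specific” guideline discussed later: each action is a separate, reproducible step.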

Defect Management Process

A systematic process to identify and fix bugs is called a defect management process.

There are the following six stages in a defect management cycle:

  • Discovery – of defect
  • Categorization – of defect
  • Fixing/Resolution – of the defect by developers
  • Verification – by testers
  • Closure – of defect
  • Reporting – of defect status to management

1. Discovery

This stage involves finding out as many defects as possible by the testers before the end-user discovers them.

Once the defect is acknowledged and accepted by the developers, it can be called discovered.

2. Categorization

Categorization refers to dividing various defects according to their priority so that developers can fix them accordingly.

There are four categories:

  1. Critical: These defects need an immediate fix as they can cause significant damage to the product if left unchecked.
  2. High: These defects impact the main features of a product.
  3. Medium: These defects cause minimal deviation from the software requirement.
  4. Low: These defects have a minor effect on the operation of the product.
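The four categories above imply an ordering for the fixing queue: Critical defects first, Low last. A minimal sketch in Python (the defect IDs and the numeric ordering values are illustrative):

```python
# Priority-ordered triage based on the four categories above.
# Lower number = fixed sooner; the values are a convention, not a standard.
CATEGORY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

defects = [
    {"id": "TM-12", "category": "Medium"},
    {"id": "TM-07", "category": "Critical"},
    {"id": "TM-21", "category": "High"},
    {"id": "TM-30", "category": "Low"},
]

# Developers pick defects in category order: Critical first, Low last.
triage_queue = sorted(defects, key=lambda d: CATEGORY_ORDER[d["category"]])
print([d["id"] for d in triage_queue])  # ['TM-07', 'TM-21', 'TM-12', 'TM-30']
```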

3. Resolution

In software testing, fixing of defects or defect resolution is generally a four-step process that helps to fix and track defects easily. 

Starting from assigning defects to developers, followed by defect fixing schedule, after which defects are fixed, the process ends when a report of resolution is sent to the test manager.

  1. Assignment: The defect to fix is assigned to a developer, and the status is changed to Responding.
  2. Schedule fixing: According to the defect priority, the developers create a schedule to fix the defects.
  3. Fixing the defect: The test manager tracks the development team’s defect fixing against the schedule.
  4. Resolution: The defect report from developers is sent to the test manager after the defects are fixed.

4. Verification

The testing team verifies whether the defects fixed by the development team have been resolved or not.

5. Closure

After successful resolution and verification of the defect, the defect status is changed to closed. If this does not happen, then testers recheck the defect.

6. Reporting

The management board should understand the defect management process and have a right to know defect status.

The test manager prepares a defect report and sends it for feedback to the management team. The management team checks the defect management process and sends its feedback or support as needed. 

The reporting of defects leads to better communication and tracking of the defects.

Need for Defect Management Process

The requirement of a concrete defect management process can be better understood with the help of the following example:

Suppose a “Task Manager” application is being tested. 

While testing, some bugs, say 80, were found by the testing team and were reported to the test manager. 

The test manager informs the development team about the defects, which responds after a week with 70 defects fixed. 

The test manager sends this information to the testing team. 

The testing team responds a week later, confirming that the 70 defects were fixed but reporting 20 new ones.

Now, this whole process becomes complicated when it relies only on verbal communication, and tracking and fixing defects will be difficult. 

Thus, a defect management process is needed.

Effective Reporting of Defects

The defect reports must be generated effectively. 

This helps save the time and unnecessary effort spent trying to understand and then reproduce the defect.

The following measures can be followed to ensure useful defect reports:

1. Reproduce:

Once a defect has been uncovered, replicate it once more to be sure. If replication is not possible, recall the exact test conditions that led to the defect and try again. If the defect still does not reappear, send the detailed defect report and all the trial results for further investigation.

2. Be Specific:

Every action in the defect reports should be specific. 

For example, instead of saying “Log in,” there should be (1) Go to Home Page, (2) Click login button, (3) Enter user ID and password, and (4) Click Login.

If, for instance, there are multiple paths, mention the exact path that was followed and led to the defect.

Refrain from using vague pronouns like “it” and specify what “it” stands for.

3. Objective:

Avoid emotions and subjective statements like “Lousy application”; instead, state results objectively, such as “Yes”/“No” or “Accepted”/“Rejected.”

4. Review:

After writing the report, review it and remove any typos before submitting it.

5. Detail-oriented:

Provide information in a detailed manner so that developers never lack the information they need.

Defect Metrics

Defect metrics are used to measure the quality of the test execution. There are two parameters:

Defect Rejection Ratio (DRR) = (Number of Defects rejected / Total no. of defects raised) * 100

Defect Leakage Ratio (DLR) = (Number of defects missed / Total no. of defects of software) * 100

If the DRR and DLR values are small, the quality of test execution will be better. The test manager defines the accepted ranges of these values.

These ratios can be reduced by:

  • Improving the testing skills of team members.
  • Spending more time on test execution and reviewing the results.

For example, suppose the “Task Manager” project has 80 defects, but the testing team could identify only 60. Out of these, 10 defects were rejected by the development team. Thus, 20 defects were missed.

So, the DRR ratio will be: 

(10/60) * 100 ≈ 16.67%

And the DLR ratio will be:

(20/80) * 100 = 25%
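These two calculations can be checked with a few lines of Python (the variable names are ours; the figures are the “Task Manager” numbers from the example above):

```python
# Reproducing the "Task Manager" numbers: 80 total defects in the software,
# 60 raised by the testing team, 10 of those rejected, 20 missed entirely.
defects_raised = 60
defects_rejected = 10
total_defects_in_software = 80
defects_missed = total_defects_in_software - defects_raised  # 20

drr = defects_rejected / defects_raised * 100           # Defect Rejection Ratio
dlr = defects_missed / total_defects_in_software * 100  # Defect Leakage Ratio

print(f"DRR = {drr:.2f}%")  # DRR = 16.67%
print(f"DLR = {dlr:.2f}%")  # DLR = 25.00%
```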

Defect FAQ

What is a defect example?

An example of a software defect can be a grammatical error in UI or coding errors in the program.

What is another word for defect?

In software testing, a defect can also be called a bug, an error, or a fault, among other terms.

What is defect fix?

A defect is said to be fixed when its status is marked as closed. The defect’s status is changed to closed when it is either adequately verified or is kept for fixing in subsequent releases.

What is defect life cycle?

Defect Life Cycle or Bug Life Cycle is the journey of a defect from identification to closure. In its life cycle, a defect goes from identified to assigned, active, tested, verified, and closed. The defect can also enter a rejected, deferred, or reopened state. These states vary from organization to organization, depending on the tool used for tracking.
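As an illustration, the life cycle can be modeled as a small state machine. The states and allowed transitions below are one common convention, not a standard; real tools define their own:

```python
# Illustrative defect life-cycle transitions; actual states vary by
# organization and defect tracking tool.
TRANSITIONS = {
    "New":      {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Active"},
    "Active":   {"Testing"},
    "Testing":  {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Closed":   {"Reopened"},
    "Rejected": {"Reopened"},
    "Deferred": {"Assigned"},
    "Reopened": {"Assigned"},
}

def move(state: str, new_state: str) -> str:
    """Advance a defect to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"Illegal transition: {state} -> {new_state}")
    return new_state

# Walk one happy path from discovery to closure.
state = "New"
for nxt in ("Assigned", "Active", "Testing", "Verified", "Closed"):
    state = move(state, nxt)
print(state)  # Closed
```

Encoding the transitions explicitly makes it impossible to, say, close a defect that was never verified, which is exactly what tracking tools enforce.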

How do you control defects?

The defects can be reduced by:
1. Effectively executing defect analysis
2. Thoroughly analyzing software requirements
3. Using error monitoring software
4. Aggressively running regression tests
5. Frequently refactoring code
