Software Testing Principles

I am writing in response to the blog post at https://www.guru99.com/software-testing-seven-principles.html titled “7 Software Testing Principles: Learn with Examples”.

This blog post highlights some useful general guidelines for software testing. Writing effective test cases requires a logical approach to deciding what to test for and how to test it, and this guide describes a set of principles well suited to capturing that logic.

The first principle is that exhaustive testing is not possible. I am not sure I believe this. In general it is useful to assume there is no perfect test, but for a simple enough application, where the number of possible interactions is enumerable, exhaustive testing should be achievable, much like an exhaustive proof in which every possible path is covered and verified. Perhaps I am overlooking the implausible case where the tests are correct but the machine running them is corrupted in such a way that certain tests silently do not run. This connects to a related point the post makes about risk assessment.
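As a toy illustration of what I mean (my own hypothetical example, not something from the blog post), a function with a small, enumerable input domain really can be tested exhaustively, in the spirit of a proof by cases:

```python
import itertools

def clamp(value: int, low: int, high: int) -> int:
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(high, value))

# Enumerate every input combination over a small domain, so every
# possible path through the function is covered and verified.
domain = range(-8, 9)
for value, low, high in itertools.product(domain, repeat=3):
    if low > high:
        continue  # skip ill-formed ranges
    result = clamp(value, low, high)
    assert low <= result <= high
    if low <= value <= high:
        assert result == value
```

For anything larger than a toy domain this strategy stops scaling, which is where the risk assessment the post describes takes over.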

The second and third principles make similar points. Running the same tests over and over will eventually fail to catch certain issues. If the same testing methods are always applied in exactly the same way, then eventually some scenario will arise that those methods are not suited for, and something will be missed. This leads into the later principles: the absence of a failure is not proof of success, and context matters. Developing tests suited to the particular application is necessary to keep passing tests correlated with the program functioning correctly, and the fact that every test passed does not mean the program will work perfectly.

This set of software testing principles can be summarized in a few basic points. Develop test cases that are well suited specifically for the application that is being tested, consider the risk of certain operations causing a failure, and do not assume that everything works perfectly just because every test case passed.


Software Engineer Qualification

I am writing in response to the blog post at https://www.shiftedup.com/2015/05/07/five-programming-problems-every-software-engineer-should-be-able-to-solve-in-less-than-1-hour titled “Five programming problems every Software Engineer should be able to solve in less than 1 hour”.

This blog post shares the author’s experience with applicants for software engineering positions who claim loosely related skills without having any real ability to understand or perform the job. The author expresses his frustration and lists five programming tasks meant to disqualify any supposed “software developer” who cannot complete them in under an hour. I attempted them myself, and they only took five minutes.

I am not sure what motivation people have to apply for a job that they are in no way capable of performing, but the author of this blog post seems to be fed up with how common it is. Supposedly, though, people who do not know what programming is are attempting to become software engineers.

I think that the list of five programming problems with a one-hour time constraint is a generous filter for sorting out people who have never written a program in any language. Passing certainly would not be enough to qualify someone for the position of software engineer, but that is not what the problems are meant to establish; they are only meant to reject those who fail. Somebody who claims to be a “developer” and cannot accomplish these simple tasks should revisit their resume.

The problems themselves are very basic: find the sum of some numbers using loops or recursion, combine the elements of two arrays, calculate Fibonacci numbers, and two more peculiar but still simple exercises in basic problem solving. It should be evident in much less than an hour whether a person is capable of solving them, and any experienced software engineer should only need ten minutes.
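For a sense of the difficulty level, here is a rough sketch of the summing problem as the post describes it (my own phrasing and function names, not the author’s exact problem statement):

```python
def sum_with_for(numbers):
    """Sum a list using a for-loop."""
    total = 0
    for n in numbers:
        total += n
    return total

def sum_with_while(numbers):
    """Sum a list using a while-loop."""
    total, i = 0, 0
    while i < len(numbers):
        total += numbers[i]
        i += 1
    return total

def sum_with_recursion(numbers):
    """Sum a list using recursion."""
    if not numbers:
        return 0
    return numbers[0] + sum_with_recursion(numbers[1:])

assert sum_with_for([1, 2, 3]) == sum_with_while([1, 2, 3]) == sum_with_recursion([1, 2, 3]) == 6
```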

The blog post acknowledges some feedback about the last two problems, which are a bit less conventional than the others, but I think the ability to solve unconventional problems is important, and anyone who writes code in something besides a markup language or an object notation could solve them.

Procedural and Object Oriented Programming

I am writing in response to the blog post at https://www.codementor.io/learn-programming/comparing-programming-paradigms-procedural-programming-vs-object-oriented-programming titled “Comparing Programming Paradigms: Procedural Programming vs Object-Oriented Programming”.

Object oriented programming seems to be the focus of nearly everything taught in a computer science course once the basics of syntax and control structures, the basis of procedural programming, are covered. The shift into object oriented programming seems mostly intended to establish proper design principles such as encapsulation and the reduction of redundancy, but these are not features exclusive to the object oriented paradigm; it is still entirely feasible to write procedural code that is “good” code.

The blog post does not directly define procedural programming, but it alludes to straightforward code that makes use of variables, scope, functions, and control loops. Then comes the brief anecdote of thousand-line programs that become difficult to maintain, with the object oriented paradigm presented as the solution. Object oriented design definitely helps a large program scale by bringing better organizational practices to the code structure, but the principles of encapsulation and modularity can be applied directly to a poorly maintained program without changing paradigms. This is not to say that object oriented programming is bad or unnecessary; the point is that procedural programming is not bad and does not need object oriented programming to “fix” it. Procedural code happens to be the most common poorly written code because it is the style most used by beginners, who learn better coding practices once introduced to object oriented programming.
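To sketch that point concretely (a hypothetical Python example of my own, not from the blog post), procedural code can be modular and well encapsulated without a single class:

```python
# Small, single-purpose functions with no shared mutable state:
# procedural code, but still modular and easy to maintain.

def parse_record(line: str) -> dict:
    """Parse one 'name,score' line into a record."""
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

def average_score(records: list) -> float:
    """Compute the average score across all records."""
    return sum(r["score"] for r in records) / len(records)

def report(lines: list) -> str:
    """Compose the small functions into one pipeline."""
    records = [parse_record(line) for line in lines]
    return f"average score: {average_score(records):.1f}"

print(report(["alice,90", "bob,80"]))  # average score: 85.0
```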

The blog post also describes some faults of object oriented programming, adding that it is not a good idea to avoid procedural programming entirely for the sake of exclusively using object oriented programming. Extending or modifying a class can be difficult in languages centered on object oriented programming, where overriding a method or re-implementing a class may have adverse effects on subclasses. The post ultimately concludes that multi-paradigm programming is a good choice, combining the benefits of procedural and object oriented programming.

Positive and Negative Testing

I am writing in response to the blog post at https://www.guru99.com/positive-vs-negative-testing.html titled “Positive Vs Negative testing”.

Positive and negative testing have some features in common with the details of boundary value testing that we covered in class. Robust boundary value testing, in particular, tests values not only on the boundaries but also inside and outside of the boundaries. Testing input values that are inside the boundaries is called positive testing, and testing input values that are outside of the boundaries is called negative testing.

The blog post confirms as much. Positive testing means providing valid inputs and checking that the application behaves as expected. Negative testing means providing invalid inputs and checking that the application does not do anything it is not supposed to do. Boundary value analysis and equivalence partitioning are listed as techniques for positive and negative testing. Testing input data chosen within the boundary qualifies as positive testing, and testing input data chosen outside the boundary is negative testing. In equivalence class partitioning, the partitions are likewise valid or invalid depending on whether they fall inside the boundary: testing valid partitions is positive testing, and testing invalid partitions is negative testing.
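A minimal sketch (my own hypothetical validator, not an example from the blog post) shows how the same boundary gives rise to both kinds of tests:

```python
def accepts_age(age: int) -> bool:
    """Hypothetical validator: valid ages are the inclusive range [18, 65]."""
    return 18 <= age <= 65

# Positive testing: valid inputs, on and inside the boundaries,
# expected to be accepted.
for age in (18, 40, 65):
    assert accepts_age(age)

# Negative testing: invalid inputs, just outside the boundaries,
# expected to be rejected.
for age in (17, 66):
    assert not accepts_age(age)
```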

“Positive testing” and “negative testing” are separate types of testing, but it does not make sense to use only one of them, since neither alone provides a complete analysis of the program’s behavior. Positive testing yields no conclusions about how the program behaves given invalid inputs, and negative testing does not verify that the program behaves as it should given valid inputs. Boundary value testing and equivalence class testing are specific methods that use both, and testing inputs this way falls under black box testing, which is concerned not with whether an input is categorized as valid but with whether the program behaves correctly. Doing both positive and negative testing provides enough information about the program’s behavior to be confident that it will behave correctly.

Grey Box Testing

I am writing in response to the blog post at https://www.guru99.com/grey-box-testing.html titled “Grey Box Testing: Process, Techniques, Strategy, Challenges”.

Grey box testing, as the name suggests, is a combination of black box testing and white box testing. In white box testing the internal structure of the code is known; in black box testing it is unknown; in grey box testing it is partially known. Grey box testing combines the benefits of both, incorporating the input of developers while testing from the perspective of a user.

An example of grey box testing is given in which a tester testing a website finds something wrong with the HTML code and is able to inspect and change the code themselves to continue testing. A few other types of testing are listed as falling under grey box testing: matrix testing, regression testing, orthogonal array testing, and pattern testing. These methods follow the strategy of black box testing while integrating some white box practices to better focus what the test cases should look for.

A sequence of steps is provided for how to perform grey box testing. The first two steps, identifying inputs and outputs, do not seem tied to black box or white box testing alone. In black box testing there would be no need to explicitly identify the inputs and outputs; they would already be given in the test specification. Creating that specification does require access to the code, which relates to white box testing, so grey box testing in this sense seems like a real-time approach where there is not necessarily a pre-developed specification and learning about the environment is part of the testing process.

The challenges listed for grey box testing include a test executing with an incorrect result and the thing being tested encountering a failure. These do not really sound like challenges; they sound like examples of what testing is for.

Object Oriented Programming

I am writing in response to the blog post at https://blog.codinghorror.com/your-code-oop-or-poo/ titled “Your Code: OOP or POO?”

Most of the code I have ever written has been grounded in the imperative programming paradigm. I began pulling concepts from object oriented programming in the last few years to help organize large projects and keep their structure more adaptable. It is a useful way to develop a large framework, but relying on objects for absolutely everything seems likely to cause more problems than it solves.

This blog post refers to “POO” as “Programming fOr Others”, as opposed to plain Object Oriented Programming (OOP). One of the purposes of object oriented programming, and one of the reasons it is such a dogmatically adhered-to convention, is that it makes it much easier for people who did not write the code to understand it and work with it themselves. The blog post argues that object oriented programming can be overused, and that it is not the only way to write code that is readable and easily understood by others.

One excerpt describes a type of programmer who would write only five or ten lines of code preceded by twenty lines of comments; object oriented programming essentially allows the two to be combined. Instead of concise code accompanied by an explanation, there is a large file with descriptive language embedded into the structure of the code itself.
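A contrived contrast of my own (not an example from the blog post) makes that trade-off concrete:

```python
# Concise code plus an explanatory comment:
def total(price, rate):
    # Apply a fractional discount rate (e.g. 0.1 for 10% off).
    return price * (1 - rate)

# The object oriented version folds the explanation into the
# structure and naming of the code itself.
class DiscountedPrice:
    def __init__(self, original_price: float, discount_rate: float):
        self.original_price = original_price
        self.discount_rate = discount_rate

    def amount_after_discount(self) -> float:
        return self.original_price * (1 - self.discount_rate)

assert total(100, 0.1) == DiscountedPrice(100, 0.1).amount_after_discount()
```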

Another excerpt mentions the use of object oriented programming for trivial tasks. It does not make sense to go through the effort of supporting scalability and maintainability for something that can be started, completed, and discarded so easily without the extra work. A simple script in an imperative language would be far more practical for basic tasks.

Part of the argument made is that a programmer should focus on the principles of object oriented programming, rather than the name. Encapsulation, simplicity, code re-use and maintainability are the main ideas, not just objects.

Premature Optimization

I am writing in response to the blog post at https://9clouds.com/blog/premature-optimization-the-root-of-all-evil/ titled “Premature Optimization: The Root of All Evil”.

Premature optimization means focusing on making the code run as fast as possible before anything actually works yet. It is time consuming and, by definition, premature, so it is not a good thing to do. The blog post quotes Donald Knuth, who said, “Premature optimization is the root of all evil.” For sizable projects, premature optimization is practically procrastination: making sure the program works well in theory before making sure it works at all. The blog post seems to address premature optimization at the scale of businesses, whereas I am only considering the effect on individual programs and projects, but the effects are the same. It makes more sense to finish something that works and then improve it than to improve it first and then finish it.
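As a small sketch of my own (not from the blog post) of the order of work that makes sense: write the obvious version first, and optimize only once it works and measurement shows it matters.

```python
# First version: obvious and correct, written before any optimization.
def count_common(a: list, b: list) -> int:
    """Count how many elements of a also appear in b."""
    return sum(1 for x in a if x in b)

# Optimized later, only once the simple version works and profiling
# shows the O(n*m) membership test actually matters: a set lookup
# brings it down to roughly O(n + m).
def count_common_fast(a: list, b: list) -> int:
    b_set = set(b)
    return sum(1 for x in a if x in b_set)

assert count_common([1, 2, 3], [2, 3, 4]) == count_common_fast([1, 2, 3], [2, 3, 4]) == 2
```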

I have done a lot of premature optimization on small projects, and it certainly does take up a lot of time. I tend to do it just because it is interesting to come up with an implementation that performs optimally; it is a secondary challenge that puts off the original task. If it becomes tedious or starts to seem wasteful, I just write something that works and move on to the next part, and if the performance ever matters (it never does), I can look into making things faster again. Actually producing something in a professional sense and getting caught up in small-scale performance details before the rest of the product is finished definitely seems like a waste of time. Premature optimization requires predicting how things will work before they are made, and devising a plan to make things fit together smoothly before those things even exist. It makes much more sense to build the things first and then make them work together better afterwards.