Thursday, July 20, 2017

Software Testing Process For Both Manual And Automated Testing

Mike Cohn and others divide testing into four types: Automated and Manual, Automated, Manual, and Tool-Based. A footnote probably belongs here, but this breakdown comes as much from experience and a variety of media as it does from his book (I am not sure which one, sorry).


Software Testing Process


  • Automated and Manual Testing involves prototyping, simulations, functional testing, and user story testing. I personally extend this to service testing (interface testing, touch-point testing, etc.). For all of these we have information driving both the acceptance criteria (did it pass?) and the creation of the test (what are we trying to do?). These lead to trackable bugs and, potentially, issues. Prototyping makes it easy to "fail fast."
  • Automated Testing includes a Continuous Integration server, unit tests, and component testing (component testing also appears under Tool-driven testing if the systems within the System are configured to allow it). Unit tests look at and exercise actual code, whether or not you pair in an XP style; they are specifically not manual testing where you enter orders or send an email to multiple recipients (a minimal unit-test sketch follows this list). Your CPU and compiler will tell you a lot. These automatic processes should run before anything goes live, or even into a sandbox environment. I have seen development shops where, if one of these tests failed, a flashing red light went off and, in Toyota's "stop the line" fashion ("Everyone stop developing and look at this!"), people fixed what was broken before taking one more step. These tests also lead to trackable bugs and, potentially, issues, but they are more likely to be pounced on immediately because you do not want something icky in the soup, so to speak.
  • Manual testing is what we have traditionally been calling unit testing: poking at the GUI, verifying that data is validated, checking that things work from a UX/UI perspective, UAT (User Acceptance Testing), and so on. UAT is the most important, in my opinion: the vendor shows the Client the functionality and it either passes or fails. You can have formal acceptance criteria (unscrupulous vendors insist on a defined, exact Scope, along with the leverage inherent in highly detailed acceptance criteria). When I represent a Client who interfaces with vendors, I try to steer clear of hard acceptance criteria and change orders and instead lean towards the reality that things change even if they are correct according to the original spec. Clients have had bugs marked as fixed before anyone at the Client checked, or was able to check. This creates haze, bugs get lost among larger issues, and the excuse seems valid, a natural extension of the chaos. Vendors have a professional responsibility, and this is not just my opinion: a company I worked for lost a court case because it did not perform what the court considered part of its responsibility as a solutions provider, namely proper management of issues. That is why I stopped working for development shops. Being honest and forthright costs consulting teams money, because delivering something measurable that cannot stay the same yet must pretend to hold its shape leads to change orders and cries of "OOS!" (Out of Scope), while defining the system in terms of what it has to do at a high level leaves room for change in ways that do not impact behavior. Maintenance contracts bring in a lot of revenue.
  • Finally, there is Tool-driven testing: load testing, security testing, and hardware testing. This is the stuff you point a tool at and let it crank away until it comes back with analysis (a small load-testing sketch follows this list). Most projects have a front end connected to a back end (not all, I suppose), which is why separating the sites from SAP has always been something I have not understood, and why Tool-driven testing is extremely important. Also in this group would be analytics: SEO, SEM, e-commerce stats, etc.
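
To make the unit-test point concrete, here is a minimal sketch of the kind of check a Continuous Integration server would run on every commit. The calculate_total function and its behavior are hypothetical stand-ins, not taken from any particular project:

    import unittest

    def calculate_total(quantity, unit_price):
        # Hypothetical stand-in for real order logic.
        if quantity < 0:
            raise ValueError("quantity cannot be negative")
        return quantity * unit_price

    class OrderTotalTest(unittest.TestCase):
        def test_total_is_quantity_times_price(self):
            self.assertEqual(calculate_total(3, 2.5), 7.5)

        def test_negative_quantity_is_rejected(self):
            with self.assertRaises(ValueError):
                calculate_total(-1, 2.5)

    if __name__ == "__main__":
        unittest.main()

If a check like either of these fails, the "red light" goes on and the build does not move forward.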
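And for the Tool-driven side, a bare-bones load-testing sketch. The URL and thresholds are made up, and real tools do far more; this only shows the shape of "point a tool at it and let it crank":

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "https://example.com/health"   # hypothetical endpoint
    CONCURRENT_USERS = 20
    REQUESTS_PER_USER = 10

    def hit_endpoint(_):
        # Issue one request; return its latency in seconds, or None on failure.
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
                response.read()
            return time.perf_counter() - start
        except Exception:
            return None

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(hit_endpoint, range(CONCURRENT_USERS * REQUESTS_PER_USER)))

    successes = [latency for latency in results if latency is not None]
    print(f"{len(successes)}/{len(results)} requests succeeded")
    if successes:
        print(f"average latency {sum(successes) / len(successes):.3f}s, worst {max(successes):.3f}s")
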
So I would first recommend enforcing a real, proper, full-lifecycle SDLC (Agile if the project is new; if it is not, something as close as possible to what already exists). In doing that, most of the testing problems will fall away, because Test-Driven Development will catch the silly bugs, and if stakeholders change their minds, they will get to weigh the cost of doing so before altering something that works.

I would recommend an iterative process (every 2 weeks, or maybe 3) where developers test before they say their code is ready, and where the tasks and bugs that make up the body of work are both large and small but always high in value. That means, in part, that developers have to know what they are building against, and that UAT feedback comes back to them through a Product Owner (the single person who approves the list of features and bugs going into the next release and has final Product sign-off).

I would also mandate using a versioning tool like Subversion properly, where you do not create a fork until there is a release, or until you spawn a separate effort that will not be merged again down the line. Without proper version control you can track bugs and issues all day and it will be an exercise in futility, because if the software is not versioned correctly there is no way to say which code is shared and which is not. Some like to track the version of the software in which they found the bug. I find that annoying, because we are moving forward, always.
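For illustration, a sketch of that branching policy in Subversion terms. The repository URL and version number are hypothetical, but the commands are standard svn:

    # Cut a branch only when there is an actual release (URLs are made up):
    svn copy https://svn.example.com/repo/trunk \
             https://svn.example.com/repo/branches/release-1.4 \
             -m "Create the 1.4 release branch"

    # Later, from a trunk working copy, merge the branch's fixes back
    # so the two lines of code do not drift apart for good:
    svn merge https://svn.example.com/repo/branches/release-1.4
    svn commit -m "Merge release-1.4 fixes back into trunk"
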

Software Testing and Bugs


Bugs: there are a billion tracking tools out there. I like any of the automated tools besides Trac (it is meant for developers, when you look at who winds up using it). The main things we want to capture as a team, where vendors have an appropriate level of visibility into the open issues and the ability to assign items (again, keeping the power with the Client instead of inside the black box of "we understand and you will see it next build"), are present in all of them, by and large. Again, ask a friend. For bugs I think small teams can use Excel, but I like the workflow, quick views, and enforceable processes that many bug-tracking systems provide.

Identity field: it may not be visible to the user of an automated system, but if possible I like to show it for quick reference.

What caused the bug? Evidence, like narratives and screenshots, is important and maybe even critical. If a bug cannot be reproduced, it should not be presented as a bug, unless it cannot be reproduced on another machine but can be reproduced on one particular machine or set of machines. This is also the place for browser type, version, OS, etc. (if needed).

What can we call the bug? A summary of what is wrong.

Who owns it? Whether the next step is to make the tracker entry more informative, to gather more information, to approve it in an approval queue, or to fix it, this "owner" may be almost any type of team member. The Product Owner gives, or should give, the final approval that the bug is no longer active.
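
Pulled together, the fields above amount to a simple record. Here is a sketch of it as a Python data structure; the field names and the example entry are illustrative, not tied to any particular tracker:

    from dataclasses import dataclass

    @dataclass
    class BugReport:
        bug_id: int              # identity field, handy for quick reference
        summary: str             # "what can we call the bug?"
        steps_to_reproduce: str  # narrative evidence; screenshots attached separately
        environment: str         # browser type and version, OS, specific machine(s)
        owner: str               # who acts next: clarify, gather info, approve, or fix
        status: str = "open"     # only the Product Owner signs off on closing it

    # Example entry (the details are made up):
    bug = BugReport(
        bug_id=101,
        summary="Order total ignores discount code",
        steps_to_reproduce="Add any item, apply code SAVE10, total does not change",
        environment="Chrome 59 on Windows 10",
        owner="vendor-dev-team",
    )
    print(bug)
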
