Hi. In this video,
we'll talk a little bit about Testing Strategies.
Incremental Testing is our first look at Regression Testing.
You start with say,
two modules A and B,
and three test cases numbered one, two, three.
When you're done testing A and B,
you don't get rid of the tests, you keep them.
When you then add Module C,
you also add the test case,
or cases used to test just Module C in isolation, the unit tests.
You add those tests to the tests for Modules A and B, then you run them all.
This way, you can determine whether something you added has broken
previously correct code,
as well as testing that the current modules still work as intended.
You keep adding modules and their tests and re-running all the tests as you go.
This technique of re-running older tests in a larger suite is called Regression Testing.
That's a big part of Incremental Testing.
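Here's a minimal sketch of that idea in Python. The three modules (add, multiply, power) are hypothetical stand-ins for Modules A, B, and C; the point is that the suite only ever grows, and the whole thing is re-run every time a module is added.

```python
# Incremental testing sketch: hypothetical Modules A, B, C built in order.

def add(a, b):          # Module A
    return a + b

def multiply(a, b):     # Module B
    return a * b

def power(base, exp):   # Module C, added later, built on top of Module B
    result = 1
    for _ in range(exp):
        result = multiply(result, base)
    return result

# The regression suite: the tests for A and B are kept, not discarded,
# and the new unit tests for C are appended to the same suite.
def test_add():
    assert add(2, 3) == 5

def test_multiply():
    assert multiply(4, 5) == 20

def test_power():
    assert power(2, 10) == 1024

# Re-run every test in the suite whenever a module is added.
for test in (test_add, test_multiply, test_power):
    test()
```

If a change to Module C accidentally broke `multiply`, the old `test_multiply` would catch it; that's the regression part.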
Now, when you're developing Top-down, you have to create
something to stand in for the elements at lower levels that you haven't built yet.
These are what we call Stubs.
So, we have Level One Software,
that's the software that we've been building.
But it relies on lower-level software, Level Two:
various entities that we're going to depend on,
for example, an object that I instantiate to do some task.
Maybe there are three or four of those.
Well, they haven't been built yet,
but I still need to be able to do
those tasks in order to make sure that my program works.
So, one of the things that we can do is write a Stub.
A Stub is typically a single line,
or a few lines of code, that when called essentially just returns
a hard-coded value standing in for a real return value, something like that.
The same kind of thing can be done with what's called a Mock.
If you take a software testing course,
for example, you'll see the differences between Stubs and Mocks.
A Mock is something where you don't actually hard-code a return value;
you just ask, was this method called? Yes, you move on.
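To make the distinction concrete, here's a small sketch with hypothetical names: Level One code that needs a Level Two "database" which doesn't exist yet. The Stub returns a canned value; the Mock just records that the call happened.

```python
class DatabaseStub:
    """Stub: returns a hard-coded value standing in for a real result."""
    def lookup_user(self, user_id):
        return {"id": user_id, "name": "Test User"}   # canned answer

class DatabaseMock:
    """Mock: records whether the method was called at all."""
    def __init__(self):
        self.lookup_called = False
    def lookup_user(self, user_id):
        self.lookup_called = True
        return {}

def greet_user(db, user_id):    # Level One code under test
    user = db.lookup_user(user_id)
    return "Hello, " + user.get("name", "stranger")

# With the stub, we check the value that came back...
assert greet_user(DatabaseStub(), 42) == "Hello, Test User"

# ...with the mock, we only check that the call was made.
mock = DatabaseMock()
greet_user(mock, 42)
assert mock.lookup_called
```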
So, whether it's a Stub or a Mock, eventually
you will build out that
Level Two software, potentially all the way across the board, something like this.
And then those levels, of course,
in turn rely on underlying software of their own.
So, you'd have to make Stubs for Level Three.
So, as you move down you continue to build levels of software
down and Stubs below those to continue your work down towards the underlying levels.
The opposite case, then,
is when you're developing bottom-up.
You have the lower-level implementations complete, but you don't have
the larger-picture integration code that executes them, hence, Drivers.
These Drivers walk through what the possible
calls to our lower level, in this case Level Three
elements, might be, and make reasonable calls to ensure that Level Three is operational.
The issue with building good Drivers is that it's
sometimes hard to know the kinds of inputs and
the order of inputs that would be necessary to
properly use Level Three before having built Level Two software.
But you do the best you can, and again it's usually hard-coded.
You try to make your best assumption of what the most common,
or most important, orders of operations are going to
be, and make sure that all your Level Three operations are correct.
Once those are done, you start building Level Two software.
And of course, with Level Two software we need to build a level above that.
So, you could have, for example,
a Level One Driver that drives all Level Two software.
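A minimal sketch of a bottom-up Driver, assuming hypothetical Level Three file operations. The driver hard-codes what we guess is the most common call sequence, since the real Level Two code that would make these calls doesn't exist yet.

```python
def open_file(name):            # Level Three, already built
    return {"name": name, "open": True, "data": []}

def write_record(f, record):    # Level Three, already built
    f["data"].append(record)

def close_file(f):              # Level Three, already built
    f["open"] = False

def level_two_driver():
    """Stands in for the not-yet-written Level Two code: makes
    reasonable calls in a plausible order and checks the results."""
    f = open_file("report.txt")
    write_record(f, "line 1")
    write_record(f, "line 2")
    close_file(f)
    assert f["data"] == ["line 1", "line 2"]
    assert not f["open"]

level_two_driver()
```

Notice the driver bakes in one assumed order of operations (open, write, close); if real Level Two code later calls these in a different order, the driver may not have exercised that path.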
Then, we have Back-to-Back Testing.
Back-to-Back Testing is one way that we make use of
earlier iterations of a program as an effective automated Oracle.
This is particularly useful for expanding test data without necessarily
including expected output or if you don't already have automated tests from before.
The idea is that the program worked before, at least we think it did.
For all the things that worked before,
you run test data for that
working behaviors through both the old version and the new version.
Since those behaviors worked before, they should continue to work,
and the outputs should be the same.
So, we can just do a direct comparison of the output.
Alternatively, anything that developers have modified hopefully to fix something,
or add some feature you run the test data through
both iterations again to make sure that they are different.
It still takes some manual inspection to show that
the changed result is what you
actually wanted and that it changed in the right way.
But at least it's a start, especially
when you're working from scratch and don't have any automated tests to begin with.
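A sketch of back-to-back testing with a hypothetical discount function: the old version acts as the oracle for the refactored version, and a deliberately changed version is checked to make sure its output really did change.

```python
def discount_v1(price):                 # old version: worked before
    if price > 100:
        return price * 0.9
    return price

def discount_v2(price):                 # new version: refactored internals,
    return price * 0.9 if price > 100 else price   # same intended behavior

def discount_v3(price):                 # new version with a deliberate change:
    return price * 0.9 if price > 50 else price    # threshold lowered to 50

# Unchanged behavior: old and new outputs must match exactly.
for price in [0, 50, 100, 101, 250]:
    assert discount_v1(price) == discount_v2(price)

# Deliberately modified behavior: outputs should differ. The changed
# result still needs manual inspection to confirm it changed the way
# we actually wanted.
assert discount_v1(75) != discount_v3(75)
```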
So, we have this overall idea of Test Scaffolding.
The goal is setting up an environment for executing your tests. So, we have the Driver.
The Driver initializes non-local variables,
initializes parameters and activates units under test.
Then, your Stubs use templates
of modules, not actual working modules, usually; that's why it's a Stub:
templates of the modules used by the unit,
including the functions called, and templates of any other entity
or data structure that is used within the unit,
that is, the Program Unit.
The Oracle then, is at the end which
verifies the correspondence between produced and expected results.
Again, oftentimes the Oracle is just us, a human:
you run it, you make sure that what happened is what you expected.
But there are increasingly automated Oracles,
in things like the xUnit family of testing frameworks, JUnit,
PyUnit, that we can use to automatically verify that our Stub,
Driver, and Program Unit have operated properly.
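Here's the whole scaffold in one small sketch using Python's built-in unittest (the PyUnit framework mentioned above). All the names are hypothetical: a stub stands in for a missing tax module, the test case acts as the driver that initializes state and activates the unit, and the assertion is the automated oracle.

```python
import unittest

def tax_stub(amount):               # Stub: hard-coded stand-in for a
    return 0.07 * amount            # tax module that doesn't exist yet

def total_price(amount, tax_fn):    # the Program Unit under test
    return amount + tax_fn(amount)

class TotalPriceTest(unittest.TestCase):        # the Driver
    def setUp(self):
        self.amount = 100.0                     # initialize parameters

    def test_total(self):
        result = total_price(self.amount, tax_stub)   # activate the unit
        self.assertAlmostEqual(result, 107.0)         # the Oracle

if __name__ == "__main__":
    unittest.main(exit=False, argv=["scaffold"])
```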
So, there's kind of a bit of a trade-off here.
You can build very sophisticated,
well-designed Drivers and Stubs, very,
very high effort in developing those drivers and Stubs.
But you get much lower effort in test execution and
regression by the nature of having produced these very sophisticated Drivers and Stubs.
On the other side,
we have the poorly designed Stubs.
These are truly simple: single returns of a single hard-coded value,
return true, return three,
return the string "go".
Those are poorly designed Drivers and Stubs.
It doesn't take very long at all in development,
but there really isn't a whole lot of reuse you can get.
So, trying to reuse anything like that gets to be very difficult as things change
throughout the program. So, who should test?
The strategy here is well,
if you're a developer,
if you built the code that's your baby.
We have an egotistical view of our own code,
we're great programmers, we think our stuff is awesome.
Therefore, you tend to treat it a little differently.
You also have a good understanding of the system.
The problem is that you tend to
test what you built, not what you should have built, or what the user wanted.
You tend to test only what you wanted to build.
That can leave some dark corners of programs that tend not to get tested.
So, you also tend to test very gently.
And you're driven by deadlines.
You need to be done so you can start working on the next project,
the next module, the next method.
You're driven by "let's move on."
The tester, on the other hand,
has much more of an "I'm going to break things, because breaking things is good;
that increases quality" perspective.
They do have to learn the system, so
there is an upfront learning curve for the testers in that case.
But they do a much better job, generally speaking,
of being an independent voice for breaking the program and ensuring quality.
So, there's a number of Axioms of Testing that go
along with the strategies of testing idea.
As the number of defects found in a piece of
software increases,
the probability of more bugs also increases: the more
bugs we see, the more likely it is that there are more bugs to be found.
Now, this next one is a relatively controversial Axiom:
assign your best programmers to testing.
The reason is that the best programmers have the best understanding of quality,
of programming, of what to break, and of how to break things,
how to break things well.
They also have a good idea of overall design.
So, they can do a better job of making sure especially,
in the integration test standpoint
of making sure that things are coming together properly,
of having a better view of the overall system.
So, those developers also then can help go
back to the junior developers and help them develop
better while they're doing the testing and debugging and defect reporting process.
You should also understand that Exhaustive Testing isn't possible.
In most cases, running every combination of inputs is just not going to happen.
Therefore, we have to provide some kind of strategy that attacks the most important,
or the most critical aspects of programs while we're testing.
You cannot do everything so,
you have to prioritize what you're doing, okay?
Even if you do find the last bug, you'll never know.
There's no way to know that there's not another bug sitting there.
Remember that testing only exposes bugs.
It doesn't prove their absence.
So, when you do run all your test and they're all passing,
it doesn't mean that the program is without defects,
it's just your test can't find anymore.
And it will always take more time than you have to test less than you'd like.
Again, it goes back to that prioritization standpoint.
Because you will run out of time before you run out of test cases.
If that's not true that means you as the tester have not done your job.
If you still have time to test,
you need to create more test cases.
The Strategies of testing
drive the actual act of testing units.
Recall that Pure top-down and Pure bottom-up
don't really exist.
We can talk about having drivers and Stubs for
individual elements and building down somewhat.
The idea that you would build one Level one,
all of Level two then,
all of Level three and so on,
doesn't really happen in the practical space.
You tend to have certain elements,
certain silos of code, that need to be done before anything else can really move on.
So, you tend to see more depth-first development rather than going layer by layer.
But you should still understand that any individual layer can have
both Drivers and Stubs built for it when you're building these things.
So, programs cannot be tested completely.
You have to have some idea of what are we going to test with the time that we have,
with the budget that we have, with the money that we have.
The practical budget testing constraint
is probably the most common one in the real world.
So, you have to have an idea of,
this is what we're going to test because this is the most important for users.
This is the most safety-critical.
This is going to have the most impact on performance.
And test those things first and test as far as you can.
Because if you do have the wonderful unicorn experience of testing everything, great.
But that's probably not going to happen.