Discussion:
Unit-test mock/stub assumptions rot
Ole Rasmussen
2012-03-15 17:09:51 UTC
I just finished the GOOS book, and while I find the mock-focused
practice attractive for many reasons, there is one particular scenario (an
issue, really) I cannot get out of my head.

When we unit test our objects we want to mock or stub the collaborators.
Mocking/stubbing may seem innocent, but if we take a more detailed look at
what's actually happening, it seems we might be laying the ground for a very
annoying problem in the future.

Assume we're writing a test for an object B that has interface A as a
collaborator. Mocking A requires looking at the interface specification and
"simulating" part of it in the mock behavior. The important part is that
whatever behavior we simulate is correct according to the interface. At the
time we write the test this is obviously the case, because we are looking
directly at the interface and "copying" its protocols (hopefully).

Now imagine we finished writing the test for object B that mocks interface
A. At some point later in time we figure out we need to change interface A.
We make the change, run our unit tests, and see most tests for classes
implementing interface A fail. This is good and expected. We fix the
classes and run the tests once more. Everything is green.

At this point, even though all tests are green, there is a substantial flaw
in our code: namely, that object B doesn't work. The unit test for object B
is based on the earlier version of interface A, which behaved differently
than it does now. The core of the problem is that the assumptions/simulations
we made in the mock for interface A at the time we wrote the unit test for
object B aren't necessarily true anymore. The surrounding world has changed.

For a concrete example take this Java code:

public class ARuntimeException extends RuntimeException {}
public class BRuntimeException extends RuntimeException {}
public class CRuntimeException extends RuntimeException {}

public interface AInterface {
    int func() throws ARuntimeException;
}

public class BClass {
    public void doSomething(AInterface arg) throws BRuntimeException {
        try {
            arg.func();
        } catch (ARuntimeException e) {
            throw new BRuntimeException();
        }
    }
}

import static org.mockito.Mockito.*;
import org.junit.Test;

public class BTest {
    @Test(expected = BRuntimeException.class)
    public void doSomethingThrowsBExceptionWhenCollaboratorThrowsAException() throws Exception {
        AInterface aStub = mock(AInterface.class);
        when(aStub.func()).thenThrow(new ARuntimeException());

        new BClass().doSomething(aStub);
    }
}

If we change the interface, the tests still pass, but on a false assumption:

public interface AInterface {
    int func() throws CRuntimeException;
}
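
To see why the green tests are lying, here is a sketch (mine, not part of the original example) of what now happens in production:

// A real implementation honouring the NEW contract (sketch):
AInterface real = new AInterface() {
    @Override
    public int func() throws CRuntimeException {
        throw new CRuntimeException();
    }
};

// BClass still catches only ARuntimeException, so CRuntimeException
// escapes doSomething() unwrapped; callers never get the promised
// BRuntimeException. Meanwhile BTest stays green, because the mock
// still throws ARuntimeException exactly as stubbed -- Mockito does
// not check unchecked exceptions against the interface.
new BClass().doSomething(real);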

I hope I have described the problem clearly, but to sum up: when mocking an
interface we make assumptions about its behavior. When the interface
changes, those assumptions are no longer true.

I understand that the UNIT tests for object B theoretically shouldn't fail,
because even though interface A changes, object B still works in isolation.
However, I also consider the UNIT tests for object B a kind of integration
test between object B and the mock. The whole reason the unit tests have
any value to us is that this "integration test" is based on a mock that
actually does what the interface says it should. But when we change the
interface it no longer does, so the unit test is, theoretically, completely wrong.

The question is whether we can do something about this. I would like my tests
to alert me if something like this happens. Is this not possible? What do you
do about it? I'm sure it must be a problem for most testing practitioners,
and especially TDD practitioners.
David Peterson
2012-03-15 19:46:38 UTC
What happens if you use checked exceptions, rather than unchecked
exceptions? I suspect it will solve the problem (though I'm not in front of
my computer at the moment so I can't experiment).

If the exceptions are meant to be handled (i.e. are an official part of the
protocol) then I'd really recommend making them checked exceptions, and
only throwing RuntimeExceptions for reporting bugs (i.e. incorrect coding). I
know that checked exceptions can make code verbose when you're not
interested in handling the exceptions, but in my view it's a price worth
paying in most situations.
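
As an illustration (a sketch with hypothetical AException/CException names, based on how Mockito validates checked exceptions):

public class AException extends Exception {}
public class CException extends Exception {}

public interface AInterface {
    int func() throws AException; // checked
}

// In BTest: when(aStub.func()).thenThrow(new AException());
// If the interface later changes to "int func() throws CException",
// Mockito rejects the stale stubbing with a MockitoException along the
// lines of "Checked exception is invalid for this method!", so the
// outdated assumption surfaces the next time the test runs.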

David
Ole Rasmussen
2012-03-15 20:31:18 UTC
Post by David Peterson
What happens if you use checked exceptions, rather than unchecked
exceptions? I suspect it will solve the problem (though I'm not in front of
my computer at the moment so I can't experiment).
If the exceptions are meant to be handled (i.e. are an official part of
the protocol) then I'd really recommend making them checked exceptions, and
only throwing RuntimeExceptions for reporting bugs (i.e. incorrect coding). I
know that checked exceptions can make code verbose when you're not
interested in handling the exceptions, but in my view it's a price worth
paying in most situations.
You have a valid point. The reason I chose unchecked exceptions for this
example was exactly that, had they been checked, the BTest code would have
complained after the change. More precisely, I believe Mockito will throw an
exception telling you that you can't throw ExceptionA according to the
interface. That is good, but in reality most exceptions we want to catch and
wrap are unchecked.

If we have a class on some high(er) abstraction level than its
collaborators, then ideally we want to package all the collaborator
exceptions into something that fits the higher level of abstraction. As an
example, consider the ByteBuffer in java.nio, which throws the unchecked
BufferUnderflowException when you read past the end of the buffer. Let's
say you want to wrap it as a file, so you create a ByteBufferFile. In Java,
"files" throw IOException when something goes wrong, so you want to catch
BufferUnderflowException and wrap it in an IOException. This ensures your
class (which should be an interface, but let's go with it), ByteBufferFile,
has a consistent interface with a stable level of abstraction. In this case
you can't change the exception to checked even if you wanted to, and of
course you want to use TDD to drive the design of ByteBufferFile.
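
A minimal sketch of that ByteBufferFile idea (the class and its method are hypothetical, added for illustration):

import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class ByteBufferFile {
    private final ByteBuffer buffer;

    public ByteBufferFile(ByteBuffer buffer) {
        this.buffer = buffer;
    }

    public byte read() throws IOException {
        try {
            // get() throws the unchecked BufferUnderflowException
            // when we read past the end of the buffer
            return buffer.get();
        } catch (BufferUnderflowException e) {
            // translate to the checked exception a "file" is expected to throw
            throw new IOException("Read past end of buffer", e);
        }
    }
}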
David Peterson
2012-03-15 21:32:15 UTC
I think catching and wrapping a BufferUnderflowException in an IOException gives it a false sense of respectability. A better solution is not to read past the end of the buffer.

David
Steve Freeman
2012-03-15 19:47:46 UTC
Post by Ole Rasmussen
[..]
public interface AInterface {
int func() throws CRuntimeException;
}
Nice. An argument in favour of checked exceptions :)
Post by Ole Rasmussen
I hope I have described the problem clearly, but to sum up: when mocking an interface we make assumptions about its behavior. When the interface changes, those assumptions are no longer true.
I understand that the UNIT tests for object B theoretically shouldn't fail, because even though interface A changes, object B still works in isolation. However, I also consider the UNIT tests for object B a kind of integration test between object B and the mock. The whole reason the unit tests have any value to us is that this "integration test" is based on a mock that actually does what the interface says it should. But when we change the interface it no longer does, so the unit test is, theoretically, completely wrong.
The question is whether we can do something about this. I would like my tests to alert me if something like this happens. Is this not possible? What do you do about it? I'm sure it must be a problem for most testing practitioners, and especially TDD practitioners.
Yes. This is a possible problem. In practice, however, it just doesn't seem to be one. It's not what I find myself being caught by on real systems. First, we will have been writing at least some higher-level tests to drive out the unit-level requirements. These will catch gross errors when objects just don't fit together. Second, my style, at least, now tends towards more, smaller domain objects, so that some changes will get caught by the type system. Third, I find that rigorously TDD'd code, with a strong emphasis on expressiveness and simplicity, is just easier to work with, so I'm more likely to catch such problems.

What do other people find?

S.

Steve Freeman

Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com

+44 797 179 4105
Twitter: @sf105
Higher Order Logic Limited
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 7522677
Kevin Rutherford
2012-03-15 19:57:19 UTC
Joe Rainsberger talks about using Contract tests here, I think.
The tests define the semantics of the interface, and every implementer
of the interface must pass them. I think he has the implementer's
tests inherit the contract tests, but I can't quickly find a
reference. Joe...?
Cheers,
Kevin
--
http://www.kevinrutherford.co.uk
http://myonepage.com/kevinrutherford
+44 (0) 797 356 3521
Ole Rasmussen
2012-03-15 20:37:10 UTC
Post by Kevin Rutherford
Joe Rainsberger talks about using Contract tests here, I think.
The tests define the semantics of the interface, and every implementer
of the interface must pass them. I think he has the implementer's
tests inherit the contract tests, but I can't quickly find a
reference. Joe...?
That sounds interesting and not too far from the idea I have of
automatically verifying that the assumptions of our tests are correct. I'm
eagerly waiting for more info.
Ben Biddington
2012-03-15 20:54:26 UTC
Definitely found this in Ruby. It's easy to test-drive against a particular
role and have implementations get entirely out of sync.

I have found integration/acceptance tests excellent protection against
these types of errors.

<bb />
J. B. Rainsberger
2012-03-15 21:09:17 UTC
Post by Ben Biddington
Definitely found this in Ruby. It's easy to test-drive against a particular
role and have implementations get entirely out of sync.
Equivalent statement: It's easy to write the wrong code with sloppy thinking.

Keeping collaboration and contract tests in correspondence with each
other requires attention and discipline. It can be tedious, but I
prefer that tedium to chasing down mistakes between two objects
that disagree on the contract between them. Others might swing the
tradeoff in the other direction.
Post by Ben Biddington
I have found integration/acceptance tests excellent protection against these
types of errors.
Scam. :)
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
Mauricio Aniche
2012-03-19 13:52:43 UTC
Hi,

I often hear this discussion of "unit tests", in which you test a
class and mock its collaborators, versus "integration tests", in
which you plug all your classes together. Well, there is no right
answer, IMHO. Both techniques have pros and cons.

When you do integration testing, you can rapidly notice any break in a
class contract. However, it can also make your tests really hard to
write. A class "A" collaborates with another class "B" which, in turn,
collaborates with another class "C", and so on. Usually, class
"A" doesn't care about "B" using "C". Now you have to instantiate all
of them in a test and, if some dependency changes in the graph, you
would have to change all your tests. It sounds like maintainability gets
harder this way.

When using mocks, your tests are less coupled, but less effective. You
don't perceive a possible break in a contract. However, in practice, I
tend to use the mocking style of writing tests. When I change a
contract of something, I press "Ctrl+Shift+G" in Eclipse, and it shows
me all tests that are using that interface, so I can review and change
the behavior of the class accordingly (which will change, right? After
all, you changed a contract!).

Regards,
Mauricio Aniche
--
Mauricio Aniche
www.aniche.com.br
@mauricioaniche
J. B. Rainsberger
2012-03-19 18:54:23 UTC
Post by Mauricio Aniche
When using mocks, your tests are less coupled, but less effective.
…as change detectors, but they provide stronger feedback about the design.
Post by Mauricio Aniche
You
don't perceive a possible break in a contract. However, in practice, I
tend to use the mocking style of writing tests. When I change a
contract of something, I press "Ctrl+Shift+G" in Eclipse, and it shows
me all tests that are using that interface, so I can review and change
the behavior of the class accordingly (which will change, right? After
all, you changed a contract!).
Exactly.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
Mauricio Aniche
2012-03-19 20:01:22 UTC
Post by J. B. Rainsberger
Post by Mauricio Aniche
When using mocks, your tests are less coupled, but less effective.
…as change detectors, but they provide stronger feedback about the design.
Yeah. It would be awesome if we found a way to have both effective
internal and external feedback, wouldn't it!? Am I dreaming too much?
:)

Regards,
Mauricio Aniche
Josue Barbosa dos Santos
2012-03-19 23:08:01 UTC
On Mon, Mar 19, 2012 at 5:01 PM, Mauricio Aniche
Post by Mauricio Aniche
Yeah. It would be awesome if we found a way to have both effective
internal and external feedback, wouldn't it!? Am I dreaming too much? :)
Unit tests (internal), some acceptance tests (external). No?
--
Abraços,
Josué
http://twitter.com/josuesantos



J. B. Rainsberger
2012-03-15 21:07:07 UTC
Post by Ole Rasmussen
At this point in time, even though all tests are green there is a
substantial flaw in our code; namely that object B doesn't work. The unit
test for object B is based on the earlier version of interface A that worked
differently than what it does now. The core of the problem is that the
assumptions/simulations we made in the mock for interface A at the time we
wrote the unit test for object B aren't necessarily true anymore. The
surrounding world has changed.
Yes, so follow this rule:

* A stub in a collaboration test must correspond to an expected result
in a contract test
* An expectation in a collaboration test must correspond to an action
in a contract test

This provides a /systematic/ way to check that B remains in sync with
the implementations of A. In your situation, I do this:

1. Change the implementation A1 of A, noticing a change in the contract of A.
2. For each change in the contract of A:
2.1 If an action in A has changed (parameter changed, method name
changed), then look for expectations of that action in collaboration
tests, and change them to match the new action.
2.2 If a response from A has changed (return type, value, what the
value means), then look for stubs of that action in collaboration
tests, and change them to match the new response.

Done.
<snip />

When I change func() to throw CRuntimeException, I'm changing a
response. This means that I look for all tests that stub func(),
and I find the BTest.doSomethingThrowsBExceptionWhenCollaboratorThrowsAException
test. I change the stub to throw CRuntimeException, then decide whether I have
to change the test or fix the implementation of B.

Done.
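
Applied to the example earlier in this thread, that mechanical edit might look like this (a sketch; the renamed test method is mine):

// Collaboration test after the change: the stub now simulates the
// NEW contract of AInterface.
@Test(expected = BRuntimeException.class)
public void doSomethingThrowsBExceptionWhenCollaboratorThrowsCException() throws Exception {
    AInterface aStub = org.mockito.Mockito.mock(AInterface.class);
    when(aStub.func()).thenThrow(new CRuntimeException()); // was ARuntimeException

    // Passes only once BClass.doSomething() catches CRuntimeException;
    // deciding that is the "change the test or fix B" step.
    new BClass().doSomething(aStub);
}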

When people in my training classes tell me that they worry about doing
this correctly, I point out that the rule of correspondence between
stub and expected result or expectation and action tells us exactly
what to look for when we change the contract of any interface. It
takes discipline, but not more discipline than the rest of TDD or
good, modular design takes.
Post by Ole Rasmussen
The question is if we can do something about this? I would like my tests to
alert me if something like this happens. Is this not possible? What do you
guys do about it? I'm sure it must be a problem for most of the test and
especially TDD practitioners.
Someone's working on this as a PhD dissertation. Follow @t_crayford on
the Twitter.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
Ole Rasmussen
2012-03-15 21:23:17 UTC
Post by J. B. Rainsberger
* A stub in a collaboration test must correspond to an expected result
in a contract test
* An expectation in a collaboration test must correspond to an action
in a contract test
This provides a /systematic/ way to check that B remains in sync with
the implementations of A. In your situation, I do this:
1. Change the implementation A1 of A, noticing a change in the contract of A.
2. For each change in the contract of A:
2.1 If an action in A has changed (parameter changed, method name
changed), then look for expectations of that action in collaboration
tests, and change them to match the new action.
2.2 If a response from A has changed (return type, value, what the
value means), then look for stubs of that action in collaboration
tests, and change them to match the new response.
That sounds like what I am doing now, except I didn't think of or perform it
as a controlled process like you describe. Very insightful!

However, I'm not sure I understand exactly what you mean by "contract test".
I take it that the "collaboration tests" are the unit tests of an object
and not integration tests (unless you view the unit tests as integration
tests with mocks :))?
J. B. Rainsberger
2012-03-15 21:27:43 UTC
Post by Ole Rasmussen
Post by J. B. Rainsberger
* A stub in a collaboration test must correspond to an expected result
in a contract test
* An expectation in a collaboration test must correspond to an action
in a contract test
This provides a /systematic/ way to check that B remains in sync with
the implementations of A. In your situation, I do this:
1. Change the implementation A1 of A, noticing a change in the contract of A.
2. For each change in the contract of A:
    2.1 If an action in A has changed (parameter changed, method name
changed), then look for expectations of that action in collaboration
tests, and change them to match the new action.
    2.2 If a response from A has changed (return type, value, what the
value means), then look for stubs of that action in collaboration
tests, and change them to match the new response.
That sounds like what I am doing now, except I didn't think of or perform it
as a controlled process like you describe. Very insightful!
Cheers. I teach it this way quite regularly. So far, no stories of
disastrous results.
Post by Ole Rasmussen
However, I'm not sure I understand exactly what you mean by "contract test".
I take it that the "collaboration tests" are the unit tests of an object and
not integration tests (unless you view the unit tests as integration tests
with mocks :))?
B uses mock A correctly => collaboration tests
A1 implements A correctly => contract tests

A contract test is a test that checks an aspect of the contract of A.
You can start by writing a test for implementation A1, then abstract
away from that test the knowledge of A1 so that the test only knows
about A. It's now a contract test.

More reading:

http://link.jbrains.ca/zziQcE
http://link.jbrains.ca/yG9kqS

Enjoy.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
Ole Rasmussen
2012-03-16 09:49:10 UTC
Post by J. B. Rainsberger
B uses mock A correctly => collaboration tests
A1 implements A correctly => contract tests
A contract test is a test that checks an aspect of the contract of A.
You can start by writing a test for implementation A1, then abstract
away from that test the knowledge of A1 so that the test only knows
about A. It's now a contract test.
http://link.jbrains.ca/zziQcE
http://link.jbrains.ca/yG9kqS
Enjoy.
Very interesting! I have read your articles and looked at your examples. I
don't feel like I have fully grasped the concept, at least not to an extent
where I can reflect on it with great confidence. I do, however, grasp the
idea, and so I set out to play with it.

If I understand you correctly, the usefulness of contract tests is
extremely dependent on the fact that when you change an interface, something
will tell you "Hey, I'm changing an interface, I should also change the
contract tests for it to match!". That "something" might be sheer
experience or just intuition or maybe even a third thing. The point is if
you don't update the contract tests when updating the interfaces all is
lost. Therefore the question is whether the technique encourages people to
do this actively, or if it's more intuitive than before, or if in reality
you have just pushed the problem further up the test hierarchy.

My second thought is that I'm not sure I follow how you practically utilize
the contract tests. From your description it sounds like you never actually
run the contract tests. All they are present for is for you to get that
"aha, I am changing a contract test, so I must do one of the following now to
stubs or mocks: [insert your practical guide here]". After changing the
contract test you manually go through the touched stubs/mocks. This seems
OK, but couldn't we do better?

When I think about contract tests, I actually want them to tell me *at test
time* if any stub or mock has the wrong assumptions. I think this is
possible, but again, please bear with me if I haven't grasped all the
consequences of the technique yet.

Let's try a concrete example derived from my previous one:
public abstract class AContractTest {
    abstract AInterface createAObject();

    @Test(expected = ARuntimeException.class)
    public void testFuncThrowsAException() {
        AInterface aObject = createAObject();
        aObject.func();
    }
}

public class BTest extends AContractTest {
    .... // Same earlier example

    @Override
    AInterface createAObject() {
        AInterface aStub = org.mockito.Mockito.mock(AInterface.class);
        when(aStub.func()).thenThrow(new ARuntimeException());
        return aStub;
    }
}

This is a first try at automating the correspondence checking. I know it
doesn't really make sense; we're duplicating the mock code, and what happens
if we have more than a single mock? We solve this later, but right now the
point is that this setup actually WILL tell you at test time whether the
assumptions of the mock are correct. If I change AContractTest to expect a
CRuntimeException, the contract test using the mock as a concrete object
will fail. The defect localization of that failing test is utterly useless,
but that's another thing I bet we could solve.

Looking at what I did above I see it can be improved. What I *really* want
to do is to test my mocks and stubs at runtime (I create them by
reflection) against the contract for the interface they are mocking. What
if I could do something like this when creating a stub/mock:
public class BTest {
    @Test(expected = BRuntimeException.class)
    public void doSomethingThrowsBExceptionWhenCollaboratorThrowsAException() throws Exception {
        AInterface aStub = org.mockito.Mockito.mock(AInterface.class);
        when(aStub.func()).thenThrow(new ARuntimeException());

        verifyMockCompliesWithContract(AContractTest.class, aStub);

        new BClass().doSomething(aStub);
    }
}

With a single call I could check that the mock I just created complies with
the contract for AInterface. If it doesn't, the verification will throw a
descriptive exception, possibly containing useful things like which test in
AContractTest was broken by which mock, etc. I wouldn't have to worry about
mismatched specs and assumptions anymore because they are checked for me.
Of course there is still the possibility that AContractTest doesn't
represent the contract of AInterface, but you can only take it so far :) It
might be possible to automate it even further to maybe generate parts of
AContractTest automatically, but for now I think the above solution already
provides enormous benefits.

What do you think? I certainly hope I haven't missed something important
(which I should have caught) that makes the idea irrelevant or useless, but
it happens from time to time, I guess that's why I like discussing my ideas
with others :)
Josue Barbosa dos Santos
2012-03-16 18:25:15 UTC
Hello Ole,
Post by Ole Rasmussen
...
The point is if you don't update the contract tests when updating the interfaces all is lost.
As Rainsberger said, many other things require discipline. TDD requires
discipline. But I see many people claiming that they do TDD who don't
bother to do the refactoring step properly. I see many who don't
eliminate the duplication. The elimination of duplication is so
important that the original definition of TDD in Kent Beck's book is:
write a failing test; write the code to make the test pass; eliminate
duplication. But they "forget" to do that. My point is, many things
require discipline. And this is one more of them.


But I like your idea of automatically verifying that mocks respect the
interface contract. Today I rely on acceptance tests to guarantee that
everything works together.
--
Abraços,
Josué
http://twitter.com/josuesantos
J. B. Rainsberger
2012-03-16 23:02:20 UTC
Post by Ole Rasmussen
If I understand you correctly, the usefulness of contract tests is extremely
dependent on the fact that when you change an interface, something will tell
you "Hey, I'm changing an interface, I should also change the contract tests
for it to match!".
A contract test is simply a test for the expected behavior from an
interface, so it's not substantially different from a test for any
object, except that no-one can instantiate an interface.
Post by Ole Rasmussen
That "something" might be sheer experience or just
intuition or maybe even a third thing.
It's a rule, and no different from changing any other object: if I
want to change the behavior, then I start by changing the tests.
Post by Ole Rasmussen
The point is if you don't update the
contract tests when updating the interfaces all is lost.
Again, this is true of all objects: if we don't change the tests when
changing the behavior, then all is lost.
Post by Ole Rasmussen
Therefore the
question is whether the technique encourages people to do this actively, or
if it's more intuitive than before, or if in reality you have just pushed
the problem further up the test hierarchy.
I don't see any difference whether we change interfaces/protocols or
classes/implementations.
Post by Ole Rasmussen
My second thought is that I'm not sure I follow how you practically utilize
the contract tests. From your description it sounds like you never actually
run the contract tests.
On the contrary: when you implement ArrayList, you have to pass the
contract tests for List, so you create class
ArrayListRespectsListContract extends ListContract and inherit the
tests that ArrayList must pass to be considered a correct
implementation of List, respecting the Liskov Substitution Principle.

You might also have some implementation details that need testing, in
which case, I recommend test-driving those details in other test
classes.

Of course, ListContract must be abstract, because it has abstract
methods to create Lists in various states: empty, with one item, with
a few items, and so on.
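
A sketch of that shape (the specific test content here is illustrative, not from any published library):

import static org.junit.Assert.assertEquals;
import java.util.List;
import org.junit.Test;

public abstract class ListContract {
    // abstract factory method: each implementation supplies Lists
    // in the states the contract tests need
    protected abstract List<String> newEmptyList();

    @Test
    public void sizeGrowsByOneOnAdd() {
        List<String> list = newEmptyList();
        list.add("item");
        assertEquals(1, list.size());
    }
}

// In its own file: ArrayList's claim to be a correct List. It inherits
// and runs every test in ListContract.
public class ArrayListRespectsListContract extends ListContract {
    @Override
    protected List<String> newEmptyList() {
        return new java.util.ArrayList<String>();
    }
}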
Post by Ole Rasmussen
All they are present for is for you to get that "aha,
I am changing a contract test, so I must do one of the following now to stubs
or mocks: [insert your practical guide here]". After changing the contract
test you manually go through the touched stubs/mocks. This seems OK, but
couldn't we do better?
As I say above, the tests actually run.
Post by Ole Rasmussen
When I think about contract tests, I actually want them to tell me at test
time if any stub or mock has the wrong assumptions. I think this is
possible, but again please bear with me if I haven't grasped all
consequences of the technique yet.
I don't see how to do this with dynamic mock objects, because there is
no compile-time class to test, and I set different stubs and
expectations on the interface methods from test to test.
Post by Ole Rasmussen
public abstract class AContractTest {
    abstract AInterface createAObject();

    @Test(expected = ARuntimeException.class)
    public void testFuncThrowsAException() {
        AInterface aObject = createAObject();
        aObject.func();
    }
}
This contract test doesn't describe the conditions in which func()
throws ARuntimeException. Does func() always throw that exception? If
it does, then what use is it?
Post by Ole Rasmussen
public class BTest extends AContractTest {
    .... // Same earlier example

    @Override
    AInterface createAObject() {
        AInterface aStub = org.mockito.Mockito.mock(AInterface.class);
        when(aStub.func()).thenThrow(new ARuntimeException());
        return aStub;
    }
}
I can't see the use of this test. This test doesn't stop me from
stubbing func() differently in tests for clients of A.

BTest would not extend AContractTest; instead, when you implement A
with class C, you'll have this:

class CContractTest extends AContractTest {
    @Override
    AInterface createAObject() {
        return new C(); // C implements A
    }
}

Now CContractTest inherits the tests from AContractTest, so those
tests execute on implementation C to verify that C implements A
correctly.
Post by Ole Rasmussen
What do you think? I certainly hope I haven't missed something important
(which I should have caught) that makes the idea irrelevant or useless, but
it happens from time to time, I guess that's why I like discussing my ideas
with others :)
What you've written here misses the mark by a long way. I don't have
the energy to type out my entire demo here; it looks like I need to do
it as a video. I don't know when I'll do that, but I will do it
eventually. :)
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
Ole Rasmussen
2012-03-17 09:09:27 UTC
Post by J. B. Rainsberger
BTest would not extend AContractTest; instead, when you implement A
with class C, you'll have this:

class CContractTest extends AContractTest {
    @Override
    AInterface createAObject() {
        return new C(); // C implements A
    }
}

Now CContractTest inherits the tests from AContractTest, so those
tests execute on implementation C to verify that C implements A
correctly.
I got that part. My point was just that the mock that BTest creates is also
kind of an implementation of A. I know BTest isn't, but it was just a hack
in this case to check that the mock complied with the specification.

I do see your point about verifying mock behavior automatically, though. I
certainly haven't thought it through, but the question is whether one could
design a DSL, or maybe even just a custom JUnit test runner, that would allow
declaring a kind of conditional specifying when and with what mocks need to
comply. The inherent problem with mocks is that they almost always
specify only a subset of the behavior of an interface. If you verify them
against the normal contract tests they are doomed to fail. But still we
(almost?) always want our mocks to comply with the interface, just for the
subset of behavior they specify.

If interface A has a method "int doubleArg(int i)" that doubles its
argument, and we have a stub that's specified something like this:
"when(stub.doubleArg(3)).thenReturn(6)", then even though it complies with
the interface it probably won't pass the tests for it. For example, if the
contract test is assertThat(obj.doubleArg(10), is(20)). I don't see how we
could possibly make the stub pass that test without doing something very
framework-specific, like looking into the stub implementation and
extracting its expectations to determine if they comply with those in the
test. But it sounds complicated.
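
To make the mismatch concrete, a sketch using the hypothetical doubleArg interface from the paragraph above:

interface Doubler { int doubleArg(int i); } // hypothetical

// in a test method:
Doubler stub = org.mockito.Mockito.mock(Doubler.class);
org.mockito.Mockito.when(stub.doubleArg(3)).thenReturn(6);

int a = stub.doubleArg(3);  // 6 -- consistent with the contract
int b = stub.doubleArg(10); // 0 -- Mockito's default answer for unstubbed
                            //      calls, so a contract test asserting
                            //      assertThat(obj.doubleArg(10), is(20))
                            //      must fail against this stub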

I guess I'll try using contract tests for a while to get a feel for the
workflow. As Steve mentioned, this whole problem doesn't really occur that
often in practice. Knowing this, and using contract tests, may make me more
confident.
Steve Freeman
2012-03-17 15:50:43 UTC
Not wishing to sound (especially) snotty, but I think you'll find that jMock is closer to the DSL you have in mind. I certainly tend to think of it in terms of pre- and post-conditions.
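
For instance, the earlier Mockito stub might read like this in jMock 2 (a sketch; API details assumed rather than checked):

import org.jmock.Expectations;
import org.jmock.Mockery;

Mockery context = new Mockery();
final AInterface a = context.mock(AInterface.class);

context.checking(new Expectations() {{
    // precondition: B is expected to call func() exactly once...
    oneOf(a).func();
    // ...postcondition: and A responds by throwing
    will(throwException(new ARuntimeException()));
}});

new BClass().doSomething(a); // should translate it into BRuntimeException
context.assertIsSatisfied();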
S

Steve Freeman
http://www.higherorderlogic.com

Written on a phone, so please allow for typos and short content.
philip schwarz
2012-03-18 06:58:25 UTC
Greetings fellow GOOS enthusiasts.

I think I just realised something: can you tell me what you think
about it? See below for my train of thought.

For object A1 to collaborate with object B, it has to honour A's
contract.

Interfaces (java interfaces or abstract classes) are not enough to
define a contract.

Preconditions/postconditions are needed to spell out the contract in
full.

These conditions are seldom stated in the interface: instead, they are
stated in automated tests.

Jbrains' contract tests check that the conditions are honoured. If
implementation A1 violates the conditions, then it violates the Liskov
Substitution Principle (LSP) and we have a problem.

But the conditions are also expressed in the collaboration tests. If a
collaboration test misstates the conditions, then A1 can't help but
violate the LSP: while it satisfies the contract as stated by the
contract tests, it violates the contract as expressed by the collaboration
tests.

The contract is articulated in more than one place. This DRY (Don't
repeat yourself) violation means that when we wish to change the
contract, we have to make changes in more than one place: not only do
we have to modify the contract tests so that they check the new
conditions are honoured, and not only do we have to change A's
implementations so that they satisfy the new conditions, we also have
to remember to change the contract tests so that they correctly
articulate the new conditions, otherwise they contradict the contract
(as embodied in contract tests and implementations), and since from a
contradiction anything can follow, the collaboration tests start lying
to us.

Philip
Post by J. B. Rainsberger
Post by Ole Rasmussen
At this point in time, even though all tests are green there is a
substantial flaw in our code; namely that object B doesn't work. The unit
test for object B is based on the earlier version of interface A that worked
differently than what it does now. The core of the problem is that the
assumptions/simulations we made in the mock for interface A at the time we
wrote the unit test for object B aren't necessarily true anymore. The
surrounding world has changed.
* A stub in a collaboration test must correspond to an expected result
in a contract test
* An expectation in a collaboration test must correspond to an action
in a contract test
This provides a /systematic/ way to check that B remains in sync with
1. Change the implementation A1 of A, noticing a change in the contract of A.
    2.1 If an action in A has changed (parameter changed, method name
changed), then look for expectations of that action in collaboration
tests, and change them to match the new action.
    2.2 If a response from A has changed (return type, value, what the
value means), then look for stubs of that action in collaboration
tests, and change them to match the new response.
Done.
<snip />
When I change func() to return CRuntimeException, I'm changing a
return type. This means that I look for all tests that stub func(),
and I find the BTest.
doSomethingThrowsBExceptionWhenCollaboratorThrowsAException test. I
change the stub to throw CRuntimeException, then decide whether I have
to change the test or fix the implementation of B.
Done.
When people in my training classes tell me that they worry about doing
this correctly, I point out that the rule of correspondence between
stub and expected result or expectation and action tells us exactly
what to look for when we change the contract of any interface. It
takes discipline, but not more discipline than the rest of TDD or
good, modular design takes.
Post by Ole Rasmussen
The question is if we can do something about this? I would like my tests to
alert me if something like this happens. Is this not possible? What do you
guys do about it? I'm sure it must be a problem for most of the test and
especially TDD practitioners.
the Twitter.
--
J. B. (Joe) Rainsberger ::http://www.jbrains.ca::http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work ::http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me athttp://nta.gs/jbrains
Steve Freeman
2012-03-18 09:13:59 UTC
Yes. We work with primitive languages and tools that should do more. That said, I'm not seeing this as a huge problem in practice. Is anyone suffering from the consequences?

S
philip schwarz
2012-03-18 11:00:59 UTC
Correction: "we also have to remember to change the COLLABORATION (not
contract) tests"

On Mar 18, 6:58 am, philip schwarz
Post by philip schwarz
Greetings fellow GOOS enthusiasts.
I think I just realised something: can you tell me what you think
about it? See below for my train of thought
For object A1 to collaborate with object B, it has to honour A's
contract.
Interfaces (java interfaces or abstract classes) are not enough to
define a contract.
Preconditions/postconditions are needed to spell out the contract in
full.
These conditions are seldom stated in the interface: instead, they are
stated in automated tests.
Jbrains' contract tests check that the conditions are honoured. If
implementation A1 violates the conditions, then it violates the Liskov
Substitution Principle (LSP) and we have a problem.
But the conditions are also expressed in the collaboration tests. If a
collaboration tests misstates the conditions, then A1 can't help but
violate the LSP: while it satisfies the contract as stated by the
contract tests, it violates the contract as expressed by collaboration
tests.
The contract is articulated in more than one place. This DRY (Don't
repeat yourself) violation means that when we wish to change the
contract, we have to make changes in more than one place: not only do
we have to modify the contract tests so that they check the new
conditions are honoured, and not only do we have to change A's
implementations so that they satisfy the new conditions, we also have
to remember to change the contract tests so that they correctly
articulate the new conditions, otherwise they contradict the contract
(as embodied in contract tests and implementations), and since from a
contradiction anything can follow, the collaboration tests start lying
to us.
Philip
Post by J. B. Rainsberger
Post by Ole Rasmussen
At this point in time, even though all tests are green there is a
substantial flaw in our code; namely that object B doesn't work. The unit
test for object B is based on the earlier version of interface A that worked
differently than what it does now. The core of the problem is that the
assumptions/simulations we made in the mock for interface A at the time we
wrote the unit test for object B aren't necessarily true anymore. The
surrounding world has changed.
* A stub in a collaboration test must correspond to an expected result
in a contract test
* An expectation in a collaboration test must correspond to an action
in a contract test
This provides a /systematic/ way to check that B remains in sync with
1. Change the implementation A1 of A, noticing a change in the contract of A.
    2.1 If an action in A has changed (parameter changed, method name
changed), then look for expectations of that action in collaboration
tests, and change them to match the new action.
    2.2 If a response from A has changed (return type, value, what the
value means), then look for stubs of that action in collaboration
tests, and change them to match the new response.
Done.
<snip />
When I change func() to throw CRuntimeException instead of
ARuntimeException, I'm changing a response from A. This means that I
look for all tests that stub func(), and I find the
BTest.doSomethingThrowsBExceptionWhenCollaboratorThrowsAException test.
I change the stub to throw CRuntimeException, then decide whether I
have to change the test or fix the implementation of B.
Done.
When people in my training classes tell me that they worry about doing
this correctly, I point out that the rule of correspondence between
stub and expected result or expectation and action tells us exactly
what to look for when we change the contract of any interface. It
takes discipline, but not more discipline than the rest of TDD or
good, modular design takes.
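To make that correspondence concrete, here is a minimal sketch in this
thread's Java terms (the contract-test class name and the JUnit 4
details are assumptions, not code from the thread): the stub in BTest
is only justified if a contract test shows that implementations of
AInterface really can respond that way.

import org.junit.Test;

// Hypothetical contract test for AInterface; subclass it once per
// implementation, supplying a subject that is expected to fail.
public abstract class AInterfaceContractTest {

    protected abstract AInterface createFailingInstance();

    // Justifies the stub in BTest: func() may report failure by
    // throwing ARuntimeException.
    @Test(expected = ARuntimeException.class)
    public void funcReportsFailureByThrowingARuntimeException() {
        createFailingInstance().func();
    }
}

When the contract changes so that func() throws CRuntimeException, this
contract test has to change, and the correspondence rule then sends you
to every collaboration test that stubs func(), BTest included.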
Post by Ole Rasmussen
The question is if we can do something about this? I would like my tests to
alert me if something like this happens. Is this not possible? What do you
guys do about it? I'm sure it must be a problem for most of the test and
especially TDD practitioners.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca :: http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
Andrew Bruce
2012-03-19 02:51:25 UTC
Permalink
Perhaps I'm missing something, but in the Ruby world, do shared examples in e.g. RSpec perform the same task as contract tests? If I can say that a new thing behaves the same way as another, and the examples are sufficiently duck-typed, am I safe?
Post by philip schwarz
<snip />
Matt Wynne
2012-03-19 09:37:48 UTC
Permalink
Post by Andrew Bruce
Perhaps I'm missing something, but in the Ruby world, do shared examples in e.g. RSpec perform the same task as contract tests? If I can say that a new thing behaves the same way as another, and the examples are sufficiently duck-typed, am I safe?
Yeah, ask Kevin Rutherford about that. He's started calling RSpec shared example groups contract tests, and I think that's a nice way to think about them. He even aliases it_should_behave_like to it_should_support_contract or something like that.

cheers,
Matt

--
Freelance programmer & coach
Author, http://pragprog.com/book/hwcuc/the-cucumber-book
Founder, http://www.relishapp.com/
Twitter, https://twitter.com/mattwynne
Kevin Rutherford
2012-03-19 12:07:36 UTC
Permalink
Post by Andrew Bruce
Perhaps I'm missing something, but in the Ruby world, do shared examples in
e.g. RSpec perform the same task as contract tests? If I can say that a new
thing behaves the same way as another, and the examples are sufficiently
duck-typed, am I safe?
Post by Matt Wynne
Yeah, ask Kevin Rutherford about that. He's started calling RSpec shared
example groups contract tests, and I think that's a nice way to think about
them. He even aliases it_should_behave_like to it_should_support_contract or
something like that.
That's right, I do. These contract tests have saved our big old rails
project numerous times -- or at least saved loads of debugging time.

For example, our views have breadcrumbs that show a route into deep
content. Each "crumb" can in fact be any one of a growing number of
different kinds of domain object, so we have a contract test that
defines the behaviour we expect from anything that can appear in a
breadcrumb. The shared example group expects there to be a 'subject'
already set up, and then it performs a variety of checks, ranging from
should_implement_every_method_on_the_interface to actual behaviour
contracts.
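In this thread's Java idiom, a rough JUnit analogue of such a shared
example group might look like the following sketch (the Crumb interface
and its methods are invented here for illustration, since the Ruby code
is not shown):

import static org.junit.Assert.assertNotNull;
import org.junit.Before;
import org.junit.Test;

// Invented stand-in for whatever breadcrumb entries implement.
interface Crumb {
    String displayName();
    String linkPath();
}

// The analogue of the shared example group: it expects each subclass
// to set up a 'subject', then runs the same checks against it.
public abstract class CrumbContractTest {

    protected Crumb subject;

    protected abstract Crumb createSubject();

    @Before
    public void setUpSubject() {
        subject = createSubject();
    }

    @Test
    public void providesADisplayName() {
        assertNotNull(subject.displayName());
    }

    @Test
    public void providesALinkTarget() {
        assertNotNull(subject.linkPath());
    }
}

Each new kind of domain object that can appear in a breadcrumb then
gets a small subclass that only implements createSubject().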

Cheers,
Kevin
Andrew Bruce
2012-03-19 15:35:53 UTC
Permalink
Ah, thanks guys. I've used this approach in the past and had also been
struggling with translating the definition of a contract test to RSpec.

This is especially useful for writing gems that are intended to be extended
with new adapters etc. - just tell the user who wants to write an adapter
to stick a line in their spec file.
Post by Matt Wynne
<snip />
J. B. Rainsberger
2012-03-19 18:46:36 UTC
Permalink
Post by Andrew Bruce
Perhaps I'm missing something, but in the Ruby world, do shared examples in e.g. RSpec perform the same task as contract tests? If I can say that a new thing behaves the same way as another, and the examples are sufficiently duck-typed, am I safe?
I think so; I don't have enough varied experience to feel comfortable
with the details, but generally, I think so.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
J. B. Rainsberger
2012-03-19 18:52:34 UTC
Permalink
On Sun, Mar 18, 2012 at 02:58, philip schwarz
Post by philip schwarz
But the conditions are also expressed in the collaboration tests. If a
collaboration test misstates the conditions, then A1 can't help but
violate the LSP: while it satisfies the contract as stated by the
contract tests, it violates the contract as expressed by collaboration
tests.
Collaboration tests make assumptions about the contract; contract
tests try to justify those assumptions. For over 10 years, we've
written good collaboration tests, but few people write contract tests.
Post by philip schwarz
The contract is articulated in more than one place.
I disagree. The contract tests articulate the contract; the
collaboration tests use the contract. I see exactly the same
"duplication" between production code and tests: more like
double-entry book-keeping than worrisome duplication.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
philip schwarz
2012-03-19 22:47:29 UTC
Permalink
That is very useful: it helps me understand contract tests.

Thanks.

Philip
Post by J. B. Rainsberger
<snip />
Luca Minudel
2012-03-16 09:10:53 UTC
Permalink
Ole, in the scenario you are describing, what you are testing is a
behavior resulting from the interaction between two objects: in your
code example, BClass and an implementation of AInterface, if I
understood you correctly.

In a scenario like this I would also test the two real objects
together, without mocking the AInterface for that test (I call this an
integration test).
Since AInterface is an abstract type, I usually also want to run that
test against all the classes derived from AInterface; a test written
with generics helps me with this.
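Luca mentions generics; a JUnit 4 parameterized runner achieves a
similar effect in Java. A minimal sketch (the implementation class
names are invented for illustration):

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Runs the same integration test once per AInterface implementation.
@RunWith(Parameterized.class)
public class BClassWithEachAImplementationTest {

    @Parameters
    public static Collection<Object[]> implementations() {
        return Arrays.asList(new Object[][] {
                { new FirstAImplementation() },   // invented names
                { new SecondAImplementation() },
        });
    }

    private final AInterface implementation;

    public BClassWithEachAImplementationTest(AInterface implementation) {
        this.implementation = implementation;
    }

    @Test
    public void collaboratesWithTheRealImplementation() {
        // No test double of AInterface here: BClass and the real
        // implementation are exercised together.
        new BClass().doSomething(implementation);
    }
}

What to assert depends on the behaviour under test; the point is that
no mock stands between BClass and the implementation.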

Another possibility, which I consider first, is that what you reported
is a smell in the current design of BClass and AInterface. It is worth
verifying whether the responsibility in the implementation of
AInterface that is causing the problem is placed in the wrong place;
maybe it should be in BClass. Or maybe BClass should delegate more to
AInterface (the tell, don't ask principle).

Luca
Post by Ole Rasmussen
<snip />
Uberto Barbini
2012-03-16 14:34:17 UTC
Permalink
Post by Ole Rasmussen
<snip />
Hi Ole,

I'm a bit late to reply, but we had the same problem and this is how we
solved it. I'm easily confused by A-B examples, so let me tell you about
a real case we have here:


interface DocumentFetcher {
    DocumentFetcherResponse getDeviceList(Url url);
}

class DocumentFetcherHttp implements DocumentFetcher {
    // has an HttpClient inside, plus the logic to retrieve and
    // validate XML from an external URL
}

// value object
class DocumentFetcherResponse {
    boolean isOk;
    Document xmlDocument;
    String errorMessage;
    int statusCode;
}


This is used in the business logic to differentiate content according
to the device.

When we test the business logic, we mock DocumentFetcher, and as
expectations we have a set of stubbed DocumentFetcherResponses that
covers all the corner cases from the use cases.

Then, when we test DocumentFetcherHttp, we check that from all the
possible XML documents and conditions (HTTP errors, misconfigurations,
etc.) it returns exactly the same set of DocumentFetcherResponses.
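A minimal sketch of that arrangement (the fixture class and its names
are invented for illustration): keep the canned responses in one place,
so the business-logic tests stub with exactly the values that the
DocumentFetcherHttp tests assert against.

import org.w3c.dom.Document;

// Invented fixture: the single shared set of canned responses.
final class CannedFetcherResponses {

    static DocumentFetcherResponse ok(Document xml) {
        DocumentFetcherResponse r = new DocumentFetcherResponse();
        r.isOk = true;
        r.xmlDocument = xml;
        r.statusCode = 200;
        return r;
    }

    static DocumentFetcherResponse httpError(int statusCode, String message) {
        DocumentFetcherResponse r = new DocumentFetcherResponse();
        r.isOk = false;
        r.errorMessage = message;
        r.statusCode = statusCode;
        return r;
    }
}

The business-logic test stubs getDeviceList(url) to return, say,
CannedFetcherResponses.httpError(500, "server error"), while the
DocumentFetcherHttp test asserts that a real 500 response produces that
same value.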

Finally, we check the test coverage to be sure all the ifs and
methods in the code are covered by tests. Of course this is not a
guarantee by itself, but it helps to determine if we need more tests.

This way, if there is a change in the code or in the requirements, we
are pretty sure to catch it all along the line.

I hope this can help.


cheers

Uberto
philip schwarz
2012-04-11 07:54:57 UTC
Permalink
A similar question on Stack Overflow:
http://stackoverflow.com/questions/2965483/unit-tests-the-benefit-from-unit-tests-with-contract-changes/10101918#10101918
Post by Ole Rasmussen
<snip />
Michał Piotrkowski
2012-04-16 13:01:14 UTC
Permalink
Hi,

I have been thinking about this for a while, and I think I came up with
a solution. Tell me what you think about it. Let's summarize:

Context:
- you are a TDD practitioner and you follow the red/green/refactor mantra,
- you write unit tests and you achieve high code coverage in your
projects,
- you use a lot of mocks in your test code,
- you have a new feature to implement that changes existing code,
- one of those changes involves changing the *contract* of a class (let's
call it Class A) without changing its *signature*,
- you write a failing test for the new feature, then you change the
implementation of Class A; you compile and run your tests, and they pass
(everything is green).

Problem:
- you are aware that in your test code there are many mocks that respect
the old *contract* of Class A but do not conform to the new *contract*,
- you can find all the usages of the changed interface (with the help of
your IDE) and review those changes, but:
1. you feel bad about changing production code in the green phase without
a failing test first,
2. you are afraid that you might miss something.

Solution:
At the beginning I thought (just like Philip Schwarz did) that the reason
for this problem is duplication. Expectations towards collaborators are
scattered all over the test code, so I came up with an 'Introduce Factory'
solution.
But this is cumbersome. You have to remember to always use the factory to
create mocks. Some of the mocks will be single-use-only, so placing them
in the factory seems like overkill.

Then I thought: let's create a Decorator/Proxy that wraps your
mocking framework and runs your contract tests (J. B. Rainsberger's)
against your mocks:

ClassWithContract mock = MyMockingWrapper.createMock(ClassWithContract.class);
// <- validates the contract and throws ContractForbidsReturningNullsException
expect(mock.returnSomeNotNullValue()).andReturn(null);

In my opinion, implementing such a wrapper would be difficult but possible.
But:
1. client code can use mocking frameworks directly,
2. it will not work for ad-hoc created stubs like:

ClassWithContract mock = new ClassWithContract() {
    public Object returnSomeNotNullValue() {
        return null;
    }
};

Then I realised that I had missed something. The reason for this problem
is the lack of following the 'O' in SOLID. The open-closed principle
says: 'code should be open for extension but closed for modification'.
What we are trying to do is to modify the existing code. We have to pay
the price for breaking this principle.
We have to be aware of all the consequences of changing the interface of
the existing code. We have to analyze all the usages of the existing code.
In this context this looks more like refactoring than adding new features.
What we would have to do is drive our changes in the code by failing tests.
Like Ole said in his first post:
'We make the change, run out unit tests, and see most tests for classes
implementing interface A fail. This is good and expected.'
What can we do to make the tests fail? We can use the 'Lean on the
Compiler' pattern from 'Working Effectively with Legacy Code': change
(do not refactor) the method signature so that the client code no longer
compiles.

Example:

Let's assume we have:

public class ApplicationVersion {

    private int minor;
    private int major;

    public ApplicationVersion(int minor, int major) {
        this.major = major;
        this.minor = minor;
    }

    public String asString() {
        return String.format("%d.%d", major, minor);
    }
}

and interface:

interface ApplicationVersionResolver {

    public ApplicationVersion resolve();

}

and implementation:

public class ManifestApplicationVersionResolver implements
        ApplicationVersionResolver {

    // uses MANIFEST.MF to resolve the application version

}

ManifestApplicationVersionResolver tries to resolve the application
version from the 'MANIFEST.MF' file. If it does not find it (e.g. a
development build does not contain this file), it returns null.

This feature is used by the class ApplicationVersionWidget. Here are its tests:

public class ApplicationVersionWidgetTest {

    private ApplicationVersionResolver resolverMock;
    // ...

    @Test
    public void shouldDisplayUnknownForNotResolvedVersion() {

        givenApplicationVersionCantBeEstablished();

        String applicationVersion = widget.getText();

        Assert.assertEquals(applicationVersion, "Version: UNKNOWN");
    }

    public void givenApplicationVersionCantBeEstablished() {

        resolverMock = EasyMock.createMock(ApplicationVersionResolver.class);
        // <- assumes that the resolver returns null when it cannot resolve the version
        expect(resolverMock.resolve()).andReturn(null);
        replay(resolverMock);
    }

    // ... other tests
}

Then we spot some ugly conditional logic in our ApplicationVersionWidget:

ApplicationVersion version = resolver.resolve();
if (version == null) {
    versionLabel.setText("Version: UNKNOWN");
} else {
    versionLabel.setText("Version: " + version.asString());
}

In order to get rid of it, we decide to change the contract of
ApplicationVersionResolver (by introducing the Null Object pattern).

From:

/**
 * ...
 * @return version of application or null when it cannot be established.
 */
public ApplicationVersion resolve();

To:

/**
 * ...
 * @return version of application or ApplicationVersionResolver.UNKNOWN
 *         when it cannot be established.
 */
public ApplicationVersion resolve();
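The message shows the new contract but not the null object itself; a
minimal sketch of what it might look like (the UNKNOWN constant and the
overridable asString() are assumptions):

interface ApplicationVersionResolver {

    // Hypothetical null object: resolve() now returns UNKNOWN
    // instead of null when the version cannot be established.
    ApplicationVersion UNKNOWN = new ApplicationVersion(0, 0) {
        @Override
        public String asString() {
            return "UNKNOWN";
        }
    };

    public ApplicationVersion resolve();
}

The widget then shrinks to a single line:

    versionLabel.setText("Version: " + resolver.resolve().asString());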

Steps:
1. Create a test for the new functionality,
2. Make the tests pass,
3. Change the name of the method 'resolve' to 'resolve_v2'. Do not
refactor! Just edit the name of this method,
4. Production code does not compile,
5. Fix the compile errors in the *production* code to refer to the new
method name 'resolve_v2',
6. Production code compiles, test code does not compile,
7. Fix the compilation errors in the test code. For each error, check
whether the assumptions in the test class are up-to-date,
8. The tests pass,
9. Rename the method 'resolve_v2' back to 'resolve'.

In our example, in step 7 we notice that the method
givenApplicationVersionCantBeEstablished in ApplicationVersionWidgetTest
no longer has valid assumptions. We have to correct it, as the sketch
below shows.
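A hypothetical reconstruction of that correction, assuming the UNKNOWN
null object sketched above:

public void givenApplicationVersionCantBeEstablished() {

    resolverMock = EasyMock.createMock(ApplicationVersionResolver.class);
    // <- the stub now honours the new contract: UNKNOWN instead of null
    expect(resolverMock.resolve()).andReturn(ApplicationVersionResolver.UNKNOWN);
    replay(resolverMock);
}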

What do you think about this?

Cheers,
Michał
Post by Ole Rasmussen
<snip />
Luca Minudel
2012-04-17 10:27:10 UTC
Permalink
Michał, I had a look at the ManifestApplicationVersionResolver example
where you discuss the OCP.

Since I have documented extensively the relation between TDD with mocks
and SOLID, I was interested in your example.

What you are doing (introducing the Null Object pattern) with the
intention of eliminating a conditional expression is changing the
behavior of a class (ManifestApplicationVersionResolver) and the
protocol of the related interface (ApplicationVersionResolver).

As far as I can see, this change (the change of the internal
representation of the null version) is not related to the OCP or any
other of the SOLID principles.
Post by Michał Piotrkowski
In our example in step 7 we notice that the method
givenApplicationVersionCantBeEstablished in ApplicationVersionWidgetTest
has no longer valid assumptions. We have to correct this.
What do you think about this?
When you change the protocol of an interface, you also have to review
all the unit tests that mock that interface and change them.
I usually do a 'Search usages' of the interface in the test assembly
with the refactoring tool.
Then, since the protocol has changed, I often change the behaviors of
the mocks accordingly, and I often also update the names of the tests.
In lovely refactorings like the ones in your example, where complexity
and code get eliminated, it can happen that some tests can be eliminated
too (in your case: shouldDisplayUnknownForNotResolvedVersion).

HTH
Luca Minudel

On 16 Apr, 15:01, Michał Piotrkowski
Post by Michał Piotrkowski
<snip />