Since then, Mike has followed up with this: Testing is not a substitute for thinking, which starts as a summary of the responses to the earlier post and then moves on to his own views on testing.
It mentions some great work by Darius Bacon, who tests around twenty solutions gathered, painstakingly, from the comments. Darius' program is great work, but I thought that my testing strategy might be better, so I modified his code to:
- Add my attempt as function: paddy3118
- Graft on a version of my test routine to his list of function variants
I find that I cannot agree with much of Mike's writing on test-driven development, as he seems to take an extreme position by separating test from development too much.
def paddy3118(data, item):
    if not data:
        return -1
    lo, hi = 0, len(data) - 1
    mid = (hi + lo) // 2
    #print(lo, mid, hi)
    while item != data[mid] and lo < hi:
        lo, mid, hi = ((lo, (lo + mid) // 2, mid)
                       if item < data[mid] else
                       (mid + 1, (hi + mid + 1) // 2, hi))
        #print(lo, mid, hi)
        #0/0
    return mid if item == data[mid] else -1

# PASSES test()
#test(paddy3118)

#print sorted(k for k,v in globals().items() if type(v)==type(dan) and 'test' not in k)
tests = (Max, aaron_maxwell, ashish_yadav, ben_gutierrez, clark,
dan, dave_r, ed_marshall, guilherme_melo, jasper, martin,
michael_fogleman, paddy3118, patrick_shields, paul,
rodrigo_b, scott, si, tomas_henriquez,
travis_wrapped, yc)
if __name__ == '__main__':
    totpass = 0
    for t in tests:
        fails = 0
        for dlimit in range(5):
            data = list(range(dlimit))
            # Every item that is in the data must be found
            for item in data:
                try:
                    assert t(data, item) != -1
                except:
                    fails += 1
                    print(" ERROR: %s FAIL when looking for %i in %r" %
                          (t.__name__, item, data))
                    break
            # Items below, between and above the data must not be found
            for item in list(range(dlimit + 1)):
                try:
                    assert t(data, item - 0.5) == -1
                except:
                    fails += 1
                    print(" ERROR: %s FAIL as %3.1f is found in %r" %
                          (t.__name__, item - 0.5, data))
                    break
            if fails:
                break
        if not fails:
            totpass += 1
            print('PASS: %s' % t.__name__)
    print("\n%i/%i = %3.1f%% of functions pass" %
          (totpass, len(tests), 100 * totpass / float(len(tests))))
'''
PASS: Max
ERROR: aaron_maxwell FAIL as -0.5 is found in []
PASS: ashish_yadav
PASS: ben_gutierrez
ERROR: clark FAIL when looking for 0 in [0]
ERROR: dan FAIL as -0.5 is found in []
ERROR: dave_r FAIL as -0.5 is found in []
ERROR: ed_marshall FAIL as -0.5 is found in []
ERROR: guilherme_melo FAIL as -0.5 is found in []
ERROR: jasper FAIL as -0.5 is found in []
ERROR: martin FAIL as -0.5 is found in []
PASS: michael_fogleman
PASS: paddy3118
ERROR: patrick_shields FAIL as -0.5 is found in []
PASS: paul
ERROR: rodrigo_b FAIL as -0.5 is found in []
ERROR: scott FAIL as 1.5 is found in [0, 1, 2]
PASS: si
ERROR: tomas_henriquez FAIL when looking for 0 in [0, 1]
ERROR: tomas_henriquez FAIL as 1.5 is found in [0, 1]
ERROR: travis_wrapped FAIL when looking for 0 in [0]
PASS: yc
8/21 = 38.1% of functions pass
'''
When I was creating my function, I knew that there were likely to be issues with off-by-ones when modifying range indices, and with handling of an empty data list. I chose to leave a 'mental place-holder' on the off-by-one issues, as I thought I could easily write tests to exercise those corners and fill them in. The alternative would be to reason through those corner cases fully during 'development', then still have to test them thoroughly anyway.
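One cheap way to pin down such corner cases is to cross-check against a reference searcher built on the standard library's bisect module (bisect_search is my illustrative name for this helper; it is not part of Darius' code):

```python
from bisect import bisect_left

def bisect_search(data, item):
    # Reference oracle: bisect_left gives the insertion point;
    # the item is present only if that slot actually holds it.
    i = bisect_left(data, item)
    return i if i < len(data) and data[i] == item else -1

# The corners discussed above: empty data and boundary indices
assert bisect_search([], 5) == -1
assert bisect_search([7], 7) == 0
assert bisect_search([1, 3], 2) == -1
assert bisect_search([1, 3, 5], 5) == 2
```

Any hand-rolled binary search can then be compared against this oracle over the same inputs.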
When writing my tests:
- I thought about the division of the range of the indices - and thought that I should use data of both odd and even lengths, and of sufficient length that a range might be divided more than once.
- I thought about all the comparisons - and decided to test for every item that is in the data.
- I thought about the significant ways that an item might not be in the data - and decided to test for items between every pair of adjacent data items, and for items below the smallest and above the largest data items.
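Those three choices can be sketched as a single generator of (data, item, expected-found) cases; gen_cases is my illustrative name, not a function from the code above, but the data it yields matches what the test loop exercises:

```python
def gen_cases(max_len=5):
    for n in range(max_len):
        # Odd and even lengths, including the empty list
        data = list(range(n))
        # Every item that is in the data must be found
        for item in data:
            yield data, item, True
        # Items below, between and above the data must not be found
        for i in range(n + 1):
            yield data, i - 0.5, False
```

Feeding these cases to each candidate function reproduces the pass/fail sweep in the harness above.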
The only conclusion I can draw is that you also need to do some white-box testing, i.e. tests based on knowledge of the code. Mike Taylor needs to recognise that imposing rigid categories may well lead to extremes. I guess there is no substitute for experience, but the inexperienced need to be given rules until they learn enough to break them. Do we teach testing up front, or not? (Remembering that, given half a chance, testing by the novice will be inadequate.)
- Paddy.