There is something called defensive programming http://en.wikipedia.org/wiki/Defensive_programming that is supposedly a good thing. I believe so too, but it can have a way of hiding bugs.
In defensive programming you receive data and inspect it for flaws. When possible you repair the data so it is correct and usable again. This is good in the sense that your own code keeps working.
But when your code is just a cog in a bigger machine this approach is not necessarily the best; by accepting and quietly repairing flawed data, you hide a bug that lives somewhere else.
In the Wikipedia example a string limited to 1000 characters is handled. The suggested solution is to truncate the string and keep rowing as if there wasn't a leak in the first place. Depending on the circumstances the bug might be in the caller, and by removing the bug itself instead of its symptoms the real problem disappears.
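To make the difference concrete, here is a minimal sketch of the defensive variant in Java. The class name, the saveDefensively method and the 1000-character limit are my own assumptions for illustration, not code from the Wikipedia article:

public class EssayStore {
    private static final int MAX_LENGTH = 1000;

    // Defensive variant: silently repair over-long input and carry on.
    public void saveDefensively(String text) {
        if (text.length() > MAX_LENGTH) {
            // The caller's bug is hidden right here: the data is "fixed" and nobody is told.
            text = text.substring(0, MAX_LENGTH);
        }
        persist(text);
    }

    private void persist(String text) {
        // storage details left out of the sketch
    }
}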
According to Fail at once, an error should surface as soon as possible and as close to its source as possible. The chance that data resulting from the bug propagates is then smaller, and tracking the bug down is probably faster.
In the example it would have been better to throw an exception than to carry on with data that is now faulty in a different way than it was to begin with.
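Continuing the same hypothetical EssayStore sketch, a fail-fast variant rejects the over-long input at the boundary, so the caller's bug surfaces immediately instead of being masked by truncation:

    // Fail-fast variant: refuse faulty input and point straight at the source.
    public void saveOrFail(String text) {
        if (text.length() > MAX_LENGTH) {
            throw new IllegalArgumentException(
                "Essay is " + text.length() + " characters; the limit is " + MAX_LENGTH);
        }
        persist(text);
    }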
Or put like this: a user who writes an essay of 2000 characters probably wants to know that it cannot be saved the moment Ctrl-S is pressed, not two days later when proofreading, when the last 1000 characters are gone both from mind and from disk. And an operator of a tube bending machine prefers to know that his machine is faulty while he is working at it, not two weeks later when the tubes are buried inside a wall somewhere.
2008-07-03
1 comment:
Completely agree. I checked out the article before continuing with your post, and after seeing the example I started wondering whether this is really what you expect the program to do "back stage" with the input it is given. Sure, maybe "it ain't crashing", but if the input comes from a machine (taking your example of the tubes), in my opinion the process must stop and alert the operator that the SENDER is providing data the application is unable to process. If your architecture and logic are based on a range of inputs specified up front, then the bug is certainly not yours, but production must be warned about it instead of putting makeup on your code crashers.