"The Machine That Won the War" is a short story by Isaac Asimov, first published in 1961. The story is set at the end of a long war in which humanity has defeated an alien enemy, and one huge computer system is initially credited with making the decisions that led to the victory.
Data about battle outcomes and about the availability and location of resources are all collected and fed into this system. The supercomputer processes these data to give the commanders accurate guidance on what decisions to make. Over the course of the story, however, we learn that the data being fed to it were highly unreliable, since most of the people supplying them could not be trusted. Moreover, even the data collected from these untrustworthy sources were being manipulated into "correct" form before being sent to the computer. To make matters more interesting, the computer itself was not in working condition and could not be trusted to interpret the data reliably, so its output was manipulated yet again, this time to allow for Murphy's law: anything that can go wrong will go wrong.
Finally, it is revealed that although the data were being "fixed" at every stage, the person ultimately responsible for acting on the computer's decisions had been making the final calls with a coin toss.
Does this suggest that very complex systems are inherently hard to model, interpret, and predict? So much so that a probabilistic approach performed as well as a very sophisticated model. Perhaps the war could have been won much earlier if all the data had been perfectly accurate, the computer had been in perfect working order, and its instructions had been followed to the letter. On the other hand, perfectly reliable data of that kind may simply be unattainable in many very complex systems. Hence the need to include a factor that accounts for the unreliability of the data.
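The intuition above can be sketched with a toy simulation (my own illustration, not anything from the story): a "model" that faithfully follows inputs which are corrupted with some probability, compared against a pure coin toss. The function name `simulate` and the `noise` parameter are hypothetical choices for this sketch.

```python
import random

def simulate(noise, n_trials=10_000, seed=0):
    """Return (model accuracy, coin-toss accuracy) on binary decisions.

    The 'model' always follows its input, but each input has been
    flipped with probability `noise` before reaching it -- a stand-in
    for unreliable reporting at every stage of the chain.
    """
    rng = random.Random(seed)
    model_correct = coin_correct = 0
    for _ in range(n_trials):
        truth = rng.random() < 0.5                      # the genuinely better option
        observed = truth if rng.random() >= noise else not truth
        model_correct += (observed == truth)            # model trusts its noisy input
        coin_correct += ((rng.random() < 0.5) == truth) # the coin ignores the data
    return model_correct / n_trials, coin_correct / n_trials

# With clean data (noise=0.0) the model is perfect; as noise approaches
# 0.5 -- inputs garbled at every stage -- it degenerates to the coin's ~50%.
```

Under this toy setup, the elaborate model's advantage over the coin toss is exactly its edge over the noise in its inputs; once the inputs are sufficiently corrupted, the two are indistinguishable.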