Yes, I second this. I'm often doing what I can only call archeology on code: "What were the ancients thinking at the time? There must have been some reason they did this."
Not just with the details of the code itself; sometimes there can be large amounts of functionality that make no sense.
A while back I was chatting to an older guy who worked with one of the old mainframes that we still have, and he was able to explain to me why we did things the way we did. It was because another external system was very slow and batch oriented. It needed its data ASAP to begin its processing, or its work would finish too late. So we had a whole orchestration around sending some data at different times, with data marked with different priorities, etc.
When the performance requirement disappeared with hardware improvements, the orchestration made no sense anymore, but nobody remembered why we did it that way, so we kept doing it for years after it was needless and stupid.
There was no way you could figure out the why unless you knew the history of the external system.
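For illustration only, an orchestration like the one described might have looked roughly like this: data split by priority and dispatched on a staggered schedule so the slow batch system could start on the urgent records first. All names, priorities, and times here are hypothetical, not from the actual system.

```python
# Rough sketch of a priority-based, time-staggered dispatch plan.
# All names and values are invented for illustration.
from dataclasses import dataclass
from datetime import time

@dataclass
class Record:
    payload: str
    priority: int  # 1 = must reach the batch system first

# Hypothetical schedule: high-priority data goes out early so the
# nightly batch run can begin before the rest arrives.
DISPATCH_SCHEDULE = {
    1: time(hour=18, minute=0),   # urgent records, sent first
    2: time(hour=21, minute=0),   # normal records
    3: time(hour=23, minute=30),  # low-priority records, sent last
}

def plan_dispatch(records: list[Record]) -> dict[time, list[Record]]:
    """Group records into send batches keyed by their scheduled send time."""
    batches: dict[time, list[Record]] = {t: [] for t in DISPATCH_SCHEDULE.values()}
    for record in records:
        batches[DISPATCH_SCHEDULE[record.priority]].append(record)
    return batches
```

Once the downstream system got fast enough to chew through everything in one go, the schedule and the priority flags carry no meaning, but nothing in the code itself tells you that.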
Often you know the what, but not the why.
That’s a great example. There are a lot of implicit requirements that result from business processes, hardware configuration, org structure, and so on. Nobody really even thinks of these things as requirements at the time, and reverse engineering all that later on becomes impossible. Here's a great article on this phenomenon that likens long-running software projects to living on a generation ship :)
https://medium.com/@wm/the-generation-ship-model-of-software-development-5ef89a74854b