highwaystar101 said:
Sqrl said:
highwaystar101 said:
(apparently about the same spec as a modern-day laptop, although I find this claim dubious)
|
Why does that sound so dubious?
If the equipment was installed in '80, it's been 360 months, or 20 iterations of 18-month increments.
By Moore's law, computers should be around 1,048,576x faster today than they were then. Obviously it's not exactly that simple, but it's a good enough ballpark figure, and it's large enough that a laptop today could easily rival (or surpass) a supercomputer from then.
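Just to make the back-of-the-envelope arithmetic explicit, here is a tiny Python sketch. It only uses the figures from the posts above (a 1980 start, roughly 2010 as "today", and an 18-month doubling period); these are ballpark inputs, not measured values.

    # Moore's-law ballpark: one doubling every 18 months, 1980 -> ~2010
    months = 30 * 12            # 360 months between 1980 and ~2010
    doublings = months // 18    # 20 doublings
    speedup = 2 ** doublings    # 2**20 = 1,048,576
    print(f"{doublings} doublings -> roughly {speedup:,}x faster")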
Luckily for modern society, repeated doubling gets out of hand rather quickly =)
|
You misunderstood my post...
Don't worry, I understand Moore's law. But I wasn't talking about the 1980s, I was talking about the 1960s Apollo mission. Yes, a supercomputer in the 1980s would rival or even surpass a modern laptop; but in the 1960s, computing was a completely different game.
When the Apollo mission launched in the late 1960s, the highest-spec computer in the world was the CDC 7600, which could deliver about 36 MFLOPS. That's not a lot when you consider that modern laptops work in GFLOPS.
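For a sense of scale, a quick sketch of that ratio: the 36 MFLOPS figure is the commonly quoted peak for the CDC 7600, while the 10 GFLOPS laptop number is purely an illustrative assumption, not a benchmark of any particular machine.

    # Rough ratio between a late-1960s CDC 7600 and an assumed modern laptop
    cdc7600_flops = 36e6    # ~36 MFLOPS peak (commonly quoted figure)
    laptop_flops = 10e9     # assumed ~10 GFLOPS, purely illustrative
    print(f"Laptop is roughly {laptop_flops / cdc7600_flops:.0f}x faster")  # ~278x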
So either NASA had some secret supercomputer that massively outperformed the most powerful supercomputer of the day, or, as I suspect, the story has been romanticised by documentaries.
|
Sounds like it could be a case of the difference between coding efficiency then vs. now. As HappySqrl pointed out earlier, the calculation of what has more $$ value has changed substantially since the '60s. It used to be that computer time was far and away worth more than the programmer's time, because the computers were ungodly expensive and the programmers usually worked for free or very little (depending on whether it was a business or a university machine). Now a programmer makes a tidy salary, and the computer could be purchased out of the petty cash of most businesses. Coding practices, and what is and isn't acceptable efficiency, change a lot to suit the standards of the day.
Then, on top of the pure incentive to write efficient code, they had the further advantage of being able to program for a specific architecture (not to mention not having to run some albatross of an OS).
There is no way to determine how much of a difference any of those factors makes, or whether together they could account for the entire gap you're talking about, but I think they could go a long way towards closing it.