Lately I’ve been working with a very smart software developer who started in the industry in the early 80s, and one of his observations is that many of the systems we’re developing today are very similar in overall functionality to the systems he developed early on, yet require thousands of times the processing power and memory to function. A large portion (perhaps the majority) of that processing power and memory goes to driving user interfaces, but much of the rest is because developers have optimized for re-usability and shorter development time rather than for efficiency, which in many (maybe most) cases is a reasonable trade-off.
An example of what he is talking about: we’re breaking large, complicated enterprise systems up into smaller and smaller independent applications and having them communicate with each other through SOAP or REST interfaces, which typically means creating and parsing XML. To transfer an integer, rather than having 4 or 8 bytes of data somewhere in memory that you pass to a function, you now have (say) a 16-byte string representation of the integer wrapped in another 32 bytes of characters for the XML tags, at both the local and remote ends, plus additional memory devoted to an XML parser; and you spend additional processing power to convert the integer to a string, wrap that string in XML, transfer the XML, parse it, and convert the string back to an integer.
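To make that overhead concrete, here is a minimal sketch in Java of the round trip described above; the <accountId> tag name, the payload shape, and the sizes printed are illustrative assumptions, not taken from any particular system:

```java
// Minimal sketch: the cost of moving one int through an XML envelope.
// The <accountId> tag is a made-up name used purely for illustration.
public class XmlOverhead {
    public static void main(String[] args) {
        int value = 1234567890;            // 4 bytes in memory

        // Sender: format the int as text and wrap it in XML tags.
        String xml = "<accountId>" + value + "</accountId>";

        // Receiver: strip the tags and convert the text back to an int.
        // (A real service would run a full XML parser here, which costs
        // far more memory and CPU than this string manipulation does.)
        int roundTripped = Integer.parseInt(
                xml.replace("<accountId>", "").replace("</accountId>", ""));

        System.out.println("in-memory size of the int: 4 bytes");
        System.out.println("XML payload: " + xml.length() + " characters ("
                + (2 * xml.length()) + " bytes if stored as UTF-16 chars)");
        System.out.println("round-tripped value: " + roundTripped);
    }
}
```

For this one value the payload is 33 characters, so the textual representation alone is roughly an order of magnitude larger than the 4-byte integer, before you count the parser, the HTTP layer, or the SOAP envelope around it.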
Applications that would run in 64k of memory on a Motorola 68k now require 512 MB of memory on a Core 2 Duo because of all the complexity we have wrapped around them.
Or to put it another way: if it is a working system, then I doubt it would benefit much from moving to better hardware.