After the initial IMP installations in 1969, the ARPANET continued to expand; because it was a more or less operational network, fixes were required and improvements could be made to the 516/316 IMP software [McQuillan72]. Later the algorithms were reimplemented so that Pluribus-based IMPs could function compatibly with the 516/316 IMPs, and the 516/316 IMP program was adapted to run on the C/30 platform. However, it was years before fundamental networking algorithms were developed on any platform other than the 516/316 IMP. Here we sketch the initial round of changes to the IMP software by quoting from a 1972 paper [McQuillan72].
A balanced design for a communication system should provide quick delivery of short interactive messages and high bandwidth for long files of data. The IMP program was designed to perform well under these bimodal traffic conditions. The experience of the first two and one half years of the ARPA Network’s operation indicated that the performance goal of low delay had been achieved. The lightly loaded network delivered short messages over several hops in about one-tenth of a second. Moreover, even under heavy load, the delay was almost always less than one-half second. The network also provided good throughput rates for long messages at light and moderate traffic levels. However, the throughput of the network degraded significantly under heavy loads, so that the goal of high bandwidth had not been completely realized. We isolated a problem in the initial network design which led to degradation under heavy loads [BBN Report 2161, Kahn71].
This problem involves messages arriving at a destination IMP at a rate faster than they can be delivered to the destination Host. We call this reassembly congestion. Reassembly congestion leads to a condition we call reassembly lockup in which the destination IMP is incapable of passing any traffic to its Hosts. Our algorithm to prevent reassembly congestion and the related sequence control algorithm are described in the following subsections. We also found that the IMP and line bandwidth requirements for handling IMP-to-IMP traffic could be substantially reduced. Improvements in this area translate directly into increases in the maximum throughput rate that an IMP can maintain. Another set of changes was made to expand the capabilities rather than the performance of the IMP.
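The reassembly-lockup scenario can be illustrated with a small simulation; the buffer count, message length, and function names below are our own illustrative assumptions, not BBN's data structures. Each arriving packet of a multi-packet message claims one reassembly buffer, and a message is handed to the Host only when all its packets are present. When partial messages from several sources exhaust the pool, no further packets can be accepted and nothing can ever complete:

```python
# Illustrative sketch of reassembly lockup (all names and sizes assumed).
POOL_SIZE = 6   # reassembly buffers at the destination IMP
MSG_LEN = 4     # packets per multi-packet message

def simulate(arrivals):
    """arrivals: list of (msg_id, packet_no). Returns (delivered, dropped)."""
    buffers = {}        # msg_id -> set of packet numbers held so far
    in_use = 0          # buffers currently occupied by partial messages
    delivered, dropped = [], 0
    for msg, pkt in arrivals:
        if in_use == POOL_SIZE:
            dropped += 1            # no buffer free: packet refused
            continue
        held = buffers.setdefault(msg, set())
        held.add(pkt)
        in_use += 1
        if len(held) == MSG_LEN:    # message complete: deliver to Host
            delivered.append(msg)
            in_use -= MSG_LEN       # free its reassembly buffers
            del buffers[msg]
    return delivered, dropped

# Interleaved: three messages each land two packets, filling the pool;
# every later packet is refused, so no message ever completes (lockup).
interleaved = [(m, p) for p in range(2) for m in "ABC"] + \
              [(m, p) for p in range(2, 4) for m in "ABC"]
# Sequential: the same twelve packets, one message at a time, all deliver.
sequential = [(m, p) for m in "ABC" for p in range(4)]
```

The fix described in [McQuillan72] had the sending IMP obtain a reassembly-buffer allocation from the destination before transmitting a multi-packet message, so partial messages could never exhaust the pool in this way.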
The size of the initialization code and the associated tables deserves mention. This was originally quite small. However, as the network has grown and the IMP’s capabilities have been expanded, the amount of memory dedicated to initialization has steadily grown. This is mainly due to the fact that the IMPs are no longer identical. An IMP may be required to handle a Very Distant Host [a host at the other end of a communications circuit rather than a bit of wiring away from the IMP], or TIP hardware [an IMP option that directly attaches a software host handling up to 63 terminals], or five lines and two Hosts, or four Hosts and three lines, or a very high speed line, or, in the near future, a satellite link. As the physical permutations of the IMP have continued to increase, we have clung to the idea that the program should be identical in all IMPs, allowing an IMP to reload its program from a neighboring IMP and providing other considerable advantages.
However, maintaining only one version of the program means that the program must rebuild itself during initialization to be the proper program to handle the particular physical configuration of the IMP. Furthermore, it must be able to turn itself back into its nominal form when it is reloaded into a neighbor. All of this takes tables and code. Unfortunately, we did not foresee the proliferation of IMP configurations which has taken place; therefore, we cannot conveniently compute the program differences from a simple configuration key. Instead, we must explicitly table the configuration irregularities.
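The scheme described above, a single nominal program that patches itself at initialization from an explicit table of per-IMP irregularities, and that can be restored to nominal form before being reloaded into a neighbor, might be sketched as follows. All names, fields, and table entries here are our own illustrative assumptions, not the actual IMP tables:

```python
# Illustrative sketch: one nominal program image, configured at init time.
NOMINAL = {"modem_lines": 0, "hosts": 0, "vdh": False, "tip": False}

# Explicit per-IMP irregularity table (hypothetical entries); the
# differences are tabled rather than computed from a configuration key.
CONFIG_TABLE = {
    5:  {"modem_lines": 5, "hosts": 2},
    17: {"modem_lines": 3, "hosts": 4},
    23: {"modem_lines": 2, "hosts": 1, "tip": True},
}

def initialize(imp_id):
    """Rebuild the nominal program into this IMP's own configuration."""
    cfg = dict(NOMINAL)
    cfg.update(CONFIG_TABLE.get(imp_id, {}))
    return cfg

def nominal_form(cfg):
    """Undo the patches before reloading a copy into a neighboring IMP."""
    return dict(NOMINAL)
```

The design trade-off the quotation describes is visible here: with a regular fleet, `CONFIG_TABLE` could be replaced by a formula over a compact key, but irregular configurations force an explicit entry per IMP, and that table grows with the network.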
John McQuillan has also said the following about that era, during which checksums and other code-robustness devices were put into the code [McQuillan13]: . . . [a] significant part of the effort I put into the IMP program from 1971 to 1973 had to do with hardware/software interactions. The interrupt system of the 516 and the direct memory channels turned out to be a key focus, both as strengths of the hardware and as sources of issues and failures . . . One of the goals in that period was to make the IMP more resilient . . . After the above changes were made, the major effort of the next few years was redoing the original ARPANET routing, as the limitations of the original algorithm were discovered while the network grew larger. First there were small modifications, and then McQuillan examined the issues in detail [McQuillan74]. Eventually, McQuillan and others developed a new routing algorithm [McQuillan80, McQuillan09], which the IMP programmers of the time implemented.
The original routing algorithm was useful for getting the ARPANET up and running quickly and supported, more or less, its first few years of operational use. The new routing algorithm lives on today in the OSPF routing protocol [McQuillan09]. The routing transition was also an instance, though not the only one, where incompatible releases of the IMP software had to be distributed; this added significant complexity to the release effort, since an interim release had to be created to allow moving between the prior and new operational releases.