Posted on Fri 09 June 2023
My wife worked for the first public ISP for several years. My first job, taken while I was still in college, was at a local ISP, where I eventually became head of all technical operations. And my next job was at a major commercial ISP, one that leased POPs to AOL, among other businesses.
Here’s how they worked, at a technical and business level:
Every aspect of an ISP, economic and technological, is based on the ability of a single link to time-share many connections by packetizing them. Consider every point-to-point network connection as a train track populated by packets: train cars that always move at the speed limit to the other end of the track, where the sign on the front of each car is read by the stationmaster, who selects a route for the next leg of that car’s journey.
The stationmaster doesn’t care whether all the cars are going to the same destination or not. If the station only has three tracks, then every car arriving on track 1 is headed out on 2 or 3; every car coming in on 2 is going out on 1 or 3. Which one should be selected? That’s up to the sign on the car and the stationmaster’s policy.
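The stationmaster’s policy is, at heart, a longest-prefix-match lookup. Here’s a toy sketch in Python – the prefixes and “track” names are invented for illustration, not anything a real router uses:

```python
import ipaddress

# Toy routing table: destination prefix -> outgoing "track" (interface).
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "track-2",
    ipaddress.ip_network("10.1.5.0/24"): "track-3",
    ipaddress.ip_network("0.0.0.0/0"): "track-1",  # default route: upstream
}

def next_hop(dst: str) -> str:
    """Pick the route with the longest matching prefix, like the
    stationmaster reading the sign on the front of each car."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (net for net in ROUTES if addr in net),
        key=lambda net: net.prefixlen,
    )
    return ROUTES[best]
```

A packet for 10.1.5.7 matches both 10.1.0.0/16 and 10.1.5.0/24, but the more specific /24 wins; anything matching nothing else falls through to the default route, i.e. the upstream transit link.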
A minimally viable ISP in 1994 bought a T1 line to another, larger ISP, which would agree to route all its packets. This is called “buying transit”. That gets you 1.5 megabits per second in each direction. The T1 terminates in a modem-like piece of equipment called a CSU/DSU, which was usually attached via a high-speed (for the time) V.35 serial connection to a router – often a Cisco 25xx-series 1U device. Some routers had interface cards with all the functions of the CSU/DSU built in, so the T1 would go directly into the router. Integration is nice – it guarantees compatibility among the integrated components, takes up less space, improves reliability, and usually draws less power and therefore generates less heat.
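If you’re curious where the 1.5 Mb/s figure comes from, it falls straight out of T1 framing arithmetic – 24 digitized voice channels plus one framing bit per frame:

```python
# T1 arithmetic: 24 DS0 channels, each one 8-bit sample 8,000 times per second.
DS0_BPS = 8 * 8000             # 64 kbit/s per channel
CHANNELS = 24
payload = DS0_BPS * CHANNELS   # 1,536,000 bit/s of usable capacity
framing = 8000                 # 1 framing bit per 193-bit frame, 8,000 frames/s
line_rate = payload + framing  # 1,544,000 bit/s -- the familiar "1.5 Mb/s"
```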
That router would talk via 10 Mb/s Ethernet - wow, fast! - to the local equipment, which would include a server running authentication/authorization software (based on the RADIUS protocol, usually), a local mail server, perhaps a local FTP server, and, for most of that period, a local USENET server. Finally, we have the equipment to allow clients to log in: a bank of telephone lines from the local phone company, each attached to a modem, which was connected by an RS-232 serial line to a terminal server – that is, a computer with a lot of serial ports. The terminal server would offer a terminal connection to a local computer, a SLIP connection, or a PPP (Point-to-Point Protocol) connection, and whichever of these the client used was carried over the local Ethernet.
So: clients would dial in on a phone number (a roll-over number was an arrangement by which the telco would connect inbound calls on one number to whichever line was next in sequence, until all of them were busy), negotiate with the modems, authenticate for a PPP session, and then send and receive packets.
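The whole call flow can be sketched as a few steps of Python – the function names and the stubbed RADIUS check are mine, not a real terminal server’s, and the test account is invented:

```python
# Toy sketch of the dial-in call flow described above.

def radius_check(user: str, password: str) -> bool:
    """Stand-in for a RADIUS Access-Request; a real terminal server
    would get back Access-Accept or Access-Reject from the auth server."""
    ACCOUNTS = {"alice": "hunter2"}  # invented test account
    return ACCOUNTS.get(user) == password

def handle_call(user: str, password: str) -> str:
    # 1. The telco's roll-over arrangement delivers the inbound call
    #    to the next free phone-line/modem pair.
    # 2. The modems negotiate a carrier.
    # 3. The terminal server asks RADIUS whether to admit the caller.
    if not radius_check(user, password):
        return "hang up"
    # 4. On accept, start a PPP session and begin moving packets.
    return "start PPP"
```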
Every packet that stayed local to the ISP was an economic win for the ISP, because local packets weren’t using up the valuable commodity of the general Internet connection. That’s why there would be local mail and Usenet and FTP – and eventually local web servers. Typically, a high-quality ISP could run at a ratio of 5 or even 8 to 1 – five to eight times as much bandwidth on the modem side as on the upstream side. A few years later, Akamai’s line of business was convincing ISPs to give them space and power for free in their POPs or datacenters, using the clever argument that every web request answered by an Akamai cache server was a request not using valuable Internet transit. Meanwhile, Akamai’s clients appeared ridiculously fast to anyone asking for their content from an Akamai-affiliated ISP. (Oh, yes, after I worked for the big ISP I worked for Akamai.) The hard part of running an ISP in the first few years was letting people know you existed – newspaper ads, physical bulletin board ads, billboard ads, radio spots – you needed to invest in these things.
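Some back-of-the-envelope numbers for those ratios, assuming a single T1 of transit and 28.8 kb/s modems (a typical mid-90s speed – the exact modem figure is my assumption, not from the post):

```python
# Oversubscription arithmetic for the 5:1 to 8:1 ratios above.
T1_BPS = 1_536_000   # usable payload of one T1 of transit
MODEM_BPS = 28_800   # assumed per-caller modem speed

for ratio in (5, 8):
    modem_side_bw = T1_BPS * ratio
    callers = modem_side_bw // MODEM_BPS
    print(f"{ratio}:1 -> about {callers} simultaneous 28.8k callers per T1")
```

At 5:1 that’s roughly 266 simultaneous callers sharing one T1 of transit; at 8:1, roughly 426 – the whole business model in two lines of arithmetic.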
A few years later, Ascend Communications – soon acquired by Lucent Technologies, the equipment maker spun off from AT&T – came up with an all-in-one box called the AscendMAX. In one medium-large box, an ISP could have every component of a POP: efficiently wired telephone circuits going to built-in modems, a high-speed transit connection, routing services, and authentication (usually reflected back to central authentication servers).
Then Lucent outsmarted themselves.
There was an ISP boom. Everyone was offering a variation on $20/month all-you-can-eat dialup service. To expand to a new location, you just needed to arrange for some space in a rack, a T3/DS3 backhaul to your core networks, a set of T1/DS1 lines for dialup, and one or more AscendMAX boxes to connect everything together. Lucent was selling thousands. Lucent’s salespeople were getting rich on commissions, and pretty soon they started offering financing for ISPs which were clearly going to be making money hand-over-fist forever. The financing was at a nice interest rate, too, and the business was solid, so Lucent’s only collateral was the AscendMAX itself. If an ISP failed, well, some other ISP would happily buy the repossessed, refurbished, discounted equipment from Lucent. And all this went so well that the stock price soared, and some funny accounting tricks were played to make it go even higher.
Then it crashed, and Lucent found themselves the proud owners of thousands of repo’d AscendMAX units, and no buyers. See the Wikipedia article for details.