Ethernet has been in a state of perpetual evolution since its inception – with significant accommodation for backwards compatibility thanks to frame structure standardization. While exponential increases in throughput are perhaps most noteworthy, Ethernet has also seen improvements in the flexibility of its Media Access Control (MAC) mechanisms. The physical (PHY) sub-layer has evolved as well, not least in the ever-increasing breadth of transmission media choices for an Ethernet network.

Ethernet Evolution
StarLAN was the first implementation of Ethernet over twisted pair copper wire. Known as 1BASE5 and standardized by the IEEE as 802.3e in the mid-1980s, StarLAN ran at speeds of up to 1 Mbit/s. In light of the circuit switched, voice orientation of networks at that point, developers of 1BASE5 wanted to reuse previously installed telephony cabling (PBX and/or key systems), thus minimizing the need to rewire office buildings and other enterprises. As the name implies, StarLAN was built around a hub-and-spoke topology – a direct emulation of the circuit switched voice systems dominant at the time.

10BASE-T & Beyond
Introduced in the early 1990s, 10BASE-T supported 10 Mbit/s over twisted pair copper – using two of the four pairs in a standard 4 pair (8 conductor) cable – terminated on the now universally recognized RJ-45 modular connector. Both half and full duplex are supported, as is the case with 100BASE-T (100 Mbit/s) and 1000BASE-T at 1 Gbit/s (GigE). More than evolutionary, 10BASE-T arguably ushered in the broad adoption of LANs in the business environment.

Ethernet itself was initially delivered over a shared coaxial cable in a bus topology, emulating a data radio network environment not unlike AlohaNet (described in the previous post). Thus the “Ether” in Ethernet. CSMA/CD played an essential role in managing channel contention resulting from packet collisions. Topologically, it was impractical to segment the network, and as such any number of single points of failure could bring down the entire network.

There were inefficiencies inherent in early Ethernet. Since a single coaxial cable carried all network communication, information sent by one device was received by every device on the network. It was the job of the station’s interface hardware – attached via the Attachment Unit Interface (AUI), a precursor to the modern Network Interface Card (NIC) – to reject all traffic other than that intended for the device it was connected to. Also, with all network traffic confined to a single shared cable, bandwidth could be quickly exhausted. Exacerbating the finite bandwidth was the broadcast nature of the medium, wherein all stations were sent all data regardless of whether it was intended for them or not. Finally, while elegant, CSMA/CD by its very nature reduces channel efficiency.
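The filtering role described above can be sketched in a few lines of Python. This is a simplified illustration – the addresses and frames are invented for the example: every frame reaches every station on the shared medium, and each station’s interface keeps only frames addressed to it (or to the broadcast address).

```python
# Hypothetical sketch of shared-medium filtering: every frame is seen by
# every station; the interface discards frames not meant for this station.

MY_MAC = "aa:bb:cc:00:00:01"        # assumed address, for illustration only
BROADCAST = "ff:ff:ff:ff:ff:ff"     # all-ones broadcast address

def accept_frame(dest_mac: str) -> bool:
    """Accept only frames addressed to this station or to everyone."""
    return dest_mac in (MY_MAC, BROADCAST)

# Three frames arrive on the wire; only two are for us.
frames = [
    ("aa:bb:cc:00:00:01", "for us"),
    ("aa:bb:cc:00:00:02", "for someone else"),
    ("ff:ff:ff:ff:ff:ff", "broadcast"),
]

delivered = [payload for dest, payload in frames if accept_frame(dest)]
print(delivered)  # ['for us', 'broadcast']
```

The discarded frame still consumed channel time on the shared cable – which is exactly the bandwidth inefficiency described above.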

Switched Ethernet
As 10BASE-T hubs and bridges matured, the concept of Switched Ethernet developed. Switched Ethernet is significant in that it borrowed the idea behind Token Ring’s once superior network speed: a single session (i.e.: two network devices) gets all the LAN bandwidth for a given instant, rather than sharing network bandwidth as was the case with the broadcast model. From the switch’s point of view, the only device on each segment is the end station’s interface (NIC). The switch’s intelligence is dedicated to managing frame delivery over the appropriate segment – with a modern Ethernet switch managing hundreds or thousands of such segments concurrently.
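The forwarding logic described above can be sketched as a toy “learning switch” in Python – a minimal illustration of the technique, not any vendor’s implementation. The switch notes which port each source address was seen on; when a destination is known it forwards to that one port, and when it is unknown it floods to every port except the one the frame arrived on.

```python
from typing import Dict, List

class LearningSwitch:
    """Toy sketch of Ethernet switch learning and forwarding logic."""

    def __init__(self, ports: List[int]):
        self.ports = ports
        self.mac_table: Dict[str, int] = {}   # MAC address -> port

    def handle_frame(self, src: str, dst: str, in_port: int) -> List[int]:
        self.mac_table[src] = in_port         # learn where src lives
        if dst in self.mac_table:             # known destination: one port
            return [self.mac_table[dst]]
        # Unknown destination: flood everywhere except the arrival port.
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame("A", "B", in_port=1))   # B unknown: flood -> [2, 3, 4]
print(sw.handle_frame("B", "A", in_port=2))   # A learned:        -> [1]
```

Once both stations have been learned, traffic between them occupies only their two segments – the rest of the LAN’s bandwidth remains free, which is the core advantage over the broadcast model.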

The Journey Continues
Ethernet has earned its universal adoption in the enterprise because of its speed, reliability, flexibility, uniformity and operational simplicity. The journey to ubiquitous Ethernet is advancing rapidly with Carrier Ethernet solutions such as WireIE’s Transparent Ethernet Solutions™ leading the way.

WireIE’s Transparent Ethernet Solutions™ give carriers new and innovative ways to tap into hard-to-reach markets. And because TES scales so well, carriers are also discovering they can use TES to provide broadband services to enterprises where ROIs were previously prohibitive using antiquated leased facilities. WireIE is a Carrier Ethernet network operator and our TES solutions are backed by an SLA.

Ethernet is ubiquitous. It’s in our businesses, schools, hospitals and homes. It’s in our cars, and it’s even the nervous system of the latest fly-by-wire airliners. Ethernet dominates in the datacenters where Internet and World Wide Web content is stored and served. Few would dispute that our modern world of communications runs on Ethernet.

Why Ethernet? In a few words: seamless, universal connectivity. There are certainly many secondary advantages, but this ‘plug and play’ quality makes Ethernet particularly compelling when compared with other access methods.

A wise person once said: “You need to know where you’ve been in order to know where you’re going.” Ethernet has been around a long time, but its entry into the world of telecommunications is fairly recent.
Ethernet (IEEE 802.3) was developed in the mid-1970s at Xerox’s Palo Alto Research Center (PARC). It was largely based on the Aloha system developed at the University of Hawaii.

AlohaNet, as it was called, used UHF radio as a data communications network medium. Transmission of packets across the radio channel was managed by Aloha’s random access contention scheme. In the event of two (or more) data packets being sent on the same communication channel at the same instant, a collision occurs, the packets get corrupted, and no data is exchanged. Aloha managed this inevitability through the use of a random retransmission timer: a sender that learned of a collision would wait a random interval before resending, thus reducing the likelihood of a repeat collision. Ethernet refined this idea into Collision Detection (CD): should a collision be detected, a jam signal is sent over the network, notifying all other devices of the collision and to wait before sending further packets. The senders affected by the collision then set random self-timers before resuming transmission.

To complement CD, Ethernet uses a mechanism known as Carrier Sense Multiple Access (CSMA) – the combination is commonly referred to as CSMA/CD. Combined with the benefits of Collision Detection, the CSMA function stipulates that sending data communications equipment must ‘listen’ to the channel prior to transmitting a packet.
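The combined CSMA/CD behaviour can be sketched as a simplified Python loop. The channel model, timing, and retry limit here are illustrative assumptions rather than the 802.3 state machine: listen before transmitting, and on collision back off a random number of slot times drawn from a window that doubles after each collision (binary exponential backoff).

```python
import random

MAX_ATTEMPTS = 16   # assumed retry limit for this sketch

def csma_cd_send(channel_busy, collision_occurred):
    """Illustrative CSMA/CD transmit loop.

    channel_busy() and collision_occurred() are callables modelling the
    shared medium; returns the number of attempts a frame took to send.
    """
    for attempt in range(MAX_ATTEMPTS):
        while channel_busy():              # carrier sense: listen first
            pass
        if not collision_occurred():       # transmit; success if no collision
            return attempt + 1
        # Collision: a jam signal would be sent, then the station backs off
        # a random number of slot times in [0, 2**collisions), window capped.
        slots = random.randrange(2 ** min(attempt + 1, 10))
        # (a real NIC would wait `slots` slot times here before retrying)
    raise RuntimeError("excessive collisions; frame dropped")

# Example: idle channel, a collision on the first attempt only.
random.seed(1)
outcomes = iter([True, False])             # 1st try collides, 2nd succeeds
print(csma_cd_send(lambda: False, lambda: next(outcomes)))  # 2
```

The doubling back-off window is what lets many stations share one channel gracefully: repeated collisions spread the retries over ever-larger intervals.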

In the early days of Local Area Networking, Ethernet competed with IBM’s Token Ring networking standard. Though considered very efficient in many types of network configurations, Token Ring nonetheless fell into obscurity as most leading vendors other than IBM placed their loyalties with Ethernet. The galvanizing event was the IEEE’s pursuit of a single LAN standard, which for a number of reasons went to Ethernet in 1982. Global approval of Ethernet as IEEE 802.3 was granted in 1984.

In the ensuing years, Ethernet has become ubiquitous. This ubiquity has led to powerful network hardware at incredibly low prices – all in an ever shrinking form factor per unit performance. The vast majority of Internet services are hosted on Ethernet networks, as are the user communities linking to those services.

Now a mature, universal Local Area Network (LAN) access standard, Ethernet is supported by commoditized – and as such comparatively inexpensive and largely self-configuring – hardware (switches, Network Interface Cards, etc.). The entire TCP/IP suite is seamlessly carried over Ethernet, on media ranging from CAT5e cable to fiber to digital microwave/radio.

In the next installment we’ll look at the evolution of Ethernet. That will set us up to explore the reconciliation between modern day Ethernet as a packet based protocol, and the time domain orientation of legacy telecommunications infrastructure.