In 1965, Intel co-founder Gordon E. Moore observed that, over the history of computing hardware, the processing power of a minimal-cost computer chip would double approximately every two years – Moore’s Law. While that forecast has been widely accepted and credited with significant advances in technology and associated economic benefits, there is an equally important forecast that applies to the power and value of networks: Metcalfe’s Law, which holds that a network’s value grows with the number of possible connections between its users. The simplest illustration is the fax machine: one fax machine is useless on its own, but every additional machine increases the number of people who can send and receive faxes. While there is no specific timeline attached to Metcalfe’s Law, advances in network size and capacity, along with the mobile computing and communications revolution, have made its impact perhaps even more significant than that of its processing-power counterpart.
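Metcalfe’s Law is often summarized as network value growing with the number of possible pairwise connections – for n devices, n(n − 1)/2 links. A quick sketch of how fast that count grows:

```python
# Metcalfe's law intuition: the value of a network tracks the number
# of possible pairwise connections, n * (n - 1) / 2.
def pairwise_connections(n: int) -> int:
    """Number of distinct links among n devices (e.g. fax machines)."""
    return n * (n - 1) // 2

for n in [1, 2, 10, 100]:
    print(n, pairwise_connections(n))
# one fax machine: 0 useful links; a hundred: 4,950
```

Ten times the devices yields roughly a hundred times the connections – the multiplier effect discussed below.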

We definitely “get more” out of the devices and networks that we use by virtue of their size and reach. Metcalfe’s Law was originally framed in terms of devices; today it also applies to the network effect most commonly discussed in reference to social networks. For example, if Facebook were a nation it would be larger than the United States, ranking just behind China and India in population.

I believe the most transformative aspect of the network effect, or Metcalfe’s Law, is in an economic context. We are witnessing firsthand how the whole paradigm of enterprise computing is shifting to a cloud-based model. This allows for even greater levels of distribution and provisioning of services and applications across urban, rural and remote locations alike.

You can draw a direct line from the now dominant importance of SLAs in our business to the network effect described in Metcalfe’s Law. When the power and capacity of the connections become standard, only the reliability and performance of the network can affect its value to customers. In our opinion, the impact of the network effect and Metcalfe’s Law is significantly larger in rural and remote communities, because the multiplier effect allows businesses to take a substantial leap forward after decades of living with restricted and narrow network capacity.

The tide is shifting on acceptance and adoption of microwave radio as a viable alternative or supplement to fibre, and economics may dictate more of the same.

For many in the telecommunications industry, the recognition of microwave as a viable way to deliver carrier-grade bandwidth with industry-leading latency is nothing new.

It has been frustrating to witness that the marketplace has not recognized this fact in a substantial and meaningful way. That does appear to be changing.

Late last year, Jason Bunge of Dow Jones wrote about the pace and level of high-speed microwave adoption that has taken place recently in the securities exchange markets of North America and Europe. His article highlights how the deployment of high-speed broadband over microwave is set to outpace fibre network deployment this year. As Bunge notes, this is an industry where milliseconds count and where the highest standards of speed and network reliability are considered essential.

What is driving the change is cost efficiency and timeliness: the exchange business needs to address declining trade volumes by increasing the speed and efficiency of its markets without breaking the bank to do it.

Many consider the capital markets to be technology leaders in the Financial Services (FS) sector and highly influential concerning the use and adoption of technology and telecom innovation. If the leaders of the FS sector are ready to make the jump to microwave radio it bodes well for the broader adoption of this standard within that sector and beyond.

Consider for a moment that economics is driving the shift away from fibre, and it becomes clear that other sectors could realize the same benefits and make the switch. If not for primary connections to office locations, microwave will be used as a secondary link to locations where fibre is available. Industries like oil and gas extraction, mining, manufacturing and retail, along with the public sector, are all witnessing both exponential growth in data and the opportunity to use that data to quickly and effectively deliver innovative new products and services to an increasingly demanding business environment. If microwave radio is also recognized as an alternative or supplement that is more cost effective than traditional fibre deployment, can widespread adoption be far behind? It is not the innovation of the technology that is the biggest driver of change, but economics – necessity, the mother of invention – that makes change all the more compelling.

– Rob Barlow, CEO

About WireIE: We deliver carrier-grade Transparent Ethernet Solutions backed by SLAs. With a custom blend of fiber and digital to suit your circumstances, we transform, extend and support your communications networks in rural and remote areas. +1.905.882.4660

On June 8, 2012, the Government of Ontario took the next step in its Clean Energy Economic Development Strategy with the launch of the Clean Energy Institute (CEI). The new institute will bring together industry leaders and utility companies to build on Ontario’s strengths in smart grid technologies and other clean energy innovations.

In conjunction with the CEI, MaRS hosted the Future Energy Summit, focused on bringing together some of the top minds in clean energy to give feedback and help design the Smart Grid we need. A smarter grid will deliver better tools to manage electricity use, help utilities prevent and detect outages and restore service, and ultimately connect every home and building to a renewable energy grid, thereby decreasing greenhouse gas emissions.

WireIE contributes to the Smart Grid by partnering with the University of Ontario Institute of Technology (UOIT) to define the operational requirements of a communications network supporting the Smart Grid. By modeling various rural and urban electricity distribution scenarios, the partnership has developed communication network specifications. This collaboration continues as WireIE sponsors the study and modeling of new Smart Grid applications.

WireIE is now part of the new funding announced today by the Energy Minister for a Durham Region trial, which will advance our current research into a live production environment. As a Smart Grid future is enabled in Ontario, WireIE will continue to lead with its partners.

For more information on Ontario’s Clean Energy Institute:

For more information on Smart Grid projects:



The typical forms of voice and data transport for Carrier Ethernet Services are fiber and copper. While both provide connectivity in access networks, fiber is favoured for its abundant capacity, and copper is most widely used in environments with an existing telephone network. However, there are times when physical, geographical, legal, political or financial obstacles stand squarely in the way of digging ditches, raising poles and pulling wire.

Overcoming the Obstacles

This is where microwave steps in. Even in the most challenging of circumstances, the combination of digital radio and Carrier Ethernet services can offer excellent flexibility, reliability, bandwidth and quality of service at a realistic price:

  • Right-of-way: Because microwave uses radio spectrum, it can bypass right-of-way barriers such as private property
  • Rural and developing regions: In these environments, often with poor legacy communications, microwave extends your connectivity reach
  • Planning issues: Digital radio leapfrogs complex planning approvals that can slow the progress of fiber or copper installations in densely populated urban areas
  • Temporary links: Digital radio is a great choice for temporary sporting or entertainment events
  • Physical hurdles: Water, roads and challenging terrain can all complicate, or defeat, terrestrial installations
  • Security concerns: The threat of human or environmental interference, especially the increasing theft of copper in some countries, makes traditional installations more risky and less advisable

Low Cost Gigabit Ethernet Services

Today’s digital radio technologies are capable of providing rapid connectivity and delivering Gigabit Ethernet services across any terrain, over significant distances. Recent technical developments also enable digital radio to function in lower frequency bands without line-of-sight. Plus, in many environments, this technology can provide the lowest cost per bit for Ethernet service transport.

Remote Site Connectivity

Here are just some of the ways you can use microwave technology to connect the remotest or most rural of locations:

  • Broadband networks to support the conversion to digital TV
  • Broadband networks to support DSL access in rural areas by overcoming the distance limitations of the DSLAM and broadband backbone
  • Fiber backup routes to provide redundancy, diversity and network protection
  • Network extensions to reach remote locations

So, whether you’re looking to extend service in areas where fiber and copper are not available, or need a high-performance back-up route to ensure failsafe communications, digital radio is a highly competitive choice with an impressive performance history.

For more information about Microwave Technologies for Carrier Ethernet Services, download this MEF document


If you want to cause a stir, walk into a room full of seasoned technicians and mention microwave. Citing the twin fears of limited capacity and weather-dependent performance, many will offer stories of past problems without realizing that, like many other things in life, microwave has moved on.

The Future is not the Past

The legacy-based, analog solutions of the past bear no resemblance to modern microwave. Dismiss the new developments, and you could find yourself missing out on the many business benefits that today’s digital radio technologies bring.

Increasingly, organizations are discovering the advantages of a converged network platform that combines Carrier Ethernet and point-to-point digital radio to provide a new, highly effective method of voice and data transport. With the benefit of alternative thinking, smart solutions providers are overcoming terrestrial challenges and building advanced communications networks in some surprisingly remote areas – where often dial up had been the only option.

Two Strong Technologies

In response to our appetite for higher bandwidth and budget-conscious performance, over the past decade Carrier Ethernet has moved to centre stage – and continues to evolve today. Checking all the boxes, it’s a quicker, simpler and cheaper way to connect people with information. Plus, with Ethernet, it’s easy to build extensions or make adjustments down the road. And terrestrial microwave has proven to be an excellent partner for fiber in access networks – playing an increasingly valuable role in support of rich media applications like video, VoIP and disaster recovery.

The Question of Capacity

It’s time to dispel some of the myths and reveal the facts about microwave:

  • Gigabit capacity is already a reality – and it’s enough for most Carrier Ethernet applications.
  • Service-aware traffic management allows you to differentiate voice and data packets by type, to avoid bottlenecks and smooth demand.
  • Adaptive coding and modulation technology increases bandwidth capacity and also means you can deploy microwave equipment in densely populated areas.
  • Nodal function optimizes radio bandwidth resources and makes it easier for you to scale.
  • Packet technology is flexible, which means you can use microwave to get an optimal increase in data rates.
  • Over-air capacity is increased with microwave by using multiple transmission channels at different carrier frequencies. Capacity has also grown through enhancements like cross polarization, interference cancellation and data compression.
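As a rough illustration of how those capacity enhancements compound (the channel counts and per-channel rates below are hypothetical, not figures for any specific product):

```python
# Illustrative aggregate-capacity arithmetic for a microwave link:
# several carrier channels, doubled by cross-polarization, and
# optionally boosted further by data compression.
def link_capacity_mbps(channels: int, per_channel_mbps: float,
                       cross_polarized: bool = True,
                       compression_gain: float = 1.0) -> float:
    polarizations = 2 if cross_polarized else 1
    return channels * per_channel_mbps * polarizations * compression_gain

# two hypothetical 250 Mbit/s channels with cross-polarization
print(link_capacity_mbps(2, 250))  # 1000.0 -> gigabit-class
```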

The Latest Weather Report

Although weather can affect microwave, technology enhancements have made it easier to deal with bad conditions, and custom-engineered links are specifically designed to account for the elements:

  • Adaptive modulation protects your network from weather effects by varying radio throughput, making adjustments according to the performance of air interface channels.
  • Frequency diversity makes your network resilient to bad-weather fading.

A New Form of Transport

The evolution of microwave technology offers a valuable opportunity to combine Carrier Ethernet services with digital radio to provide end-to-end network transport services. Offering limitless reach, this converged platform will give you the performance and capacity to communicate faster and more flexibly at a price that suits your CFO – even when geography is not on your side.

Ethernet has been in a state of perpetual evolution since its inception – with significant accommodation for backwards compatibility thanks to frame structure standardization. While exponential increases in throughput are perhaps most noteworthy, Ethernet has also seen improvements in the flexibility of Media Access Control (MAC) mechanisms at Layer 2. A number of physical (PHY) sub-layer developments have also evolved, not the least of which is the increased breadth of transmission media choices for an Ethernet network.

Ethernet Evolution
StarLAN was the first implementation of Ethernet over twisted-pair copper wire. Known as 1BASE5 and developed by the IEEE as 802.3e in the mid-1980s, StarLAN ran at speeds of up to 1 Mbit/s. In light of the circuit-switched, voice orientation of networks at that point, the developers of 1BASE5 wanted to reuse cabling previously installed for telephony (PBX and/or key systems), thus minimizing the need to rewire office buildings and other enterprises. As the name implies, StarLAN was built around a hub-and-spoke topology – a direct emulation of the circuit-switched voice systems dominant at the time.

10BASE-T & Beyond
Introduced in the early 1990s, 10BASE-T supported up to 10 Mbit/s on 4-pair (8-conductor) twisted copper terminated on the now universally recognized RJ-45 modular connector. Both half and full duplex are supported, as is the case with 100BASE-T (100 Mbit/s) and 1000BASE-T (1 Gbit/s, or GigE). More than evolutionary, 10BASE-T arguably ushered in the broad adoption of LANs in the business environment.

Ethernet itself was initially delivered over a shared coaxial cable in a bus topology, emulating a data radio network environment not unlike AlohaNet (described in the previous post) – thus the ‘Ether’ in Ethernet. CSMA/CD played an essential role in managing channel contention resulting from packet collisions. Topologically, it was impractical to segment the network, and as such any number of single points of failure could bring down the entire network.

There were inefficiencies inherent in early Ethernet. Since a single coaxial cable carried all network communication (much as in Aloha), information sent by one device would be received by all devices on the network. It was the job of the Attachment Unit Interface (AUI) – essentially a precursor to the Network Interface Card (NIC) – to reject all traffic other than that intended for the device it was connected to. Also, by confining all network traffic to a single shared cable, bandwidth could be quickly exhausted. Exacerbating the finite bandwidth was the broadcast nature of the medium, wherein all stations were sent all data regardless of whether it was intended for them or not. Finally, while elegant, CSMA/CD by its very nature has an impact on channel efficiency.

Switched Ethernet
As 10BASE-T hubs and bridges matured, the concept of Switched Ethernet developed. Switched Ethernet is significant in that it borrows the idea behind Token Ring’s once-superior network speed: one session (i.e., two network devices) gets all of the LAN bandwidth for a given instant, as opposed to sharing bandwidth as was the case with the broadcast model. From the switch’s point of view, the only device on each segment is the end station’s Layer 1 interface (NIC). The switch’s intelligence is dedicated to managing frame delivery over the appropriate segment – a modern switch often managing hundreds or thousands of segments concurrently.
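The switching behaviour described above can be sketched as a toy MAC-learning table – frames are flooded until the switch has learned where a destination lives, after which delivery is confined to a single segment (the class name and port numbering here are illustrative, not any vendor’s API):

```python
# Minimal sketch of the MAC-learning behaviour of an Ethernet switch.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}          # MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port      # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # deliver on one segment only
        # unknown destination: flood to every other port (broadcast model)
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.handle_frame("aa", "bb", 0))  # "bb" unknown: flooded to ports 1-3
print(sw.handle_frame("bb", "aa", 2))  # "aa" was learned: port 0 only
```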

The Journey Continues
Ethernet has earned its universal adoption in the enterprise because of its speed, reliability, flexibility, uniformity and operational simplicity. The journey to ubiquitous Ethernet is advancing rapidly, with Carrier Ethernet solutions such as WireIE’s Transparent Ethernet Solutions™ leading the way.

WireIE’s Transparent Ethernet Solutions™ give carriers new and innovative ways to tap into hard-to-reach markets. And because TES scales so well, carriers are also discovering they can use TES to provide broadband services to enterprises where ROIs were previously prohibitive using antiquated leased facilities. WireIE is a Carrier Ethernet network operator, and our TES solutions are backed by an SLA.

Ethernet is ubiquitous. It’s in our businesses, schools, hospitals and homes. It’s in our cars, and it’s even the nerve system for the latest fly-by-wire airliners. Ethernet dominates in the datacenters where Internet and World Wide Web content is stored and served. Few would dispute that our modern world of communications runs on Ethernet.

Why Ethernet? In a few words: seamless, universal connectivity. There are certainly many secondary advantages, but this ‘plug and play’ aspect makes Ethernet particularly compelling when compared with other methods.

A wise person once said: “You need to know where you’ve been in order to know where you’re going.” Ethernet has been around a long time, but its entrée into the world of telecommunications is fairly recent.
Ethernet (IEEE 802.3) was developed in the mid-1970s at Xerox. It was largely based on the Aloha system developed at the University of Hawaii.

AlohaNet, as it was called, used UHF radio as a data communications network medium. Transmission of packets across the radio channel was managed by Aloha’s random access contention algorithm. In the event of two (or more) data packets being sent on the same communication channel at the same instant, a collision occurs, the packets are corrupted, and no data is exchanged. The protocol manages this inevitability through the use of a random access timer: should a collision be detected, a jam signal is sent over the network, notifying all other devices of the collision and telling them to wait before sending further packets. The senders affected by the collision then each set a random self-timer before resuming transmission, reducing the likelihood of a repeat collision. This mechanism is known as Collision Detection (CD).

To complement CD, Ethernet uses a mechanism known as Carrier Sense Multiple Access (CSMA); together, the two are commonly referred to as CSMA/CD. Combined with the benefits of Collision Detection, the CSMA function stipulates that sending data communications equipment must ‘listen’ to the channel prior to transmitting a packet.
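The random retry timer described above can be sketched in a few lines. The truncated binary exponential scheme shown is the one standardized Ethernet uses after a collision; the slot values printed are of course random:

```python
import random

# After a collision, each sender waits a random number of slot times
# before retrying: truncated binary exponential backoff.
def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """Pick a random wait in [0, 2**k - 1] slots for the k-th retry."""
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)

random.seed(1)
# two stations that just collided pick independent delays, so a repeat
# collision becomes increasingly unlikely as the attempt count grows
print([backoff_slots(a) for a in range(1, 5)])
```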

In the early days of Local Area Networking, Ethernet competed with IBM’s Token Ring networking standard. Considered very efficient in many types of network configurations, Token Ring still fell into obscurity as most leading vendors other than IBM placed their loyalties in Ethernet. The galvanizer was the IEEE’s pursuit of a single LAN standard which for a number of reasons went to Ethernet in 1982. Global approval of Ethernet as IEEE 802.3 was granted in 1984.

In the ensuing years, Ethernet has become ubiquitous. This ubiquity has led to powerful network hardware at incredibly low prices – all in an ever shrinking form factor per unit performance. The vast majority of Internet services are hosted on Ethernet networks, as are the user communities linking to those services.

Now a mature, universal Local Area Network (LAN) access standard, hardware supporting Ethernet (switches, network interface cards, etc.) is commoditized and as such comparatively inexpensive and largely self-configuring. The entire TCP/IP suite is seamlessly supported by Ethernet, carried on various media ranging from CAT5e cable to fiber to digital microwave/radio.

In the next installment we’ll look at the evolution of Ethernet. That will set us up to explore the reconciliation between modern day Ethernet as a packet based protocol, and the time domain orientation of legacy telecommunications infrastructure.

Most would agree that the traditional centralized electrical distribution model will evolve to a distributed generation (DG) model. When this will occur, and to what degree, remains to be seen. Regardless, a smart grid communications infrastructure is essential to the safe, reliable and efficient management of a DG infrastructure.

For the past couple of years, WireIE has worked in collaboration with the University of Ontario Institute of Technology (UOIT) in developing a model for a smart grid distribution system of the future. Faculty in the university’s Electrical Engineering & Applied Science program, along with their students, have modeled a number of distributed generation scenarios from the utility’s perspective. One of the many outcomes of this exercise has been a clearer specification of communication network requirements to support these distributed generation scenarios.

Communication Network Requirements
A smart grid communications network must support a number of applications, some mission critical, others comparatively forgiving. As our UOIT colleagues specify, taking a distributed generation source on or off line demands execution of the transition in no more than 5 – 6 cycles, or roughly 80 – 100 milliseconds. In contrast, administrative functions such as dispatch applications may tolerate delays of several seconds.
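As a quick sanity check on those figures, converting power-line cycles at 60 Hz into milliseconds:

```python
# A DG switch-over must complete within 5-6 power-line cycles.
# At 60 Hz, each cycle lasts 1/60 s, i.e. about 16.7 ms.
def cycles_to_ms(cycles: float, line_freq_hz: float = 60.0) -> float:
    return cycles / line_freq_hz * 1000.0

print(round(cycles_to_ms(5), 1))  # 83.3 ms
print(round(cycles_to_ms(6), 1))  # 100.0 ms
```

Five to six cycles works out to roughly 83 – 100 ms, matching the window quoted above.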

With UOIT’s DG scenarios in mind, our most critical communications network specification is latency: the time taken for an element of data to traverse a link, or series of links, in a data communications network. We therefore need to factor in the very stringent latency requirements of DG while also recognizing that our smart grid communications network will be handling significant volumes of less time-sensitive administrative traffic.

Communications Network Architecture
A smart grid communications network must support protection and control functions at DG interconnection points. These sites include facilities on the grid itself, along with businesses and residences where alternative energy may also be available to the grid. With a clear delineation between mission-critical operations and those more tolerant of latency and throughput variations, a dual-layered – or potentially multi-layered – communications network is envisioned.

One can think of the bottom layer of the network being administrative and housekeeping oriented. It is designed for high reliability but it also has comparatively high forgiveness of latency, along with other network performance variations. Geographically, this layer covers a wide area – potentially all of a Local Distribution Company – and is appropriately referred to as a Wide Area Network (WAN). In contrast, the top layer is composed of several Local Area Networks (LANs). All LANs connect to the WAN so that communication can take place between the Operations Centre on the WAN and remote sites on the network.



The Drawing Assumes an IEC 61850 Interface as a Demarcation Between Electrical Utility and Communication Network Assets

While this basic topology is by no means revolutionary, the mission-criticality of many protection and control functions will require unprecedented robustness and redundancy – particularly on the LAN layer, and often at the network edge. As is the trend with many modern networks, edge oriented data processing and storage yields significant bandwidth efficiencies, along with a commensurate improvement in network performance and service reliability.

The LAN’s primary purpose is to execute time-sensitive, mission-critical protection and control operations such as a DG source switch-over. It should be noted that DG operational decision making is not the same thing as the actual execution of the operational decision. This distinction is important in that business and operational policies and decision-making do not occur on the LAN. Instead, a centralized operations facility, or perhaps a collection of regional operations centres, sits on the WAN. Among other things, these centres are where operational decisions are made and subsequently delivered to the appropriate LAN. Once an instruction is delivered, local sensing and measuring equipment determine whether conditions are conducive to actual execution of the instruction. The outcome of the instruction (executed successfully or failed) is then delivered from the LAN to the operations centre via the WAN.

Why not consolidate the WAN and LAN layers? The main reason relates to the wide range of expectations placed on the smart grid communication network as a whole. As previously mentioned, protection and control functions are comparatively demanding of the network in terms of reliability and low latency, whereas administrative functions are quite forgiving.

As a self-contained network within a larger ‘network of networks’, the local aspect of a LAN has some very important attributes in supporting protection and control. As a topologically simple, self-contained local network, a LAN is very fast – an essential characteristic in executing protection and control operations. Not only are communication link distances short in a LAN, there are fewer hops (a linear collection of communication links) per communication channel. Multiple hops introduce aggregate latency. An additional inherent benefit of the LAN’s simplicity is reduced points of failure within the LAN itself. In fact in most situations, the LAN can operate autonomously should there be either a planned or unforeseen disconnection from the WAN. Predefined operational policies would stipulate the degree to which the LAN can operate autonomously in the event of a disconnection from the WAN.

Communications Network Technology Considerations
Many DG sources are in locations where limited or no communications infrastructure exists. In these cases deployment of digital radio, or a digital radio/fiber optic hybrid is both attractive and pragmatic.

WireIE’s Transparent Ethernet Solutions™ (TES) are built with exceptionally low latency characteristics – all backed by a Service Level Agreement (SLA). WireIE TES can be deployed in a point-to-point or point-to-multipoint topology. For access, Long Term Evolution (LTE) promises very attractive latency characteristics, well within the requirements set out by our friends at UOIT. WiMAX (Worldwide Interoperability for Microwave Access) also shows potential as a Smart Grid access technology – particularly WiMAX 802.16m, recently approved by the ITU.

Single-hop latency in a WiMAX or LTE link, measured from base station to CPE (customer premises equipment), is typically 10 milliseconds or less. Aggregate latency must therefore be kept safely below 50 milliseconds on all protection and control paths. Again, containing execution of distributed generation activities to a LAN ensures latency thresholds are not exceeded.
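Using the figures above, a back-of-the-envelope hop budget shows why the LAN must stay topologically simple:

```python
# Budget check: with roughly 10 ms per radio hop and a 50 ms ceiling
# on any protection-and-control path, the hop count is tightly bounded.
PER_HOP_MS = 10.0      # typical base-station-to-CPE latency (see text)
PATH_BUDGET_MS = 50.0  # safety ceiling below the 80-100 ms requirement

def max_hops(per_hop_ms: float = PER_HOP_MS,
             budget_ms: float = PATH_BUDGET_MS) -> int:
    """Largest number of hops whose aggregate latency fits the budget."""
    return int(budget_ms // per_hop_ms)

print(max_hops())  # 5 hops at most on any critical path
```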

WireIE TES, LTE and WiMAX offer a number of sophisticated capabilities over and above impressive latency characteristics. All employ dynamic radio link quality management: throughput is traded off for link robustness in the event the quality of a radio path deteriorates, and the reverse occurs as radio path quality improves. The mechanism that trades throughput against robustness is known as adaptive modulation.
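A minimal sketch of that throughput-for-robustness trade (the SNR thresholds, schemes and relative rates below are hypothetical, not the actual tables used by WireIE TES, LTE or WiMAX):

```python
# Hypothetical adaptive-modulation table: as radio path quality (SNR)
# drops, the link falls back to a more robust scheme at lower throughput.
MOD_TABLE = [  # (min SNR in dB, scheme, relative throughput)
    (25.0, "256-QAM", 1.00),
    (18.0, "64-QAM",  0.75),
    (12.0, "16-QAM",  0.50),
    (6.0,  "QPSK",    0.25),
]

def select_modulation(snr_db: float):
    """Return the highest-rate scheme the current path quality supports."""
    for min_snr, scheme, rate in MOD_TABLE:
        if snr_db >= min_snr:
            return scheme, rate
    return "link down", 0.0

print(select_modulation(27.0))  # clear path: full rate
print(select_modulation(14.5))  # rain fade: robustness over throughput
```

Path engineering, as described below, amounts to guaranteeing the link never drops below a chosen row of this table.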

It is essential that each digital radio link be engineered to exceptionally strict path propagation specifications because of the mission-critical nature of smart grid protection and control applications. This entails exhaustive path analysis and a subsequent network design to ensure that no radio path is ever at risk of engaging a modulation scheme below a carefully calculated threshold. As a fixed network, radio link reliability can be achieved with a high degree of predictability. That said, best-of-breed engineering is an essential ingredient from a reliability and performance perspective. In addition, network redundancy and/or diversity must be incorporated into the design, enhancing overall reliability and, equally important, allowing for any and all network failure scenarios. Further protection against communication network failures must also be addressed at the application layer.

A properly engineered LAN using digital radio technologies such as WireIE’s TES, LTE and WiMAX will provide a safe and reliable platform by which to execute critical protection and control operations such as a DG switch-over. The underlying WAN provides the necessary communications foundation to administer such activities. The WAN also supports the broader administrative, ‘house keeping’ activities envisioned for smart grid.

With the broad adoption of personal computing, we have witnessed more than a quarter of a century of staggering incremental improvement in data processing power. These benefits have not only touched the traditional desktop. Smaller form factors such as laptops, netbooks, and now tablets and smart phones, are reaping the benefits of ever increasing clock speeds complemented by multiple-core processors. In parallel, memory has become faster and cheaper.

A case in point is Apple’s iPad. Launched a year ago, the original iPad had a 1 GHz single-core processor. A mere year later, Apple last week announced the iPad 2, which boasts a dual-core processor along with a nine-fold increase in graphics processing capability – all at the same price, yet in a form factor one third thinner than its still-novel predecessor. And a year later, Apple no longer owns the entire tablet market. Familiar names such as HP, LG, Motorola, RIM and Samsung are offering tablets with impressive specifications – all supported by powerful dual-core processors.

The increases in processing speed, memory capacity and other performance-related specifications align with Intel co-founder Gordon E. Moore’s law, which essentially asserts that processing and memory performance per unit cost improves exponentially, doubling roughly every two years. In addition, while battery technology is comparatively slow in its evolution, we’ve seen enormous improvements in the power efficiency of microprocessors and RAM – allowing for device portability. Deloitte predicts that smart phones and tablets will outsell all other computer categories combined in 2011. Device portability is now an expectation of the consumer, and increasingly the enterprise as well.
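That exponential cadence compounds quickly; a sketch assuming a two-year doubling period:

```python
# Moore's-law-style growth: performance per unit cost doubling on a
# fixed cadence. With a two-year period, a decade brings 2**5 = 32x.
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    return 2.0 ** (years / doubling_period_years)

print(growth_factor(10))  # 32.0
print(growth_factor(26))  # a quarter century of personal computing: ~8192x
```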

With all this horsepower in the hands of the user, why is cloud computing so compelling? While the three previous installments in this series touched on cloud computing benefits such as real time collaboration, ubiquitous access to applications and user files on any device, perhaps the most compelling attraction is the exceptionally low cost of entry. Cloud computing user devices need be nothing more than a hardware platform functioning as an ultra thin client. Equally attractive, cloud computing is client platform agnostic – both from a hardware and operating system perspective.

For example, a user at head office on the east coast creates a spreadsheet in the cloud using her office notebook running Windows. Later on at lunch, she reviews the spreadsheet on her Xoom tablet and makes a few changes before discussing it with her colleague out on the west coast. Later on, and now from home, that same user accesses the spreadsheet on her brand new MacBook Pro running Mac OS X. As she makes some final changes, her colleague from the west coast has the spreadsheet loaded on his office desktop running Windows. Through real time collaboration he adds the remaining numbers — the spreadsheet now ready for review by the CEO. The CEO is on an ecotour in Central America but is able to stop in a small village where there’s an Internet cafe. On an old PC running Windows 98 and with dial-up Internet access, the CEO pulls up the spreadsheet, reviews it, adds some comments and returns to his adventure.

A device combining portability with a more ‘traditional’ user interface – such as a low cost netbook – makes a very good platform for cloud based office productivity applications such as spreadsheet and document preparation. Even presentations are simple to prepare using cloud based applications.

Impact on the Network Operator

As the chart below depicts, cloud computing transfers virtually all of the burden away from the consumer and into the hands of the host (most often a webco), along with the network operator/carrier.

Cost Distribution of Cloud Computing

Clearly the end user enjoys very low fixed and variable costs. With service delivery via the Internet, virtually any device with a standards compliant browser can be used. In addition, cloud oriented ‘apps’ for smart phones and tablets continue to emerge – almost on a daily basis.

The aggregate cost burden for cloud computing service delivery (both capital and operational) is largely absorbed by the host webco and/or the network operator. With that in mind, cost mitigation and monetization strategies are being investigated by webcos and network operators alike.

Cloud Computing Cost Distribution

For network operators, an opportunity to repatriate some lost revenue from over-the-top users is one possibility. Many cloud computing webcos see benefit in dispersing their hardware assets beyond their own data centres. In the trend towards network edge oriented service delivery, installing an instance of the webco’s cloud services in a network operator’s facilities is becoming a compelling idea. This approach increases redundancy and geographic diversity for the webco, but it also disperses the global cost burden.

In turn, the network operator benefits from revenue sharing, or some other revenue generating mechanism. Co-branding, along with other enhanced marketing opportunities also become possible under such collaboration.

As the industry has learned in the past decade, however, it is essential that the user experience of the cloud service not be compromised in attempts to build walled gardens, or through attempts to offer reverse over-the-top services in competition with the webco itself. Users are sophisticated and know they have a choice. Importantly, users typically associate cloud computing value with the webco as opposed to the network operator. The enormous success of the smart phone ‘app’ stores offered by Apple, Google, RIM and others demonstrates that network operators are in fact cognizant of where their value is and, equally important, where it isn’t. With that in mind, a great opportunity for network operator/webco collaboration awaits.

As a wholesale network operator in Canada, WireIE is capable of hosting Cloud services as a complement to our Transparent Ethernet Solutions.

Cloud applications are wide and varied. Household names such as Facebook and Twitter are cloud based, as are content management systems such as WordPress. Netflix, another household name, streams video to millions of viewers from its servers based in the cloud. At the other end of the spectrum are advanced IT oriented cloud services such as Cisco’s OverDrive network virtualization services. OverDrive virtualizes routing, switching, security and access control in the cloud.

The general consensus is that Hotmail – later acquired by Microsoft and folded into MSN – was the original cloud computing service, although it wasn’t regarded as such when it launched in July 1996. Google raised the bar in terms of capability by introducing its Docs & Spreadsheets (now simply called Google Docs) cloud service. Taking direct aim at Microsoft’s hold on the Office Suite space, Google Docs offered less functionality – the thinking being that a simplified feature set is actually an advantage for the vast majority of users. Studies have suggested that 80 percent of the traditional desktop application user community uses only 20 percent of the available features. The busyness of the user interface becomes an impediment for these users. Offsetting the “dumbed-down” feature set is the ability to:

  • Collaborate on a file with other people on a real time basis regardless of where the participants are located.
  • Access the documents from any browser on any OS from anywhere there is Internet connectivity.
  • Use Google Docs at no charge.
  • Know that you will always be using the latest, most secure version of the application.
  • Know that user file backup practices offered by Google are going to be more reliable and secure than those followed in many homes and businesses.

Microsoft’s Office 365 offers tight integration between its desktop software model and its cloud services – essentially the best of both worlds – a richer feature set combined with the benefits of working in the cloud.

Dropbox and Carbonite, on the other hand, offer a more basic service by providing automatic, unattended synchronization and back up of user files to the cloud. Encryption options are available as are file sharing options with Dropbox.
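The change detection at the heart of such sync services can be sketched in a few lines. What follows is a hypothetical illustration, not Dropbox's or Carbonite's actual protocol: compute a content hash for each local file and "upload" only those whose hash differs from the last-known remote copy (the cloud store is simulated here with an in-memory dictionary).

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return a SHA-256 digest of the file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sync_folder(folder: Path, remote: dict) -> list:
    """Upload files whose content changed since the last sync.

    `remote` is a stand-in for the cloud store: a dict mapping
    relative paths to (digest, bytes). Returns the list of files
    that were (re)uploaded on this pass.
    """
    uploaded = []
    for path in sorted(folder.rglob("*")):
        if not path.is_file():
            continue
        rel = str(path.relative_to(folder))
        digest = file_digest(path)
        if remote.get(rel, (None,))[0] != digest:
            remote[rel] = (digest, path.read_bytes())  # simulated upload
            uploaded.append(rel)
    return uploaded
```

A real client adds block-level deltas, conflict handling, deletion tracking, and encryption in transit and at rest, but the hash-compare loop above is the essential idea behind unattended synchronization.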

The following video from the Pentasoft Channel describes the philosophy of cloud computing by concentrating on the three pillars of:

  • Virtualization
  • Utility Computing – Distributed Server Capacity
  • Software as a Service (SaaS)

Many consumer oriented cloud services predate Google Docs. Photo storage and sharing sites such as Flickr and Picasa have been around for years now. Even processor-intensive applications such as Photoshop have a cloud based repository and editing environment. Video editing, arguably one of the most bandwidth-demanding, processor intensive applications, is available in the cloud from the likes of YouTube Video Editor and Kaltura.

As the World Wide Web rapidly evolves to HTML5, many resources currently found in a client operating system are being moved out to the cloud. A simple example is cloud based fonts. Previously, a web designer was limited to the fonts residing in the site visitor’s operating system. Among many other things, the HTML5 generation of standards (via CSS’s @font-face rule) allows new font sets to be loaded from the cloud. In fact, as we move to HTML5, the very tools used to develop websites are moving to the cloud.

An intriguing concept is Google Cloud Print. As a companion to Google Chrome, Cloud Print places printer drivers and security credentials in the cloud. Printers are then mapped to the appropriate cloud profile. Not only does this enable printing from virtually any computer anywhere, it also has the potential to redefine the way we use legacy services such as facsimile and the postal service.

In April 2010, the Eyjafjallajökull volcano in Iceland erupted beneath its ice cap, causing days of flight cancellations and delays for both passengers and air cargo. Some of the affected cargo was trans-Atlantic mail. Had we evolved to a cloud print world, much of that mail would have been unaffected because it would have printed locally – be it at a postal centre, or at the actual addressee’s home or office.

The world of Cloud Computing is advancing rapidly. Derrick Harris of Gigaom recently assembled a list of 8 cloud companies he feels we should be watching in 2011. Just click here to read his analysis.

In our final installment we’ll take a look at the bandwidth implications as a result of the boom in cloud computing.