March 09, 2015

When Worlds Collide: Digging Deeper

Now that some basic definitions of these new approaches to implementing LTE in unlicensed spectrum have been agreed, it’s time to delve deeper into the colliding worlds of licensed and unlicensed band services. It’s important to understand how these technologies might be deployed and some of the options mobile operators will have.

Some pundits have said “to argue for Licensed Assisted Access (LAA), one needs to make a case for Wi-Fi’s insufficiency in some regard.” Really? Why?

These comments reference the pre-standard/non-standard LTE operation in unlicensed spectrum being promoted by equipment suppliers such as Ericsson, but the same assertion may be made of the other proposals.

To understand what insufficiencies Wi-Fi may have, it is first necessary to understand the services that LTE operation in the unlicensed spectrum would actually support. 
It’s important to note that LTE-U and LAA are intrinsically data services.  So is Wi-Fi, notwithstanding adaptations to support real-time, bi-directional services through the use of enhancements such as WMM and the Wi-Fi Alliance’s Voice-Enterprise program. So if LAA, LTE-U, and Wi-Fi are ALL used to support data services, how are they differentiated?  The answer is it depends on who you are and what you want.

In June, 2014, the 3GPP Workshop broadly defined LAA’s operation as: 

"Aggregation of a primary cell, operating in licensed spectrum can be used to deliver critical information and guaranteed quality of service, with a secondary cell, operating in unlicensed spectrum to opportunistically boost data rate"

This made it pretty clear that LAA will utilize the unlicensed channel(s) to augment the data path, while the licensed downlink and uplink will be utilized for QoS-sensitive services like Voice over LTE (VoLTE).

Pre-standard LTE-U plans to use the unlicensed spectrum for downlink-only traffic, making it unsuited for bi-directional services. This leads to another key question: where will LTE within the unlicensed spectrum be deployed?

3GPP’s LAA program and ongoing study item include both indoor and outdoor deployment scenarios. However, it is clear that both LTE-U and LAA are initially, and primarily, focused on the indoor market, given the challenges often faced getting cellular signals inside buildings and the practical deployment and economic benefits Wi-Fi can provide there.

It’s generally accepted that around 80% of wireless data usage occurs indoors. And the target bands for LTE operation in the unlicensed spectrum (namely 5GHz and possibly 3.5 GHz) are ideally suited for indoor applications.

While the 3GPP LAA Study Item scenarios (see below) include an option to link unlicensed LAA small cells directly to licensed macro sites, the majority of cases involve linking the unlicensed LAA small cells to co-located licensed small cells, which will in fact most often be integrated into a single unit supporting both licensed LTE and LTE-U/LAA operation. The focus for licensed small cells is now largely indoors.

Given that LTE-U and LAA are data services, are proposed for the higher frequency bands, and will most likely be integrated into small cells, it’s clear that they are envisioned as primarily indoor technologies.

Source: 3GPP, LAA Study Item Deployment Scenarios

Wi-Fi: A Rock Star for Data Delivery Indoors

That’s what it was designed for in the first place. And as the 802.11 standard has evolved and carrier-grade technology has been introduced into the market, Wi-Fi has become more than merely an afterthought for carriers looking for more capacity. It’s become a mainline strategic technology to support present and future mobile data services. A comparison of the unlicensed options for indoor data service delivery bears this out (see chart).

Relative to neutral host support, Wi-Fi is inherently ‘operator-agnostic’, but can be used to service mobile operator subscribers via branded SSIDs or Hotspot 2.0 services.

With LAA or LTE-U, unlicensed capacity can be transparently added to or removed from the client’s licensed data links, assuming that licensed coverage is available. This results in a consistent user experience. However, Hotspot 2.0 addresses the traditional issues and limitations associated with users connecting to Public Wi-Fi and also provides a secured airlink.

Finally, the claims of better spectral efficiency for LTE in unlicensed spectrum are based either on testing done with the pre-standard/non-standard LTE-U, or on simulations of LAA using listen-before-talk (LBT) mechanisms that may or may not prove practical. The relative performance advantages may not be fully discernible until LAA standards are completed and full evaluations are possible.

Mobile Operator Options for Unlicensed Spectrum

The most compelling advantage for LAA and LTE-U is clearly the direct integration of unlicensed services with the mobile core.

 This is a distinct advantage for mobile operators, enabling a more transparent use of unlicensed spectrum for their subscribers – keeping in mind that Hotspot 2.0 will effectively automate the Wi-Fi connection process and address much of the complexity associated with the millions of today’s disparate Public Wi-Fi hotspots.

Across the various proposals for LTE operation in unlicensed spectrum, the dual-mode (licensed and unlicensed) eNB presents a unified interface to the mobile core (Evolved Packet Core, or EPC, for LTE networks).

This is more straightforward than existing 3GPP Wi-Fi ‘interworking’ solutions that allow the data plane from a Wi-Fi network to be interfaced to a mobile core’s data handling subsystems, but involve intermediate gateway devices such as Wireless Access Gateways (WAG/TWAG) or evolved Packet Data Gateways (ePDG).

Wi-Fi interworking, first introduced in 3GPP Release 8, has gone through a number of iterations. In a previous post on Wi-Fi Calling, we noted that Apple’s iOS implementation is making use of the ePDG data path for voice sessions.

For operators choosing to use Wi-Fi as their unlicensed airlink, there is also the option of integrating the authentication and accounting without integrating the data plane.

Using Hotspot 2.0, or standard 802.1X with EAP-SIM or EAP-AKA/AKA’, a Wi-Fi footprint can be used to onboard cellular subscribers, authenticate them against an operator’s HLR/HSS, and provide details on their data usage, but not actually forward the data traffic to the MNO’s core.

This is a popular deployment option that provides transparent connectivity to the mobile operator’s subscribers over Wi-Fi. Yet it doesn’t require the operator to deploy additional data handling capacity (GGSNs or PGWs) in their core, thereby avoiding a good amount of CAPEX.

The table on the right summarizes some of the characteristics of Wi-Fi and LTE in unlicensed spectrum as they pertain to mobile operator integration.

Heretofore, unlicensed and licensed wireless technologies have been worlds apart. Now, almost any way you look at it, they are colliding. While it won’t happen overnight, the implications will be profound for everyone. So buckle up.


February 23, 2015

When Worlds Collide




If there’s one thing we know, it’s a ruckus. And there’s one going on in the world of telecommunications around the use of the unlicensed spectrum for LTE services. With Mobile World Congress just a week away, there will undoubtedly be a barrage of announcements on the topic.

This will prompt, from various constituents, strong reaction ranging from extreme antipathy and fear, to statements of expectant support.

Problem is, much of what’s being reported doesn’t paint a complete picture of what’s really going on. So here’s some context to help (with more posts to come).

Basic Background Required

In the fall of 2013, Qualcomm proposed an innovative use of unlicensed spectrum to carry LTE traffic. They referred to their proposal as LTE Advanced in Unlicensed Spectrum, or LTE-U for short.

Conceptually, over the air LTE consists of a control connection between the e-Node B (LTE radio access node) and the User Equipment (UE), aka client. There is an uplink path for data traffic from the UE to the eNB, and a downlink path for data traffic from the eNB to the UE.

LTE supports two deployments models in licensed spectrum: Frequency Division Duplexing or FDD-LTE and Time Division Duplexing or TDD-LTE.

In FDD-LTE, the control and uplink are deployed in one band, while the downlink is deployed in a separate, paired band.

In TDD-LTE, by contrast, the control, uplink, and downlink are all deployed within a single band.

 LTE is a full duty cycle technology. This means it is assured full use of the band(s) it is operating in. The control path is used to coordinate the airtime on the traffic channels between the eNB and its connected UEs (in TDD mode, the control path is also used to coordinate the amount of airtime which will be used for uplink and downlink operation).

 LTE Advanced (LTE-A) introduced the concept of Carrier Aggregation (CA). [In licensed terms, “carrier” equates to what the Wi-Fi industry calls a “channel”.] CA allows an operator to effectively bond multiple portions of spectrum for the downlink and/or uplink to achieve greater capacity.
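As a back-of-envelope illustration of carrier aggregation (the per-carrier rates below are made-up round numbers, not 3GPP figures), bonded carriers simply add their capacities:

```python
# Illustrative sketch: carrier aggregation bonds multiple carriers
# (channels) so that their capacities add. Rates are hypothetical.

def aggregate_capacity_mbps(carrier_rates_mbps):
    """Total capacity is the sum of the bonded carriers' rates."""
    return sum(carrier_rates_mbps)

# A licensed 20 MHz primary carrier plus two unlicensed 20 MHz
# secondary carriers used as supplemental downlink:
print(aggregate_capacity_mbps([150, 150, 150]))  # 450
```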

The original Qualcomm proposals for LTE-U were rather wide ranging, including a possibility of implementing all of the LTE paths (control, UL, and DL) in unlicensed spectrum.

 3GPP (the organization responsible for LTE standardization) began looking at LTE in Unlicensed Spectrum in early 2014. A first 3GPP Workshop was held in January 2014, and a second in June 2014 which established some initial priorities for 3GPP’s activities. It was at this second meeting that 3GPP adopted the term Licensed-Assisted Access (LAA) to denote their proposed use of unlicensed spectrum for LTE. A few other outcomes from this second Workshop included decisions to focus on the 5 GHz unlicensed bands and a goal for a single global solution. At the Radio Access Network Specification Group (TSG-RAN) meeting in September 2014, a formal LAA Study Item was approved. This meant that LAA was now officially a study item for 3GPP Release 13. The main goal of the study item is to “study the LTE enhancements needed to operate in unlicensed spectrum and to ensure fair coexistence with Wi-Fi”.

3GPP members are now conducting coexistence testing and will be reporting their findings to TSG-RAN.  The timeline below shows some of the milestones and expectations.


 LTE-U versus LAA

 There’s a lot of confusion at this point about the terms LTE-U and LAA. Some are using them interchangeably, while others have distinctly different things in mind when they mention them. For instance, Ericsson recently announced that it will introduce LAA support on some of its small cell platforms by the end of 2015.

However, LAA, as a 3GPP standard, isn’t expected to be finalized until March of 2016. So Ericsson is really referring to a pre-standard/non-standard technology, which has distinctive features from what is most likely to be standardized as 3GPP LAA.

While Qualcomm and 3GPP seem to be using consistent terminology at this point, given the growing alphabet soup, here are some basic definitions and distinctions that the industry should consider adopting as standard nomenclature to keep things clear:


LTE-U: Pre-standard/non-standard LTE in unlicensed spectrum implementations use 3GPP Release 10-12 CA features to provide Supplemental Downlink (SDL) service in unlicensed spectrum. Coexistence with Wi-Fi is provided via initial channel sensing/selection and, in the presence of Wi-Fi operating co-channel, the use of on/off duty-cycle mechanisms controlled by algorithms that determine the allocation of channel airtime between LTE and Wi-Fi (e.g. Qualcomm’s Carrier Sensing Adaptive Transmission, or CSAT). Due to the lack of Listen Before Talk (LBT) support, this solution would only be deployable in regulatory regimes that do not require LBT, such as the US, China, Korea, and India.
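The duty-cycle idea behind CSAT can be sketched roughly as follows; the step size, utilization threshold, and bounds here are hypothetical, since the actual algorithm is proprietary:

```python
# Rough sketch of a CSAT-style adaptive on/off duty cycle. The LTE
# secondary carrier transmits for a fraction of each cycle and stays
# off for the rest; the fraction adapts to sensed Wi-Fi utilization.
# All parameters below are illustrative assumptions.

def adapt_duty_cycle(current_on_fraction, wifi_utilization,
                     step=0.1, floor=0.1, ceiling=0.8):
    """Back off when Wi-Fi is busy; ramp up when the channel is idle."""
    if wifi_utilization > 0.5:          # Wi-Fi heavily using the channel
        new = current_on_fraction - step
    else:                               # channel mostly idle
        new = current_on_fraction + step
    return max(floor, min(ceiling, new))

print(adapt_duty_cycle(0.5, 0.7))  # busy channel: LTE share shrinks
```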


LAA: The proposed 3GPP Release 13 standard implementing LTE within unlicensed spectrum is designed to opportunistically boost data rates. LAA can use a secondary carrier (channel) for the downlink only, uplink only, or both; the initial focus is on SDL (downlink only). Coexistence with Wi-Fi operating co-channel would be provided via LBT mechanisms. Because of the LBT support, LAA would be deployable virtually worldwide.
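A minimal sketch of the listen-before-talk idea, with an illustrative clear-channel threshold (the actual regulatory procedures define precise sensing slots and energy thresholds):

```python
# Toy listen-before-talk model: walk successive sensing slots and
# transmit in the first slot whose measured energy is below a
# clear-channel-assessment threshold. The -72 dBm figure is an
# illustrative assumption, not a regulatory value.

def first_clear_slot(energy_samples_dbm, cca_threshold_dbm=-72):
    """Return the index of the first clear slot, or None to defer."""
    for slot, energy in enumerate(energy_samples_dbm):
        if energy < cca_threshold_dbm:
            return slot  # channel clear: transmit here
    return None  # channel never cleared: keep deferring

print(first_clear_slot([-60, -65, -80]))  # 2
```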

Subsequent posts on the topic will use the terms with these definitions in mind. [Notice that neither of these current initiatives includes proposals to implement the entire LTE system (control, UL, and DL) in unlicensed spectrum.]

Next up, we’ll compare proposals for LTE in Unlicensed with Wi-Fi, and finally look more closely at the coexistence issues. So stay tuned.

December 21, 2014

Making the Most of Wi-Fi Calling

In the time since Apple’s revelation that iOS 8 would support a form of Wi-Fi calling, the industry has seen a barrage of announcements, even TV commercials, around Wi-Fi calling. Come to find out that many of them are actually existing products and technologies simply re-spun. A deeper dive into Wi-Fi calling’s history and characteristics reveals what is truly needed to properly support this exciting new Apple capability, as well as other Vo-Fi services.

So What is Wi-Fi Calling?

Sort of like the term “cloud,” Wi-Fi calling often means different things to different people.

Simply put, Wi-Fi calling is the ability to place a voice call using IP encapsulation over a Wi-Fi network, but this can be implemented in a variety of ways.

The iOS form of Wi-Fi calling is different from so-called over-the-top (OTT) services like Skype or Lync because it is integrated within the OS’s dialer (not a third-party app) and is architected to work in the same way a Voice over LTE (VoLTE) call works. It’s also being developed to support the transparent handoff of a call as the user moves between Wi-Fi and LTE coverage areas, something OTT approaches simply can’t do. It’s more of an evolution of older UMA (unlicensed mobile access) based services, which were some of the first to support cellular voice services over Wi-Fi. Other implementations include services from MVNOs such as republic wireless and Scratch Wireless.


While these are all examples of “Wi-Fi Calling,” they have very different characteristics, raising a number of important questions:

  • Where does the voice session terminate?
    In an IMS core, at a standalone SIP server/gateway, or on an MSC? Does the voice session have to enter an operator’s core network? If so, how is the ‘untrusted/trusted’ border transited?

  • What codec is used?
    How is it encapsulated for transmission?

  • Is the voice session encrypted?
    If so, what are the encryption endpoints?

  • Is the calling service integrated into the native dialer?
    Or does it require a separate app?

  • Does the service support call handoff to and/or from a cellular service?
    If so, does handoff work with Circuit-Switched networks, VoLTE/IMS networks, or both?

While this variety of approaches can be confusing, it’s also encouraging that there are so many ways to support voice over Wi-Fi – underscoring Wi-Fi’s flexibility to support a myriad of IP-based services.

An in-progress Wi-Fi call made on a republic wireless handset


What’s REALLY changing with Apple’s iOS integration and mobile operators lining up to support it is that Wi-Fi Calling will no longer be just a so-called OTT service, or only be offered by upstart MVNOs like republic wireless or Scratch Wireless. Wi-Fi calling is going mainstream.

The question now becomes: How do Enterprises, Operators, and Venues optimize their Wi-Fi networks to support this service?

Voice as an IP Service

Voice is a low-bitrate, but very finicky, data service because real-time, bidirectional voice demands a narrow set of operating parameters from the network to ensure a high-quality calling experience.

As such, the requirements for latency, jitter, and packet loss are much tighter for voice than for standard business or Internet applications.
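As a rough illustration, a planner might sanity-check a Wi-Fi path against commonly cited voice targets. The figures below are typical planning values (on the order of 150 ms one-way latency, 30 ms jitter, 1% loss), assumed here for illustration:

```python
# Illustrative check of a path against typical voice planning targets.
# The thresholds are common rules of thumb, not values from this post.

def voice_path_ok(latency_ms, jitter_ms, loss_pct,
                  max_latency=150, max_jitter=30, max_loss=1.0):
    """True if the measured path meets all three voice targets."""
    return (latency_ms <= max_latency
            and jitter_ms <= max_jitter
            and loss_pct <= max_loss)

print(voice_path_ok(90, 12, 0.2))  # True: comfortably within targets
print(voice_path_ok(90, 45, 0.2))  # False: jitter too high for voice
```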

Wi-Fi Never Really Designed for Voice

Wi-Fi utilizes a shared medium (unlicensed spectrum in 2.4 or 5 GHz) for all the stations in a service set (including the Access Point). Access to the medium is not directly coordinated between the stations, but is performed using mechanisms that seek to minimize simultaneous access attempts and indicate to a transmitter if the intended receiver did not receive its frames. In addition to the contention for access to the medium, Wi-Fi can also be subject to interference by other uses of the same unlicensed spectrum.  
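The contention mechanism described above can be sketched as a toy model of DCF-style random backoff; the contention-window sizes are the classic 802.11 defaults, and all timing detail is omitted:

```python
import random

# Toy sketch of Wi-Fi's distributed medium access (CSMA/CA with
# binary exponential backoff): each failed transmission doubles the
# contention window, and the station waits a random number of slots.

def next_backoff_slots(retry_count, cw_min=15, cw_max=1023):
    """Pick a random backoff in [0, CW]; CW doubles on each retry."""
    cw = min(cw_max, (cw_min + 1) * (2 ** retry_count) - 1)
    return random.randint(0, cw)
```

For example, a first attempt draws from [0, 15], while a third retry draws from the much larger [0, 127], spreading stations out to reduce repeat collisions.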

While on the surface, Wi-Fi might not seem like an appropriate access network for quality voice services, advances in Wi-Fi technology make it possible.

Stronger Voice with Smarter Wi-Fi

Addressing many of these issues that can hinder good Wi-Fi calling, new adaptive antenna technology was conceived for transporting delay-sensitive video and voice traffic over Wi-Fi to enable a highly optimized signal for each client. 

A stronger signal equates to a better Modulation and Coding Scheme (MCS). A better MCS means higher data rates, and higher data rates mean it takes less time to send a given amount of data, so client stations spend less time accessing, or fighting for access to, the Wi-Fi medium.

This also reduces contention for the RF channel, as well as the likelihood of collisions (which increase jitter) and frame loss or packet retransmissions (which increase latency).

In other words, providing better signal at the receiver increases the overall airtime efficiency of the service set for stations sending voice and those sending other types of traffic.
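A quick worked example makes the airtime point concrete (the PHY rates are illustrative, and preamble/ACK overheads are ignored):

```python
# The same small voice frame occupies far less airtime at a higher
# data rate. Rates below are illustrative PHY rates only.

def airtime_us(payload_bytes, phy_rate_mbps):
    """Microseconds to send the payload at the given PHY rate."""
    return payload_bytes * 8 / phy_rate_mbps  # bits / (Mbit/s) = µs

print(airtime_us(200, 6))    # ~266.7 µs at a low legacy rate
print(airtime_us(200, 150))  # ~10.7 µs at a high MCS
```

Twenty-five times less airtime per frame is capacity handed back to every other station in the service set.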

Adaptive antenna array technology utilizes smart directional antennas within a single array, automatically controlled by fancy software that picks, for every packet, the best antenna combination to focus the RF energy toward the intended receiver. This results in 5 to 6 dB of signal gain on the downlink connection.

In addition, smart antennas help mitigate interference from other access points operating in the area by only directing RF energy towards the intended receiver, not simply blasting it everywhere. The impacts from the receiver gain and interference mitigation are cumulative and quite pronounced in dense deployments such as office buildings or high capacity public venues.

Looking ahead, 802.11ac Wave 2 introduces the concept of Multiuser MIMO (MU-MIMO).  Multiuser MIMO effectively allows concurrent Wi-Fi conversations to occur for different clients. The ‘grouping’ of clients into MU-MIMO sets will be essential to maximizing the benefits of this innovation. Good grouping will enhance the ability of a given set of clients to simultaneously receive a transmission and effectively interpret their individual data streams. 

Due to the uniform nature of Wi-Fi calling payload sizes, Wi-Fi calling clients will be prime candidates for grouping with each other (assuming they meet other grouping criteria), benefitting the Wi-Fi calling experience by servicing multiple downstream clients simultaneously.
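One way to picture the grouping idea (a hypothetical criterion for illustration, not any vendor’s actual algorithm) is a greedy pass that groups clients whose typical frame sizes are similar:

```python
# Hypothetical MU-MIMO grouping sketch: sort clients by typical frame
# size and greedily group those within a size tolerance, so the small,
# uniform payloads of Wi-Fi calling clients tend to land together.

def group_by_payload(clients, tolerance=50):
    """clients: list of (name, typical_payload_bytes) tuples."""
    groups = []
    for name, size in sorted(clients, key=lambda c: c[1]):
        # Join the current group if close to its smallest member.
        if groups and size - groups[-1][0][1] <= tolerance:
            groups[-1].append((name, size))
        else:
            groups.append([(name, size)])
    return groups

clients = [("voip1", 160), ("voip2", 180), ("video", 1400), ("voip3", 200)]
print(group_by_payload(clients))  # voice clients grouped, video alone
```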

 Another important innovation benefiting Wi-Fi calling is the ability to enhance the uplink signal from the client to the AP by receiving the client’s signal on both the horizontally and vertically polarized antenna elements. Because they are able to implement polarization diversity with maximal ratio combining (PD-MRC), smart antennas can provide up to 5 dB of uplink gain. This is especially important when considering single stream/antenna mobile devices (the vast majority of smartphones and tablets, including all models of the iPhone), which transmit with a single polarization. Adaptive smart antenna technology is able to effectively extract or construct the best possible Wi-Fi signal regardless of the client’s orientation relative to the AP.  
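The combining gain can be sanity-checked with a little math: under ideal maximal ratio combining, the combined SNR is the sum of the per-branch SNRs on a linear scale (an idealized model; real implementations see somewhat less):

```python
import math

# Idealized MRC model: convert per-branch SNRs from dB to linear,
# sum them, and convert back. Two equal branches give a +3 dB gain;
# unequal cross-polarized branches still recover most of the energy.

def mrc_combined_snr_db(branch_snrs_db):
    """Combined SNR (dB) of ideal MRC over the given branches."""
    linear = sum(10 ** (s / 10) for s in branch_snrs_db)
    return 10 * math.log10(linear)

print(round(mrc_combined_snr_db([20, 20]), 1))  # 23.0: +3 dB over one branch
```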

Because real-time voice is inherently bidirectional, it is important that both the downlink and uplink support the best possible MCS and highest data rates.

Beyond The Antenna 

Beyond antennas, recent technical advances have also been made in how traffic is handled within Wi-Fi access points to ensure the best possible quality of service for Wi-Fi calling.

Since traffic is often encrypted with Wi-Fi calling, the Wi-Fi access network has no real visibility into the payload to determine what type of traffic is being served.

 With more innovative heuristics-based quality of service, different traffic types can be automatically identified, prioritized, scheduled and queued even without the ability to inspect the inner contents of the packets and detect that they are part of a voice session. This is achieved through sophisticated algorithms that constantly examine the characteristics and behavior of the traffic such as the size and frequency of packets in a flow (even an encrypted flow).

Such sophisticated traffic inspection, classification, and optimization technology works in software to provide per-client, per-traffic-class queuing. So traffic is mapped into the various queues based on existing L2 or L3 tags received from the upstream network or these heuristic-based identification algorithms.
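A heuristic classifier of this kind might, very roughly, flag a flow of small, uniformly sized packets arriving at a steady ~20 ms cadence as voice. The rule and thresholds below are illustrative, not any vendor’s actual classifier:

```python
# Illustrative heuristic: voice codecs emit small packets at a fixed
# cadence (often ~20 ms), a pattern visible even in encrypted flows.

def looks_like_voice(pkt_sizes, intervals_ms,
                     max_size=300, cadence_ms=20, slack_ms=5):
    """True if all packets are small and arrive at a steady cadence."""
    small = all(s <= max_size for s in pkt_sizes)
    steady = all(abs(i - cadence_ms) <= slack_ms for i in intervals_ms)
    return small and steady

print(looks_like_voice([172, 172, 172], [20, 21, 19]))  # True
print(looks_like_voice([1400, 60, 900], [3, 180, 40]))  # False
```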

What’s more, sophisticated schedulers implement advanced algorithms to transmit the frames based on airtime and throughput potential or even WLAN prioritization settings that have been configured. If a client doesn’t receive a frame, the scheduler ensures that the frame gets priority for retransmission, eliminating head of line blocking issues. 

The Holy Grail for Wi-Fi Calling?

Ultimately for Wi-Fi calling to work as everyone wants it to, the combination of these technology innovations is essential to delivering a true low-latency carrier-class Wi-Fi calling experience so good that you’ll be able to hear a pin drop (over Wi-Fi).

October 29, 2014

Small Business Gets Big Wi-Fi, Finally!

In a wireless world that’s so dependent on reliable connectivity, there’s something small business owners will tell you: Wi-Fi for small businesses really stinks.

The small business sector is one of today’s most underserved and overlooked markets, and the opportunity to provide these businesses with better Wi-Fi is compelling, to say the least.

According to 2011 U.S. Census data, there were nearly 6 million small businesses with actual employees in the United States. Firms with fewer than 500 workers accounted for 99.7% of those businesses, and businesses with fewer than 20 workers made up 89.8%.

This is a big market. And these businesses deserve some love.


Selling business-class Wi-Fi equipment to small businesses looks to be the fastest growing sub-segment within the global enterprise WLAN market.

Dell’Oro Group estimates that the market opportunity for selling enterprise Wi-Fi gear into the small and distributed branch office segment will jump from $700 million in 2013 to $1.4 billion by 2018 (see chart).


A recent survey of 400 U.S. small businesses with retail places of business, commissioned by Devicescal and conducted by /GR, found [to nobody’s surprise] that providing free Wi-Fi is good business for increasing:

  • Customer foot traffic
  • The time spent on premises (and most importantly),
  • The amount customers spend.

The study focused on independent “mom and pop” retail stores, including bars, nightclubs, restaurants, fast food places, coffee shops, clothing boutiques, book shops, and salons.

With more wireless-only devices, savvy users and mobile business applications needing higher capacity and more reliable Wi-Fi access, small businesses have been, well, stuck.

And when it comes to Wi-Fi today, small businesses have few reliable choices.

Most small businesses are typically forced down-market to use consumer-grade Wi-Fi equipment (including Wi-Fi integrated into cable modems and DSL routers provided by services providers). These solutions lack the features, functionality and gusto to adequately meet the growing demands for better and more reliable wireless connectivity.

Another [not so great] option has been the use of enterprise class wireless LAN (WLAN) systems. While feature-rich, these solutions are simply overkill and way too expensive and technically daunting for small organizations with no dedicated IT experts (which is pretty much every small business on the planet).

What the market craves is some sort of system that bridges this growing gap, with business-class Wi-Fi reliability and pervasive performance at consumer-type prices. And it must be brain-dead simple to use.

A New Way to Wi-Fi with Xclaim Wireless

Looking to solve these problems, Ruckus today took a big step into the small business market with Xclaim Wireless.

Xclaim is a business-class Wi-Fi system, insanely priced and simple, simple, simple to configure and install.

This isn’t merely a repackaged Ruckus enterprise product simply de-featured at consumer price points. Rather, it’s a new way to Wi-Fi, uniquely developed and designed for the small business market.  

No controllers, nerd knobs, or complex network settings to memorize. Xclaim redefines the notion of better Wi-Fi for small business by combining enterprise-class power and reliability with the simplicity that small businesses are clamoring for.

At the heart of Xclaim is a custom-built (and yes, FREE) mobile application, Harmony for Xclaim, that puts Wi-Fi management into the palm of your hand, radically simplifying the configuration, management and monitoring of Wi-Fi networks. We're talking grandparents-can-do-it-simple (watch this).

Pundits are already xclaiming what they think about all this. 

So say goodbye to the days of amenity Wi-Fi as the norm for small business. Now there’s a powerful business-class Wi-Fi solution for mobile connectivity that offers tremendous benefit to both businesses and their customers, without either going broke or crazy.

Visit Xclaim Wireless to learn all about it.

October 27, 2014

Hotspots Get Hotter with Release 2 of Hotspot 2.0


Hotspot 2.0 Release 2 is here – expanding and improving on the considerable innovations introduced with HS2.0 Release 1.

At Ruckus, we’ve always been huge fans of Hotspot 2.0 and have taken an active part in its testing and development.  With Release 2, Hotspot 2.0 gets even better. 

Hotspot 2.0 (HS2.0), often referred to as Wi-Fi Certified Passpoint, is the new standard for Wi-Fi public access that automates and secures the connection. It addresses the two major challenges with legacy hotspots:

  1. the often-confusing task of connecting (which SSID, what’s this captive portal thing, does this even have Internet access?) and

  2. the open/unencrypted airlink connection.

Hotspot 2.0 also enables us to interconnect all the “islands” of hotspots into larger footprints via roaming agreements between Wi-Fi operators.

Early examples include the recent announcements of bidirectional roaming between the Time Warner Cable and Boingo Passpoint services, and AT&T’s release of a new Wi-Fi Hub service with Hotspot 2.0 support.

Release 1 of HS 2.0 was based on the IEEE 802.11u standard and introduced new capabilities for automatic Wi-Fi network discovery, selection and 802.1X authentication based on the Access Network Query Protocol (ANQP).  

With Hotspot 2.0, the client device and access point exchange information prior to association using ANQP. The AP advertises the “backend” service providers (SPs) reachable from this hotspot who can process authentication requests. The client then checks to see if it possesses a credential for one of those SPs. If it does, the client proceeds to associate and then authenticate to the network using 802.1X and the provisioned credential. Supported client credentials include SIM cards, USIMs, X.509 certificates and username/password pairs; each credential is associated with a specific EAP type.

The primary benefits of Release 1 were automating the connection experience at hotspots where the client credential was accepted and providing a secure, encrypted airlink for Public Wi-Fi. A secondary benefit is the ability to support multiple roaming partners over a single SSID, SSID proliferation having become an increasing issue for operators looking to expand their footprint through roaming relationships.
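The pre-association matching step can be sketched as follows; the realm names and credential store are made up for illustration:

```python
# Sketch of Hotspot 2.0 network selection: the AP's ANQP response
# advertises reachable service providers, and the client associates
# only if it holds a credential for one of them. Names are invented.

def select_credential(advertised_realms, credentials):
    """credentials: dict mapping realm -> EAP method. First match wins."""
    for realm in advertised_realms:
        if realm in credentials:
            return realm, credentials[realm]
    return None  # no match: do not associate automatically

creds = {"operator.example.com": "EAP-SIM",
         "cable.example.net": "EAP-TTLS"}
print(select_credential(["roam.example.org", "cable.example.net"], creds))
```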

Release 2 is largely focused on standardizing the management of credentials: how they are provisioned, how they are stored on the device, how they are used in network selection, and how long they are valid. Some of these capabilities aren’t applicable to cellular credentials (SIM/USIM), because those are provisioned by the home mobile network operator (MNO) and are themselves the stored credential.

But what about all those Wi-Fi-only devices – how do we get them provisioned for service (and perhaps even linked to the subscriber’s cellular data account)? And what if the SP wants to apply some policy as to how its credential may be used (including the cellular credentials)? How do we expire a credential after a certain amount of time or usage? What do we do if a device submits a credential that has expired? And how can we do all of these things in a manner that preserves the security of the subscriber and their credential? These are some of the issues that the smart folks in the Wi-Fi Alliance’s® Hotspot 2.0 Technical Task Group are addressing with Release 2 of Hotspot 2.0.

Making Smart Phones Even Smarter.

 Until Release 2 there was no standard format for managing a Hotspot 2.0 credential on a client device. Depending upon the OS or manufacturer, a text or XML file was typically used, but these might have different naming conventions, syntaxes, and locations within the file system. Release 2 leverages the Open Mobile Alliance’s Device Management (OMA-DM) framework, which provides a standardized XML tree structure within which different information can be stored in a consistent manner.

Release 2 specifies a new Per Provider Subscription Management Object (PPS-MO), which is one or more branches in the OMA-DM tree containing all of the information related to the Hotspot 2.0 credential(s) on the device. The credentials themselves may be stored in the PPS-MO (e.g. a username/password pair), or they may be located elsewhere on the device (e.g. a SIM or X.509 client certificate) and referenced within the PPS-MO. However, the PPS-MO doesn’t just contain the credential information; it also standardizes the storage of some associated Release 1 parameters and introduces a whole range of new ones. Click on the table to see a few of the new Release 2 parameters for comparison.

It’s important to understand that the credential information and associated parameters for each provider are stored in a separate branch of the PPS-MO tree. Further, only the provider who provisioned the credential is allowed to modify any of the parameters for that credential. So, a SIM credential branch from your cellular provider might contain preferred roaming partners and blacklisted SSIDs that apply when using EAP-SIM, while a username/password credential branch from your cable operator could contain a different set of policies to follow when using that credential with EAP-TTLS. Consistent with Release 1, Release 2 upholds the user’s preference as the ultimate decision maker for network selection, providing the ability for the user to prioritize multiple subscriptions/credentials.

A Few New Backend Servers Needed.

With Release 1, the only supporting servers required were the AAA servers providing client authentication, perhaps acting as gateways to a mobile operator’s Home Location Register (HLR) for EAP-SIM authentication. Release 2 adds a number of new server elements to support service registration, credential provisioning, and credential management, and to ensure the security of the client and credentials. Here’s an overview of these new server elements:

  • Online Signup (OSU) Server
    Registers new users for service and provisions them with a credential.
  • Policy Server (PS)
    Provisions network detection and selection policy criteria for the provider’s issued credential.
  • Subscription Remediation Server (SubRem)
    Corrects any issues with the issued credential, policy, or subscription, and also renews prepaid-type credentials.
  • Certificate Authority (CA)
    Generates and issues client certificates if TLS authentication is used.


All Release 2 clients receive Trust Roots that link to the Wi-Fi Alliance® PKI. This means that clients can validate all Release 2 server components, and even the provisioning WLAN itself, before they’ve been provisioned with a credential of their own. Remember that these are logical entities that could be implemented on separate platforms or in a single box, perhaps combined with the AAA server.

How does it all work?

A Release 2 client will see the Release 2 support advertised in the Hotspot 2.0 indication element of the AP’s beacons and probe responses.

The client then sends an ANQP query to the AP. In the ANQP response, the AP indicates that Online Signup services are available and lists the OSU providers that are reachable from this hotspot. Since the client does not have a valid credential associated with this hotspot operator, or any of its roaming partners, it does not proceed to automatically associate and 802.1X authenticate. Instead, while it is still in the pre-association phase, the user will be notified that Online Signup services are available. If the user elects to sign up, they will be presented with a list of the available Online Signup providers. The list is typically displayed as an icon, operator friendly name, and description for each operator. The icon and friendly name are actually embedded within the PKI certificate issued to the OSU server, thus ensuring that clients don’t connect to “rogue” provisioning systems. Remember that everything described so far has happened while the client is not yet associated to any WLAN.
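The pre-association decision logic above can be sketched roughly as follows. This is purely illustrative: real ANQP elements are binary TLVs defined by IEEE 802.11u and Hotspot 2.0, and the dictionary keys here are made up for readability:

```python
# Hypothetical sketch of a connection manager's pre-association logic for a
# Hotspot 2.0 Release 2 AP. Field names are illustrative, not 802.11u syntax.

def handle_hotspot(anqp_response, my_credentials):
    """Decide what to do with an AP's ANQP response, before associating."""
    # If we hold a credential for this operator or one of its roaming
    # partners, proceed to normal association + 802.1X authentication.
    operators = {anqp_response["operator"], *anqp_response.get("roaming_partners", [])}
    if operators & set(my_credentials):
        return ("associate", None)
    # Otherwise, if Online Signup is offered, surface the provider list
    # (icon + friendly name per provider) to the user.
    providers = anqp_response.get("osu_providers")
    if providers:
        return ("offer_signup", [p["friendly_name"] for p in providers])
    return ("skip", None)

resp = {
    "operator": "HotelNet",
    "roaming_partners": ["CableCo"],
    "osu_providers": [{"friendly_name": "HotelNet Signup", "icon": "hotel.png"}],
}
print(handle_hotspot(resp, {"MobileCo"}))  # no matching credential -> offer signup
```

Note that everything in this sketch happens while the client is still unassociated, which is exactly what makes the embedded icon/friendly-name certificate validation important.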

It’s also important to note that with Release 2 of HS 2.0, a new type of WLAN is being introduced, the OSU Server-only authenticated layer 2 Encryption Network (OSEN).  Release 2 OSU deployments can use either Open or OSEN WLANs for the client provisioning process. 

The OSU Provider List on a Samsung Galaxy S5

The intent is to ensure that the client is connecting to a valid/trusted OSU WLAN and that the registration and provisioning servers are authenticated. In order to accomplish this, there will be new Public Key Infrastructure (PKI) root trusts loaded into Release 2 clients. These will be used to validate OSU servers and the OSU WLAN if the OSEN option is used. 

Once the user selects an OSU provider from the list, the connection manager on the device will connect to the OSU WLAN (Open or OSEN). It then triggers an HTTPS connection to the OSU server URI, which was received with the OSU providers list. The client validates the server certificate to ensure it is a trusted OSU server. Then the client will be prompted to complete some type of online registration through their browser.

The final step of this registration is the provisioning of the credential and parameters to the client. Now that the client has a valid credential for the production Hotspot 2.0 WLAN, it disassociates from the OSU WLAN and connects to the Hotspot 2.0 WLAN using the standard ANQP mechanisms. The connection manager also factors any configured policies into its selection decisions when utilizing the credential. From then on, the credential provider can use this framework to update the credential, policy, or subscription of the device by indicating via RADIUS messaging that the client needs to contact one of the provisioning servers; the client device can also initiate an update based on configured intervals or user action.

What’s Next?

The Wi-Fi Alliance recently held a formal launch event for Release 2 of HS 2.0 at its Wi-Fi Global Congress at the Palace Hotel in San Francisco. Ruckus performed the public demonstration of Release 2 at the launch event while the WFA announced that Ruckus’ OSU server suite is one of two selected for the Passpoint Release 2 Certification Testbed. The Ruckus SmartCell Gateway and ZoneFlex Access Points have already been certified for Passpoint Release 2.

On the client side, Samsung already has two models of the Galaxy S5 that have been certified, and a number of certified chipset reference designs are available from companies like MediaTek, Broadcom, Qualcomm Atheros, and Marvell. Intel has also received certification for its 7260.HMWG adapter.

The WBA is planning its Next Generation Hotspot (NGH) Phase 3 trials, which will be based on Hotspot 2.0 Release 2. We expect a number of operators to participate in the NGH Phase 3 trials and some to conduct their own private trials. Commercial deployments will follow.

 So it looks like hotspots will be heating up even more with Online SignUp and standardized credential management, which is great news for everyone. 

June 14, 2014

Not So Random Thoughts on Privacy and Positioning

Earlier this month at Apple’s Worldwide Developers Conference (WWDC), Frederic Jacobs uncovered that with the upcoming Apple iOS 8 operating system, Apple devices will be able to, as a privacy mechanism, hide or mask their MAC address by presenting a randomly generated fake MAC address to the Wi-Fi network.

In iOS 8, Wi-Fi scanning behavior has changed to use random, locally administered MAC addresses within Wi-Fi probe requests and responses (the way devices and access points talk to each other to determine whether a connection can be established). Many expect Google to make similar changes within its Android OS.

Media access control (MAC) addresses are unique identifiers that are assigned by device manufacturers. A MAC address is hard-coded onto the device’s network interface and is unique to it. These addresses are essential for networking and network diagnosis because they never change, as opposed to a dynamic IP address that can change as users move around. For a network administrator, that makes a MAC address a more reliable way to identify senders and receivers of data on the network.
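Concretely, the first three octets of a MAC address are the manufacturer's OUI, and one bit in the first octet distinguishes globally unique burned-in addresses from locally administered ones (the kind randomization produces). A quick sketch:

```python
def parse_mac(mac):
    """Split a MAC address into its OUI and device-specific portions and
    report whether it is locally administered (bit 0x02 of the first octet)."""
    octets = [int(b, 16) for b in mac.split(":")]
    return {
        "oui": mac.upper()[:8],        # first 3 octets: manufacturer identifier
        "device": mac.upper()[9:],     # last 3 octets: per-interface serial
        "locally_administered": bool(octets[0] & 0x02),
    }

print(parse_mac("00:1a:2b:3c:4d:5e"))  # typical burned-in style address
print(parse_mac("da:a1:19:00:00:01"))  # locally administered (0xda has 0x02 set)
```

The second kind is what a randomizing device presents in probe requests, which is why passive analytics systems can at least tell real and randomized addresses apart.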

The news caused a fair bit of consternation among companies that use MAC addresses as a way to identify and locate client devices being used in public spaces for the purposes of improving the customer experience when using Wi-Fi networks.

Many see Apple’s move to randomize MAC addresses as simply a way for it to push its iBeacon technology. iBeacon already uses Bluetooth Low Energy (BLE), for which Apple also randomizes addresses. But make no mistake: iBeacon will undoubtedly benefit from this action. More to the point, it’s a good way for Apple to appear publicly conscientious about user privacy while helping its iBeacon business along.

Protecting user privacy is nothing but goodness. Most people don’t want personal information about them, like their age, birthday, gender and what color underwear they have on, exposed to anyone who might use it for some nefarious purpose.

But here’s the thing. And it’s an important thing. MAC addresses don’t expose ANY of this kind of information. Users are personally identifiable only after they have logged onto the Wi-Fi network and/or signed into a mobile app (e.g. a shopping app / or an app for a convention venue) where they provide details to gain access or opt-in to obtain information of use to them (promotions, directions, alerts, etc.).

While being able to hide the unique MAC address on a device seemingly provides an added level of protection and privacy for users, it effectively prevents increasingly popular passive network-based Wi-Fi location services from identifying and tracking devices that aren’t connected or associated to the Wi-Fi network but are still “talking” to it.

This means value-added services that users want and businesses have been demanding could be diminished.

MAC addresses can be tracked whether or not users actually connect to a Wi-Fi network. Even when people aren’t using or connected to a Wi-Fi network, their device (if Wi-Fi is turned on) still continues to let the network know that it’s around by transmitting probe requests.

This information is extremely useful for Wi-Fi-based location and positioning systems that are designed to provide invaluable analytics that can be used by businesses to deliver customized services to their clients who they know are within a given area.

The biggest impact of this move by Apple is on devices that are not associated with the Wi-Fi network (see chart above). All associated devices remain unaffected by MAC-address randomization on any mobile OS. In addition, many advanced location-based systems, like the Ruckus Smart Positioning Technology (SPoT) service, already apply sophisticated hashing to MAC data to maintain user privacy. While these systems won’t see any reduction in the accuracy of the location services they deliver, they will now have less data available to work with as a result of Apple’s move. What a shame.
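One common approach to this kind of privacy-preserving hashing (a generic sketch; Ruckus does not publish its exact scheme, and the salt name here is illustrative) is a keyed one-way hash, so a device can be counted and tracked anonymously without its real MAC address ever being stored:

```python
import hashlib
import hmac

# Hypothetical per-deployment secret; rotating it periodically prevents
# long-term correlation of the hashed identifiers.
SECRET_SALT = b"rotate-me-daily"

def anonymize_mac(mac: str) -> str:
    """Keyed one-way hash of a MAC address: stable for analytics within a
    salt period, but not reversible to the original address."""
    return hmac.new(SECRET_SALT, mac.lower().encode(), hashlib.sha256).hexdigest()[:16]

a = anonymize_mac("00:1A:2B:3C:4D:5E")
b = anonymize_mac("00:1a:2b:3c:4d:5e")
print(a == b)                                    # same device hashes identically
print(a == anonymize_mac("00:1a:2b:3c:4d:5f"))   # different device hashes differently
```

The analytics engine only ever sees the 16-character token, yet repeat visits by the same device within a salt period still correlate.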

Fortunately iOS devices will remain identifiable. Despite MAC-address randomization, these devices have a unique, and known, range of MAC addresses. By eliminating all unassociated iOS devices from the database (positioning engine), the integrity of the user/visitor profile in a venue is maintained.

The good news for venues like malls, hotels, airports, and convention centers, as well as value-added resellers and carriers looking to deliver location-based services using Wi-Fi, is that the impact of the iOS 8 feature is limited.

Because only unassociated iOS devices are missed (again, see chart above), organizations can continue to engage and locate a majority of their users — still a significant number compared to the limited pool of users who can be identified by Bluetooth signal.

Meanwhile, venues can continue to have access to high-quality location analytics and customer insights, and can continue to engage their users (visitors/customers) with highly targeted location-based services, including promotions and other content. Apple’s move will help drive a shift from unassociated devices towards associated devices. With that, organizations need not worry about being unable to engage users and analyze their movement and behavior.

Ultimately, brands, venues and companies must begin to focus on creating customer value and satisfaction that delivers a compelling mobile experience beyond basic wireless connectivity. Location-based Wi-Fi services and brand-based mobile applications remain an ideal way to do exactly this.

So keep watching this SPoT.

June 05, 2014

A Wi-Fi Gamble That Pays Off @Interop Las Vegas?


"Good Luck!"

That’s what I was told when an industry friend heard that we were providing the Wi-Fi service for this year's Interop Las Vegas.

To be fair, the bar for Wi-Fi at Interop was quite low. Historically the Wi-Fi network has received less-than-stellar reviews from attendees, which is exactly why we decided to take on the challenge.

THE DESIGN

From a Ruckus perspective, given our experience at other high-density events and venues, such as Mac Tech and Time Warner Arena, we knew that 5 GHz adoption would be quite high. In fact, no 2.4 GHz was provided on the exhibit hall floor due to high amounts of interference.

This was something that Glenn Evans, Interop's Network Director, was adamant about because of his experience at the show. The interference is high because almost every vendor brings their own system to perform demos. With so little bandwidth available in 2.4 GHz, physics will always win out. 

There were 18 APs placed evenly around the exhibit hall. Coverage, of course, was not the primary design issue. We designed for capacity.

One AP (even set at low transmit power) could have covered the entire show floor. However, we wanted to ensure the highest data rates and spectrum efficiency, so we determined that, given AP mounting locations and available backhaul, 18 was the right number to achieve that goal.

More than 50 APs were used to cover the various areas off of the exhibit floor. These areas included small breakout rooms, larger meeting rooms, and the massive keynote ballroom.

Of the 68 APs used, most of them were 7982s (3x3:3 11n). In addition, we sprinkled our new 802.11ac access point, the R700, in strategic spots.

We originally wanted 802.11ac everywhere but knew it would be overkill and that 802.11n would be more than adequate for this event, given the low number of 11ac clients that could actually benefit from the new standard.

Due to the open-air high density of the show floor we used 20 MHz wide channels only.

In any ultra-high-density deployment, a larger number of narrower channels is preferable to reduce Wi-Fi contention. Off the show floor, we enabled 40 MHz channels since there was enough RF separation between APs for proper channel reuse without causing co-channel interference. In both scenarios we enabled DFS channels. Although some people tend to shy away from using them, the reality is there isn’t much to worry about unless you have specific limitations imposed by your client devices.
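The channel arithmetic behind those choices can be sketched quickly (channel sets vary by regulatory domain; this uses the common US 5 GHz list, and the count is approximate since not every channel pairs cleanly at 40 MHz):

```python
# Common US 5 GHz 20 MHz channel numbers (regulatory domains vary).
UNII1  = [36, 40, 44, 48]
UNII2  = [52, 56, 60, 64]                                            # DFS required
UNII2E = [100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140]     # DFS required
UNII3  = [149, 153, 157, 161, 165]

def channel_count(width_mhz, use_dfs=True):
    """Approximate number of non-overlapping channels at a given width."""
    chans = UNII1 + UNII3 + (UNII2 + UNII2E if use_dfs else [])
    return len(chans) // (width_mhz // 20)

print(channel_count(20))                 # 20 MHz with DFS: 24 channels
print(channel_count(40))                 # 40 MHz: roughly half as many
print(channel_count(20, use_dfs=False))  # skip DFS and you're down to 9
```

The show-floor decision falls straight out of the numbers: 24 independent 20 MHz channels give far more contention domains than 12 wider ones, and skipping DFS would have thrown away most of them.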


High-density environments are always a challenge, but besides the sheer density of client devices, there were some unique challenges to this deployment. The first one we encountered was simple enough: how to hide the APs. On the show floor this was easy, but off the show floor it proved a bit challenging. So, in perfect Ruckus fashion, we used a bit of ingenuity and strength (they were heavy!) to conceal the APs (see picture above).

On a more technical level, we had one major challenge that showed itself in two ways: client roaming.

The Interop network is really two networks: one is ‘show floor’ and the other is ‘off show floor’, which are served by different router and switch vendors but only one wireless vendor. This created the problem of two separate layer 3 networks.

So what’s the problem? For ease of use and good design, the same SSID was used throughout the show. The problem is, when an attendee moves from one L3 network to the other, their device may not realize that it’s moved to a different layer three system and try to use the incorrect IP address.

There are two conventional solutions to this problem:

  1. Two separate SSIDs. This would be the easiest technical solution but not the cleanest user experience.

  2. Tunnel one set of APs to a destination (the controller in this case) on the other network. This would have also worked fine, but we had another trick up our sleeve.

The second client roaming problem was that client devices don’t always roam well. In an open-air environment with few obstructions, client devices tend to make very poor roaming decisions because so many APs look alike from a signal perspective. Or, better stated, they don’t look different enough.

To illustrate this, let’s look at a test we performed. I started at one end of the hall and ran a speed test. We achieved approximately 35 Mbps to the Internet, which we considered acceptable for single-stream clients on 20 MHz channels. I then moved to the next location, about three rows down: 25 Mbps. Three more rows: 13 Mbps. Yet three more: 5 Mbps. Why the change? The client (a gold iPhone 5s in this case) was sticking to the original AP. Even if I turned Wi-Fi off and reassociated, the phone would choose the original AP.

In reality 5 Mbps is plenty for an attendee to do what they want on the network. However, the underlying problem is that airtime used for lower data rates is very detrimental to the network’s capacity. Fortunately, Ruckus has a system that worked to solve both of these problems: SmartRoam.


SmartRoam was designed for public access networks just like this one. SmartRoam should not be confused with client load balancing or band steering. SmartRoam exists because most (if not all) client devices make very poor AP connection decisions. The poor choice shows up occasionally during initial association but more frequently during roaming. To solve that problem SmartRoam is designed to force a client device to move to the best AP for that client device. The key word here is force. There is no current widely supported Wi-Fi protocol to inform the station that it should move to another AP. So in order to keep stations connected to the best AP, more drastic measures need to be taken.

SmartRoam is a break-before-make protocol. When it senses that a client device needs to move to a better AP (dictated by user-adjustable metrics), it will deauthenticate the client and then withhold probe responses from that AP while the better AP (based on signal and available throughput) continues to send probe responses. As you are already thinking, this isn’t an ideal scenario. It sounds bad to purposely break a client device’s connection, but in high-density public environments the increase in available capacity for all is worth it. As a great man once said, “The good of the many outweighs the good of the one, or the few.”
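Conceptually, the break-before-make decision described above looks something like the sketch below. This is purely illustrative: SmartRoam's actual metrics and implementation are proprietary, and the threshold and field names here are made up:

```python
# Conceptual sketch of break-before-make "forced roaming" -- not actual
# SmartRoam code; the threshold and fields are illustrative.

RSSI_DELTA_DB = 10  # hypothetical: roam only if another AP is this much better

def maybe_force_roam(client, current_ap, candidate_aps):
    """If a clearly better AP exists, deauth the client and suppress probe
    responses on the current AP so the client reassociates to the better one."""
    best = max(candidate_aps, key=lambda ap: ap["rssi"])
    if best["rssi"] - current_ap["rssi"] >= RSSI_DELTA_DB:
        return [
            ("deauth", client, current_ap["bssid"]),            # break...
            ("withhold_probe_resp", client, current_ap["bssid"]),
            ("answer_probe_resp", client, best["bssid"]),       # ...so the better AP wins
        ]
    return []  # signal difference too small: leave the client alone

actions = maybe_force_roam(
    "aa:bb:cc:dd:ee:ff",
    {"bssid": "ap1", "rssi": -75},
    [{"bssid": "ap2", "rssi": -55}, {"bssid": "ap3", "rssi": -70}],
)
print([a[0] for a in actions])
```

The hysteresis threshold is the important design choice: without it, clients near the midpoint between two APs would be bounced back and forth endlessly.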

After a short tuning period testing different scenarios, we found that SmartRoam allowed connection anywhere on the exhibit hall floor, always to the best AP. This was borne out by running multiple throughput tests and seeing little deviation based on location.

SUCCESS?

The big question that always remains after these types of events is "how should success be measured?" Beyond looking at raw data movement (over one terabyte per day), we looked to social media and the opinions of the network team that has done Interop for years.

By both measures we couldn’t be happier with the results. There was some risk involved in volunteering to implement our system at a show with so many competing wireless systems, but we were very confident that we could provide world-class Wi-Fi service since, well, that’s what we design our systems to do and that is what we expect.


May 10, 2014

Beamforming Basics for the Wi-Fi Challenged

With every new Wi-Fi technology or standard, industry fortune tellers are quick to cast adaptive antenna arrays into an early grave. This gets a little (actually, a lot) technical, but it's required.

Distinctly different from chip-based beamforming, for which every vendor now claims support, patented smart adaptive antennas are designed to dynamically and continuously create unique directional RF patterns (what we call BeamFlex™) proven to deliver the highest throughput for each client device at longer ranges.

This innovation, yet to be duplicated en masse, has stood the test of time and remains hugely valuable in the world of Wi-Fi.

The first attempt at dumping of adaptive antenna arrays into a death spiral happened with 11a/g and its blazing fast 54 Mbps. Yet BeamFlex survived and, in fact, thrived as the RF environment became more noisy and complex.

It happened again with the introduction of the 802.11n standard that came with the ultra-reliability of MIMO.

And, surprise, surprise, it's happening again with the destruction (sort of) of the gigabit barrier brought by the new 802.11ac standard. The problem is, to which history now attests, BeamFlex has only proven to add more value, not less, to these technological changes.

At the core of current speculation (that adaptive antenna switching is doomed with 802.11ac) is the notion that transmit beamforming (TxBF) with 11ac replaces the need for Smart Wi-Fi. This is due to the common misconception that smart, adaptive antennas—as proprietary Ruckus BeamFlex technology—and transmit beamforming—as a standards-based technology—are one and the same. This whitepaper details the differences.

Despite some similarities (such as the goal of enhancing signal quality and the use of the word “beam”), the two technologies are fundamentally different. Here's why:

BeamFlex and TxBF are not Mutually Exclusive.

BeamFlex is truly adaptive antenna technology by which an access point (AP) selects an optimal transmit path out of many possible options. It is fundamentally an antenna technology, combining special hardware and sophisticated software, that sits on top of all radio foundations (in a protocol-agnostic way).


TxBF, on the other hand, is a digital signal processing technique that occurs within the transmitting radio, and is heavily protocol dependent. It attempts to send multiple copies of the same data so as to create constructive combinations at the receiving radio. The beauty of this is that BeamFlex (antenna technology) and TxBF (radio technology) can be perfectly wed; and a happy marriage it is. Ruckus can support both of these techniques at the same time (on some products) to deliver a cumulative benefit to signal quality.

BeamFlex works for all clients.

Because it is an antenna technique—and not a radio technique—BeamFlex works equally well for all clients of all capabilities. This means 802.11a/b/g/n/ac clients all benefit, and there are no special requirements for support. Single-stream, two-, three- and four-stream clients all benefit, and there is no tradeoff as the number of streams increases.

No transmit beamforming and Spatial Multiplexing at the Same Time.

Because it is a radio technique, effective TxBF DOES require client support (something a lot of people fail to understand). Consequently, 802.11a/b/g/n clients miss out on the perks. And some 11ac clients do not support TxBF. Looking at the pervasive adoption of 11n, we should not expect all (or even most) clients to support TxBF even by the end of 2016.

TxBF must also trade off against spatial multiplexing. The same transmitters cannot be used for both. To be effective, TxBF systems should have double the number of transmit antennas as spatial streams.

1x1 clients (meaning one transmit and one receive radio chain, in Wi-Fi parlance) will be happy with even a 2x2 AP; but for a 2x2 MIMO client to benefit in any appreciable way from an AP’s use of TxBF, a 4x4 AP is desired. This also means that a 3x3 client sees minuscule (if any) benefit from an AP that is anything less than 6x6 (yes, this means 6 transmitters and 6 receivers; can you even imagine what that looks like?).
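The rule of thumb above (twice as many transmit chains as the client has spatial streams) is simple arithmetic:

```python
def txbf_ap_chains_needed(client_streams: int) -> int:
    """Rule of thumb from the text: for TxBF to give an appreciable gain,
    the AP wants roughly double the transmit chains of the client's streams."""
    return 2 * client_streams

for streams in (1, 2, 3):
    n = txbf_ap_chains_needed(streams)
    print(f"{streams}x{streams} client -> {n}x{n} AP desired")
# 1x1 client -> 2x2 AP desired
# 2x2 client -> 4x4 AP desired
# 3x3 client -> 6x6 AP desired
```

Which is exactly why a 3x3 client gains little from anything short of a 6x6 AP.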

BeamFlex does not disrupt the neighbors.

As we use 5 GHz spectrum more heavily, and especially as we expand to support wider channels (80 MHz, not 160) in 11ac, we should expect to see more Wi-Fi protocol contention from APs operating on the same channel (because neighboring APs are more likely to operate on the same channel since we have fewer non-overlapping channels). As a result, our precious 5 GHz spectrum will soon look more like the 2.4 GHz spectrum.

The value of BeamFlex directional transmissions is paramount to preserving capacity with co-channel neighbors. It does so by transmitting with directional patterns that both maximize data rates (allow clients to get on/off wireless airtime quicker) and also avoid sending RF energy where it is undesired (towards neighboring APs). This reduces unnecessary contention with neighboring APs, which drags down capacity.

TxBF is often visualized as a directional steering mechanism, but it works by creating signal peaks in point space (hopefully at the receiver’s antenna), which often creates signal peaks in unintended directions, causing interference where it is undesired.

Multi-User-MIMO is gold for BeamFlex

If you’re focused on 160 MHz channels in Wave2—or other Gigabit hype—you’re focused on a spec-bloated red herring. 160 MHz is for (some) consumer networks. The biggest bang from 802.11ac comes in the second wave of products (which hopefully arrive by Christmas of next year) and is what's called multi-user MIMO (MU-MIMO). This is a technique by which the AP can send downlink frames to multiple clients at the same time. And this enhancement will require new hardware (yes, for everyone). It’s a lovely protocol enhancement for boosting capacity, given the plethora of very simple 1x1 mobile devices on networks. But it comes with a big catch: signal isolation.

To multiply the efficiency of spectrum with multi-user transmission, we need to ensure that each station receives its data without receiving other stations’ data at the same time (which would cause interference and make MU-MIMO not work as intended).

It's this need for signal isolation that makes Wave2 MU-MIMO and BeamFlex a perfect complement to each other.

BeamFlex adds directionality and signal separation (along with a boost to SNR), while the TxBF component of MU-MIMO provides additional separation at the radio level. Higher data rates per station, MU signal separation, maximum spectrum efficiency. Call it, well, the perfect storm.

Is Wi-Fi performance a commodity?

Now that we’ve further bludgeoned home the technical superiority of adaptive smart antenna arrays (BeamFlex), let’s get to the real issue.

People claim that all AP hardware is the same and 11ac commoditizes Wi-Fi performance. There are two key requirements needed for Wi-Fi performance to become a commodity:

  1. There must be roughly equal—or at least “optimal”—range, capacity, throughput, optimization simplicity, reliability, and interference mitigation from all equipment suppliers, and

  2. The ratio of cost-to-performance needs to be the same for all equipment suppliers.

The Ethernet Example.

If you look at Ethernet, performance commoditization is a given because both of these issues are true.

There is enough performance (Gig Ethernet has been sufficient for its application for many years), and the cost-to-performance of line-rate Gigabit switching is nearly identical across suppliers (hence, trends like NFV are occurring).

But, unequivocally, without the most diminutive doubt, the answer to the question is that Wi-Fi absolutely does not share these characteristics (and perhaps never will).

Is Wi-Fi sufficient as of 11ac, even 11ac with Wave2? Do we currently have sufficient spectrum (or spectrum efficiency) to meet all of our capacity, performance, reliability, app delivery, and network optimization needs? If the answers are no, then it stands to reason that a product supplier that enhances the Wi-Fi experience in light of these technical deficiencies could still differentiate on cost and performance alone (but no one, including Ruckus, is really doing that).

When customers start telling us that Wi-Fi performance is as reliable as it will ever need to be, there's a surplus of radio spectrum, they have no issues connecting devices to the network and there is ample capacity to keep all users and devices happy, we will be happy to admit that adaptive smart antennas should be put to sleep.

Until then it's simply better wireless for everyone who wants to cause a Ruckus.

February 21, 2014

Can Wi-Fi be made easier to use than cellular?

"Making Wi-Fi as easy as cellular” is a popular maxim when engineers, marketeers, and journalists talk about Hotspot 2.0. And it’s not hard to understand why. The cellular connectivity experience is well understood in virtually every culture, while, except to those involved with its development and testing, Hotspot 2.0 remains a big unknown. Therefore to say Hotspot 2.0 makes Wi-Fi connectivity like cellular puts it in terms that most people can understand. In fact, if you look back, you’ll find a few Ruckus press releases and presentations that use this very analogy.

However, as we approach the launch of production Hotspot 2.0 networks and begin using this technology in our daily lives, it is important to have a more precise understanding of what it is and how it works.

It’s at this point that the comparison with cellular connectivity and roaming falls short of conveying what people need to know. For context, it’s best to start by examining some of the similarities and differences between cellular and Wi-Fi with Hotspot 2.0, relative to connecting automatically, authentication, and roaming. Airlink encryption aside, users can be assured of robust security for both cellular and Wi-Fi (with Hotspot 2.0) connections.

Connect Me.

To connect to any type of network, a client device must support the same physical interface and medium access mechanisms (Layers 1 and 2) as the access network.

Sometimes the compatibility cues are obvious. For example, plugging a token ring hermaphroditic connector into a 10BASE-T hub would have been quite an accomplishment, even if futile in terms of passing data. But in the wireless world, there are no visible cables or connectors, and end users need a fuller understanding to ensure that their devices will connect to an available network.

The first consideration is the frequency band: does the device “talk” on the same frequencies on which the network operates? Wi-Fi currently operates in swaths of unlicensed 2.4 GHz and 5 GHz spectrum that are largely harmonized globally. The first 11 channels (3 non-overlapping) in the 2.4 GHz band are de facto “world bands,” as they are approved in virtually all regulatory domains. The picture in 5 GHz is currently less uniform, but there are sections (5.15-5.25 GHz and 5.725-5.85 GHz especially) that have been, or soon will be, adopted for unlicensed use in most parts of the world. 5 GHz is the current focus of regulatory bodies since 802.11ac requires it, and commissioners are endeavoring to open more common frequencies there.
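A quick sketch of why only 3 of those 11 channels are non-overlapping: 2.4 GHz channel centers sit 5 MHz apart, while each channel is roughly 20 MHz wide, so only channels spaced at least four apart (1, 6, 11) stay clear of each other.

```python
def center_freq_24ghz(channel: int) -> int:
    """Center frequency in MHz for 2.4 GHz channels 1-13."""
    return 2407 + 5 * channel

def channels_overlap(a: int, b: int, width_mhz: int = 20) -> bool:
    """Two channels overlap if their centers are closer than one channel width."""
    return abs(center_freq_24ghz(a) - center_freq_24ghz(b)) < width_mhz

print(center_freq_24ghz(1), center_freq_24ghz(6), center_freq_24ghz(11))  # 2412 2437 2462
print(channels_overlap(1, 6))  # centers 25 MHz apart -> False
print(channels_overlap(1, 4))  # centers 15 MHz apart -> True
```

The same arithmetic explains why the 5 GHz band, with many more 20 MHz slots, is the focus for capacity growth.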

So for Wi-Fi at least, a dual-band (2.4 GHz and 5 GHz) device bought in the US today will definitely connect to a 2.4 GHz Wi-Fi network in Europe, Africa, or Asia, and can connect to 5 GHz Wi-Fi networks in most areas of the world.


In the cellular world, the situation with device support is not nearly as straightforward. 


Because licensed spectrum is exclusively allocated in much ‘thinner’ slices to individual mobile operators at the national or regional level, and because the 2G, 3G, and LTE bands vary from country to country, it is impractical to implement a single radio access front end that can support all of the possible RF bands.

One aspect of this is the so-called LTE “band fragmentation” issue: even the most sophisticated handsets have to be produced in a large range of models, which are often specific to a region, country, and/or operator. Even the “international” models can’t hope to support all of the possible operating bands for each generation of technology. At last glance there were 19 different models of the Samsung Galaxy S4 in production to support this collection of different cellular bands.

The difference between Wi-Fi’s harmonized bands and cellular’s fragmented bands is underscored by the fact that all of the 19 different models of the Galaxy S4 use the same Broadcom BCM4335 Wi-Fi chipset.

Meanwhile, cellular chipset manufacturers are hard at work creating advanced chipsets and RF front-end solutions that can support large numbers of licensed bands, such as the Qualcomm RF360. As Qualcomm SVP of Product Management Alex Katouzian recently pointed out, "The wide range of radio frequencies used to implement 2G, 3G and 4G LTE networks globally presents an ongoing challenge for mobile device designers."

This severe band fragmentation issue doesn’t exist for Wi-Fi connectivity.

Another challenge that the cellular industry faces with ubiquitous device support is technology schisms. With 2G, 2.5G, and 3G, most of the world settled on GSM/UMTS-based coding and modulation, but big (and globally significant) operators in the US and Korea chose CDMA solutions. A similar split is occurring with LTE: most of the world is standardizing on the Frequency Division Duplexing (FDD) implementation, while China is deploying a version based on Time Division Duplexing (TDD).

The bottom line: a 3G CDMA handset can’t connect to a UMTS Node B, nor can an FDD LTE handset connect to a TDD LTE eNB.

In contrast to the technology factions that exist within the cellular industry, Wi-Fi modulation and coding implementations have effectively remained uniform as standardized by the IEEE and certified by the Wi-Fi Alliance.

The reality is that Wi-Fi devices are able to connect to just about any Wi-Fi network in the world (and Hotspot 2.0 makes it even easier), while cellular band and technology fragmentation has led to a complex mix of often incompatible devices and networks, especially when traveling outside of the home operator’s coverage area.

Authenticate Me.

Where the cellular user experience truly excels is in the automatic authentication of the device to the network. Each device is provisioned with a unique identifier that is known, and can be verified, by its home operator’s subscriber database (Home Location Register or Home Subscriber Server – HLR / HSS). The identifier is known as an International Mobile Subscriber Identity or IMSI, and can be embedded in a SIM, USIM, or sometimes in the device itself. 

The IMSI contains the Mobile Country Code (MCC) and Mobile Network Code (MNC) for the home mobile operator, which together comprise the Public Land Mobile Network (PLMN) ID. A device capable of communicating with a cellular access network can examine the PLMN ID(s) being advertised by the network, and if one of them matches the PLMN ID portion of its IMSI, be assured that authentication is possible.
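As a purely illustrative sketch of that matching step, the few lines below extract the candidate PLMN IDs from an IMSI and compare them against what a network advertises. The function names are invented for this example; the sample IMSI uses MCC 310 / MNC 260 (a US PLMN).

```python
# Illustrative sketch only: function names are invented for this example.
# An IMSI starts with a 3-digit MCC followed by a 2- or 3-digit MNC; together
# they form the PLMN ID that the network also advertises.

def imsi_plmn_candidates(imsi):
    """The two possible PLMN IDs for an IMSI (MNC length is operator-specific)."""
    return {imsi[:5], imsi[:6]}

def can_authenticate(imsi, advertised_plmns):
    """True if any advertised PLMN ID matches the IMSI's home PLMN."""
    return not imsi_plmn_candidates(imsi).isdisjoint(advertised_plmns)

print(can_authenticate("310260123456789", ["310260", "311480"]))  # True
```

A real device also handles equivalent-PLMN lists and operator-preferred roaming lists, but the core check is this simple prefix comparison.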

Wi-Fi authentication has historically been quite fragmented, primarily due to the diversity of its use cases (residential, enterprise, hotspot, etc.) and the resulting need for different security requirements. With 802.11, authentication can be open system, based on a static shared key (WEP, WPA-PSK, and WPA2-PSK), or based on more sophisticated mechanisms like 802.1X and the Extensible Authentication Protocol (WPA-Enterprise and WPA2-Enterprise). Portal-based authentication is also often the method of choice for public-access Wi-Fi networks, usually in conjunction with 802.11 open authentication. These various authentication options are also tied to the type of encryption, if any, that is used over the air.

Hotspot 2.0 fixes this by standardizing Public Wi-Fi authentication and security.

With Hotspot 2.0, 802.1X is mandated, with EAP-SIM/AKA, EAP-TLS, or EAP-TTLS as the authentication methods and AES encryption required. The authentication credential can be a cellular IMSI, an X.509 client certificate, or a username/password pair.

The inclusion of non-cellular credentials opens up Hotspot 2.0 services to Wi-Fi only devices like tablets, iPod Touches, laptop computers, and even client devices within the worldwide Internet of Things. Supporting a wide range of credential types also provides for a much broader pool of authentication providers, including mobile operators, cable operators, social media companies, hotel chains, and corporations.

Through the use of the 802.11u protocol, a Hotspot 2.0 Access Point (AP) advertises the PLMN IDs, network access identifier (NAI) Realms (think domain name), and Roaming Consortiums (a 3 or 5-byte hexadecimal identifier issued by the IEEE) for which it can authenticate credentials. 

The client device examines these various markers being advertised by the AP, and if there is a match with one of its provisioned credentials, it knows that automatic authentication is possible, and proceeds to connect and begin the EAP process.
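That client-side selection logic can be sketched in a few lines. This is a hypothetical illustration, not a real supplicant API: the dictionary keys, function name, and the roaming consortium OI value are all invented for the example.

```python
# Hypothetical sketch of the client-side matching step described above.
# The dictionary keys and function name come from no real supplicant API.

def select_credential(ap_advertisement, credentials):
    """Return the first provisioned credential this AP can authenticate."""
    for cred in credentials:
        if cred.get("plmn") in ap_advertisement.get("plmn_ids", []):
            return cred  # cellular credential -> EAP-SIM / EAP-AKA
        if cred.get("realm") in ap_advertisement.get("nai_realms", []):
            return cred  # certificate or username/password -> EAP-TLS / EAP-TTLS
        if cred.get("roaming_oi") in ap_advertisement.get("roaming_consortiums", []):
            return cred  # matched via a roaming consortium OI
    return None  # nothing matched; no automatic connection

# An AP advertising one NAI realm and one (hypothetical) roaming consortium OI:
ap = {"nai_realms": ["example.edu"], "roaming_consortiums": ["506F9A"]}
creds = [{"realm": "example.edu", "eap": "EAP-TTLS"}]
print(select_credential(ap, creds) is not None)  # True: realm match found
```

In practice the AP's markers are retrieved via ANQP queries before association, but the decision reduces to this kind of lookup.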

Let's Roam.

Cellular network roaming is often portrayed as a successful model that Hotspot 2.0 should attempt to emulate. But is it really?

Even when consumers have devices that are compatible with a visited cellular network, it turns out they are quite hesitant to connect. Rightly or wrongly, cellular roaming has become synonymous with “bill shock”, “highway robbery”, and “OMG” in the minds of the general public, and especially CEOs.

This issue was highlighted by the European Commission in a recent survey report showing that a large percentage of Europeans either disable cellular roaming, turn off their mobiles altogether, or drastically curtail their usage when traveling abroad within the region. 

Cellular roaming charges are perceived to be such an issue that some upstart carriers are seeking to gain market share by promoting low, or no, cost roaming plans, seeing this as a significant differentiator from the status quo.

Another symptom is the growing abundance of airport vending machines and kiosks waiting to provide local SIMs and prepaid plans to arriving visitors with unlocked devices. The calculation for the consumer is simple.

Option 1: spend $20-30, and perhaps a little time hassling with APN settings, to get a generous amount of voice/text/data from a local operator, or

Option 2: roam at will using your home IMSI, make a call or two, but be sure you don’t let GMail, Facebook, or Twitter use any cellular data for the duration of your trip, lest you risk the $1,000+ bill you can’t seem to expense.

Harbingers for Hotspot 2.0?

Admittedly still in its infancy, Hotspot 2.0 may create radically different models for roaming, or authentication peering. Some precursor services like Eduroam and the Cable WiFi alliance provide some indication as to how it is likely to evolve. 

Eduroam, like Hotspot 2.0, is an 802.1X-based automatic connection and authentication network that has come from the higher education community. It started in Europe and Asia, but increasingly has a global presence. Individual institutions join Eduroam in order for their users (students and faculty) to automatically connect at any other Eduroam college or university, and so that visiting users can likewise automatically connect to the locally hosted network. It’s a reciprocally beneficial arrangement, and each institution that joins broadens the reach for the other participants. Even retail and hospitality businesses near Eduroam campuses are starting to offer the service as an enhanced benefit to their student customers. It’s common to see a social media post from an Eduroam user surprised to see that their device has connected in some unexpected venue or location.

In the U.S., the Cable WiFi alliance is a consortium of 5 of the largest MSOs (cable operators). Each company had independently deployed large-scale Wi-Fi hotspot networks in their coverage areas as a service to their residential broadband subscribers. They then decided to join together and advertise a single “CableWiFi” SSID across their combined footprint (between 200,000 and 250,000 hotspots across the country), which can be accessed by any of their subscribers. Again, another mutually beneficial arrangement.

Both Eduroam and the Cable WiFi alliance currently utilize SSID-based solutions, but they are also actively investigating Hotspot 2.0 as the next logical development for their service.

Looking Ahead.

So, while it has been helpful until now to describe Hotspot 2.0 in terms of making Wi-Fi work like cellular, a fuller understanding of the nuances and differences between the technologies and models shows that Wi-Fi can effectively be made easier to use and more pervasive than today’s cellular technologies.

Hotspot 2.0-enabled public Wi-Fi will offer a service that is available to all Wi-Fi devices, allows authentication by a number of types of providers, and supports roaming consortiums with diverse business arrangements and models. Hotspot 2.0, wherever you may roam.  And roam you will.

February 10, 2014

Will 802.11ac Stab You in the Back(haul)?



Stressing about the new 802.11ac standard seems to be the industry’s new pastime.

Now that Wave-1 of 802.11ac is here, with vendors promising 1.3 Gbps in 5 GHz, 1.75 Gbps aggregate per AP, and world peace, the industry has suddenly focused on the potential bottleneck of AP backhaul links. In other words, is a single Gigabit Ethernet uplink enough for each AP?

The answer is just plain “yes,” and applies not only to Wave-1, but also to Wave-2 11ac. Here’s why:

Theoretical maximums do not happen in real-world conditions.

Even though 11ac Wave-1 promises a combined 1.75 Gbps theoretical rate, it’s hard to see how real-world conditions will live up to theoretical maximums. They won’t.

1.75 Gbps is a data rate. Real TCP throughput (what the client experiences), however, has historically been somewhere near 50% of the data rate. With 11n/ac frame aggregation and other enhancements, 65% is becoming more realistic in best-case scenarios (usually for single-client tests only). So let’s say for the sake of argument that 65% of theoretical is possible: roughly 1.14 Gbps.

So yes, if you have:

  • 3x3:3 client devices only, one on 2.4 GHz and one on 5 GHz,
  • Very good RF conditions with no neighbors and no RF interference,
  • TCP applications that can produce and sustain 700 Mbps, and
  • TCP applications that are 100% uplink or downlink,

then you might be able to tap out a gigabit backhaul, or so the argument goes.
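The arithmetic behind that best case is easy to check. A back-of-envelope sketch, using the Wave-1 rates quoted above (the 2.4 GHz figure assumes a 3x3:3 11n radio):

```python
# Back-of-envelope check of the best-case argument above (illustrative numbers).
rate_5ghz = 1.3        # Gbps, 3x3:3 80 MHz 11ac PHY rate
rate_24ghz = 0.45      # Gbps, 3x3:3 11n PHY rate in 2.4 GHz
tcp_efficiency = 0.65  # best-case TCP goodput as a fraction of PHY rate

aggregate_goodput = (rate_5ghz + rate_24ghz) * tcp_efficiency
print(round(aggregate_goodput, 2))  # 1.14 (Gbps), and only under ideal conditions
```

Even granting every ideal assumption, the margin over 1 Gbps is slim; remove any one assumption and the number drops below the gigabit line.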

But that just won’t happen in the real world.

Client mixtures do not support the maximum capabilities.

If a network is comprised of client devices that all support 80 MHz channels (in 5 GHz) and 3 spatial streams, then there’s an outside chance of the stars aligning…barely.

But reality says:

  1. You’ll have some single-stream client devices, like mobile phones and tablets.
  2. You’ll have some two-stream client devices, like tablets and many laptops.
  3. You’ll have some 11a/g/n devices that don’t support 11ac maximums.
  4. You’ll have some clients in the service area that aren’t within 3 meters of the AP, and are thus subject to lower data rates.

So if your network has any of these client types (and it does!), then you can kiss your nightmares of gigabit saturation goodbye. Every lower-capability client on your network will reduce the average airtime efficiency, making gig-stressing conditions impossible.

Don’t Forget: Ethernet is full duplex.

When comparing Wi-Fi speeds to Ethernet speeds, we must remember that Wi-Fi is half-duplex. All airtime is shared for uplink and downlink. So when you start with a theoretical maximum channel capacity, you have to divide it between uplink and downlink. Ethernet, by contrast, is full duplex, carrying 1 Gbps uplink and 1 Gbps downlink simultaneously. So to really stress that gigabit link, you need to push either ALL uplink or ALL downlink traffic from Wi-Fi clients. Again, if we consult reality, this just won’t happen.
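A quick sketch makes the duplex mismatch concrete. The numbers are illustrative assumptions: a generous best-case aggregate Wi-Fi goodput and a downlink-heavy traffic mix.

```python
# Illustrative arithmetic (assumed numbers) for the duplex mismatch: Wi-Fi
# splits one channel's airtime between uplink and downlink, while a gigabit
# Ethernet port carries 1 Gbps in each direction at the same time.
wifi_goodput = 1.14     # Gbps, a generous best-case aggregate TCP goodput
downlink_share = 0.7    # e.g. a downlink-heavy 70/30 traffic mix

wifi_down = wifi_goodput * downlink_share        # ~0.80 Gbps downstream
wifi_up = wifi_goodput * (1 - downlink_share)    # ~0.34 Gbps upstream

# Neither direction of the full-duplex gigabit uplink comes close to 1 Gbps:
print(wifi_down < 1.0 and wifi_up < 1.0)  # True
```

Only a 100% unidirectional traffic pattern even approaches the per-direction Ethernet capacity, and real traffic mixes are never 100% one-way.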

Application requirements will not stress 1 Gbps backhaul links.

In combination with the limitations of client capabilities, there are very few client applications and services that can generate even bursty—let alone consistent—load above 700 Mbps. But again, the issue isn’t the potential of a single client device, but the potential of all combined client devices passing traffic and sharing airtime.

High density does not stress 1 Gbps.

At first glance, high-density networks seem like a cause for gig stress, and thus more likely to tax network maximums. However, if anything, high-density scenarios are MORE likely to have single-stream mobile devices that don’t support protocol maximums—as well as airtime challenges that increase retries and non-data overhead—thus bringing aggregate network potential down.

Most of today’s networks can’t deliver it anyway.

How many networks are there that provide more than 1 Gbps WAN links—and web-based services/applications that can deliver that kind of speed? There’s this thing called cloud (you may have heard of it), and most client-based applications now use it.

Local LAN applications/servers are more likely to be able to handle 1 Gbps sustained. Are there many cases where these applications REQUIRE more than 1 Gbps in a specific direction AND operate in a silo where no other clients are present and moving some traffic? The answer is a bit self-evident. No.

Cost is always king.

Getting business-minded for a minute, it’s hard to believe that anyone will want to pay for 10 GbE at the edge for all APs, and no one wants to pay for higher-grade Cat7 cabling (true that Cat6 may be reasonable today) either. And of course, running multiple copper cables for each AP with link aggregation is cost prohibitive and, in most cases, superfluous. Just show the budgeteers the real-world likelihood of saturating a single, lower-cost 1 Gbps link and the budget czar will trump that decision as fast as a politician will lie. If sound technical reasoning doesn’t win, money always will.

What about 802.11ac Wave-2?

All signs point to Wave-2 11ac APs being either 3-stream (still) or—more likely—4x4:4-stream (at 1733 Mbps on 5 GHz). These boxes will also support 160 MHz channels with higher data rates. So the reasoning for the sufficiency of gigabit backhaul for Wave-2 goes something like this:

160 MHz channels are really best suited for SOHO environments. Accommodating them in enterprise products is simply not practical. Even if you wanted to, most enterprise client devices are unlikely to support 160 MHz-wide Wi-Fi channels.

That 4th stream won’t change real-world throughput much.

Taking all the previous arguments regarding client mixtures, application demands, backhaul problems, and high density into consideration, an additional spatial stream on the AP will have little to no impact on backhaul links. Few clients, if any, will support four spatial streams in the first place. Aggregate throughput for each AP will still be constrained by the low and mid-performing clients. Even high-performing clients will struggle to generate nearly 1 Gbps of unidirectional TCP traffic.  

Multi-User MIMO does not increase maximum backhaul load either.

Now you might be thinking that MU-MIMO, or the ability for an AP to concurrently communicate with multiple clients, has a chance to change all this. Uh, no.

There’s no doubt that MU-MIMO should improve airtime efficiency where there are many single-stream client devices and mostly downlink traffic. But, the AP still only has four spatial streams, and MU-MIMO will not be used for every transmission. In many cases, MU-MIMO transmissions will still go to only two single-stream clients simultaneously, which will not come close to the gigabit ceiling.  
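The arithmetic for that two-client MU-MIMO case is quick to verify. The per-stream rate and TCP efficiency below are illustrative assumptions:

```python
# Illustrative arithmetic (assumed rates) for a typical MU-MIMO group of two
# single-stream clients, as described above.
per_stream_rate = 0.433  # Gbps, one spatial stream at 80 MHz (VHT MCS 9)
group_size = 2           # two single-stream clients served simultaneously
tcp_efficiency = 0.65    # best-case TCP goodput fraction

mu_goodput = per_stream_rate * group_size * tcp_efficiency
print(round(mu_goodput, 2))  # 0.56 (Gbps): well under the gigabit ceiling
```

Even a full four-client group of single-stream devices only doubles this figure in theory, and real airtime overhead would pull it back down.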

Everyone has neighbors.

Wi-Fi performance is almost always dependent on RF conditions. While it’s true that maximum data transfer in a clean lab environment may get up close and personal to a gigabit ceiling more often in Wave-2, the problem is that these same high-performance networks must share airtime with neighbors.  Looking forward, it’s inevitable that there will still be a lot of 802.11n networks everywhere, and we will just have to cope with the realities of backward compatibility.

Stop gig stressing.

The moral of the story is this: While theoretical scenarios could strain a single gigabit backhaul, there’s just no way that real-world client mixtures, RF environments, application requirements, and network infrastructures are going to saturate the full capacity of a high-performing full-duplex gigabit link. So don’t be fooled by vendors wanting you to upgrade your wired networks based on theoretical scenarios and arguments. In the words of Nancy Reagan, “just say no.”