October 29, 2014

Small Business Gets Big Wi-Fi, Finally!

In a wireless world that's so dependent on reliable connectivity, there's something small business owners will tell you: Wi-Fi for small businesses really stinks.

The small business sector is one of today’s most underserved and overlooked markets, and the opportunity to provide these businesses with better Wi-Fi is compelling, to say the least.

According to 2011 U.S. Census data, there were nearly 6 million small businesses with actual employees in the United States. Firms with fewer than 500 workers accounted for 99.7% of those businesses, and businesses with fewer than 20 workers made up 89.8%.

This is a big market. And these businesses deserve some love.

[Chart: Dell'Oro Group enterprise WLAN market forecast]

Selling business-class Wi-Fi equipment to small businesses looks to be the fastest growing sub-segment within the global enterprise WLAN market.

Dell'Oro Group estimates that the market opportunity for selling enterprise Wi-Fi gear into the small and distributed branch office segment will jump from $700 million in 2013 to $1.4 billion by 2018 (see chart).


A recent survey of 400 U.S. small businesses with retail places of business, commissioned by Devicescape and conducted by iGR, found (to nobody's surprise) that providing free Wi-Fi is good business for increasing:

  • Customer foot traffic
  • Time spent on premises, and (most importantly)
  • The amount customers spend.

The study focused on independent “mom and pop” retail stores, including bars, nightclubs, restaurants, fast food places, coffee shops, clothing boutiques, book shops, and salons.

With more wireless-only devices, savvy users and mobile business applications needing higher capacity and more reliable Wi-Fi access, small businesses have been, well, stuck.

And when it comes to Wi-Fi today, small businesses have few reliable choices.

Most small businesses are typically forced down-market to use consumer-grade Wi-Fi equipment (including Wi-Fi integrated into cable modems and DSL routers provided by service providers). These solutions lack the features, functionality and gusto to adequately meet the growing demands for better and more reliable wireless connectivity.

Another [not so great] option has been the use of enterprise class wireless LAN (WLAN) systems. While feature-rich, these solutions are simply overkill and way too expensive and technically daunting for small organizations with no dedicated IT experts (which is pretty much every small business on the planet).

What the market craves is some sort of system that bridges this growing gap, with business-class Wi-Fi reliability and pervasive performance at consumer-type prices. And it must be brain-dead simple to use.


A New Way to Wi-Fi with Xclaim Wireless



Looking to solve these problems, Ruckus today took a big step into the small business market with Xclaim Wireless.

Xclaim is a business-class Wi-Fi system, insanely priced and simple, simple, simple to configure and install.

This isn’t merely a repackaged Ruckus enterprise product simply de-featured at consumer price points. Rather, it’s a new way to Wi-Fi, uniquely developed and designed for the small business market.  

No controllers, nerd knobs, or complex network settings to memorize. Xclaim redefines the notion of better Wi-Fi for small business by combining enterprise-class power and reliability with the simplicity that small businesses are clamoring for.

At the heart of Xclaim is a custom-built (and yes, FREE) mobile application, Harmony for Xclaim, that puts Wi-Fi management into the palm of your hand, radically simplifying the configuration, management and monitoring of Wi-Fi networks. We're talking grandparents-can-do-it simple (watch this).

Pundits are already xclaiming what they think about all this. 

So say goodbye to the days of amenity Wi-Fi as the norm for small business. Now there's a powerful business-class Wi-Fi solution for mobile connectivity that offers tremendous benefit for businesses and their customers alike, without either going broke or crazy.

Visit Xclaim Wireless to learn all about it.

October 27, 2014

Hotspots Get Hotter with Release 2 of Hotspot 2.0



Hotspot 2.0 Release 2 is here – expanding and improving on the considerable innovations introduced with HS2.0 Release 1.

At Ruckus, we’ve always been huge fans of Hotspot 2.0 and have taken an active part in its testing and development.  With Release 2, Hotspot 2.0 gets even better. 

Hotspot 2.0 (HS2.0), often referred to as Wi-Fi Certified Passpoint, is the new standard for Wi-Fi public access that automates and secures the connection. It addresses the two major challenges with legacy hotspots:

  1. the often-confusing task of connecting (which SSID? what's this captive portal thing? does this even have Internet access?), and

  2. the open/unencrypted airlink connection.

Hotspot 2.0 also enables us to interconnect all the "islands" of hotspots into larger footprints via roaming agreements between Wi-Fi operators.

Early examples include the recent announcements of bidirectional roaming between the Time Warner Cable and Boingo Passpoint services, and AT&T’s release of a new Wi-Fi Hub service with Hotspot 2.0 support.

Release 1 of HS 2.0 was based on the IEEE 802.11u standard and introduced new capabilities for automatic Wi-Fi network discovery, selection and 802.1X authentication based on the Access Network Query Protocol (ANQP).  

With Hotspot 2.0, the client device and access point now exchange information prior to association using ANQP. The AP advertises the "backend" service providers (SPs) that can process authentication requests and are reachable from this hotspot. The client then checks to see if it possesses a credential for one of those SPs. If it does, the client proceeds to associate and then authenticate to the network using 802.1X and the provisioned credential. Supported client credentials include SIM cards, USIMs, X.509 certificates and username/password pairs. Each credential is associated with a specific EAP type.

The primary benefits of Release 1 were automating the connection experience at hotspots where the client credential was accepted and providing a secure, encrypted airlink for public Wi-Fi. A secondary benefit is the ability to support multiple roaming partners over a single SSID, with SSID proliferation having become an increasing issue for operators looking to expand their footprint through roaming relationships.
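To make that pre-association decision concrete, here is a minimal sketch in Python of the matching logic a Release 1 client performs after the ANQP exchange. It is purely illustrative (not any real client or Ruckus API); the realms and credential names are invented for the example.

  # The AP's ANQP response advertises the NAI realms of service providers it
  # can authenticate against; the client looks for a provisioned credential
  # that matches before it ever associates.

  # Each credential type maps to a specific EAP method.
  EAP_FOR_CREDENTIAL = {
      "sim": "EAP-SIM",
      "usim": "EAP-AKA",
      "x509_certificate": "EAP-TLS",
      "username_password": "EAP-TTLS",
  }

  def select_credential(advertised_nai_realms, provisioned_credentials):
      """Return the first provisioned credential usable at this hotspot, if any."""
      for cred in provisioned_credentials:
          if cred["realm"] in advertised_nai_realms:
              return cred, EAP_FOR_CREDENTIAL[cred["type"]]
      return None, None

  # Example: realms listed in the ANQP response vs. what the device holds.
  advertised = {"wlan.mnc123.mcc310.3gppnetwork.org", "example-cable.com"}
  creds = [{"type": "username_password", "realm": "example-cable.com"}]

  cred, eap = select_credential(advertised, creds)
  if cred:
      print("Associate, then authenticate via 802.1X using", eap)
  else:
      print("No matching credential: stay disassociated or ask the user")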

Release 2 is largely focused on standardizing the management of credentials: how they are provisioned, how they are stored on the device, how they are used in network selection, and how long they are valid. Some of these capabilities aren't applicable to cellular credentials (SIM/USIM), because those are provisioned by the home mobile network operator (MNO) and are themselves the stored credential. But what about all those Wi-Fi-only devices; how do we get them provisioned for service (and perhaps even linked to the subscriber's cellular data account)? And what if the SP wants to apply some policy as to how its credential may be used (including the cellular credentials)? How do we expire a credential after a certain amount of time or usage? What do we do if a device submits a credential that has expired? And how can we do all of these things in a manner that preserves the security of the subscriber and their credential? These are some of the issues that the smart folks in the Wi-Fi Alliance's® Hotspot 2.0 Technical Task Group are addressing with Release 2 of Hotspot 2.0.

Making Smart Phones Even Smarter.


 Until Release 2 there was no standard format for managing a Hotspot 2.0 credential on a client device. Depending upon the OS or manufacturer, a text or XML file was typically used, but these might have different naming conventions, syntaxes, and locations within the file system. Release 2 leverages the Open Mobile Alliance’s Device Management (OMA-DM) framework, which provides a standardized XML tree structure within which different information can be stored in a consistent manner.




Release 2 specifies a new Per Provider Subscription Management Object (PPS-MO), which is one or more branches in the OMA-DM tree containing all of the information related to the Hotspot 2.0 credential(s) on the device. The credentials themselves may be stored in the PPS-MO (e.g. a username/password pair), or they may be located elsewhere on the device (e.g. a SIM or X.509 client certificate) and referenced within the PPS-MO. However, the PPS-MO doesn't just contain the credential information; it also standardizes the storage of some associated Release 1 parameters and introduces a whole range of new ones. Click on the table to see a few of the new Release 2 parameters for comparison.
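For a rough feel of that per-provider tree, here is an illustrative sketch. A real PPS-MO is an OMA-DM (XML) management object, not Python, and the node names and values below are simplified placeholders rather than exact spec names; the point is the per-provider branch structure.

  # Simplified, hypothetical view of one PPS-MO provider branch as nested data.
  pps_mo = {
      "PerProviderSubscription": {
          "ExampleCableOperator": {                    # one branch per provider
              "HomeSP": {
                  "FriendlyName": "Example Cable Wi-Fi",
                  "FQDN": "example-cable.com",
                  "RoamingConsortiumOI": ["001122"],
              },
              "Credential": {
                  "UsernamePassword": {"Username": "sub123", "EAPMethod": "EAP-TTLS"},
                  "ExpirationDate": "2015-10-29T00:00:00Z",
              },
              "Policy": {
                  "PreferredRoamingPartnerList": ["partner-wisp.example.net"],
                  "SPExclusionList": ["UntrustedSSID"],
              },
              "SubscriptionUpdate": {
                  "UpdateInterval": 86400,             # seconds between update checks
                  "URI": "https://remediation.example-cable.com",
              },
          }
      }
  }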

It's important to understand that the credential information and associated parameters for each provider are stored in a separate branch of the PPS-MO tree. Further, only the provider who provisioned the credential is allowed to modify any of the parameters for that credential. So, a SIM credential branch from your cellular provider might contain preferred roaming partners and blacklisted SSIDs that apply when using EAP-SIM, while a username/password credential branch from your cable operator could contain a different set of policies to follow when using that credential with EAP-TTLS. Consistent with Release 1, Release 2 upholds the user's preference as the ultimate decision maker for network selection, providing the ability for the user to prioritize multiple subscriptions/credentials.

A Few New Backend Servers Needed.

With Release 1, the only supporting servers required were the AAA servers providing the client authentication, or perhaps acting as gateways to a mobile operator's Home Location Register (HLR) for EAP-SIM authentication. Release 2 adds a number of new server elements to support service registration, credential provisioning and credential management, and to ensure the security of the client and credentials. Here's an overview of these new server elements:

  • Online Signup (OSU) Server
    Registers new users for service and provisions them with a credential.
  • Policy Server (PS)
    Provisions network detection and selection policy criteria for the provider's issued credential.
  • Subscription Remediation Server (SubRem)
    Corrects any issues with the issued credential, policy or subscription, and also renews prepaid-type credentials.
  • Certificate Authority (CA)
    Generates and issues client certificates if TLS authentication is used.


All Release 2 clients receive Trust Roots that link to the Wi-Fi Alliance’s® PKI.  This means that clients can validate all Release 2 server components and even the provisioning WLAN itself, even before they’ve been provisioned with a credential of their own. Remember that these are logical entities and could be implemented on separate platforms or in a single box, perhaps combined with the AAA server.

How does it all work?

A Release 2 client will see the Release 2 support in the Hotspot 2.0 indication element of the AP's beacons and probe responses.

The client then sends an ANQP query to the AP. In the ANQP response, the AP indicates that Online Signup services are available and lists the OSU providers that are reachable from this hotspot. Since the client does not have a valid credential associated with this hotspot operator, or any of its roaming partners, it does not proceed to automatically associate and 802.1X authenticate. Instead, while it is still in the pre-association phase, the user will be notified that Online Signup services are available. If the user elects to sign up, they will be presented with a list of the available Online Signup providers. The list is typically displayed as an icon, operator friendly name, and description for each operator. The icon and friendly name are actually embedded within the PKI certificate issued to the OSU server, thus ensuring that clients don’t connect to “rogue” provisioning systems. Remember that everything described so far has happened while the client is not yet associated to any WLAN.

It’s also important to note that with Release 2 of HS 2.0, a new type of WLAN is being introduced, the OSU Server-only authenticated layer 2 Encryption Network (OSEN).  Release 2 OSU deployments can use either Open or OSEN WLANs for the client provisioning process. 

[Image: The OSU Provider List on a Samsung Galaxy S5]

The intent is to ensure that the client is connecting to a valid/trusted OSU WLAN and that the registration and provisioning servers are authenticated. In order to accomplish this, there will be new Public Key Infrastructure (PKI) root trusts loaded into Release 2 clients. These will be used to validate OSU servers and the OSU WLAN if the OSEN option is used. 



Once the user selects an OSU provider from the list, the connection manager on the device will connect to the OSU WLAN (Open or OSEN). It then triggers an HTTPS connection to the OSU server URI, which was received with the OSU provider list. The client validates the server certificate to ensure it is a trusted OSU server. The user is then prompted to complete some type of online registration through their browser.

The final step of this registration is the provisioning of the credential and parameters to the client. Now that the client has a valid credential for the production HS2.0 WLAN, it disassociates from the OSU WLAN and connects to the HS2.0 WLAN using the standard ANQP mechanisms. The connection manager also factors any configured policies into its selection decisions when utilizing the credential. From then on, the credential provider can use this framework to update the credential, policy or subscription of the device by indicating via RADIUS messaging that the client needs to contact one of the provisioning servers, and/or the client device can initiate an update based on configured intervals or user action.
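Putting the whole Release 2 sign-up sequence together, here is a highly simplified, hypothetical walk-through in Python. Every name, SSID and URI is invented for illustration; the real logic lives in the device's connection manager and operating system.

  def online_signup_flow():
      # 1. Pre-association: the beacon advertises Release 2 support, and ANQP
      #    lists the OSU providers reachable from this hotspot.
      osu_providers = [
          {"friendly_name": "Example Cable Wi-Fi", "osu_ssid": "ExampleCable-OSU",
           "osu_uri": "https://osu.example-cable.com/signup", "osen": True},
      ]

      # 2. The user picks a provider (still not associated to any WLAN).
      provider = osu_providers[0]

      # 3. Join the OSU WLAN (Open or OSEN) and open an HTTPS session to the OSU
      #    server, validating its certificate against the WFA trust roots.
      print("Connecting to OSU WLAN:", provider["osu_ssid"])
      print("Validating OSU server certificate for:", provider["osu_uri"])

      # 4. Complete browser registration; receive the PPS-MO (credential, policy,
      #    subscription parameters) via OMA-DM.
      pps_mo = {"Credential": {"Username": "newsub", "EAPMethod": "EAP-TTLS"}}

      # 5. Leave the OSU WLAN and join the production HS2.0 WLAN using the
      #    standard ANQP mechanisms and the freshly provisioned credential.
      print("Disassociating from OSU WLAN; joining production HS2.0 WLAN with",
            pps_mo["Credential"]["EAPMethod"])

  online_signup_flow()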

What’s Next?

The Wi-Fi Alliance recently held a formal launch event for Release 2 of HS2.0 at its Wi-Fi Global Congress at the Palace Hotel in San Francisco. Ruckus performed the public demonstration of Release 2 at the launch event, where the WFA announced that the Ruckus OSU server suite is one of two selected for the Passpoint Release 2 Certification Testbed. The Ruckus SmartCell Gateway and ZoneFlex Access Points have already been certified for Passpoint Release 2.

On the client side, Samsung already has two models of the Galaxy S5 that have been certified, and there are a number of certified chipset reference designs available from companies like MediaTek, Broadcom, Qualcomm Atheros, and Marvell. Intel has also received certification for the 7260.HMWG adapter.

The Wireless Broadband Alliance (WBA) is planning its Next Generation Hotspot (NGH) Phase 3 trials, which will be based on Hotspot 2.0 Release 2. We expect a number of operators to participate in the NGH Phase 3 trials and some to conduct their own private trials. Commercial deployments will follow.

 So it looks like hotspots will be heating up even more with Online SignUp and standardized credential management, which is great news for everyone. 

July 19, 2014

Want Better Wi-Fi!

I talk to thousands of people every year who are wanting. They want bigger homes, more relations, bosses without attitudes, faster cars, and more powerful firearms. While a seemingly random list, one "want" at the top of almost everyone's list is simply better Wi-Fi.

But what constitutes better?  What does good Wi-Fi look like to you?

If you’re a prospective Wi-Fi purchaser of any kind, I’ll let you in on a secret: every supplier (us included) wants to be the first to get to you. And being the first means everything.

It lets suppliers set the stage and define the Wi-Fi problem on their own terms, whether or not it's the real problem you're facing. If you believe it, they will, of course, proceed to tell you how they are the only company that can solve the problem.

It's an age-old game that has been around as long as humans have known how to peddle.

You’ll hear stories of how controllers will kill you, more than one channel will cause your mobile devices to erupt in flames, if your AP and your switch don’t have the same logo your toilets will flush backwards and "hey, we make good printers so how could we go wrong with Wi-Fi?”

ALL of these stories are told from the wrong perspective. They are told to scare or impress you, the network administrator. We believe you need to look beyond what you see on the screen during a product demo and think about your users.

They really don't give a hoot about architecture, about how many channels there are or who has the fanciest antennas.

They only care about connecting quickly and reliably without dead spots or those dreaded buffering circles that go round and round. They'll never see the Wi-Fi system interface or know how you got it all to work. They WANT BETTER WI-FI.

But the big challenge within the Wi-Fi world is how to quantify "better." All of us spout signal-to-noise ratios, dBs, multi-user MIMO, adaptive antennas and PD-MRC, but what does it all mean?

I’ve decided to tackle this topic in a new webinar series called WANT BETTER WI-FI. These educational webinars are designed to teach you, not sell you (too much), on the keys to creating a better Wi-Fi experience for your users. We’ll answer questions like:

  • Why does signal really matter? 
  • How can you not only deal with Wi-Fi interference but fix it?
  • Should you wait for 802.11ac Wave 2?
  • Will MU-MIMO really give me 4x better performance?

But to answer these questions we have to start with the fundamentals, and that is where we begin in the first webinar. Even though we call it fundamentals, it's anything but. This is the place to start, and it's vital to understanding how to truly make Wi-Fi better.

In the second webinar, we'll begin to drill down on the new 802.11ac amendment: its protocol challenges, multi-user MIMO and how to maximize 802.11ac's real potential.

The last webinar in our WANT BETTER WI-FI series will take you into a lot more technical detail, but it’s worth it.

We’ll explore the pros and cons of transmit beamforming (TxBF) and compare them against adaptive antennas. And we’ll also recap and show how an entire system that’s designed for high capacity should perform.

So register here today for our WANT BETTER WI-FI webinars and we'll teach you how you can get it and give it to your users without having to sit through another vendor presentation.

Just THAT will be worthwhile enough. :) 

July 02, 2014

Do You Believe in Magic?

We don't. But we DO believe in insanity, which seems to be running rampant these days in the networked world.

After reading Gartner's new 2014 Magic Quadrant for the Wired and Wireless LAN Access Infrastructure, we are feeling a bit disenfranchised.

It's not so much that Ruckus failed to meet the qualifications for participating in Gartner's annual MQ report because we don't sell a wired solution (we're a wireless company for heaven's sake). It's because we believe the approach is fundamentally flawed when it comes to how the bulk of the world now purchases a wireless LAN solution.

And we're not the only ones that feel this way. (As a publicly-held company we've learned to use "we feel" statements a lot more). 

Lee Badman, at Syracuse University, one of the industry's more brainiac IT people, nailed it (read his hammering here). At Ruckus, we've been down this rant before.

Last year, we bitched and moaned (in what became one of our most popular blog posts) about how Gartner abandoned their wireless LAN Magic Quadrant in favor of one that combined wired and wireless LAN access.

At first we just thought the change was simply a response to big clients with big money wielding their big influence. That spawned several conference calls with Gartner's Ombudsman (no kidding) and several revisions of our not-so-nice stream of consciousness. But we were wrong; dementia now seems to have set in.

To be fair, Gartner is more focused (as are suppliers like Cisco and Aruba) on the high end of the enterprise market (i.e. Fortune 500) where a unified wired and wireless access strategy makes sense.

These types of companies often have 25 IT guys that sit around a table for hours debating the finer points of intelligently filtering TCP and UDP packets based on application layer protocol session information found in the third octet of every jumbo frame packet.

But for the lion's share of the world's enterprise market (the unfortunate 50,000) where there are only one or two IT staff responsible for all things networking, wires are just a really good way to connect Wi-Fi access points.  

The thing is, and there's no way around this, Wi-Fi and Wi-Fi-capable devices have become so pervasive and important that most enterprises and service providers now look for best-of-breed, pure-play Wi-Fi solutions, building their networks around reliable mobility first and foremost. This typically leads them to suppliers that offer simply better wireless.

Think about it, if users can't get connected reliably to a Wi-Fi network (which is now the default preference for access), then how is wired access going to help?  

Crazy, huh? That's what we thought.

June 14, 2014

Not So Random Thoughts on Privacy and Positioning

Earlier this month at Apple's Worldwide Developers Conference (WWDC), Frederic Jacobs uncovered that with the upcoming Apple iOS 8 operating system, Apple devices will be able to hide or mask their MAC address, as a privacy mechanism, by randomly generating a fake MAC address to present to the Wi-Fi network.

In iOS 8, Wi-Fi scanning behavior has changed to use random, locally administered MAC addresses within Wi-Fi probe requests and responses (the way devices and access points talk to each other to determine if a connection can be established). Many expect Google to make similar changes within its Android OS.

Media access control (MAC) addresses are unique identifiers that are assigned by device manufacturers. A MAC address is hard-coded onto the device’s network interface and is unique to it. These addresses are essential for networking and network diagnosis because they never change, as opposed to a dynamic IP address that can change as users move around. For a network administrator, that makes a MAC address a more reliable way to identify senders and receivers of data on the network.
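As a side note, randomized addresses are distinguishable from burned-in ones: a randomized MAC sets the "locally administered" bit in its first octet. A minimal sketch of that check (illustrative only, with made-up example addresses):

  # Check the locally administered bit (0x02 in the first octet) of a MAC.
  def is_locally_administered(mac: str) -> bool:
      first_octet = int(mac.split(":")[0], 16)
      return bool(first_octet & 0x02)

  print(is_locally_administered("3c:07:54:12:34:56"))  # False: manufacturer-assigned
  print(is_locally_administered("da:a1:19:ab:cd:ef"))  # True: randomized/local address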

The news caused a fair bit of consternation among companies that use MAC addresses as a way to identify and locate client devices being used in public spaces for the purposes of improving the customer experience when using Wi-Fi networks.

Many see Apple's move to randomize MAC addresses as simply a way to push its iBeacon technology. iBeacon already uses Bluetooth Low Energy (BLE) technology, for which Apple also randomizes the addresses. But make no mistake: iBeacon will undoubtedly benefit from this action. More to the point, it's a good way for Apple to remain publicly conscientious about user privacy concerns while helping its iBeacon business along.

Protecting user privacy is nothing but goodness. Most people don’t want personal information about them, like their age, birthday, gender and what color underwear they have on, exposed to anyone who might use it for some nefarious purpose.

But here's the thing. And it's an important thing. MAC addresses don't expose ANY of this kind of information. Users are personally identifiable only after they have logged onto the Wi-Fi network and/or signed into a mobile app (e.g., a shopping app or an app for a convention venue) where they provide details to gain access or opt in to obtain information of use to them (promotions, directions, alerts, etc.).

While being able to hide the unique MAC address on a device seemingly provides an added level of protection and privacy for users, it effectively prevents increasingly popular passive network-based Wi-Fi location services from identifying and tracking devices that aren’t connected or associated to the Wi-Fi network but are still “talking” to it.

This means value-added services that users want and businesses have been demanding could be diminished.

MAC addresses can be tracked whether or not users actually connect to a Wi-Fi network. Even when people aren’t using or connected to a Wi-Fi network, their device (if Wi-Fi is turned on) still continues to let the network know that it’s around by transmitting probe requests.

This information is extremely useful for Wi-Fi-based location and positioning systems that are designed to provide invaluable analytics that can be used by businesses to deliver customized services to their clients who they know are within a given area.

The biggest impact of this move by Apple is on devices that are not associated with the Wi-Fi network (see chart). All associated devices remain unaffected by any changes to MAC-address randomization on any mobile OS. In addition, many advanced location-based systems, like the Ruckus Smart Positioning Technology (SPoT) service, already make use of sophisticated hashing performed on MAC data to maintain user privacy. While these systems won't see any reduction in the accuracy of the location services they deliver, they will now have less data available to make use of as a result of Apple's move. What a shame.
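For readers curious what "hashing performed on MAC data" can look like in practice, here is a hedged sketch of one common approach, a keyed one-way hash. It is illustrative only and is not the actual SPoT implementation; the salt and example MAC are placeholders.

  import hashlib
  import hmac

  VENUE_SALT = b"per-venue-secret"  # hypothetical per-deployment secret key

  def anonymize_mac(mac: str) -> str:
      """Return a keyed, non-reversible token derived from a MAC address."""
      return hmac.new(VENUE_SALT, mac.lower().encode(), hashlib.sha256).hexdigest()

  # The same device always maps to the same token, so footfall and dwell time
  # can still be measured without storing the real identifier.
  print(anonymize_mac("3c:07:54:12:34:56"))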

Fortunately iOS devices will remain identifiable. Despite MAC-address randomization, these devices have a unique, and known, range of MAC addresses. By eliminating all unassociated iOS devices from the database (positioning engine), the integrity of the user/visitor profile in a venue is maintained.

The good news for venues like malls, hotels, airports and convention centers, as well as value-added resellers and carriers looking to deliver location-based services using Wi-Fi, is that the impact of the iOS 8 feature is limited.

Because only unassociated iOS devices are missed (again, see chart above), organizations can continue to engage and locate a majority of their users — still a significant number compared to the limited pool of users who hope to be identified by Bluetooth signal.

Meanwhile, venues can continue to have access to high-quality location analytics and customer insights, and can continue to engage their users (visitors/customers) with highly targeted location-based services (including promotions and other content). Apple's move will help drive a massive shift from users of unassociated devices toward users with associated devices. With that, organizations need not worry about being unable to engage users and analyze their movement and behavior.

Ultimately, brands, venues and companies must begin to focus on creating customer value and satisfaction that delivers a compelling mobile experience beyond basic wireless connectivity. Location-based Wi-Fi services and brand-based mobile applications remain an ideal way to do exactly this.

So keep watching this SPoT.

June 05, 2014

A Wi-Fi Gamble That Pays Off @Interop Las Vegas?


"Good Luck!"

That’s what I was told when an industry friend heard that we were providing the Wi-Fi service for this year's Interop Las Vegas.

To be fair, the bar for Wi-Fi at Interop was quite low. Historically the Wi-Fi network has received less-than-stellar reviews from attendees, which is exactly why we decided to take on the challenge.

THE DESIGN

From a Ruckus perspective, given our experience at other high-density events and venues, such as MacTech and Time Warner Arena, we knew that 5 GHz adoption would be quite high. In fact, no 2.4 GHz was provided on the exhibit hall floor due to high amounts of interference.

This was something that Glenn Evans, Interop's Network Director, was adamant about because of his experience at the show. The interference is high because almost every vendor brings their own system to perform demos. With so little bandwidth available in 2.4 GHz, physics will always win out. 

There were 18 APs placed evenly around the exhibit hall. Coverage, of course, was not the primary design issue. We designed for capacity.

One AP (even set at low transmit power) could have covered the entire show floor. However, we wanted to ensure the highest data rates and spectrum efficiency, so we determined that, given AP mounting locations and available backhaul, 18 was the right number to achieve that goal.

More than 50 APs were used to cover the various areas off of the exhibit floor. These areas included small breakout rooms, larger meeting rooms and the massive keynote ballroom.

Of the 68 APs used, most of them were 7982s (3x3:3 11n). In addition, we sprinkled our new 802.11ac access point, the R700, in strategic spots.

We originally wanted 802.11ac everywhere but knew it would be overkill and that 802.11n would be more than adequate for this event, given the low number of 11ac clients that could actually benefit from the new standard.

Due to the open-air high density of the show floor we used 20 MHz wide channels only.

In any ultra high-density deployment, more, smaller channels are preferable to reduce Wi-Fi contention. Off the show floor, we enabled 40 MHz channels since there was enough RF separation between APs for proper channel reuse without causing co-channel interference. In both scenarios we enabled DFS channels. Although some people tend to shy away from using them, the reality is there isn’t much to be worried about unless you have specific limitations imposed by your client devices.
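Expressed as a simple configuration sketch (purely hypothetical, not actual Ruckus or ZoneDirector syntax), the channel plan described above boils down to this:

  # Hypothetical channel plan for the two RF environments at the show.
  channel_plan = {
      "show_floor": {
          "bands": ["5GHz"],        # 2.4 GHz disabled due to exhibitor interference
          "channel_width_mhz": 20,  # more, narrower channels to cut contention
          "allow_dfs": True,        # DFS channels enabled for extra spectrum
      },
      "off_show_floor": {
          "bands": ["2.4GHz", "5GHz"],
          "channel_width_mhz": 40,  # enough RF separation for clean channel reuse
          "allow_dfs": True,
      },
  }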

THE CHALLENGES

High density environments are always a challenge, but besides the sheer density of client devices, there were some unique challenges to the deployment. The first one we encountered was simple enough: how to hide the APs. On the show floor was easy but off the show floor proved to be a bit challenging. So, in perfect Ruckus fashion, we used a bit of ingenuity and strength (they were heavy!) to conceal the APs (see picture above).

On a more technical level, we had one major challenge that showed itself in two ways: client roaming.

The Interop network is really two networks. One is 'show floor' and the other is 'off show floor', which are served by different router and switch vendors but only one wireless vendor. This created the problem of two separate layer 3 networks.

So what's the problem? For ease of use and good design, the same SSID was used throughout the show. The problem is, when an attendee moves from one L3 network to the other, their device may not realize that it has moved to a different layer 3 network and may keep trying to use its old IP address.

There are two conventional solutions to this problem:

  1. Two separate SSIDs. This would be the easiest technical solution but not the cleanest user experience.

  2. Tunnel one set of APs to a destination (a controller in this case) on the other network. This would have also worked fine, but we had another trick up our sleeve.

The second client roaming problem was that client devices don’t always roam well. In an open-air environment with few obstructions, client devices tend to make very poor roaming decisions because so many APs look alike from a signal perspective. Or, better stated, they don’t look different enough.

To illustrate this, let's look at a test we performed. I started at one end of the hall and ran a speed test. We achieved approximately 35 Mbps to the Internet, which we considered acceptable given single-stream 20 MHz channels. I then moved to the next location, about three rows down. 25 Mbps. Three more rows, 13 Mbps. Yet three more, 5 Mbps. Why the change? The client (a gold iPhone 5s in this case) was sticking to the original AP. Even if I turned off Wi-Fi and reassociated, the phone would choose the original AP.

In reality 5 Mbps is plenty for an attendee to do what they want on the network. However, the underlying problem is that airtime used for lower data rates is very detrimental to the network’s capacity. Fortunately, Ruckus has a system that worked to solve both of these problems: SmartRoam.

THE TECHNOLOGY

SmartRoam was designed for public access networks just like this one. SmartRoam should not be confused with client load balancing or band steering. SmartRoam exists because most (if not all) client devices make very poor AP connection decisions. The poor choice shows up occasionally during initial association but more frequently during roaming. To solve that problem SmartRoam is designed to force a client device to move to the best AP for that client device. The key word here is force. There is no current widely supported Wi-Fi protocol to inform the station that it should move to another AP. So in order to keep stations connected to the best AP, more drastic measures need to be taken.

SmartRoam is a break-before-make protocol. When it senses the client device needs to move to a better AP (dictated by user-adjustable metrics), it will deauthenticate the client and then withhold probe responses from that AP while the better AP (based on signal and available throughput) continues to send probe responses. As you are already thinking, this isn't an ideal scenario. It sounds very bad to purposely break a client device's connection, but in the case of high-density public environments, the increase in available capacity for all is worth it. As a great man once said, "The good of the many outweighs the good of the one, or the few."
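To make the idea concrete, here is a hedged sketch of the kind of "nudge the sticky client" decision described above. The thresholds and function names are invented for illustration; the real SmartRoam metrics are vendor-internal and adjustable.

  ROAM_RSSI_THRESHOLD = -70  # dBm: client is getting weak on its current AP
  ROAM_HYSTERESIS = 8        # dB: a candidate AP must be this much better

  def should_force_roam(rssi_on_current_ap: int, rssi_on_candidate_ap: int) -> bool:
      """Decide whether to deauth the client so it reassociates to a better AP."""
      weak_here = rssi_on_current_ap < ROAM_RSSI_THRESHOLD
      clearly_better = (rssi_on_candidate_ap - rssi_on_current_ap) >= ROAM_HYSTERESIS
      return weak_here and clearly_better

  # Example: a client heard at -76 dBm here but at -62 dBm by a neighboring AP.
  if should_force_roam(-76, -62):
      # Break before make: deauthenticate, then withhold probe responses on this
      # AP so the client's next scan lands on the better one.
      print("deauth client; suppress probe responses on current AP")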

After a short tuning period testing different scenarios, we found that SmartRoam allowed clients to connect anywhere on the exhibit hall floor and always connect to the best AP. This was proven by running multiple throughput tests and showing little deviation based on location.

SUCCESS?

The big question that always remains after these types of events is: how should success be measured? Beyond looking at raw data movement (over one terabyte per day), we looked to social media and the opinions of the network team that's done Interop for years.

By both measures we couldn't be happier with the results. There was some risk involved in volunteering to implement our system at a show with so many competing wireless systems, but we were very confident that we could provide world-class Wi-Fi service since, well, that's what we design our systems to do and that is what we expect.

Always. 

May 10, 2014

Beamforming Basics for the Wi-Fi Challenged

With every new Wi-Fi technology or standard, industry fortune tellers are quick to cast adaptive antenna arrays into an early grave. This gets a little (actually, a lot) technical, but it's required.

Distinctly different from chip-based beamforming, for which every vendor now claims support, patented smart adaptive antennas are designed to dynamically and continuously create unique directional RF patterns (what we call BeamFlex™) proven to deliver the highest throughput for each client device at longer ranges.

This innovation, yet to be duplicated en masse, has stood the test of time and remains hugely valuable in the world of Wi-Fi.

The first attempt at dumping adaptive antenna arrays into a death spiral happened with 11a/g and its blazing fast 54 Mbps. Yet BeamFlex survived and, in fact, thrived as the RF environment became more noisy and complex.

It happened again with the introduction of the 802.11n standard that came with the ultra-reliability of MIMO.

And, surprise, surprise, it's happening again with the destruction (sort of) of the gigabit barrier brought by the new 802.11ac standard. The problem is, as history now attests, BeamFlex has only proven to add more value, not less, with each of these technological changes.

At the core of current speculation (that adaptive antenna switching is doomed with 802.11ac) is the notion that transmit beamforming (TxBF) with 11ac replaces the need for Smart Wi-Fi. This is due to the common misconception that smart, adaptive antennas (proprietary Ruckus BeamFlex technology) and transmit beamforming (a standards-based technology) are one and the same. This whitepaper details the differences.

Despite some similarities (such as the goal of enhancing signal quality and the use of the word “beam”), the two technologies are fundamentally different. Here's why:

BeamFlex and TxBF are not Mutually Exclusive.

BeamFlex is truly adaptive antenna technology by which an access point (AP) selects an optimal transmit path out of many possible options. It is fundamentally an antenna technology, combining special hardware and sophisticated software, that sits on top of all radio foundations (in a protocol-agnostic way).


TxBF, on the other hand, is a digital signal processing technique that occurs within the transmitting radio, and is heavily protocol dependent. It attempts to send multiple copies of the same data so as to create constructive combinations at the receiving radio. The beauty of this is that BeamFlex (antenna technology) and TxBF (radio technology) can be perfectly wed; and a happy marriage it is. Ruckus can support both of these techniques at the same time (on some products) to deliver a cumulative benefit to signal quality.

BeamFlex works for all clients.

Because it is an antenna technique—and not a radio technique—BeamFlex works equally well for all clients of all capabilities. This means 802.11a/b/g/n/ac clients all benefit, and there are no special requirements for support. Single-stream, two-, three- and four-stream clients all benefit, and there is no tradeoff as the number of streams increases.

No transmit beamforming and Spatial Multiplexing at the Same Time.

Because it is a radio technique, effective TxBF DOES require client support (something a lot of people fail to understand). Consequently, 802.11a/b/g/n clients miss out on the perks. And some 11ac clients do not support TxBF. Looking at the pervasive adoption of 11n, we should not expect all (or even most) clients to support TxBF even by the end of 2016.

TxBF must also trade off against spatial multiplexing. The same transmitters cannot be used for both. In order to be effective, TxBF systems should have double the number of transmit antennas as spatial streams.

1x1 clients (this means one transmit and one receive radio chain, in Wi-Fi parlance) will be happy with even a 2x2 AP; but for a 2x2 MIMO client to benefit (in any appreciable way) from an AP’s use of TxBF, a 4x4 AP is desired. This also means that a 3x3 client sees miniscule (if any) benefit from an AP that is anything less than 6x6 (yes this means 6 transmitters and 6 receivers, can you even imagine what that looks like?).
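As a quick rule-of-thumb illustration of that antenna-to-stream math (a simplification of the argument above, not a formal requirement from the standard):

  # For an appreciable TxBF gain, the AP should have roughly twice as many
  # transmit chains as the client has spatial streams.
  def desired_ap_tx_chains(client_spatial_streams: int) -> int:
      return 2 * client_spatial_streams

  for streams in (1, 2, 3):
      chains = desired_ap_tx_chains(streams)
      print(f"{streams}-stream client -> {chains}x{chains} AP desired")
  # 1 -> 2x2, 2 -> 4x4, 3 -> 6x6 (which is impractical hardware today)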

BeamFlex does not disrupt the neighbors.

As we use 5 GHz spectrum more heavily, and especially as we expand to support wider channels (80 MHz, not 160) in 11ac, we should expect to see more Wi-Fi protocol contention from APs operating on the same channel (because neighboring APs are more likely to operate on the same channel since we have fewer non-overlapping channels). As a result, our precious 5 GHz spectrum will soon look more like the 2.4 GHz spectrum.

The value of BeamFlex directional transmissions is paramount to preserving capacity with co-channel neighbors. It does so by transmitting with directional patterns that both maximize data rates (allow clients to get on/off wireless airtime quicker) and also avoid sending RF energy where it is undesired (towards neighboring APs). This reduces unnecessary contention with neighboring APs, which drags down capacity.

TxBF is often visualized as a directional steering mechanism, but it works by creating signal peaks in point space (hopefully at the receiver’s antenna), which often creates signal peaks in unintended directions, causing interference where it is undesired.

Multi-User-MIMO is gold for BeamFlex

If you're focused on 160 MHz channels in Wave 2—or other gigabit hype—you're focused on a spec-bloated red herring. 160 MHz is for (some) consumer networks. The biggest bang from 802.11ac comes in the second wave of products (that hopefully come by Christmas of next year) and is what's called multi-user MIMO (MU-MIMO). This is a technique by which the AP can send downlink frames to multiple clients at the same time. And this enhancement will require new hardware (yes, for everyone). It's a lovely protocol enhancement for boosting capacity, given the plethora of very simple 1x1 mobile devices on networks. But it comes with a big catch: signal isolation.

To multiply the efficiency of spectrum with multi-user transmission, we need to ensure that each station receives its data without receiving other stations’ data at the same time (which would cause interference and make MU-MIMO not work as intended).

It's this need for signal isolation that makes Wave2 MU-MIMO and BeamFlex a perfect complement to each other.

BeamFlex adds directionality and signal separation (along with a boost to SNR), while the TxBF component of MU-MIMO provides additional separation at the radio level. Higher data rates per station, MU signal separation, maximum spectrum efficiency. Call it, well, the perfect storm.

Is Wi-Fi performance a commodity?

Now that we’ve further bludgeoned home the technical superiority of adaptive smart antenna arrays (BeamFlex), let’s get to the real issue.

People claim that all AP hardware is the same and 11ac commoditizes Wi-Fi performance. There are two key requirements needed for Wi-Fi performance to become a commodity:

  1. There must be roughly equal—or at least “optimal”—range, capacity, throughput, optimization simplicity, reliability, and interference mitigation from all equipment suppliers, and

  2. The ratio of cost-to-performance needs to be the same for all equipment suppliers.

The Ethernet Example.

If you look at Ethernet, performance commoditization is a given because both of these issues are true.

There is enough performance (Gig Ethernet has been sufficient for its application for many years), and the cost-to-performance of line-rate Gigabit switching is nearly identical across suppliers (hence, trends like NFV are occurring).

But, unequivocally, without the most diminutive doubt, the answer to the question is that Wi-Fi absolutely does not share these characteristics (and perhaps never will).

Is Wi-Fi sufficient as of 11ac, even 11ac with Wave2? Do we currently have sufficient spectrum (or spectrum efficiency) to meet all of our capacity, performance, reliability, app delivery, and network optimization needs? If the answers are no, then it stands to reason that a product supplier that enhances the Wi-Fi experience in light of these technical deficiencies could still differentiate on cost and performance alone (but no one, including Ruckus, is really doing that).

When customers start telling us that Wi-Fi performance is as reliable as it will ever need to be, there's a surplus of radio spectrum, they have no issues connecting devices to the network and there is ample capacity to keep all users and devices happy, we will be happy to admit that adaptive smart antennas should be put to sleep.

Until then it's simply better wireless for everyone who wants to cause a Ruckus.

February 21, 2014

Can Wi-Fi be made easier to use than cellular?

"Making Wi-Fi as easy as cellular" is a popular maxim when engineers, marketeers, and journalists talk about Hotspot 2.0. And it's not hard to understand why. The cellular connectivity experience is well understood in virtually every culture, while, except to those involved with its development and testing, Hotspot 2.0 remains a big unknown. Therefore, to say Hotspot 2.0 makes Wi-Fi connectivity like cellular puts it in terms that most people can understand. In fact, if you look back, you'll find a few Ruckus press releases and presentations that use this very analogy.

However, as we approach the launch of production Hotspot 2.0 networks and begin using this technology in our daily lives, it is important to have a more precise understanding of what it is and how it works.

It's at this point that the comparison with cellular connectivity and roaming falls short of conveying what people need to know. For context, it's best to start by examining some of the similarities and differences between cellular and Wi-Fi with Hotspot 2.0 relative to connecting automatically, authentication, and roaming. Airlink encryption aside, users can be assured that robust security is provided for both cellular and Wi-Fi (with Hotspot 2.0) connections.

Connect Me.

To connect to any type of network, a client device must support the same physical interface and medium access mechanisms (Layers 1 and 2) as the access network.

Sometimes the compatibility cues are obvious. For example, plugging a token ring hermaphroditic connector into a 10BASE-T hub would have been quite an accomplishment, even if futile in terms of passing data. But in the wireless world, there are no visible cables or connectors, and the end user needs a fuller understanding in order to ensure that her device will connect to an available network.

The first consideration is the frequency band. Does the device "talk" on the same frequencies on which the network is operating? Wi-Fi currently operates in swaths of unlicensed 2.4 GHz and 5 GHz spectrum that are largely harmonized globally. The first 11 channels (3 non-overlapping) in the 2.4 GHz band are de facto "world bands," as they are approved in virtually all regulatory domains. The picture in 5 GHz is currently less uniform, but there are sections (5.15-5.25 and 5.725-5.85 GHz especially) that have been, or soon will be, adopted for unlicensed use in most parts of the world. 5 GHz is the current focus of regulatory bodies since 802.11ac requires it, and commissioners are endeavoring to open more common frequencies there.

So for Wi-Fi at least, a dual-band (2.4 GHz and 5 GHz) device bought in the US today will definitely connect to a 2.4 GHz Wi-Fi network in Europe, Africa, or Asia, and can connect to 5 GHz Wi-Fi networks in most areas of the world.

 

In the cellular world, the situation with device support is not nearly as straightforward. 

 

Licensed spectrum is exclusively allocated in much 'thinner' slices to individual mobile operators at the national or regional level, and because the 2G, 3G, and LTE bands vary from country to country, it is impractical to implement a single radio access front end that can support all of the possible RF bands.

One aspect of this is the so-called LTE "band fragmentation" issue. This means that even the most sophisticated handsets have to be produced in a large range of models, which are often specific to a region, country, and/or operator. Even the "international" models can't hope to support all of the possible operating bands for each generation of technology. At last glance there were 19 different models of the Samsung Galaxy S4 in production to support this collection of different cellular bands.

The difference between Wi-Fi’s harmonized bands and cellular’s fragmented bands is underscored by the fact that all of the 19 different models of the Galaxy S4 use the same Broadcom BCM4335 Wi-Fi chipset.

Meanwhile, cellular chipset manufacturers are hard at work creating advanced chipsets and RF front-end solutions that can support large numbers of licensed bands, such as the Qualcomm RF360. As Qualcomm SVP of Product Management Alex Katouzian recently pointed out, "The wide range of radio frequencies used to implement 2G, 3G and 4G LTE networks globally presents an ongoing challenge for mobile device designers."

This severe band fragmentation issue doesn’t exist for Wi-Fi connectivity.

Another challenge that the cellular industry faces with ubiquitous device support is technology schisms. With 2G, 2.5G, and 3G, most of the world settled on GSM/UMTS-based coding and modulation, but big (and globally significant) operators in the US and Korea chose CDMA solutions. A similar split is occurring with LTE. In LTE land, most of the world is standardizing on the Frequency Division Duplexing (FDD) implementation, while China is deploying a version based on Time Division Duplexing (TDD).

The bottom line being that a 3G CDMA handset can’t connect to a UMTS Node B, nor can an FDD LTE handset connect to a TDD LTE eNB.

In contrast to the technology factions that exist within the cellular industry, Wi-Fi modulation and coding implementations have effectively remained uniform as standardized by the IEEE and certified by the Wi-Fi Alliance.

The reality is that Wi-Fi devices are able to connect to just about any Wi-Fi network in the world (and Hotspot 2.0 makes it even easier), while cellular band and technology fragmentation has led to a complex mix of often incompatible devices and networks, especially when traveling outside of the home operator’s coverage area.

Authenticate Me.

Where the cellular user experience truly excels, is in the automatic authentication of the device to the network. Each device is provisioned with a unique identifier that is known, and can be verified, by its home operator’s subscriber database (Home Location Register or Home Subscriber Server – HLR / HSS). The identifier is known as an International Mobile Subscriber Identity or IMSI, and can be embedded in a SIM, USIM, or sometimes in the device itself. 

The IMSI contains the Mobile Country Code (MCC) and Mobile Network Code (MNC) for the home mobile operator, which together comprise the Public Land Mobile Network (PLMN) ID. A device capable of communicating with a cellular access network can examine the PLMN ID(s) being advertised by the network, and if they match its IMSI, be assured that authentication is possible.
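A tiny sketch of that PLMN match follows; it is purely illustrative. MNC length actually varies by operator and is learned from the SIM, so the 3-digit assumption and the example codes below are just placeholders.

  def plmn_from_imsi(imsi: str, mnc_digits: int = 3) -> str:
      """Extract MCC + MNC (the PLMN ID) from the front of an IMSI."""
      return imsi[:3 + mnc_digits]

  advertised_plmns = {"310260", "310410"}   # PLMN IDs the network is broadcasting
  imsi = "310260123456789"                  # identity provisioned in the SIM/USIM

  if plmn_from_imsi(imsi) in advertised_plmns:
      print("Home (or partner) network found: automatic authentication is possible")
  else:
      print("No PLMN match: the device is roaming or cannot authenticate here")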

Wi-Fi authentication historically has been quite fragmented, primarily due to the diversity of its use (residential, enterprise, hotspot, etc.) and the resulting need for different security requirements. With 802.11, authentication can be open system, based on a static shared code (WEP, WPA-PSK, and WPA2-PSK), or based on more sophisticated mechanisms like 802.1X and the Extensible Authentication Protocol (WPA-Enterprise and WPA2-Enterprise). Also, portal-based authentication is often the method of choice for public access Wi-Fi networks, usually in conjunction with 802.11 open auth. These various authentication options are also related to the type of encryption, if any, that is used over the air.

Hotspot 2.0 fixes this by standardizing Public Wi-Fi authentication and security.

With Hotspot 2.0, 802.1X authentication is mandated using EAP-SIM/AKA, EAP-TLS, or EAP-TTLS, with AES encryption required over the air. The authentication credential can be a cellular IMSI, an X.509 client certificate, or a username/password pair.

The inclusion of non-cellular credentials opens up Hotspot 2.0 services to Wi-Fi only devices like tablets, iPod Touches, laptop computers, and even client devices within the worldwide Internet of Things. Supporting a wide range of credential types also provides for a much broader pool of authentication providers, including mobile operators, cable operators, social media companies, hotel chains, and corporations.

Through the use of the 802.11u protocol, a Hotspot 2.0 Access Point (AP) advertises the PLMN IDs, network access identifier (NAI) Realms (think domain name), and Roaming Consortiums (a 3 or 5-byte hexadecimal identifier issued by the IEEE) for which it can authenticate credentials. 

The client device examines these various markers being advertised by the AP, and if there is a match with one of its provisioned credentials, it knows that automatic authentication is possible, and proceeds to connect and begin the EAP process.

Let's Roam.

Cellular network roaming is often portrayed as a successful model that Hotspot 2.0 should attempt to emulate. But is it really?

Even when consumers have devices that are compatible with a visited cellular network, it turns out they are quite hesitant to connect. Rightly or wrongly, cellular roaming has become synonymous with "bill shock," "highway robbery," and "OMG" in the minds of the general public, and especially CEOs.

This issue was highlighted by the European Commission in a recent survey report showing that a large percentage of Europeans either disable cellular roaming, turn off their mobiles altogether, or drastically curtail their usage when traveling abroad within the region.

Cellular roaming charges are perceived to be such an issue that some upstart carriers are seeking to gain market share by promoting low, or no, cost roaming plans, seeing this as a significant differentiator from the status quo.

Another symptom is the growing abundance of airport vending machines and kiosks waiting to provide local SIMs and prepaid plans to arriving visitors with unlocked devices. The calculation for the consumer is simple.

Option 1: spend $20-30, and perhaps a little time hassling with APN settings, to get a generous amount of voice/text/data from a local operator, or

 Option 2: roam at will using your home IMSI, make a call or two, but be sure you don’t let GMail, Facebook, or Twitter use any cellular data for the duration of your trip, risking the potential $1,000+ bill you can’t seem to expense.

Harbingers for Hotspot 2.0?

Admittedly still in its infancy, Hotspot 2.0 may create radically different models for roaming, or authentication peering. Some precursor services, like Eduroam and the Cable WiFi alliance, provide some indication as to how it is likely to evolve.

Eduroam, like Hotspot 2.0, is an 802.1X-based automatic connection and authentication network that has come from the higher education community. It started in Europe and Asia, but increasingly has a global presence. Individual institutions join Eduroam in order for their users (students and faculty) to automatically connect at any other Eduroam college or university, and so that visiting users can likewise automatically connect to the locally hosted network. It’s a reciprocally beneficial arrangement, and each institution that joins broadens the reach for the other participants. Even retail and hospitality businesses near Eduroam campuses are starting to offer the service as an enhanced benefit to their student customers. It’s common to see a social media post from an Eduroam user surprised to see that their device has connected in some unexpected venue or location.

In the U.S., the Cable WiFi alliance is a consortium of 5 of the largest MSOs (cable operators). Each company had independently deployed large-scale Wi-Fi hotspot networks in their coverage areas as a service to their residential broadband subscribers. They then decided to join together and advertise a single “CableWiFi” SSID across their combined footprint (between 200,000 and 250,000 hotspots across the country), which can be accessed by any of their subscribers. Again, another mutually beneficial arrangement.

Both Eduroam and the Cable WiFi alliance currently utilize SSID-based solutions, but they are also actively investigating Hotspot 2.0 as the next logical development for their service.

 Looking Ahead.

So, while it has been helpful up till now to describe Hotspot 2.0 in terms of making Wi-Fi work like cellular, a fuller understanding of the nuances and differences between the technologies and models shows that Wi-Fi can effectively be made easier to use and more pervasive than today’s cellular technologies.

 Hotspot 2.0 enabled Public Wi-Fi will offer a service that will be available to all Wi-Fi devices, allow authentication by a number of types of providers, and support roaming consortiums with diverse business arrangements and models. Hotspot 2.0, wherever you may roam.  And roam you will.

February 10, 2014

Will 802.11ac Stab You in the Back(haul)?

 

 

Stressing about the new 802.11ac standard seems to be the industry's new pastime.

Now that Wave-1 of 802.11ac is here, with vendors promising 1.3 Gbps in 5 GHz, 1.75 Gbps aggregate per AP, and world peace, suddenly the industry has focused on the potential bottleneck of AP backhaul links. In other words, is a single Gigabit Ethernet uplink enough for each AP?

The answer is just plain “yes,” and applies not only to Wave-1, but also to Wave-2 11ac. Here’s why:

Theoretical maximums do not happen in real-world conditions.

Even though 11ac Wave-1 promises a combined 1.75 Gbps theoretical rate, it’s hard to see how real-world conditions will live up to theoretical maximums. They won’t.

1.75 Gbps is a data rate. Real TCP throughput, however (what the client experiences), has historically been somewhere near 50% of the data rate. With 11n/ac frame aggregation and other enhancements, 65% is becoming more realistic in best-case scenarios (usually for single-client tests only). So let's say, for the sake of argument, that 65% of theoretical is possible: roughly 1.15 Gbps.
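To put rough numbers on that claim, here's a quick back-of-the-envelope sketch; the PHY rates and the 65% efficiency figure are the assumptions from above, not measurements:

```python
# Back-of-the-envelope: how much real TCP throughput can a Wave-1 11ac AP
# actually offer toward its Ethernet uplink? (All figures are assumptions
# from the discussion above, not measurements.)

PHY_5GHZ_MBPS = 1300   # 3x3:3 11ac, 80 MHz channel, best-case data rate
PHY_24GHZ_MBPS = 450   # 3x3:3 11n, best-case data rate
TCP_EFFICIENCY = 0.65  # optimistic MAC/TCP efficiency (single client, clean RF)

aggregate_phy = PHY_5GHZ_MBPS + PHY_24GHZ_MBPS     # 1750 Mbps "on the box"
best_case_tcp = aggregate_phy * TCP_EFFICIENCY     # ~1138 Mbps

print(f"Aggregate PHY rate:       {aggregate_phy} Mbps")
print(f"Best-case TCP throughput: {best_case_tcp:.0f} Mbps")
# ~1.14 Gbps -- and only if both radios are saturated with ideal clients,
# in one direction, at the same time.
```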

So yes, if you have:

  • 3x3:3 client devices only, one on 2.4 GHz and one on 5 GHz,
  • Very good RF conditions with no neighbors and no RF interference,
  • TCP applications that can produce and sustain 700 Mbps, and
  • TCP applications that are 100% uplink or downlink,

then you might be able to tap out a gigabit backhaul, or so the argument goes.

But that just won't happen in the real world.

Client mixtures do not support the maximum capabilities.

If a network is composed of client devices that all support 80 MHz channels (in 5 GHz) and 3 spatial streams, then there's an outside chance of the stars aligning…barely.

But reality says:

  1. You’ll have some single-stream client devices, like mobile phones and tablets.
  2. You’ll have some two-stream client devices, like tablets and many laptops.
  3. You’ll have some 11a/g/n devices that don’t support 11ac maximums.
  4. You’ll have some clients in the service area that aren’t 3 meters from the AP—and thus subject to lower data rates.

So if your network has any of these client types (and it does!), then you can kiss your nightmares of gigabit saturation goodbye. Every lower-capability client on your network will reduce the average airtime efficiency, making gig-stressing conditions impossible.
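To make that concrete, here's a purely illustrative airtime-fairness sketch; the client mix and per-client rates below are assumptions for the sake of the example, not survey data:

```python
# Illustrative only: with roughly equal airtime per client, the aggregate
# throughput of a cell trends toward the *average* client capability, not
# the best one. All rates are assumed best-case TCP throughputs per client type.

client_mix = {
    # client type: (assumed TCP Mbps with the air to itself, share of clients)
    "1-stream 11ac phone/tablet":  (200, 0.40),
    "2-stream 11ac laptop/tablet": (400, 0.30),
    "3-stream 11ac laptop":        (700, 0.10),
    "legacy 11n device":           (100, 0.20),
}

# With airtime fairness, each client gets a similar slice of air, so the cell's
# aggregate throughput is roughly the weighted average of per-client rates.
aggregate = sum(rate * share for rate, share in client_mix.values())
print(f"Rough aggregate cell throughput: {aggregate:.0f} Mbps")  # ~290 Mbps
```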

Don’t Forget: Ethernet is full duplex.

When comparing Wi-Fi speeds to Ethernet speeds, we must remember that Wi-Fi is half-duplex. All airtime is shared for uplink and downlink. So when you start with a theoretical maximum channel capacity, you have to divide it between uplink and downlink. Conversely, Ethernet is full duplex with a 1 Gbps uplink and 1 Gbps downlink simultaneously. So to really stress that gigabit link, you need to push either ALL uplink or ALL downlink traffic from Wi-Fi clients. Again, if we consult reality, this just won’t happen.
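Here's a quick illustration of that point, reusing the optimistic ~1.15 Gbps figure from earlier and an assumed downlink-heavy traffic mix:

```python
# Wi-Fi airtime is shared between directions; Ethernet gives each direction
# its own 1 Gbps. Assume the optimistic ~1150 Mbps of total Wi-Fi TCP
# throughput from earlier and a typical downlink-heavy traffic mix (assumed).

wifi_total_mbps = 1150    # shared across uplink + downlink (half duplex)
downlink_share = 0.80     # assumed traffic mix, not a measurement

downlink = wifi_total_mbps * downlink_share   # ~920 Mbps
uplink = wifi_total_mbps - downlink           # ~230 Mbps

print(f"Downlink offered to the wire: {downlink:.0f} Mbps (vs. 1000 Mbps available)")
print(f"Uplink offered to the wire:   {uplink:.0f} Mbps (vs. 1000 Mbps available)")
# Neither direction exceeds what a single full-duplex gigabit link provides.
```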

Application requirements will not stress 1 Gbps backhaul links.

In combination with the limitations of client capabilities, there are very few client applications and services that can generate even bursty—let alone consistent—load above 700 Mbps. But again, the issue isn’t the potential of a single client device, but the potential of all combined client devices passing traffic and sharing airtime.

High density does not stress 1 Gbps.

At first glance, high-density networks seem like a cause for gig stress, and thus more likely to tax network maximums. However, if anything, high-density scenarios are MORE likely to have single-stream mobile devices that don't support protocol maximums—as well as airtime challenges that increase retries and non-data overhead—thus bringing aggregate network potential down.

Most of today’s networks can’t deliver it anyway.

How many networks are there that provide more than 1 Gbps WAN links—and web-based services/applications that can deliver that kind of speed? There's this thing called the cloud (you may have heard of it), and most client applications now use it.

Local LAN applications/servers are more likely to be able to handle 1 Gbps sustained. Are there many cases where these applications REQUIRE more than 1 Gbps in a specific direction AND operate in a silo where no other clients are present and moving some traffic? The answer is a bit self-evident. No.

Cost is always king.

Getting business-minded for a minute, it's hard to believe that anyone will want to pay for 10 GbE at the edge for all APs, and no one wants to pay for higher-grade Cat7 cabling either (though Cat6 may be reasonable today). And of course, running multiple copper cables to each AP with link aggregation is cost prohibitive and, in most cases, superfluous. Just show the budgeteers the real-world likelihood of saturating a single, lower-cost 1 Gbps link and the budget czar will trump that decision as fast as a politician will lie. If sound technical reasoning doesn't win, money always will.

What about 802.11ac Wave-2?

All signs point to Wave-2 11ac APs being either 3-stream (still) or—more likely—4x4:4-stream (at 1733 Mbps on 5 GHz). These boxes will also support 160 MHz channels with higher data rates. So the reasoning for the sufficiency of gigabit backhaul for Wave-2 goes something like this:

160 MHz channels are really best suited for SOHO environments.

Accommodating them in enterprise products is simply not practical. Even if you wanted to, most enterprise client devices are unlikely to support 160 MHz-wide Wi-Fi channels.

That 4th stream won't change real-world throughput much.

Taking all the previous arguments regarding client mixtures, application demands, backhaul problems, and high density into consideration, an additional spatial stream on the AP will have little to no impact on backhaul links. Few clients, if any, will support four spatial streams in the first place. Aggregate throughput for each AP will still be constrained by the low and mid-performing clients. Even high-performing clients will struggle to generate nearly 1 Gbps of unidirectional TCP traffic.  

Multi-User MIMO does not increase maximum backhaul load either.

Now you might be thinking that MU-MIMO, or the ability for an AP to concurrently communicate with multiple clients, has a chance to change all this. Uh, no.

 There’s no doubt that MU-MIMO should improve airtime efficiency where there are many single-stream client devices and mostly downlink traffic. But, the AP still only has four spatial streams, and MU-MIMO will not be used for every transmission. In many cases, MU-MIMO transmissions will still go to only two single-stream clients simultaneously, which will not come close to the gigabit ceiling.  

Everyone has neighbors.

Wi-Fi performance is almost always dependent on RF conditions. While it’s true that maximum data transfer in a clean lab environment may get up close and personal to a gigabit ceiling more often in Wave-2, the problem is that these same high-performance networks must share airtime with neighbors.  Looking forward, it’s inevitable that there will still be a lot of 802.11n networks everywhere, and we will just have to cope with the realities of backward compatibility.

Stop gig stressing.

The moral of the story is this: While theoretical scenarios could strain a single gigabit backhaul, there’s just no way that real-world client mixtures, RF environments, application requirements, and network infrastructures are going to saturate the full capacity of a high-performing full-duplex gigabit link. So don’t be fooled by vendors wanting you to upgrade your wired networks based on theoretical scenarios and arguments. In the words of Nancy Reagan, “just say no.”

November 17, 2013

Cashing in on Hotspot 2.0

Nothing is hotter right now in the networked world than Hotspot 2.0.

While most of the attention on Hotspot 2.0 has centered on the technology and how it works, the really compelling “feature” (one that has received nearly no attention) is the technology's ability to generate money, and lots of it.

 What’s Hotspot 2.0 All About?

Hotspot 2.0 is a specification developed by Wi-Fi Alliance (WFA) members to radically simplify the process of securely connecting to a Wi-Fi network and roaming between different Wi-Fi networks, effectively duplicating the cellular phone experience through secure connectivity that can be automated while conforming to user and operator policy. The effort has considerable multi-industry muscle behind it, from the Wi-Fi Alliance for certification under the Passpoint™ program to organizations such as the Wireless Broadband Alliance (WBA) for interoperability.

Simply put, Hotspot 2.0 is focused on enabling Hotspot 2.0-capable mobile devices to automatically “discover” Hotspot 2.0-capable access points (APs) connected to wireless LANs (WLANs) whose owners have roaming arrangements with, or a path to, the user's home network. After that, the technology securely connects the user to that WLAN with no human intervention.

With Hotspot 2.0, a massive network of Wi-Fi access points is made possible through a web of interconnections, and users enjoy a seamless experience as they move between Wi-Fi networks from almost any location. It achieves this through an overhaul of the Wi-Fi connection procedure: Hotspot 2.0 automates the connection process and provides airlink encryption using the Advanced Encryption Standard (AES).
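Under the hood, a Hotspot 2.0 device queries the AP (via ANQP) for the roaming consortium identifiers it supports and checks them against the credentials it carries. Here is a highly simplified, hypothetical sketch of that selection logic; the OIs, provider names, and field layout are made up for illustration:

```python
# Highly simplified sketch of Hotspot 2.0 network selection: the phone asks
# the AP (via ANQP) which roaming consortium OIs it supports, then checks
# whether any installed credential belongs to one of those consortiums.
# The OIs and provider names below are purely illustrative.

ap_advertised_ois = {"0x001bc504bd", "0x506f9a"}   # from the AP's ANQP reply

device_credentials = [
    {"provider": "ExampleMobile", "roaming_oi": "0x506f9a",   "eap": "EAP-SIM"},
    {"provider": "ExampleCable",  "roaming_oi": "0xdeadbeef", "eap": "EAP-TTLS"},
]

def select_credential(advertised_ois, credentials):
    """Return the first credential whose consortium OI the AP advertises."""
    for cred in credentials:
        if cred["roaming_oi"] in advertised_ois:
            return cred
    return None

match = select_credential(ap_advertised_ois, device_credentials)
if match:
    print(f"Auto-connect with {match['provider']} via {match['eap']} -- no user action needed")
else:
    print("No roaming match; treat as an ordinary, manually selected network")
```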

Hotspot 2.0 (so-called Passpoint-certified) access points and controllers have now been shipping for over a year from all the major infrastructure vendors. Coupled with new Hotspot 2.0-capable smartphones recently introduced by Samsung, Apple, and many others, the proverbial planets are now aligned for monetization, directly addressing the concerns many operators have had about how to actually make money with Wi-Fi when so many networks are free.

Given that approximately 90 percent of all tablets in the U.S. relied on Wi-Fi rather than 3G mobile broadband last year, according to industry analyst Chetan Sharma, there's major money to be made by establishing Hotspot 2.0 roaming consortiums that bring together what is today disparate high-speed Wi-Fi access into a unified high-speed network that people would be willing to pay for. Hotspot 2.0 is the key to making this happen.

Hotspot 2.0 Brings Together Unlikely Roaming Partners

Enabling Wi-Fi roaming and roaming consortiums looks to be every bit as financially lucrative for service providers as cellular roaming. But unlike cellular roaming, Wi-Fi roaming can be done between hotels and MSOs (cable), convention centers, department stores and mobile network operators (MNOs), football stadiums, coffee shops, and basically anyone else with a Wi-Fi infrastructure.  

With these roaming consortiums in place, users will be able to easily roam across the street, across town, or on the other side of the world. Because the potential exists for a huge number of possible roaming partners both domestically and internationally, it is possible to build roaming consortiums with thousands of partners and millions of access points.  

The larger the Wi-Fi footprint, the greater the utility of the service offering, and the greater the utility of an offering, the more people will pay for it. Just look at the history of cellular services as a valid and useful proof point.

The formation of roaming consortiums opens up tremendous new wireless revenue opportunities for first movers and should make for some interesting and unusual partnerships, to say the least. Ironically, these first movers can include a myriad of service providers that don't even offer a pervasive wireless service today, such as over-the-top (OTT) providers like Google or Facebook, cable TV companies (MSOs), credit card companies, and anyone else with identity information.

AT&T was one of the first to establish a Hotspot 2.0 roaming consortium: its international roaming program automatically connects mobile subscribers roaming abroad to Wi-Fi hotspots, authenticating them using the SIM cards in their phones. This has set the stage for future business models based on Hotspot 2.0, Passpoint, and Next Generation Hotspots (NGH).

AT&T is using Accuris, a roaming hub, whose AccuRoam technology authenticates Wi-Fi roamers in a manner similar to the process that lets mobile users roam onto a new cellular network. Roaming hub companies such as Accuris and Syniverse can make money with Hotspot 2.0 by routing authentication requests to Hotspot 2.0 operators, as well as by facilitating the cumbersome billing and settlement process.

Hotspot 2.0 roaming consortiums are the beginning of a big trend of mobile operators leveraging Wi-Fi not just for domestic offload to ease congestion, but also to give end users better roaming rates along with a simpler and more secure experience when connecting to different Wi-Fi networks.

Meanwhile, OTT providers will be particularly interested in this Hotspot 2.0 opportunity because it lets them get location information on users by authenticating the user at a coffee shop in Seattle or a train station in Frankfurt. This is something of great value in today's ad-driven mobile world.

Keys To Hotspot 2.0 Monetization: Automating Connectivity and Secure Roaming

With Hotspot 2.0, operators can make money by developing a huge web of business relationships, despite the fact that many of the underlying Wi-Fi networks that will be part of any roaming consortium may actually be "free.”

Also, users will no longer always be required to navigate through a landing page at an airport to get to the "free" service. User security concerns are also diminished in public places because Hotspot 2.0 connections support airlink encryption. This is required by the standard and is supported on all Hotspot 2.0-capable devices and access points.

When traveling, the user no longer needs to go through the tedious process of selecting from available APs when they, for instance, get off a plane; it is all automatic. Some of the roaming partners will be operating networks for which payment is expected, and the Hotspot 2.0 operator will need to work out settlements with those partners. For enterprises, the Hotspot 2.0 monetization puzzle is a little harder to put together.

The most popular example here is hotels, which often charge for Internet access, but most of the roaming partners will operate “free” networks. In these cases, the network is being put in for reasons having nothing to do with direct monetization of the service. Instead, they are using it to sell their guests more lattes or in-room movies while wholesaling the much-desired Wi-Fi capacity under their control to the highest bidder. How much can be made remains an open question, but there's no debate that wireless capacity, in whatever form, is a desired and valuable asset, no matter if you are a university, hotel, hospital, or train station.

Where From Here?

 Though it’s impossible to definitively determine just how much money carriers will be able to make from Hotspot 2.0 roaming arrangements its fair to speculate that subscribers could be willing to spend anywhere from $1-5 per month on top of their existing wireless or broadband subscription plans for the ability to connect automatically via Wi-Fi, if the pervasiveness of the connectivity is compelling enough. Wi-Fi roaming, as a value-added service, will undoubtedly have the potential to increase carriers’ average revenue per user, so called ARPU.

Signing up roaming partners should be fairly straightforward, as most installed network infrastructure is capable of supporting Hotspot 2.0, and it is fairly easy to configure the more popular smartphones to work in an HS2.0 network. The main value for the roaming partner is that HS2.0 provides Wi-Fi security, which is really important in a public place. It could also enable the HS2.0 operator, in some instances, to feed information back to the roaming partner about who is in their building. This is a side benefit of having identity information on the user.

The obvious approach is to start by establishing roaming relationships with the most heavily trafficked Wi-Fi venues and then spread out from there. These include convention centers, airports, stadiums, shopping malls, etc. The roaming partners will need a AAA server to route authentication requests back to the HS2.0 operator that heads up the roaming consortium, but this can easily be outsourced to third-party roaming hub partners.

So what is it worth to the subscriber to have access to a network with several thousand roaming partners and several million access points, all capable of providing automatic and secure connectivity? The great value in cellular, and the reason we pay so much for service, is that we can get connected almost anywhere. Wi-Fi will never be quite that ubiquitous, but it certainly hits all the heavily trafficked areas like hotels, coffee shops, airports, etc. The average user doesn't typically hesitate to pay $10 at a hotel or even $20 on an airplane, so there is clearly great value here, and those examples are one-off events with limited shelf lives. And yes, the underlying network is often free, but there is the hassle factor of getting connected. A premium of 10 to 15% on the user's cellphone bill might work if the operator can plug into a million or more APs through roaming relationships.

With Hotspot 2.0, now is the time for operators to start moving down this path, with significant first-mover advantages for those who do, as many venues will limit the number of roaming consortiums they join. Likewise, users will flock to the consortiums with the largest footprint, paying a premium to do so, which will only make them grow even larger and faster, forever changing the wireless world as we know it.