Will 802.11ac Stab You in the Back(haul)?
Now that Wave-1 of 802.11ac is here, with vendors promising 1.3 Gbps in 5 GHz, 1.75 Gbps aggregate per AP, and world peace, the industry has suddenly focused on the potential bottleneck of AP backhaul links. In other words, is a single Gigabit Ethernet uplink enough for each AP?
The answer is a plain "yes," and it applies not only to Wave-1, but also to Wave-2 11ac. Here's why:
Theoretical maximums do not happen in real-world conditions.
Even though 11ac Wave-1 promises a combined 1.75 Gbps theoretical rate, it's hard to see how real-world conditions will live up to that promise. They won't.
1.75 Gbps is a data rate. Real TCP throughput (what the client actually experiences) has historically been somewhere near 50% of the data rate. With 11n/ac frame aggregation and other enhancements, 65% is becoming more realistic in best-case scenarios (usually single-client tests only). So let's say, for the sake of argument, that 65% of theoretical is possible: roughly 1.14 Gbps.
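That back-of-the-envelope math can be sketched directly (the 65% efficiency figure is the ballpark assumption above, not a protocol constant):

```python
# Estimate best-case TCP throughput from a PHY data rate.
# The efficiency factor is this article's ballpark assumption.

def effective_tcp_mbps(phy_rate_mbps, efficiency=0.65):
    """Best-case TCP throughput estimate for a given PHY rate."""
    return phy_rate_mbps * efficiency

# Wave-1 11ac AP: 1300 Mbps (5 GHz, 3x3:3, 80 MHz) + 450 Mbps (2.4 GHz 11n)
aggregate_phy = 1300 + 450  # the 1.75 Gbps datasheet aggregate
best_case = effective_tcp_mbps(aggregate_phy)
print(f"Best-case aggregate TCP: ~{best_case:.0f} Mbps")
```

Even this best case barely clears a gigabit, and only as an aggregate across two half-duplex radios.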
So yes, if you have:
- 3x3:3 client devices only, one on 2.4 GHz and one on 5 GHz,
- Very good RF conditions with no neighbors and no RF interference,
- TCP applications that can produce and sustain 700 Mbps, and
- TCP applications that are 100% uplink or downlink,
then you might be able to tap out a gigabit backhaul, or so the argument goes.
But, that just won’t happen in the real world.
Client mixtures do not support the maximum capabilities.
If a network is comprised of client devices that all support 80 MHz channels (in 5 GHz) and 3 spatial streams, then there’s an outside chance of the stars aligning…barely.
But reality says:
- You’ll have some single-stream client devices, like mobile phones and tablets.
- You’ll have some two-stream client devices, like tablets and many laptops.
- You’ll have some 11a/g/n devices that don’t support 11ac maximums.
- You’ll have some clients in the service area that aren’t 3 meters from the AP—and thus subject to lower data rates.
So if your network has any of these client types (and it does!), you can kiss your nightmares of gigabit saturation goodbye. Every lower-capability client on your network drags down the average airtime efficiency, making gig-stressing conditions impossible.
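To see how a mixed population drags the aggregate down, here is a minimal sketch. The client mix and PHY rates are hypothetical, and it assumes the AP shares airtime roughly equally among active clients:

```python
# Airtime-weighted aggregate throughput for a hypothetical client mix.
# Assumption: the AP gives each active client a roughly equal airtime
# share, so the aggregate is the average of the per-client effective rates.

TCP_EFFICIENCY = 0.65  # best-case ballpark, per the earlier math

# (description, PHY rate in Mbps) -- illustrative values, not a survey
clients = [
    ("1SS 11ac phone, 80 MHz",   433),
    ("2SS 11ac laptop, 80 MHz",  867),
    ("3SS 11ac laptop, 80 MHz", 1300),
    ("2SS 11n laptop, 40 MHz",   300),
    ("1SS 11n tablet, 20 MHz",    72),
    ("11g device",                54),
]

per_client = [rate * TCP_EFFICIENCY for _, rate in clients]
aggregate = sum(per_client) / len(per_client)  # equal airtime shares

print(f"Aggregate with this mix: ~{aggregate:.0f} Mbps")
```

One 11g device or a handful of single-stream phones is enough to pull the whole cell far below the gigabit mark.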
Don’t Forget: Ethernet is full duplex.
When comparing Wi-Fi speeds to Ethernet speeds, we must remember that Wi-Fi is half-duplex: all airtime is shared between uplink and downlink. So when you start with a theoretical maximum channel capacity, you have to divide it between the two directions. Ethernet, by contrast, is full duplex, carrying 1 Gbps uplink and 1 Gbps downlink simultaneously. So to really stress that gigabit link, you would need Wi-Fi clients to push either ALL uplink or ALL downlink traffic. Again, if we consult reality, that just won't happen.
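A quick sketch of the duplex mismatch, assuming a hypothetical ~845 Mbps effective Wi-Fi channel capacity (1300 Mbps PHY times the 65% ballpark):

```python
# Half-duplex Wi-Fi vs full-duplex Ethernet, in round numbers.
# Assumption: ~845 Mbps effective Wi-Fi capacity (1300 Mbps PHY x 0.65),
# split between uplink and downlink because airtime is shared.

wifi_capacity = 1300 * 0.65  # Mbps, shared by BOTH directions

for downlink_fraction in (1.0, 0.8, 0.5):
    down = wifi_capacity * downlink_fraction
    up = wifi_capacity - down
    # Gigabit Ethernet carries each direction on its own 1000 Mbps lane,
    # so the per-lane load is max(up, down), never their sum.
    busiest_lane = max(down, up)
    print(f"{downlink_fraction:.0%} downlink -> busiest lane: {busiest_lane:.0f} Mbps")
```

Only a perfectly unidirectional load even approaches one lane's 1000 Mbps ceiling; any mix of directions lowers the per-lane load further.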
Application requirements will not stress 1 Gbps backhaul links.
In combination with the limitations of client capabilities, there are very few client applications and services that can generate even bursty—let alone consistent—load above 700 Mbps. But again, the issue isn’t the potential of a single client device, but the potential of all combined client devices passing traffic and sharing airtime.
High density does not stress 1 Gbps.
At first glance, high-density networks seem cause for gig stress, and thus more likely to tax network maximums. However, if anything, high-density scenarios are MORE likely to have single-stream mobile devices that don’t support protocol maximums—as well as airtime challenges that increase retries and non-data overhead—thus bringing aggregate network potential down.
Most of today’s networks can’t deliver it anyway.
How many networks are there that provide more than 1 Gbps WAN links—and web-based services/applications that can deliver that kind of speed? There’s this thing called cloud (you may have heard of it), and most client-based applications now use it.
Local LAN applications/servers are more likely to be able to handle 1 Gbps sustained. Are there many cases where these applications REQUIRE more than 1 Gbps in a specific direction AND operate in a silo where no other clients are present and moving some traffic? The answer is a bit self-evident. No.
Cost is always king.
Getting business-minded for a minute, it's hard to believe that anyone will want to pay for 10 GbE at the edge for every AP, and no one wants to pay for higher-grade cabling like Cat6a or Cat7 either (though Cat6 runs may be reasonable today). And of course, running multiple copper cables to each AP for link aggregation is cost prohibitive and, in most cases, superfluous. Just show the budgeteers the real-world likelihood of saturating a single, lower-cost 1 Gbps link, and the budget czar will trump that decision as fast as a politician will lie. If sound technical reasoning doesn't win, money always will.
What about 802.11ac Wave-2?
All signs point to Wave-2 11ac APs being either 3-stream (still) or—more likely—4x4:4-stream (at 1733 Mbps on 5 GHz). These boxes will also support 160 MHz channels with higher data rates. So the reasoning for the sufficiency of gigabit backhaul for Wave-2 goes something like this:
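As a sanity check on those headline numbers, the PHY rates can be recomputed from the VHT parameters (data subcarrier counts and MCS 9 modulation/coding, per the 802.11ac amendment):

```python
# Recompute 11ac headline PHY rates from first principles:
# rate = streams x data_subcarriers x bits_per_symbol x coding_rate / symbol_time

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # per 802.11ac
SHORT_GI_SYMBOL_US = 3.6  # 3.2 us OFDM symbol + 0.4 us short guard interval

def vht_phy_rate_mbps(streams, width_mhz, bits_per_symbol=8, coding=5 / 6):
    """PHY rate in Mbps; defaults are VHT MCS 9 (256-QAM, rate-5/6)."""
    bits_per_ofdm_symbol = (streams * DATA_SUBCARRIERS[width_mhz]
                            * bits_per_symbol * coding)
    return bits_per_ofdm_symbol / SHORT_GI_SYMBOL_US

for streams, width, label in [(3, 80, "Wave-1 3x3:3 @ 80 MHz"),
                              (4, 80, "Wave-2 4x4:4 @ 80 MHz"),
                              (4, 160, "Wave-2 4x4:4 @ 160 MHz")]:
    print(f"{label}: {vht_phy_rate_mbps(streams, width):.0f} Mbps")
```

These are PHY rates, not throughput; the same TCP-efficiency and airtime arguments apply to Wave-2 before any of it reaches the wire.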
160 MHz channels are really best suited for SOHO environments. Accommodating them in enterprise products is simply not practical. Even if you wanted to, most enterprise client devices are unlikely to support 160 MHz-wide Wi-Fi channels.
That 4th stream won’t change real-world throughput tax.
Taking all the previous arguments regarding client mixtures, application demands, backhaul problems, and high density into consideration, an additional spatial stream on the AP will have little to no impact on backhaul links. Few clients, if any, will support four spatial streams in the first place. Aggregate throughput for each AP will still be constrained by the low and mid-performing clients. Even high-performing clients will struggle to generate nearly 1 Gbps of unidirectional TCP traffic.
Multi-User MIMO does not increase maximum backhaul load either.
Now you might be thinking that MU-MIMO, or the ability for an AP to concurrently communicate with multiple clients, has a chance to change all this. Uh, no.
There’s no doubt that MU-MIMO should improve airtime efficiency where there are many single-stream client devices and mostly downlink traffic. But, the AP still only has four spatial streams, and MU-MIMO will not be used for every transmission. In many cases, MU-MIMO transmissions will still go to only two single-stream clients simultaneously, which will not come close to the gigabit ceiling.
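A two-client MU-MIMO group can be ballparked the same way (433 Mbps is the single-stream, 80 MHz VHT MCS 9 rate; the 65% TCP efficiency is this article's assumption):

```python
# Combined backhaul load of a two-client MU-MIMO downlink group.
# Assumptions: each client is 1SS 11ac at 80 MHz (433 Mbps PHY),
# and ~65% of the PHY rate survives as TCP throughput.

SS1_PHY_MBPS = 433      # single-stream VHT MCS 9 at 80 MHz
TCP_EFFICIENCY = 0.65   # best-case ballpark

per_client = SS1_PHY_MBPS * TCP_EFFICIENCY
mu_group_load = 2 * per_client  # two clients served simultaneously

print(f"Two-client MU-MIMO group: ~{mu_group_load:.0f} Mbps toward the wire")
```

Nowhere near the gigabit ceiling.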
Everyone has neighbors.
Wi-Fi performance is almost always dependent on RF conditions. While it’s true that maximum data transfer in a clean lab environment may get up close and personal to a gigabit ceiling more often in Wave-2, the problem is that these same high-performance networks must share airtime with neighbors. Looking forward, it’s inevitable that there will still be a lot of 802.11n networks everywhere, and we will just have to cope with the realities of backward compatibility.
Stop gig stressing.
The moral of the story is this: While theoretical scenarios could strain a single gigabit backhaul, there’s just no way that real-world client mixtures, RF environments, application requirements, and network infrastructures are going to saturate the full capacity of a high-performing full-duplex gigabit link. So don’t be fooled by vendors wanting you to upgrade your wired networks based on theoretical scenarios and arguments. In the words of Nancy Reagan, “just say no.”