“Can you hear me now?” “How about now?” We’ve all done it, whether driving through a ‘dead zone’ or stepping on or off a lift, into a space where the wireless signal wasn’t strong enough to hold a conversation. Those same dead zones might even exist in your home or office, but they’re getting smaller.
Service providers have started to focus resources on improving their customers’ Wi-Fi, either as a push to create a new value-added service or to improve ‘their’ network’s end-to-end quality of experience. Note the intentional use of inverted commas here. While each provider has their own motives, that’s not really what we’re interested in here. Instead, let’s take a look at some of the metrics and measurements that can be made for Wi-Fi systems, and what those can tell us about a wireless system.
Now, in your head, I’m sure you’re saying ‘well that’s obvious, we all care about performance’, and really that’s a good place to start. After all, the Broadband Forum recently published ‘TR-398: Wi-Fi In-Premise Performance Test Plan’, so it’s right there in the title. Like all things, however, the devil hides in the details – and performance testing requires a lot of details.
In TR-398, the test plan is broken down into several categories: RF capabilities, baseline tests, coverage, stability, and support for multiple users. All of these categories factor into how well the DUT (device under test, typically an access point) performs. There isn’t space here to cover every detail of Wi-Fi performance testing theory and practice, but we can touch on the basics of a couple of tests and how to ensure they are repeatable.
Your test environment and setup are the number one critical item. One of the biggest challenges in Wi-Fi testing is getting ‘away’ from everyone else’s wireless devices. In the lab, we use a series of test chambers that attenuate, or block, external signals from phones, laptops, watches, even shoes (come on, someone, somewhere, makes a shoe with Wi-Fi in it, right?) that would otherwise interfere with the test. The chambers are connected with a set of cables and controllable attenuators to create a reliable, repeatable test path between the DUT and the station(s).
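To see why a cabled path with programmable attenuators is so repeatable, note that the arithmetic reduces to simple link-budget subtraction: the station receives the DUT’s transmit power minus the fixed cable loss and whatever the attenuator is set to. The sketch below uses hypothetical round numbers (none of them come from TR-398) purely to illustrate the idea.

```python
# Hypothetical link-budget sketch for a conducted (cabled) Wi-Fi test path.
# All values are illustrative, not taken from TR-398 or any real setup.

CABLE_LOSS_DB = 2.0  # assumed fixed loss of the cabling between chambers


def received_power_dbm(tx_power_dbm: float, attenuator_db: float,
                       cable_loss_db: float = CABLE_LOSS_DB) -> float:
    """Power arriving at the station for a given attenuator setting."""
    return tx_power_dbm - attenuator_db - cable_loss_db


# Sweeping the attenuator emulates moving the station further away,
# without any of the variability of an over-the-air path.
for atten_db in (10, 30, 50, 70):
    print(atten_db, received_power_dbm(20.0, atten_db))
```

Because every term is fixed or programmatically controlled, rerunning the sweep gives the same received power every time, which is exactly what over-the-air testing struggles to achieve.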
A key goal of all testing should always be repeatability. After all, if you can’t show something twice, how can the engineer fix it?
This basic environment provides a fair bit of test coverage, depending on how it is used. The first couple of test cases we often run look at the maximum packet throughput and the maximum number of stations (users). Those first two cases can tell us a lot about the device.
For example, the maximum packet throughput requires the Ethernet switching, packet processing, and radios to all work well, especially for the newer radio technologies. In these newer technologies, the systems will buffer packets so they can be assembled into a larger aggregated radio frame before ‘grabbing’ the air to transmit the data. This improves the overall system efficiency, especially when supporting multiple users at the same time, but it comes at the cost of more complex packet processing.
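A back-of-the-envelope model shows why that buffering helps: every channel access pays a fixed overhead (contention, preamble, acknowledgement), so bundling frames into a single access amortises that cost across all of them. The timing constants below are made-up round numbers, not real 802.11 values.

```python
# Toy airtime model: each channel access costs a fixed overhead, each data
# frame costs payload time. Constants are illustrative, not 802.11 timings.

FRAME_US = 50.0      # assumed airtime for one frame's payload
OVERHEAD_US = 100.0  # assumed per-access cost (contention, preamble, ACK)


def airtime_us(n_frames: int, aggregate: bool) -> float:
    """Total airtime to deliver n frames, with or without aggregation."""
    if aggregate:
        # One channel access carries all the buffered frames.
        return OVERHEAD_US + n_frames * FRAME_US
    # Each frame pays the full access overhead on its own.
    return n_frames * (OVERHEAD_US + FRAME_US)


def efficiency(n_frames: int, aggregate: bool) -> float:
    """Fraction of airtime spent on payload rather than overhead."""
    return (n_frames * FRAME_US) / airtime_us(n_frames, aggregate)
```

With these toy numbers, ten un-aggregated frames spend only a third of their airtime on payload, while the same ten frames sent as one aggregated burst spend more than 80% of it on payload.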
We can check that in a few different ways. First, the aforementioned maximum throughput test gives an initial ‘sanity check.’ From there, it’s possible to increase the number of stations being supported by the DUT, while measuring the throughput to each station. Further increasing the complexity, it’s possible to verify the DUT can support the 802.11ac MU-MIMO capabilities, which enables high throughputs between the DUT and multiple stations (downlink speeds). These tests can also check for airtime fairness, to verify a single station isn’t allowed to monopolise the ‘air time’ and prevent others from getting a share of the bandwidth.
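One common way to put a number on that fairness check (a standard metric in the literature, not something the source mandates) is Jain’s fairness index over the per-station throughputs: it equals 1.0 when every station gets an equal share and falls towards 1/n when a single station monopolises the air.

```python
def jains_fairness(throughputs: list) -> float:
    """Jain's fairness index over per-station throughputs.

    Returns 1.0 for a perfectly even split; approaches 1/n when one
    of the n stations takes nearly all of the bandwidth.
    """
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(t * t for t in throughputs))


# Four stations sharing evenly vs one station hogging the medium
# (hypothetical per-station throughputs in Mbps).
print(jains_fairness([100, 100, 100, 100]))  # 1.0
print(jains_fairness([400, 1, 1, 1]))        # close to 0.25
```

Plotting this index while stations are added, or while one station is deliberately given a strong signal, makes airtime-fairness regressions easy to spot in repeated runs.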
Coming back to our original ‘can you hear me now?’ theme, by adding a rotational platform (turntable) into the test chamber and setting the DUT on top of it, we can rerun any of these tests in a way where the DUT ‘sees’ the stations from different angles. This ensures the DUT’s antenna designs aren’t incorrectly focusing too much power in one direction or another. By adding different attenuator values or a channel multipath emulator between the stations and the test chamber, different ranges – distances – can also be tested, bringing us back to that coverage topic.
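To relate an attenuator setting to an emulated distance, a first-order approach (ignoring the multipath effects a channel emulator adds) is the standard free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 32.44 for d in metres and f in GHz. The sketch below applies it both ways; it is a simplification, since real in-premises propagation loses more than free space does.

```python
import math


def fspl_db(distance_m: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (distance in metres, frequency in GHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_ghz) + 32.44


def emulated_distance_m(attenuation_db: float, freq_ghz: float) -> float:
    """Invert FSPL: the free-space distance a given attenuation emulates."""
    return 10 ** ((attenuation_db - 32.44 - 20 * math.log10(freq_ghz)) / 20)


# About 40 dB of loss corresponds to roughly 1 m at 2.4 GHz in free space,
# so each additional 20 dB of attenuation emulates a 10x longer path.
print(round(fspl_db(1.0, 2.4), 1))
```

Note that 5 GHz incurs roughly 6 dB more free-space loss than 2.4 GHz at the same distance, one reason band-steering decisions (touched on below) matter for coverage.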
Lastly, we’re continuing to develop test cases and metrics to expand the testing coverage and keep pace with the technology. Topics like roaming between multiple access points, steering clients to use the 2.4 or 5 GHz bands more efficiently, and mesh networks are all coming into common deployments and testing.
Service providers are able to use the TR-398 test plan as a common set of requirements when evaluating potential equipment and software, while suppliers and manufacturers can ensure their equipment meets those requirements without having to first ‘get into’ the service provider’s lab – thus speeding up the whole process.