Confusion and contradiction: The state of mobile network testing in the UK


In April this year a report in an industry publication suggested Ofcom was planning to purchase ‘over-the-counter’ handsets and use them to run its own UK-wide walk-and-drive mobile data testing programme. According to the correspondent, who had been contacted by Ofcom, the planned testing would look at “whether UK networks have achieved consistency” – but would not go so far as to test for “quality of service”.

In other words, the tests would ascertain where in the UK you can get a signal from EE, Vodafone, O2 and 3 – but would reveal nothing about actual network performance (e.g. whether you can look up a web page, send an email or a tweet, or upload a photo of your fish and chip dinner to Instagram).

Would the results of such a test benefit UK consumers? Some would say that all testing on behalf of consumers is a good thing – but I am not so sure. Mobile network testing is notoriously difficult to get right, and its outcomes are fiendishly difficult to interpret; if either the testing or the interpreting goes wrong, the whole business can be more than a little misleading.

How can the testing itself ‘go wrong’? Let’s stick with the proposal outlined above by way of example. Presumably Ofcom intends to have its people surveying areas, checking their handsets for signal strength and taking note of the available data networks – 3G and 4G. Will the handsets they use have the same outsides and the same insides? Handsets of the same make and model may have different firmware and/or operating system versions installed depending on their distribution, and this can lead to differences in how they perform. Also, how often will testers actually take measurements? What form will those measurements take? How long will they stay in a given location before they decide they understand the ‘consistency of the network’ there? If the testing is not rigorous and standardised throughout, it could result in the collection of anomalous, arbitrary data.
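
To make that point about standardisation concrete, here is a minimal sketch – in Python, with entirely hypothetical field names and thresholds – of the kind of fixed-format record and fixed sampling cadence a rigorous walk test would need. Note that firmware and OS versions are logged alongside the radio readings, precisely because (as above) they can affect how a device performs:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SurveySample:
    """One standardised walk-test reading. All field names are illustrative."""
    timestamp: datetime     # when the reading was taken (UTC)
    latitude: float         # where it was taken
    longitude: float
    operator: str           # e.g. "EE", "Vodafone", "O2", "3"
    technology: str         # "2G", "3G" or "4G" at the moment of sampling
    signal_dbm: int         # raw signal strength in dBm, not just 'bars'
    device_model: str       # the same model can still differ internally...
    firmware_version: str   # ...so firmware and OS versions must be logged
    os_version: str

SAMPLE_INTERVAL_S = 10      # fixed cadence: one reading every 10 seconds
MIN_DWELL_S = 300           # stay at least 5 minutes in each location

def samples_required(dwell_s: int = MIN_DWELL_S) -> int:
    """How many readings a tester must log before moving on."""
    return dwell_s // SAMPLE_INTERVAL_S
```

Without a schema like this, applied identically by every tester, two surveys of the same street can produce data that simply cannot be compared.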

Putting the methodology aside for the moment, how might the interpretation of the data go wrong? Take, for example, the recent release of a report purporting to show the state of mobile network coverage on Britain’s roads and railways. This report was put out by an interesting company that specialises in crowd-sourcing its data – i.e. taking limited network readings from the mobiles of the hundreds of thousands (if not millions) of consumers running its testing app and then combining all of that data to draw significant conclusions. The report revealed the amount of time consumers with the app spent on 2G or 3G/4G – or with no signal at all – and was picked up by several heavy-hitting mainstream media outlets.

These outlets ran with the headlines from the report – e.g. UK rail travellers ‘will have coverage’ for 72% of the time they are travelling – but did not fully interrogate, and therefore did not fully interpret, the data. No-one asked whether a significant chunk of the data came from devices sitting in idle mode (which, if it were the case, would seriously skew things for a number of reasons), or why figures weren’t given for success rates (calls dropped or blocked) or throughputs. In other words, they didn’t realise that ‘coverage’ is not necessarily the same thing as actually being able to perform tasks as basic as making a call, sending an email, or Instagramming those fish and chips I mentioned earlier. What I know about the methodology used in this test – not to mention my own experiences travelling in the UK – suggests that British travellers have significantly worse connectivity than the mainstream media were led to believe, and subsequently advertised.
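
To see how idle-mode samples can inflate a ‘coverage’ figure, consider this toy calculation (all numbers invented for illustration): a device that is merely camped on a cell counts as ‘covered’, even at a spot where an actual call or download would fail.

```python
# Toy illustration: 'coverage' vs usable service. All numbers invented.
samples = [
    # (has_signal, was_idle, task_succeeded) for each crowd-sourced reading
    (True,  True,  None),   # idle: camped on a cell, no task attempted
    (True,  True,  None),
    (True,  False, True),   # active: task (call / page load) succeeded
    (True,  False, False),  # active: had signal, but the task failed
    (False, False, False),  # no signal at all
]

covered = sum(1 for has_signal, _, _ in samples if has_signal)
print(f"'Coverage': {covered / len(samples):.0%}")  # 80%

active = [s for s in samples if not s[1]]           # drop idle-mode readings
usable = sum(1 for _, _, ok in active if ok)
print(f"Usable service (active samples only): {usable / len(active):.0%}")  # 33%
```

On figures like these, a headline would trumpet 80% coverage while only a third of genuine usage attempts actually succeeded.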

Needless to say, data incorrectly collected and/or interpreted is bad news for consumers craving network and service improvements – especially if the picture painted is rosier than the reality.

Fast-forward to August this year and we find Ofcom putting out a report on voice calls in the UK – a report that draws on its own poll of consumers’ experiences, as well as crowd-sourced data from a third party and information from the operators themselves. Like the putative mobile data testing programme reported on in April (the one I mentioned at the start of this piece), Ofcom’s dossier on the state of the nation’s voice calls is vulnerable to criticism: its approach is apparently unmethodical; its crowd-sourced data is difficult to interpret and potentially arbitrary (it was not collected in a controlled environment); and there is a conflict of interest implied in using data provided by the operators themselves.

What, then, you say, throwing up your hands, is Ofcom to do? What would benefit consumers? It’s quite simple, really. Ofcom needs to commission its own in-depth, controlled mobile data and voice testing from a company with the engineering expertise and equipment to record and interpret layer-3 network messaging data (i.e. the data that tells you what the network is communicating to and from a mobile device). It needs to commission that company to test regularly, and it needs to share its findings with the major operators. The operators will most certainly be receptive – because what’s good for consumers is good for them.
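
By way of illustration, here is a minimal sketch – in Python, with simplified placeholder event names rather than the output of any real test tool – of why layer-3 messaging matters: from the decoded signalling alone you can compute the success-rate figures (blocked and dropped calls) that no amount of signal-bar watching will give you.

```python
# Toy example: deriving voice KPIs from a decoded layer-3 message log.
# Event names are simplified placeholders, not real tool output.
events = [
    "SERVICE_REQUEST",    # handset asks the network to set up a call
    "CONNECT_ACK",        # call established successfully
    "DISCONNECT_NORMAL",  # call ended normally
    "SERVICE_REQUEST",
    "RELEASE_ABNORMAL",   # network released the connection: call blocked
]

attempts = events.count("SERVICE_REQUEST")
successes = events.count("CONNECT_ACK")
print(f"Call setup attempts:   {attempts}")
print(f"Setup success rate:    {successes / attempts:.0%}")  # 50%
print(f"Blocked/failed setups: {attempts - successes}")      # 1
```

Both of those call attempts would look identical to a handset reporting ‘full bars’ – which is exactly why signal-strength surveys alone cannot answer the quality-of-service question.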

Back in April, Ofcom said it was “aware of the confusion surrounding network testing in the UK, given the increasing number of reports being issued by multiple firms, often with contradictory results.” Ofcom has the power to put an end to this confusion – but it is not yet exercising it.
