The old adage that more is always better is the mantra of the big ISPs. They tout how fast their upload and download speeds are compared to the competition, as if that metric were the only thing that matters.
The truth is that speed is simply the metric that is easiest to measure. Countless sites exist to measure your broadband speed. The process is simple: your browser downloads a file, measures the time it takes, and calculates the download rate with simple division. Then it repeats the process in the upload direction.
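That division is all a speed test really does. A minimal sketch of the arithmetic (the function name and figures below are illustrative, not any particular speed-test site's code):

```python
def throughput_mbps(payload_bytes: int, elapsed_seconds: float) -> float:
    """Convert a timed transfer into a megabits-per-second figure."""
    bits = payload_bytes * 8
    return bits / elapsed_seconds / 1_000_000

# A 25 MB file that took 2 seconds to download works out to 100 Mbps.
print(throughput_mbps(25_000_000, 2.0))  # 100.0
```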
As of March 2017, here is a sampling of the download speeds for advertised consumer Internet packages from various ISPs.
- Comcast Xfinity - 10, 25, 100, 200 Mbps
- Verizon FiOS - 50, 100, 150, 300, 500 Mbps
- Optimum - 60, 100, 200, 300 Mbps
- Charter Spectrum - 60, 100 Mbps
- AT&T - 45, 75, 100 Mbps
- Cox - 15, 50, 150, 300 Mbps
Most consumers have nowhere near the need for this much bandwidth and in many cases would be perfectly fine with a 15 to 20 Mbps circuit even during their peak usage. Many businesses would likely be fine with far less than 100 Mbps as well.
Let's look at some requirements for a few popular applications on the web.
- Google Hangouts - 300 Kbps minimum for 2 person calls ranging up to 4 Mbps with 10 participants
- Microsoft Skype - 30 Kbps minimum for voice up to 1.2 Mbps for two-person HD Video
- Netflix - 500 Kbps minimum up to 5 Mbps for HD video streaming
Using Skype as an example, a successful voice call can be made with just 30 Kbps. Even at 100 Kbps, that is 1/1000th the bandwidth of a 100 Mbps circuit.
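The arithmetic behind that ratio is worth making explicit, since it frames everything that follows:

```python
# A 100 Kbps call on a 100 Mbps circuit (both figures from the examples above).
call_kbps = 100
circuit_kbps = 100 * 1000  # 100 Mbps expressed in Kbps

fraction = call_kbps / circuit_kbps
print(fraction)  # 0.001, i.e. the call needs 1/1000th of the circuit
```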
Of course, those advertised speeds apply only to the "last-mile" connection to the home or business. If the ISP oversubscribes that last mile with too many connections, it creates a bottleneck that gives rise to the real metric of network quality: packet loss.
The above image shows two very serious upload packet loss events within a several-hour period on a broadband ISP circuit in Massachusetts rated for 150 Mbps down / 10 Mbps up. Using Firebind's active testing approach, a Firebind software agent was placed on a machine plugged directly into the cable modem gateway. The agent was automatically configured to run a simulated G.711 VoIP call to Firebind's target agent at AWS in Virginia. The simulated calls run for 25 seconds in each direction every 5 minutes, at a rate of 87 Kbps and 50 packets per second.
With the Firebind test operating at 87 Kbps against 10 Mbps of rated upload bandwidth, less than 1% of the upload capacity was in use, yet for multiple hours there were sustained packet loss events of 20% and even 40%.
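Those 87 Kbps / 50 packets-per-second figures match the textbook packetization math for G.711 over RTP with 20 ms packets (the exact framing Firebind assumes on the wire is our assumption here; the constants below are the standard header sizes):

```python
CODEC_KBPS = 64                               # G.711 produces 64 Kbps of audio
FRAME_MS = 20                                 # typical packetization interval
PACKETS_PER_SEC = 1000 // FRAME_MS            # -> 50 packets per second

PAYLOAD_BYTES = CODEC_KBPS * 1000 // 8 // PACKETS_PER_SEC  # 160 bytes of audio
HEADERS_BYTES = 12 + 8 + 20 + 18              # RTP + UDP + IP + Ethernet framing

packet_bytes = PAYLOAD_BYTES + HEADERS_BYTES  # 218 bytes on the wire
wire_kbps = packet_bytes * 8 * PACKETS_PER_SEC / 1000

print(PACKETS_PER_SEC, wire_kbps)  # 50 packets/s at 87.2 Kbps
```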
These two events would have caused serious performance issues for any user. Most voice and video solutions cannot tolerate anywhere close to this level of packet loss, or if they do, it's with severely degraded quality.
Web browsing sessions would be slowed dramatically due to the need for the TCP transport layer to re-transmit packets. And applications like a traditional Citrix session could be impossible to use even with a fraction of the loss above.
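One way to quantify that TCP slowdown is the well-known Mathis et al. bound, which caps steady-state TCP throughput at roughly (MSS / RTT) × C / √p for loss rate p, with C ≈ 1.22 for standard TCP. A sketch, with an illustrative (not measured) MSS and RTT:

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate upper bound on TCP throughput under random packet loss."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate)) / 1e6

# Assumed 1460-byte MSS and 30 ms RTT:
print(mathis_throughput_mbps(1460, 0.030, 0.0001))  # ~47.5 Mbps at 0.01% loss
print(mathis_throughput_mbps(1460, 0.030, 0.20))    # ~1 Mbps at 20% loss
```

Under these assumptions, the 20% loss event above would cap a TCP flow near 1 Mbps, no matter how many megabits the circuit is rated for.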
Detecting this network quality degradation with a bandwidth test simply would not have been practical. A bandwidth test is "high impact" (destructive), since its entire goal is to flood the ISP circuit and measure the maximum throughput. Run continuously, the test traffic would have far too serious an impact on normal operations, and it could also consume bandwidth quotas.
That leads to the next question... what about ping?
There are several major reasons why using ping to detect packet loss can actually be more misleading than revealing.
Whereas all user traffic rides on the UDP or TCP transports, ping relies on ICMP, and network gear such as switches and routers treats it differently. When there is congestion, ICMP is typically the first traffic to be deprioritized, giving user traffic (like your UDP-based VoIP call) a chance to succeed. Ping is also generally limited to 1 packet per second. Since the Firebind VoIP call runs at 50 packets per second, it has 50 times the sampling rate and 50 times the visibility. If ping is a magnifying glass, then a 50 pps VoIP call is a microscope.
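The sampling-rate difference can be made concrete with a toy scenario (the 200 ms burst below is an assumption for illustration, not Firebind data):

```python
# How many probes land inside a brief 200 ms loss burst at 1 probe/s (ping)
# versus 50 probes/s (the simulated VoIP call)?
BURST_S = 0.2

def probes_in_burst(rate_pps: int, burst_s: float = BURST_S) -> float:
    """Expected number of probes that fall inside the loss burst."""
    return rate_pps * burst_s

print(probes_in_burst(1))   # 0.2 -> ping usually misses the event entirely
print(probes_in_burst(50))  # 10  -> the VoIP stream sees ~10 lost packets
```

At 1 pps, a sub-second event is more likely than not to pass completely unnoticed; at 50 pps it shows up as a measurable loss percentage.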
In the second image above we can see that same circuit over a 2-week period. Looking at the upload loss (red), we see what starts as a recurring chronic problem grow in both frequency and magnitude over the course of the two weeks. During that time we also see a serious download loss event (blue) that lasted about an hour and reached 25% or more loss.
As we've seen above, despite 10 Mbps being provisioned (and "sold") on the circuit, there is near-continuous upload packet loss. A business user who depends on VoIP would be far better served by a "straw" of a circuit, well under that 10 Mbps but guaranteed free of packet loss, than by a fire hose of a circuit with significant recurring loss.
If you'd like to assess your own VoIP quality, check out our free Firebind Trial.
And if you liked this post, please share it using any of the links below. Thanks!