From friend to foe: Lessons learned from Google becoming our competitor


Every startup is rightly afraid of new competition, especially when it comes from internet giants like Google. Stories of Google entering a market and dominating it within a few years are nothing new (think of mobile operating systems with Android, or more recently the browser wars with Chrome). In some cases Google gets a slap on the wrist or the occasional $2.7 billion fine. Nevertheless, the situation is not likely to improve, and Google’s dominance in search will keep extending into other areas whenever Google decides to compete there.

This blog post hopes to give some insight into the impact of Google prioritizing its own funded initiative over existing players in the market, using real numbers from our own small business (which, after 10 years, I am not sure is still right to call a startup) 😊

But before I do, let me give you a super quick overview of what my company, Speedchecker, does: we provide an easy-to-use and accurate speed test of your internet connection. Over almost 10 years we have run over 300 million tests and provided speed test technology for many other companies.


Launching Google speed test

The story begins about a year ago, when Google launched its own speed test featured directly in search results in the USA, followed by other English-speaking markets. We knew that the UK launch would happen eventually, but we did not know when.

Luckily for us, Google picked an open-data solution for running its speed test: M-Lab. M-Lab was founded by internet visionaries such as Vint Cerf and is funded by a consortium of companies that includes Google. This choice enabled us to analyze the rollout and provided the real numbers for this blog post.

M-Lab speed test data is available for everyone to download through Google Cloud (of course). By analyzing the volume of data for each day, we could produce the following chart:





(Number of speed tests from UK in M-Lab dataset on random days in May, June and July 2017)

As we can see, Google started the rollout on the 15th of May. We can also observe that Google did not roll the feature out to all UK users at once; over the course of several days it was introduced to more and more users in the search results.
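As an illustration of this kind of analysis, here is a minimal sketch (not the query we actually ran) of tallying daily test volumes once M-Lab rows have been exported. The field names `log_time` and `country_code` are assumptions for the example, not the real M-Lab schema:

```python
from collections import Counter
from datetime import datetime

def daily_test_counts(rows, country="GB"):
    """Count speed tests per calendar day for one country.

    Each row is assumed to be a dict with an ISO-8601 'log_time'
    and a 'country_code' field (hypothetical field names).
    """
    counts = Counter()
    for row in rows:
        if row["country_code"] != country:
            continue
        day = datetime.fromisoformat(row["log_time"]).date()
        counts[day] += 1
    return dict(counts)

# Tiny synthetic sample, not real M-Lab data:
sample = [
    {"log_time": "2017-05-15T09:00:00", "country_code": "GB"},
    {"log_time": "2017-05-15T10:30:00", "country_code": "GB"},
    {"log_time": "2017-05-16T11:00:00", "country_code": "GB"},
    {"log_time": "2017-05-16T11:05:00", "country_code": "DE"},
]
print(daily_test_counts(sample))
```

In practice we queried the dataset through BigQuery rather than aggregating rows locally, but the grouping logic is the same.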


Impact on visitor numbers to our website

Here is how the search results look in the UK for one of our main keywords:

(Screenshot: Google search results for “broadband speed test” in the UK)



As we can see, the Google speed test occupies significant space on the first page and pushes all other results down.

Here is a chart plotting user visits from Google (and from Bing, for comparison) before and after the Google speed test release. We can observe that the drop in visitors begins after Google launches the speed test.



(Chart: daily visits from Google vs. Bing search, before and after the launch)



To better illustrate that the drop is due to the ranking change and not seasonal factors, here is the zoomed-in data from Bing, which shows no meaningful change before or after the 15th of May, when Google launched.



(Chart: daily visits from Bing search, zoomed in)



Looking at the average drops, we can estimate a loss of about 5,000 visits per day out of 25,000 Google visits. Overall, that is about a 20% traffic loss from being moved from position 1 to position 2.

Compared to industry-standard data, e.g. from RankScience, a 20% drop is quite a good result; it could have been worse.






We can also extract some quite interesting insights from the M-Lab dataset, as it contains user IP addresses as well. If we cross-reference the user IPs seen in the M-Lab data with our internal data, we can see that about 5% of users use both services. We can only speculate whether that is a good result or not; for the user it is certainly useful to get information from two different sources and decide which is more relevant.
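The cross-reference itself is essentially a set intersection. A sketch, assuming both datasets have already been reduced to plain IP strings (the sample addresses below are documentation addresses, not real users):

```python
def overlap_share(mlab_ips, our_ips):
    """Share of our users whose IP also appears in the M-Lab dataset."""
    ours = set(our_ips)
    if not ours:
        return 0.0
    shared = ours & set(mlab_ips)
    return len(shared) / len(ours)

# Illustrative values only:
mlab = ["198.51.100.1", "198.51.100.2", "203.0.113.9"]
ours = ["198.51.100.1", "192.0.2.7", "192.0.2.8", "203.0.113.9"]
print(f"{overlap_share(mlab, ours):.0%}")
```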


From our perspective we are quite happy that the Google threat is not as serious as we originally thought. Losing 10% of our overall traffic (and 20% of our Google traffic) will have an impact on our bottom line, but we will survive. Luckily, we provide other features that users appreciate, such as mobile apps, stored results, mapping, comparisons and more. This, I believe, contributed heavily to such a small drop. I have no doubt many users will favor the convenience of getting a result with one click directly in the search results over going to a third-party site such as ours. Unfortunately, there is nothing we can do to compete with that and stay in business at the same time.

With Google favoring its own speed test, the M-Lab dataset is growing at a rate of almost 1 million results per day, and in less than a year it will have served as many users as we reached in the last 10 years. That is the power of Google’s search dominance.


API Changes: State targeting for the USA, new API limits and web reputation check


Added support for targeting probes with state-level accuracy

We have added an optional StateCode input parameter to the following methods:


Currently only probes located in the USA can be targeted at state level. The API response will also contain StateCode (e.g. VA) and State (e.g. Virginia) if you request CountryCode=US.
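For example, once results are decoded, they can be filtered client-side by the new field. This is a sketch against an illustrative response fragment (only the StateCode/State fields come from the description above; the helper and the rest of the structure are hypothetical):

```python
def probes_in_state(results, state_code):
    """Pick out probe results for one US state.

    Each result is assumed to carry the StateCode / State fields
    returned when requesting CountryCode=US.
    """
    return [r for r in results if r.get("StateCode") == state_code]

# Illustrative response fragment:
results = [
    {"ID": 1, "StateCode": "VA", "State": "Virginia"},
    {"ID": 2, "StateCode": "CA", "State": "California"},
    {"ID": 3, "StateCode": "VA", "State": "Virginia"},
]
print([r["ID"] for r in probes_in_state(results, "VA")])  # [1, 3]
```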

Added new limits on the number of probes you can request tests from

To avoid abuse, we have added limits on how many probes you can request results from at the same time in one API call. This is controlled by the probeLimit parameter, which is required.

Here are the maximum values allowed for different tests.

PING – 100
DIG – 100

If you require those limits to be lifted, please contact us.
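If you need results from more probes than the limit allows, one client-side option is to split the work into several calls. A sketch of the batching arithmetic (the helper name is ours, not part of the API):

```python
def batch_probe_requests(probe_count, probe_limit=100):
    """Split a large probe request into batches respecting probeLimit.

    Returns the probeLimit value to send with each successive API call.
    """
    batches = []
    remaining = probe_count
    while remaining > 0:
        size = min(remaining, probe_limit)
        batches.append(size)
        remaining -= size
    return batches

print(batch_probe_requests(250))  # [100, 100, 50]
```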

Added web reputation services for HTTP tests

To avoid abuse and to protect the probes from accessing risky content, we have added whitelists and blacklists that are checked for all HTTP tests (e.g. HTTP GET or Page load). The following errors can occur:

StatusCode: 221
StatusDescription: Destination blocked
Message: “This URL destination has been marked as risky by our web classification engine. We are unable to test this URL.”

StatusCode: 222
StatusDescription: Destination not classified
Message: “This URL destination has not been classified by our web classification engine. We are unable to test this URL at this time. Please contact support to whitelist this URL.”


API improvements


We have deployed a few new improvements to the API.

Changes to default timeout and commandTimeout

We have changed the default timeout and commandTimeout parameters to maximize the number of returned results and minimize timeouts on probes (of course, you can set any timeout you want by specifying those parameters).

timeout = 8000
commandTimeout = 1000

timeout = 4000
commandTimeout = 1000

timeout = 52000
commandTimeout = 1000

timeout = 8000
commandTimeout = timeout - 3000

timeout = 45000
commandTimeout = timeout - 3000

Global error handler: supported errors in the HTTP header

To better understand the results returned from the API, we now return more information about possible API problems directly in the HTTP header.

Some of the new Status Codes:
200 – OK
500 – Unexpected Error from the API side
520 – Errors related to empty results

If the HTTP status code is anything other than 200, we also respond with a [Message] element in the response body that provides a more user-friendly description of the error.
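A client can branch on the status code and fall back to the [Message] element. A minimal sketch against an already-decoded response body (the helper and the sample message text are illustrative):

```python
def check_api_response(status_code, body):
    """Raise a readable error for non-200 API responses.

    For any status other than 200 the body is expected to carry
    a Message element with a user-friendly description.
    """
    if status_code == 200:
        return body
    message = body.get("Message", "Unknown API error")
    raise RuntimeError(f"API error {status_code}: {message}")

# 520 signals empty-result errors; the message text here is made up:
try:
    check_api_response(520, {"Message": "No probes returned any results"})
except RuntimeError as err:
    print(err)
```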

Changes in the data output model

We now return only the objects that are actually in use. Objects such as PING / HTTPGET etc. are hidden for DIG results, and the same rule applies to the other types of tests.

For example, for a DIG test, the old response:

"StartDIGTestByCountryResult": [ { "ASN": { "AsnID": "AS12741", "AsnName": "Netia SA" }, "CDNEdgeNode": null, "Country": { "CountryCode": "PL", "CountryFlag": "http:\/\/\/bsc-img-country-logos\/pl.png", "CountryName": "Poland" }, "DIGDns": [ { "AdditionalInformation": null, "Destination": "", "QueryTime": "13", "Status": "NoError" } ], "DateTimeStamp": "\/Date(1435158434263+0000)\/", "HTTPGet": null, "ID": 31658310, "Location": { "Latitude": 50.320525, "Longitude": 19.132328 }, "LoginDate": null, "Network": { "LogoURL": "https:\/\/\/bsc-img-providers\/logo_0039_ef7487217cc7_60.jpg", "NetworkID": "226", "NetworkName": "Netia SA" }, "PAGELoad": null, "Ping": null, "PingTime": null, "TRACERoute": null } ]


The new response:

"StartDIGTestByCountryResult": [ { "ASN": { "AsnID": "AS197480", "AsnName": "SerczerNET Malgorzata Nienaltowska" }, "Country": { "CountryCode": "PL", "CountryFlag": "http:\/\/\/bsc-img-country-logos\/pl.png", "CountryName": "Poland" }, "DIGDns": [ { "Status": "OK", "Destination": "", "QueryTime": "1" } ], "DateTimeStamp": "\/Date(1435150848253+0000)\/", "ID": 17854019, "Location": { "Latitude": 53.29718, "Longitude": 23.28092 }, "Network": { "LogoURL": null, "NetworkID": "3362", "NetworkName": "SerczerNET Malgorzata Nienaltowska" } } ]

New properties available for probe results

For all probe results we have added a new property: Status

Status: OK, Timeout , TtlExpired (+ potentially other statuses added in the future)

For Ping and Traceroute we have also added a PingTimeArray property, which can be useful for calculating standard deviations and other statistics over the individual pings that were performed.

Ping response

"Ping": [ { "Status": "OK", "Destination": "", "Hostname": "", "IP": "", "PingTime": 12, "PingTimeArray": [ "13", "11", "12" ] } ]
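As an example of such a calculation, the mean and standard deviation of the individual pings can be computed straight from PingTimeArray (note the values arrive as strings):

```python
from statistics import mean, stdev

def ping_stats(ping_result):
    """Mean and sample standard deviation of the individual ping times."""
    times = [int(t) for t in ping_result["PingTimeArray"]]
    return mean(times), stdev(times)

# The Ping response fragment above:
ping = {"Status": "OK", "PingTime": 12, "PingTimeArray": ["13", "11", "12"]}
avg, sd = ping_stats(ping)
print(f"avg={avg} ms, stdev={sd} ms")  # avg=12 ms, stdev=1.0 ms
```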

Traceroute response

"TRACERoute": [ { "Destination": "", "HostName": "", "IP": "", "Tracert": [ { "HostName": "", "IP": "", "PingTimeArray": [ "1", "0", "0" ], "Ping1": "1", "Ping2": "0", "Ping3": "0", "Status": "OK" }, { "HostName": "", "IP": "", "PingTimeArray": [ "42", "31", "33" ], "Ping1": "42", "Ping2": "31", "Ping3": "33", "Status": "OK" }, { "HostName": "", "IP": "", "PingTimeArray": [ "12", "14", "11" ], "Ping1": "12", "Ping2": "14", "Ping3": "11", "Status": "OK" }, { "HostName": "", "IP": "", "PingTimeArray": [ "11", "11", "11" ], "Ping1": "11", "Ping2": "11", "Ping3": "11", "Status": "OK" }, { "HostName": "", "IP": "", "PingTimeArray": [ "12", "11", "13" ], "Ping1": "12", "Ping2": "11", "Ping3": "13", "Status": "OK" } ] } ]


New parameters available for test methods


We have deployed a few new improvements to the API.

Support for new parameters

PING Test methods

ttl – Max allowed hops for packet (default=128, max=255)
bufferSize – Size of the Packet data (default=32, max=65500)
fragment – Fragmentation of sending packets. (default=1)
resolve – IP addresses will be resolved to domain names for each hop (default=0)
ipv4only – Force the use of IPv4. If no IPv4 address is returned, the test returns an error (default=0)
ipv6only – Force the use of IPv6. If no IPv6 address is returned, the test returns an error (default=0)

DIGDNS Test methods

commandTimeout – DNS query timeout in milliseconds (default=1000ms)
retries – Total number of retries (default=0)

TRACEROUTE Test methods

maxFailedHops – Stop the command execution after maximum errors in a row (e.g. stop after 5 ping timeouts, default=0)
ttlStart – First hop from which the traceroute should start (default=1)
bufferSize – Size of the packet data (default=32, max=65500)
fragment – Fragmentation of sending packets (default=1)
resolve – IP addresses will be resolved to domain names for each hop (default=0)
ipv4only – Force the use of IPv4. If no IPv4 address is returned, the test returns an error (default=0)
ipv6only – Force the use of IPv6. If no IPv6 address is returned, the test returns an error (default=0)

PAGELOAD Test methods

commandTimeout – Maximum allowed time for pageload in milliseconds

HTTPGET Test methods

commandTimeout – Maximum allowed time to send HTTP GET request and receive the response in milliseconds
maxBytes – Max bytes to download from response stream

Changes in how timeouts are handled by the API

Each method has 2 main time out parameters:

timeout – Amount of time in milliseconds within which the API responds with a result
commandTimeout – Timeout for the actual test command. For most tests, commandTimeout obviously correlates with the total timeout of the API method. However, for Ping and Traceroute this is not the case: Ping methods by default execute 3 ping commands, and Traceroute executes even more, depending on parameters such as TTL, count, etc.
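A practical rule of thumb is therefore to budget commandTimeout per individual command rather than per method call. A sketch of that arithmetic; the 3000 ms margin mirrors the commandTimeout = timeout - 3000 defaults above, but treating it as a fixed overhead is our assumption:

```python
def min_method_timeout(command_timeout, attempts, overhead=3000):
    """Lower bound for the method-level timeout in milliseconds.

    attempts: how many individual commands run, e.g. 3 pings for a
    default Ping test, or up to hops * pings-per-hop for a traceroute.
    overhead: scheduling/network margin (an assumed value, not from the API).
    """
    return command_timeout * attempts + overhead

# 3 pings at commandTimeout=1000 ms:
print(min_method_timeout(1000, 3))  # 6000
```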