Speedchecker partners with DD-WRT to build world’s largest monitoring network


Speedchecker, a private company running large-scale software-based monitoring networks, and DD-WRT, the most popular open-source router firmware, have announced a partnership that aims to build the world's largest hardware-probe monitoring network.


Under the terms of the partnership, DD-WRT has started including the Speedchecker Probe client within the DD-WRT firmware. DD-WRT users can opt in to the Speedchecker network and get new features for their routers in exchange for providing bandwidth for Internet measurements.


[Image: Wi-Fi Speedchecker feature for DD-WRT]


As Christian Scheele from the DD-WRT development team said:


“We are pleased to be part of this partnership, not only to help fund DD-WRT development but also to be part of a project that enables Internet research to be conducted on a large scale across many countries that are currently not represented in existing measurement networks.”


Since the soft launch earlier this year, over 2,000 DD-WRT users have already opted in to the network, enabling Speedchecker to cover over 80 countries with its Internet measurements. Speedchecker offers access to its network to clients such as Microsoft and Oracle, as well as to researchers in organizations such as LACNIC, which publish Internet topology research.


Janusz Jezowicz, CEO of Speedchecker Ltd, noted:


“Historically, companies have had to choose between running measurements from software probes, with their wider coverage but lower accuracy, or relying on hardware probes with limited coverage. With this partnership we are able to provide global coverage with hardware probes at low cost, because end users run the tests on their own routers rather than on expensive custom hardware.”



Brand new shiny and polished Internet measurement API


After a few months of hard work, we are pleased to announce a new version of our Probe API. We decided to completely rewrite the API specification to apply everything we have learned over the last few years, without breaking API access for our existing user base. We don't plan to sunset the old API version yet, but new clients can no longer sign up for it.

The new version is much better; we have made the following improvements:

Easy to use
Our API is well documented, including a Quickstart guide that will get you up to speed quickly so you can start running your measurements.

Reliable
We have learned a lot of lessons over the years about how to make the API more scalable. We are pleased to say the new API already supports millions of measurements running every day!

Multi-platform
As part of the new API release, we are offering access to our Android probes to all of our users. API users can leverage the increased coverage by testing on all available platforms, or specifically target mobile probes using Platform source targeting. We will soon have an announcement about hardware probes, which will be supported in the same way, without the need for code changes.

Great level of support
Being a small company, we have always taken personal care of each client and made sure our support team provides expert advice on internet performance measurements to assist clients in fulfilling their goals.

Transparent pricing
Our API access starts from 49 EUR per month.
Please check our pricing here.

API Features

On top of the improvements mentioned above, the API features also got an upgrade. Based on feedback from our users, we have improved the API methods to include:

Improved probe targeting

Our new API offers many more options for selecting probes for measurements. Users can select probes by location (e.g. city, country, or lat/long coordinates), network (network name, ASN, or IP prefix), and more.

More information about probes
Using ProbeInfo properties, API users can specify which information about the probe should be returned with the measurement results. We have added new properties such as DNS resolver IP, screen size (useful for page-load tests), user connection type, and more.
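To make this concrete, here is a minimal sketch of what such a request could look like. The endpoint, field names, and values below are illustrative only, not the actual ProbeAPI specification; please refer to the Quickstart guide for the real schema.

```python
import requests

API_BASE = "https://api.example-probeapi.com/v2"  # illustrative base URL
API_KEY = "your-api-key"

# Ask for a ping measurement from Android probes in Germany on AS3320,
# requesting extra ProbeInfo properties alongside the results.
payload = {
    "type": "ping",
    "target": "www.example.com",
    "probeFilter": {"country": "DE", "asn": 3320},  # probe targeting
    "platform": "android",                          # platform source targeting
    "probeInfoProperties": [                        # per-probe fields to return
        "dnsResolverIp", "connectionType", "screenSize",
    ],
    "count": 50,
}

resp = requests.post(f"{API_BASE}/measurements", json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result)
```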

Extended tests and new metrics
Our API supports existing measurement types such as Ping, DNS, Traceroute, HTTP, and Webpage load. We have also added a new measurement type: a video streaming test.

Further changes to the API endpoints:

Ping
We added an option to run TCP ping.

DNS DIG
The DIG command now returns full DNS query information.

HTTP
Metrics such as TTFB, TotalLatency, DownloadedBytes, and TCP connect time are now available.
An HTTP GET measurement can also return the full HTTP headers and body. This is very useful for many scenarios, such as finding out which CDN POPs are being accessed, CDN cache HIT/MISS analysis, and keyword monitoring in the HTTP response. The possibilities are endless!
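As a small illustration of the cache HIT/MISS scenario, here is a hedged sketch that classifies measurements by the cache-status header. The result structure is hypothetical, and the header name varies by CDN:

```python
# Classify CDN cache HITs/MISSes from the HTTP headers returned with an
# HTTP GET measurement. The `results` structure is hypothetical; the
# cache-status header name varies by CDN (X-Cache, CF-Cache-Status, ...).
from collections import Counter

def cache_status(headers):
    value = (headers.get("X-Cache") or headers.get("CF-Cache-Status") or "").upper()
    if "HIT" in value:
        return "HIT"
    if "MISS" in value:
        return "MISS"
    return "UNKNOWN"

results = [  # one dict per probe measurement (sample data)
    {"probeCountry": "DE", "headers": {"X-Cache": "HIT from edge-fra1"}},
    {"probeCountry": "BR", "headers": {"X-Cache": "MISS from edge-gru2"}},
]

by_country = Counter((r["probeCountry"], cache_status(r["headers"])) for r in results)
for (country, status), n in sorted(by_country.items()):
    print(f"{country}: {status} x{n}")
```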

Webpage load
We offer all the web performance metrics you would expect, and we have added a couple more, such as the number of requests the page loaded as well as the full HAR file. The HAR file is very useful for getting a complete picture of page-load performance and allows you to construct a waterfall model, which we use in our CloudPerf product.
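Because HAR is a standard JSON format, the file can be post-processed with a few lines of code. Here is a minimal sketch (the file name is a placeholder) that turns a HAR file into a simple text waterfall:

```python
# Print a crude text waterfall from a HAR file: per resource, the start
# offset from the first request plus the total load duration.
import json
from datetime import datetime

def parse_ts(s):
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

with open("pageload.har") as f:  # placeholder file name
    entries = json.load(f)["log"]["entries"]

t0 = min(parse_ts(e["startedDateTime"]) for e in entries)
for e in sorted(entries, key=lambda e: e["startedDateTime"]):
    offset_ms = (parse_ts(e["startedDateTime"]) - t0).total_seconds() * 1000
    print(f"{offset_ms:8.0f} ms  +{e['time']:7.0f} ms  {e['request']['url'][:60]}")
```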

Free trial

We hope all the improvements we have made will encourage you to sign up for our 7-day FREE trial.


How to pick the best server based on Latency and Throughput


Choosing an optimal server location isn't necessarily an easy task. Managing costs and selecting an appropriately sized hosting package is just one part of the deal. Just as important as server capacity, if not more so, is finding out how your services behave from your clients' perspective. Whether you are choosing among hosting providers or deciding in which location to deploy your server, you need to measure latency and throughput. Insight into these two metrics can improve your service and lead to cost-effective solutions.

In this article, we look at these two basic aspects of connectivity: latency and throughput. We discuss their behavior and show you how to use CloudPerf to compare servers and choose an optimal one for deploying a web page.

As we know, TCP performance is naturally limited by latency. For this reason, the first aspect to examine in a server's performance is latency, and only then throughput. No matter how big a link may be, if your users experience high latency, they cannot achieve high throughput. The next graph shows the interdependency of throughput vs. latency.

[Chart: TCP throughput vs. latency]

Here we observe an inverse curve, which in practical terms means that, especially in the 1-30 ms range, every millisecond of latency has a heavy effect on the maximum achievable throughput. With this in mind, the intuitive notion of choosing a server as close as possible to your clients becomes very clear, although even the smallest differences in latency should be taken into account.
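A quick back-of-the-envelope calculation shows why: with a fixed TCP receive window, achievable throughput is bounded by the window size divided by the round-trip time. Assuming, for illustration, a 64 KB window with no window scaling:

```python
# Upper bound on TCP throughput: window / RTT.
WINDOW_BYTES = 64 * 1024  # illustrative 64 KB receive window

for rtt_ms in (1, 5, 10, 30, 100):
    mbps = WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:3d} ms -> at most {mbps:7.1f} Mbit/s")
```

Going from 1 ms to 10 ms of RTT already cuts the ceiling by a factor of ten, which is exactly the steep left side of the curve above.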

Let's say we have an account with a cloud provider and we want to deploy a service for European users. Take DigitalOcean as an example, where we can deploy a VM in Amsterdam, London or Frankfurt, among other locations. We deploy the same test service in each of those locations. Then we set up a Static Object measurement for them in CloudPerf pointing to a 100 KB test file, plus a Ping measurement to each server. We make sure to select the countries of interest and start measuring: one measurement per minute for one hour.

The following table shows the latencies obtained from each location to each of the three servers. The lowest latencies have been highlighted in yellow.

[Table: latencies from each country to the Amsterdam, London, and Frankfurt servers]

We can observe that, depending on the countries we are serving, we can expect very different results for each server. Looking at all countries together, however, Amsterdam and Frankfurt have the lowest latencies in general. Let's confirm that with the graph:

[Chart: latency over time per server]

That covers latency, but what about throughput? Given similar enough latencies, as in this case, the effective TCP throughput may be affected by other factors, so we look at the download speeds achieved from each server; this time the highest throughput is highlighted in yellow.

[Table: download speeds from each country to each server]

Here we can clearly observe that clients in Austria and Germany, whose ping values favored the Frankfurt server, actually achieve higher throughput when served from Amsterdam. Let's take a look at the graph:

[Chart: throughput over time per server]

Now we have confirmed that our Amsterdam server will show the best performance. Of course, results will vary depending on which countries we focus on, but we can clearly see a general advantage in using Amsterdam as a single location for this selection of countries.


From friend to foe: Lessons learned from Google becoming our competitor


Every startup is rightly afraid of new competition, especially when it comes from Internet giants like Google. Stories of Google entering a market and dominating it within a few years are not new (think of mobile operating systems and Android, or more recently the browser wars and Chrome). In some cases Google gets a slap on the wrist or the occasional $2.7 billion fine. Nevertheless, the situation is not likely to improve, and Google's dominance in search will keep growing into other areas if Google decides to compete there.

This blog post hopes to give an insight into the impact of Google prioritizing a Google-funded initiative over existing players in the market, using real numbers from our own small business (which I am not sure it is still right to call a startup after 10 years) 😊

But before I do so, let me give you a super quick overview of what my company, Speedchecker, does: we provide an easy-to-use and accurate speed test of your Internet connection. Over almost 10 years we have run over 300 million tests and provided speed-test technology for many other companies.


Launching Google speed test

The story begins about a year ago, when Google launched its own speed test featured directly in search results in the USA, followed by other English-speaking markets. We knew the UK launch would happen eventually, but we did not know when.

Luckily for us, Google picked an open-data solution for running its speed test: the noble M-Lab. M-Lab was founded by Internet visionaries such as Vint Cerf and is funded by a consortium of companies including Google. This choice enabled us to analyze the rollout and provided the real numbers for this blog post.

M-Lab speed test data is available for everyone to download through Google Cloud (of course). By analyzing the volume of data each day, we could produce the following chart:

[Chart: number of speed tests from the UK in the M-Lab dataset on sampled days in May, June, and July 2017]
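For anyone who wants to reproduce this kind of daily count, the M-Lab data can be queried in BigQuery. The sketch below is only an outline: the table and column names are illustrative, so check M-Lab's documentation for the current schema.

```python
# Count UK tests per day in the M-Lab dataset via BigQuery.
# Table and column names are illustrative; consult M-Lab's schema docs.
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT DATE(log_time) AS day, COUNT(*) AS tests
FROM `measurement-lab.ndt.web100`      -- illustrative table name
WHERE client_country = 'GB'
  AND DATE(log_time) BETWEEN '2017-05-01' AND '2017-07-31'
GROUP BY day
ORDER BY day
"""
for row in client.query(query).result():
    print(row.day, row.tests)
```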

As we can see, Google started the rollout on the 15th of May. We can also observe that Google did not roll the feature out to all UK users at once; over the course of several days, it was introduced to more and more users in the search results.


Impact on visitor numbers to our website

Here is how the search results look in the UK for one of our main keywords:

[Image: Google search results for “broadband speed test”]


As we can see, the Google speed test occupies significant space on the first page and pushes all other results down.

Here is a chart plotting user visits from Google (and from Bing, for comparison) before and after the Google speed test release. We can observe that the drop in visitors begins right after Google launches the speed test.

[Chart: daily visits from Google vs. Bing, before and after the launch]


To better illustrate that the drop is due to the ranking change and not seasonal factors, here is zoomed-in data from Bing, which shows no meaningful change before or after the 15th of May, when Google launched.

[Chart: daily visits from Bing (zoomed in)]


Looking at the average drops, we can estimate a loss of about 5,000 visits per day out of 25,000 daily visits from Google. Overall, that is about a 20% traffic loss from being moved from position 1 to position 2.

Compared to industry data, e.g. from RankScience, a 20% drop is quite a good result; it could be worse.

[Image: RankScience data on traffic change by ranking position]


The M-Lab dataset also offers quite interesting insights, as it contains user IP addresses. If we cross-reference the user IPs seen in the M-Lab data with our internal data, we find that about 5% of users use both services. We can only speculate whether that is a good result or not; for the user it is certainly useful to get information from two different sources and decide which is more relevant.
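The cross-referencing step itself is a simple set intersection. A minimal sketch, assuming one IP per line in each (hypothetical) export file:

```python
# Intersect client IPs seen in M-Lab data with IPs from our own logs.
def load_ips(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

mlab_ips = load_ips("mlab_uk_ips.txt")         # hypothetical export
our_ips = load_ips("speedchecker_uk_ips.txt")  # hypothetical export

shared = mlab_ips & our_ips
print(f"{len(shared)} IPs appear in both datasets "
      f"({len(shared) / len(our_ips):.1%} of our users)")
```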

Conclusion

From our perspective, we are quite happy the Google threat is not as serious as we originally thought. Losing 10% of our overall traffic (and 20% of our Google traffic) will have an impact on our bottom line, but we will survive. Luckily, we provide other features that users appreciate, such as mobile apps, stored results, mapping, comparisons, and more; this, I believe, contributed heavily to such a small drop. I have no doubt many users will favor the convenience of getting a result with one click directly in the search results over going to a third-party site such as ours. Unfortunately, there is nothing we can do to compete with that and stay in business at the same time.

With Google favoring its own speed test, the M-Lab dataset is growing at a rate of almost 1 million results per day; in less than a year it will serve as many tests as we have accumulated over the last 10 years. That is the power of Google's search dominance.


Are ISPs still throttling Netflix?




In the recent past we have seen massive growth in online streaming services, with Netflix one of the leading brands in this new market. Netflix has built its own CDN (Netflix Open Connect) to support its worldwide expansion. This resulted in rapid growth in bandwidth consumption by a considerable number of users, intensifying year after year along with Netflix's popularity. Netflix has undergone a massive structural transformation in the way it delivers content, moving from a monolithic application design with some external CDN support to building its own CDN around the world. Currently Netflix Open Connect has 233 server locations across all 6 continents. Its endpoints are located primarily at IXPs, and within some ISPs as well; a model reminiscent of Google Global Cache, installing caches close to the last mile to deliver specific services.

ISPs have reacted differently across the globe, resulting in some heated discussions about traffic shaping, throttling, and service differentiation, which drew considerable criticism from defenders of net neutrality and consumers alike. Two years after our previous look into this topic, we decided to find out what is happening today and whether any ISPs are showing signs of throttling Netflix. We found that the situation has improved noticeably.

We set up an experiment in which thousands of Speedchecker probes around the world run Netflix's speed test (fast.com) and, right after it, our own speed test using Akamai endpoints. We compared the results of both tests, using our Akamai-based test as the reference, to find out which ISPs show noticeable differences when connecting to Netflix. Due to Netflix's high bandwidth consumption and rapidly growing popularity, adapting to such traffic might pose a challenge for some ISPs.
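Conceptually, the comparison boils down to a per-ISP ratio between the paired results. Here is a minimal sketch with hypothetical sample rows; a median ratio well below 1.0 for an ISP would be a possible sign of Netflix being slowed relative to the reference:

```python
# Compare paired throughput results: fast.com vs. the Akamai-based reference.
from collections import defaultdict
from statistics import median

pairs = [  # (isp, fast.com Mbit/s, reference Mbit/s), hypothetical samples
    ("ISP A", 42.0, 45.1),
    ("ISP A", 38.5, 40.2),
    ("ISP B", 11.3, 48.7),
]

by_isp = defaultdict(list)
for isp, fast, ref in pairs:
    if ref > 0:
        by_isp[isp].append(fast / ref)

for isp, ratios in sorted(by_isp.items()):
    print(f"{isp}: median fast.com/reference ratio = {median(ratios):.2f}")
```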

After running the experiment for 24 hours, we found that the performance differences between fast.com and our reference endpoints in Akamai are broadly equivalent, which fortunately tells us that the general rule seems to be not to throttle Netflix.

[Chart: fast.com vs. Akamai reference, peak vs. off-peak]

We can also observe that the situation still varies notably between countries, with Italy showing the worst performance among the countries where our measurements ran.

[Chart: peak vs. off-peak results by country]

In the following table, we can see the ISPs in which we measured the top 35 highest median speeds.

[Table: top 35 ISPs by median measured speed]

After investigating ISPs in the USA only, we were able to rank the top 10 providers as follows, comparing off-peak and peak traffic hours.

[Charts: top 10 US providers, peak vs. off-peak hours]

In the cases of CenturyLink, Charter Communications, and possibly Time Warner Cable, we can observe a clear disadvantage for Netflix during peak hours. Beyond those, we could not detect other major ISPs showing signs of Netflix throttling in the countries we studied.

In conclusion, we have seen the Netflix situation evolve in positive terms for the consumer. Our test results show that the global tendency is to respect net neutrality. There are still ISPs worldwide that have not joined the Netflix Open Connect CDN and therefore cannot profit from its traffic delivery benefits. Others simply slow down altogether during peak hours, which reveals difficulties in coping with high traffic demand. So far this year, Netflix's global launch seems to have gone peacefully and without major incidents. The market has applied its share of pressure on the industry, pushing it to develop high-performance infrastructure for the end consumer, while clearing the path for other bandwidth-hungry applications to enter the market with lower technical and legal barriers.




Speedchecker @ IETF 96




This year, the 96th edition of the IETF meeting took place in the vibrant city of Berlin. We didn't want to miss out on this important gathering and decided to contribute to one of its workshops on network measurement: NMRG.


We presented a study made in cooperation with LACNIC Labs, in which the Latin America and Caribbean region was measured using Speedchecker ProbeAPI over the course of a year, making it possible to map the region in terms of connectivity. This allowed us to identify clusters of countries that are better connected among themselves than to the rest of the region. Connectivity had to be defined in a way that permitted drawing interesting and useful conclusions about the situation in the LAC region.


[Screenshot: slide from the presentation]


The video of our presentation at IETF 96 can be found here; our talk starts at minute 43:00. The original blog post by Agustín Formoso from LACNIC Labs can be found here.

The study was received with great interest by both the chairs and the audience at the workshop, spawning interesting discussions during and after the session. We are happy to have been able to participate and discuss our measurement experiences with the IETF/IRTF community. We collected valuable suggestions and comments which will surely help us not only point our measurement techniques in the right direction, but also develop our products and regional presence in a way that favors coverage and scientific precision.

IETF 96 was a very nice experience, allowing us to meet important actors of this worldwide community in person and to keep up to date with the relevant decisions, norms, and standards currently under discussion.


Announcing new Feature: Page-Load Waterfall Analysis


Here at Speedchecker, we are aware of our customers' requirements, and we strive to build products that are not only robust and precise but also offer features that help everybody diagnose their site's performance in greater detail, while retaining the ease of use and clarity our customers love. Today we proudly announce a new feature in CloudPerf: a detailed view of all your measurements. It is especially useful for frontend developers who need to check whether a webpage's loading time is being impacted by slow external resources such as third-party tracking scripts or assets.


Inside your benchmark's results page, just move the mouse pointer along the graph and click the position you would like to inspect more closely. The pop-up window will stay after your click, and you can open the detailed view by clicking “Show Detail”.

[Screenshot: benchmark results graph with pop-up]

In this view you can see a panel on the left side, where you can choose the benchmark you would like to inspect, the country of the measurements, the destination website, and the time range for which the results will be shown.

[Screenshot: detailed view panel]
On top you can see a timeline where, after selecting the time range, you can choose precisely the time of day you want to examine. Directly underneath, the list of measurements made at that time will appear, showing the details of every single one.
[Screenshot: measurement list]


If you are running a Page-Load test, be sure to check the “Collect resource timing data” box in its configuration before running it. If you did, this new feature will let you examine a very useful waterfall chart showing the loading times of every resource of the site you are measuring, for every single measurement in the selected time range. This way you can follow the behaviour of your services in detail over time.
[Screenshot: waterfall chart]
We hope you enjoy this new addition to CloudPerf and expect it will help you gain greater insight into monitoring your resources.

Sign up now!


Cloud vs CDN… Are CDNs always an improvement?


Over the last few years we have witnessed an explosive evolution in how the Internet is structured. Although traditional hosting and content delivery is still the norm in most cases, its basic function has been enhanced by CDNs and by the advantages of cloud computing whenever we need to optimize a service's presence.


On one hand, CDNs have built themselves a reputation of speed, power, “freedom”, space, air... which is reflected in their market names: Fastly, Highwinds, CloudFlare, Skypark, Cachefly, etc. You get the idea. In many cases the technical results do match the marketing image; in most situations adding a CDN will probably benefit your site... but is it always like that?


On the other hand, cloud computing services have improved and matured a great deal, and they now either offer their own CDN connectivity or make it easy to hook one up. We have found that even without a CDN, some cloud services work at CDN-like speeds, and adding one does not seem to bring a noticeable speed boost.


Take the case of DigitalOcean as the cloud provider. Famed for its simple, clean environment and low costs, it showed remarkable speeds in the UK when measured with our tool CloudPerf.


In this graph we can observe how DigitalOcean compares to Google and Amazon (without CDN) for a single 1 MB object download.

[Chart: HTTP GET, DigitalOcean vs. Google vs. Amazon, UK]

We set up a simple web page to compare the time it takes to load a whole page.

[Chart: page load, DigitalOcean vs. Google vs. Amazon]
We can already observe a clear difference from both giants, Google and Amazon. Both offer their own CDNs, which will speed up their services, even locally. But why not get a cheaper and already fast solution?


Especially for local audiences, as in this UK case, this could be a very cost-effective way of delivering low-latency content. So why isn't it more common? One factor could be DigitalOcean's simple, developer-oriented approach, which suits somebody who knows how to do everything by hand. The benefits of a big multi-service cloud provider like Google or Amazon are something you also pay for: interconnected services, easy application-level management, automatic respawning of faulty machines, and generally easier administration, among many other interoperable options, all need to be taken into account, especially if you have relatively complex machinery to operate.


You could build a very complex network on DigitalOcean as well, but you would need to configure everything yourself, and management ends up being more tedious. In that sense, depending on your budget, paying less on one side can sometimes result in higher operational costs. Keep in mind that we are effectively comparing a VPS solution against full cloud service providers.


On the other side, DigitalOcean is an example of how the cloud is becoming more and more affordable, to the point where it competes with traditional web-hosting prices and still delivers high performance. You can literally have a new server up and running in less than a minute, or maybe five minutes if you're new to it. Since there isn't much to fiddle with, this very straightforward approach will spawn one or more new servers almost instantly, with public IP addresses and pre-installed keys for remote SSH login, starting from $5 a month for a basic VM running in the cloud.


Even if you already use another cloud provider to run your machines, trying this out won't hurt at all. Hooking up a CDN afterwards is very simple, if you even need one: since the performance of the service itself is already on par with CDNs, it is worth thinking twice about whether one is truly beneficial.


This is a point we would like to stress: a CDN by itself may or may not accelerate a service. Deciding which CDN to use, and whether it fits your own user distribution, is a complex question that requires a well-researched, individual answer for each case. Having stated that CDN is not synonymous with “higher speed”, we would like to ask again: why isn't this option more popular? Maybe it is the already established usage of a cloud platform, which makes it convenient to just spawn a machine and control everything from the same place. Maybe adopting another provider is too much extra paperwork, or maybe the public simply isn't aware of the option.


Let’s take a look at the following comparisons, where CloudPerf measured DigitalOcean against multiple CDNs in the UK:

[Chart: HTTP GET, DigitalOcean (Origin) vs. CDNs, UK]

If we compare DigitalOcean's performance in the UK (labeled “Origin” here) directly against the CDNs, we find that it is at least on par with most of them.

[Chart: page load, DigitalOcean vs. all CDNs]

And if we filter out most of them and look only at the CDNs performing closest:

[Chart: latency, closest-performing CDNs]

We find that it performs even better than Highwinds and Skypark, while CloudFlare and CloudFront show better averages.

[Chart: page load, closest-performing CDNs]

So, looking at the example shown here, if your audience is near any DigitalOcean location, you are probably not going to need a CDN to speed up your service. Nevertheless, depending on the solution you adopt, you could still benefit from other CDN features, such as added SSL security and, in many cases, easy scalability without having to resize machines.


When we compare prices, the picture gets even more interesting. Let's take DigitalOcean's $20/month package: you get a generous VM with 3 TB of data transfer included. That's $0.03 per hour and $0.0067 per GB, together in one price. A similar configuration in GCP costs approximately $28.08 just for the VM plus ~$460.80 for 3 TB of traffic, which makes ~$488.88 a month. In AWS, a similar configuration costs roughly ~$40.32 for the VM and ~$476.16 for 3 TB of traffic; that's ~$516.48. So DigitalOcean at $20 a month gives you the equivalent of a ~$500 investment on the other platforms.


What about the bigger VMs? The biggest plan DigitalOcean offers is $640 a month for a very capable machine with 9 TB of data transfer included. A similar setup in GCP costs ~$449.28 ($0.624/hr) plus $1,024 for 9 TB of traffic, for a total of ~$1,473.28. In AWS you can choose either a $0.528/hr instance, a less capable machine but similar in price, or a $1.056/hr instance closer to the machines used in this comparison. 9 TB of data transfer costs $1,428.48, and running the VM costs either $380.16/month or $760.32/month, giving a total of ~$1,808.64 for the smaller machine or ~$2,188.80 for the bigger one. In any case, expect to pay around $2,000 a month, all without a CDN. To make things simpler, if we assume an approximate price of $0.10 per GB for a CDN, you would have to put $300 or $900 on top of your existing hosting costs to accelerate those 3 TB or 9 TB of data, respectively.
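For transparency, here is the arithmetic behind the totals quoted above, using the component figures from the text (cloud pricing changes constantly, so treat these as a snapshot):

```python
# Monthly totals = VM cost + traffic cost, using the figures quoted above.
plans = {
    "DigitalOcean $20 plan (3 TB incl.)":  (20.00, 0.00),
    "GCP small VM + 3 TB":                 (28.08, 460.80),
    "AWS small VM + 3 TB":                 (40.32, 476.16),
    "DigitalOcean $640 plan (9 TB incl.)": (640.00, 0.00),
    "GCP big VM + 9 TB":                   (449.28, 1024.00),
    "AWS $0.528/hr VM + 9 TB":             (380.16, 1428.48),
    "AWS $1.056/hr VM + 9 TB":             (760.32, 1428.48),
}

for name, (vm, traffic) in plans.items():
    print(f"{name:36s} -> ${vm + traffic:8.2f}/month")
```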


Please take into account that these figures are approximate since, as you may very well know, pricing for cloud services and CDNs depends heavily on which resources are used, how, and when. In that sense, this also favours DigitalOcean, which gives you a clear and straightforward fixed pricing structure.


Summing up, we have seen that a simple cloud VPS provider like DigitalOcean can achieve very low latencies locally in the UK. We compared its performance and pricing against Google Cloud and Amazon S3 and saw that DigitalOcean generally performs better than both. Then, comparing DigitalOcean's performance against CDNs serving the same content, CloudPerf reported that the VPS by itself is fast enough in the UK to compete head to head with CDNs. From this perspective, we can only recommend analysing carefully what kind of solution you need to adopt; making that decision without measurements would indeed be contradictory. Using our tool CloudPerf, we discovered particular circumstances in which a large, very important market like the UK can be covered simply by using a modern high-performance cloud VPS service.


Do you want to discover your best options yourself using CloudPerf? Sign up now!


Measure and compare CDNs with CloudPerf


In this tutorial we show you a practical application of CloudPerf: measuring and comparing CDNs. Since the CDN market is now bigger and more diverse than ever, we think it is very important to compare and analyse CDN performance using an independent measurement platform before making any decision. This way you can see and compare for yourself, using the same measurements across all providers of interest.


It is important to take into account that CloudPerf makes last-mile measurements, that is, right where your users are. That is especially useful for contrasting what CDNs or other providers say about their services with what your users are actually experiencing in real time.

Preparing your setup

    •  Make sure you have an account with CloudPerf; if not, you can get a free trial account via this link.
    •  Make a list of all the URLs from your site that you would like to measure. In this example we will use only one URL to compare against four CDNs.
    •  Make sure you compare a copy of the same file across all destinations.
    •  Make sure all your URLs use the same protocol (HTTP or HTTPS).
    •  Set up your test domains for measuring your service with each CDN of interest. You can get trial accounts with most CDNs. You can also set up a series of subdomains for testing different CDNs with your content, so that subdomain1.mysite.com is linked to CDN-1, subdomain2.mysite.com to CDN-2, and so on.
    •  Confirm that all your testing sites use similar DNS configurations; CNAME cascading can affect measurements greatly and lead to unrealistic results (see the sketch below).

In general, keep in mind that for testing purposes it is best to configure all links as similarly as possible. Depending on the options each CDN offers, most of them can also provide test files for comparing their services, and some such test files can be found on the web. Nevertheless, we think measurements are more meaningful if you test directly with your own content, especially if your site mixes cached and uncached content or updates frequently, in which case caching times become more relevant in the equation.
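One way to sanity-check the DNS point above is to compare how many CNAME hops each test hostname resolves through. A minimal sketch using the dnspython library (the hostnames are hypothetical):

```python
# Compare CNAME chain lengths across test subdomains (pip install dnspython).
import dns.rdatatype
import dns.resolver

hosts = ["subdomain1.mysite.com", "subdomain2.mysite.com", "origin.mysite.com"]

for host in hosts:
    answer = dns.resolver.resolve(host, "A")
    cname_hops = sum(1 for rrset in answer.response.answer
                     if rrset.rdtype == dns.rdatatype.CNAME)
    print(f"{host}: {cname_hops} CNAME hop(s) before the A record")
```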

[Screenshot]


* NEW * – CloudPerf now offers pre-configured links to popular CDNs for your convenience. Just click the drop-down menu on the “Create New Benchmark” button in the Dashboard and select “CDN Benchmark”. The benchmark editor will then include a list of checkboxes with popular CDNs, so you can compare your own destinations with our current selection of CDN providers. We will be adding more CDNs over time!


Configuring CloudPerf

In this example, we will set up a comparison of one static object against four CDNs. Once you have everything prepared, log in to CloudPerf and you will be taken to the Dashboard. Click “Create new Benchmark” and you will be taken to the benchmark editor.

[Screenshot: benchmark editor]
We name this benchmark “CDN Test”. We chose to measure a Static Object, but this example is equally valid for Page Load. Since we want to run the test for a few hours, a 5-minute frequency is fine; for longer or permanent tests, you may want to measure less often. We select the countries from which we want to run the measurements.


CloudPerf uses a technique we call “connection pre-warming”, in which two subsequent requests are made for every measurement: the first request needs DNS resolution and therefore reports a longer measured time, while the second request already has the DNS answer resolved and reports only the connection time to the server. You have the option of including the DNS lookup time in your results.
This time we choose not to include DNS lookup times in our measurements, so our results show the “pure” connection time to our destinations.
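For intuition, here is a rough illustration of the pre-warming idea (not CloudPerf's actual probe code): it times two back-to-back fetches over fresh connections, where the first pays for the DNS lookup and the second normally hits the resolver cache, so the difference approximates the DNS overhead.

```python
# Time two back-to-back fetches of the same URL over fresh connections.
import time
import urllib.request

URL = "https://www.example.com/"  # placeholder target

def timed_fetch(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:  # new connection each call
        resp.read()
    return (time.perf_counter() - start) * 1000

cold = timed_fetch(URL)  # includes the DNS lookup
warm = timed_fetch(URL)  # DNS answer is normally cached by now
print(f"cold: {cold:.0f} ms, warm: {warm:.0f} ms, "
      f"approx. DNS overhead: {cold - warm:.0f} ms")
```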


For the first destination, we input the direct link to our origin server, against which we will compare the CDNs' performance. We name it “Origin”, and under URL we naturally input the direct link to the file under test. Click “Add new destination” and another line will appear. We name this and the subsequent lines after the respective CDNs; in this case, we simply numbered them. In the URL box we put either the address of the subdomain previously configured to work with your site or any other link to a static object or web page to measure. For this post we compared one of our homepages with four real CDNs serving the same file.

[Screenshot: destination list]
Click “Save & Update” and voilà! We have configured our benchmark to measure our site and several CDNs simultaneously from thousands of different locations. Now we only have to wait and look at the results once enough samples have been taken.

Viewing Results

Once enough time has passed, our first results will be ready. We simply log back into CloudPerf and, from the Dashboard view, click our measurement's name, which takes us to the results page.

By default we first see the latency measurement graph. The first thing we notice in these results is that using any of these CDNs will make our website more responsive.

[Screenshot: latency graph]

If we remove the origin from the graph using the destination buttons (it is the slowest URL in this experiment), the graph automatically zooms in and we can observe and compare all four CDNs much more clearly. Note that the colors of the graph lines change after modifying the destinations.

[Screenshot: zoomed-in CDN comparison]

You can use the results table below the graph for a first look at the measured latencies by country.
[Screenshot: results table]
If you wish to see the measurements over time only for the countries of your interest, you can select them using the “Tests Running From” field. You can also switch the graph between destinations and locations using the “Group By” option.


It can be very interesting to compare results using different statistics. For example, if we select the 25th percentile (the faster connections), we can observe that CDN4 clearly outperforms the other three CDNs:
[Screenshot: 25th percentile]
Selecting the 95th percentile (the slower connections) shows longer latencies and less pronounced differences between the CDNs, although the general tendencies among them hold.
[Screenshot: 95th percentile]
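If you export raw latency samples, the same statistics can be computed offline with the standard library (the sample values here are made up):

```python
# Percentiles of a latency sample in milliseconds.
from statistics import quantiles

latencies = [18, 21, 22, 25, 27, 31, 35, 44, 58, 102, 140, 210]

pct = quantiles(latencies, n=100)  # pct[i] is the (i+1)-th percentile
print(f"25th percentile: {pct[24]:.0f} ms")
print(f"median:          {pct[49]:.0f} ms")
print(f"95th percentile: {pct[94]:.0f} ms")
```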


Remember that CloudPerf is a very flexible tool that can be used for much more than measuring CDNs. Take a look at our Quick User's Guide and explore all the powerful options CloudPerf has to offer!


Sign up now!


Google CDN Beta is here… and it’s already one of the fastest CDNs out there!


[Image: cloud servers]
Some months ago, Google launched the alpha program for its upcoming CDN service. We kept a close eye on its development, and meanwhile, at NEXT 2016, Google announced the beta phase of its CDN. We have already discussed how this new product fits into the broad palette of content distribution solutions Google has implemented. We have seen Google Global Cache, which is primarily aimed at speeding up Google's own services at the ISP level, with more than 800 caches installed globally. CDN Interconnect is Google's partner program with third-party providers such as Cloudflare, Level3, Akamai, Highwinds, Fastly, and Verizon, allowing them to use Google's backbone network to transport content faster than ever from the source to practically anywhere it is required, powering up CDNs not only with faster caching but also enabling them to deliver rapidly changing content at top speeds.

Cloud CDN is Google's own CDN solution for sites running in VMs inside Compute Engine. It is designed and implemented a bit differently from other CDNs, since it is meant to cache not only static content but practically a whole site, in more than 50 edge caches globally. It is a whole new take on the concept of CDNs, going far beyond simply caching files: it is directly integrated into Google's load balancing system, which literally means that a copy of your site will be served from the location closest to your customers, behind a single public IP address thanks to Anycast. In addition to HTTP/1.0 and 1.1, it also supports the new HTTP/2 protocol as well as free HTTPS, putting your site at the edge of current web technology.


[Diagram: basic edge cache]

Using our tool CloudPerf, we were able to try it out and see how well it performs compared to other CDN providers, including some of Google's Interconnect partners. We ran four exact copies of our test VM in Google Compute Engine in different locations worldwide. Since Cloud CDN is designed to run in front of a whole site rather than only caching static objects, we designed a simple 100 kB page to test the system at its best. CloudPerf uses a real instance of Chrome to load the whole page and measures the time it takes to render the content in a real web browser, measuring as always from the last mile, where real users are.


Please note that CloudPerf's Page Load test, by using a real instance of Chrome, requires a cold start of the browser and includes DNS resolution time. This means that measurements using this method carry an overhead of around 600 ms on top of the real load time. The relative times between destinations remain correct, since all of them are measured with the same probe, but the absolute times include this overhead.


Now let's see what happens with the 100 kB Page Load test in a selection* of countries worldwide.


[Chart: page-load times worldwide by CDN]
The average measured times by country and CDN can be seen in the following table:
[Table: average page-load times by country and CDN]
We can clearly see that Google Cloud CDN outperforms all the other CDNs at loading a whole page in most countries. We can look at the special case of Japan, which shows the lowest measured times, simply by filtering the results in CloudPerf.
[Screenshot: results filtered for Japan]


Going further, if we look at a selection of European countries**, we observe a similar situation, with only CloudFront, Level3, and Akamai coming much closer to Google's performance.
[Chart: page-load times, Europe]

Now, in the USA the battle is fierce. Although loading times are higher than in Japan, the overall times of most CDNs are very close to each other, and general performance is really good, except for MaxCDN, which in our measurements fell a little behind the rest while still performing reasonably well compared to other regions. It is evident that CDNs strive for top performance especially in the US market.
[Chart: page-load times, USA]

How does Google achieve such top loading times practically everywhere? We think it is precisely because Cloud CDN is embedded in Compute Engine's load balancing system: you can configure your site to automatically replicate your VMs, whenever necessary, to the locations closest to your users, improving response times across the board and effectively shortening loading times. The difference is very noticeable when a page is loaded and resolved entirely from a single location close to the user, as opposed to other CDNs, where only traditionally cacheable content gets copied and the rest has to be retrieved from the origin.


PS: You can make your own comparisons and performance tests using CloudPerf. Sign up now!



* Australia, Brazil, France, Germany, India, Japan, Russia, Singapore, South Africa, Turkey, United Kingdom and United States

** France, Germany, Italy, Netherlands, Norway, Poland, Romania, Spain, Sweden and United Kingdom.
