Analysis of Fibre and 4G Deployment in Riyadh


This is a report on the state of Fibre and 4G deployment in Riyadh based on data points collected by Speedchecker in September 2018. The report discusses the state of Fibre and Mobile coverage (the extent of coverage and the quality of service) and the Speedchecker Measurement Method. The conclusion shows how Riyadh is placed to take advantage of future improvements to networks.

Current Network Coverage in Riyadh

Summary of Network Coverage

The three main Internet providers in Riyadh are STC, Mobily and Zain. Only STC provides services over 4G, Fibre and Copper; Mobily offers 4G and Fibre, and Zain provides 4G but neither Fibre nor Copper. Riyadh has excellent 4G coverage. Fibre is well established in the centre of the city, plans are well underway to extend coverage to the main city areas, and fibre beyond the main city areas is planned but not currently in progress.

[Image: network coverage summary]

Fibre Coverage

Fibre is widely available across Riyadh, particularly in and close to the centre. The map below shows that coverage is poor in the South-West of the city and in the rural areas surrounding it.

Fibre is provided by STC and Mobily, with Mobily exclusively covering the South-West and ITC the North-East. Other existing areas are covered jointly by STC and Mobily.

The In Progress areas (yellow on the map) are served by either STC or ITC, with some coverage provided by Dawiyat.

Zain has no fibre coverage in Riyadh as of October 2018.

[Figure: Fibre To The Home (FTTH) Coverage in Riyadh (January 2018)]
Source: MCIT (https://www.mcit.gov.sa/en/wbsira-map)

4G Coverage

Riyadh has excellent 4G coverage with 4G being available in all urban districts and along the length of the main roads going into and out of the city.

Speedchecker Measurement Method

Speedchecker uses the billions of data points collected through its passive and active measurement technologies worldwide to provide insights to our customers. Businesses use this data to improve their service, and research establishments use it to produce invaluable information.

Each data point consists of many KPIs, including speed, latency, location, connection type and device info. Our results focus on speed and latency as experienced on the device, providing insightful information on Quality of Service. More detail about the Speedchecker Measurement Method is available on our website.
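As an illustration, one such data point can be modelled as a simple record; the field names below are hypothetical, not Speedchecker's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SpeedSample:
    """One crowd-sourced measurement (hypothetical field names)."""
    download_mbps: float
    latency_ms: float
    latitude: float
    longitude: float
    connection_type: str  # e.g. "4G", "Fibre"
    device: str

# Example sample roughly located in Riyadh.
sample = SpeedSample(42.5, 28.0, 24.7136, 46.6753, "4G", "Android")
print(sample.download_mbps)  # 42.5
```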

This data is then integrated into our customizable map-based dashboards for geospatial analysis.

STC Fibre Coverage

Riyadh has an ongoing plan to implement fibre broadband across the city. Our results clearly show a difference between the speeds achieved in districts that have fibre and those that do not.

We analysed the fibre results from STC to see whether they correlated with the rollout of fibre across Riyadh. Our results, shown on the left, display high-speed results in red/dark orange and slower results in yellow/light orange. These can be compared with the green areas of the MCIT rollout plan, where fibre is already available, and the yellow areas, where it is in progress. The blue areas are planned but not yet in progress, and it is in these areas that speeds are low.

[Figure: STC fibre speed test results across Riyadh]

We are still analysing the results from Mobily fibre and will publish when the analysis is complete.

State of Riyadh Mobile Networks

Speed test data points collected in Riyadh in September 2018 were analysed, allowing the top three mobile providers to be compared.

Adding the download speed data to our district map of Riyadh clearly shows that STC provided the fastest download speeds, followed by Mobily and finally Zain. The maps also show a consistent difference in speeds from district to district: districts that are the fastest or slowest for one provider tend to be the fastest or slowest for the others, even though their actual speeds may vary.

[Figures: mobile download speed maps of Riyadh by provider, with map scale]

 

The following table shows the fastest and slowest districts in Riyadh based on average mobile download speeds. The speeds highlighted in green are the five fastest per provider and those in red the five slowest. It is clear from this table and the maps above that STC is getting the fastest mobile speed test results and Zain the slowest.
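The highlighting in the table amounts to ranking districts by average speed and taking the head and tail of the ranking. A minimal sketch with invented district names and speeds:

```python
# Hypothetical average download speeds (Mbps) per district for one provider.
speeds = {
    "District A": 35.1, "District B": 12.4, "District C": 28.9,
    "District D": 8.7,  "District E": 22.3, "District F": 30.5,
    "District G": 15.0, "District H": 26.1, "District I": 19.8,
    "District J": 24.4,
}

# Sort districts from fastest to slowest.
ranked = sorted(speeds.items(), key=lambda kv: kv[1], reverse=True)
fastest_five = ranked[:5]   # would be highlighted green
slowest_five = ranked[-5:]  # would be highlighted red
print(fastest_five[0])  # ('District A', 35.1)
```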

 

[Table: average mobile download speeds by district and provider]

Conclusion

Riyadh has excellent 4G coverage provided by STC, Mobily, Zain and other mobile operators. The MCIT (Ministry of Communications and Information Technology) plan for rolling out fibre across Riyadh is well established and its progress map is accurate.

All three companies provide a good service, with STC having more coverage and faster speeds. Our report has highlighted some areas of Riyadh where service could be improved and others that are doing very well. This may inform future plans for infrastructure changes.

This is a good foundation that should ensure Riyadh is well placed to continue taking advantage of improvements in technology such as 5G, so that business and residential users can continue to enjoy the benefits these advances bring.

Interested in more detailed information on Internet quality and coverage in the Middle East and beyond?

Contact us for more information


What happens to mobile network on the biggest event of the year? Not what you would expect!


Hajj 2018: 2 Million visitors over 6 days

Between 19 and 24 August 2018, over 2 million visitors arrived in Mecca for Hajj. This annual pilgrimage to the holiest city for Muslims is associated with the Prophet Mohammed, who is said to have led his followers there before consecrating it to Allah. It is considered a religious duty for all adult worshippers who are able to undertake the pilgrimage at least once in their lives. This number of visitors more than doubles Mecca's usual population of 1.5 million, posing almost unimaginable challenges to the city's infrastructure. In this article we discuss just one of those challenges: mobile Internet speed and access.

Hajj 2017: Review

During Hajj in 2017, mobile data demand nearly doubled compared to 2016. Although an increase of 60-70% was anticipated, the 100% jump was a surprise, attributed to the growing popularity of YouTube and Snapchat. Despite the increased demand, 99% of calls were successful and 23,000 terabytes of data were consumed. According to the UN Sustainable Development Goals report published in ITU News in September 2017, this was thanks to the deployment of 3,700 ICT specialists and 13,000 2G, 3G and 4G mobile base stations across all Hajj cities. The report does not specify which telcos were involved. Source: ITU News.

Hajj 2018: The Kingdom’s Initiative to Maximise Mobile Communication During Hajj

King Salman bin Abdulaziz and Crown Prince Mohammed bin Salman issued a directive “to do everything possible to make it easy for pilgrims to perform the rituals of Hajj”. According to a release issued by the authorities, the initiative’s objective is to allow pilgrims to communicate with their families and to access the digital services available in the Smart Hajj initiative, enhancing their experience through improved communication services.

In particular, packages from some of the main mobile operators offered customers 1 GB of data for 48 hours. Source: https://www.tahawultech.com

The Challenges

The challenge of providing adequate mobile services during a large event is not simply maintaining current service levels. It is also about balancing the needs of visitors against key service areas that are essential during the event. Consideration must be given to protecting the critical infrastructure of the region so that it can respond to serious incidents; one way to achieve this is to build resilience and redundancy into the infrastructure. Consultation with interested parties is essential to ensure that the agreed steps meet the essential needs of all concerned, and a thorough risk and threat assessment will identify where effort is required.

It is a balance between being good hosts to visitors and ensuring continuity of services for locals. Short-term measures and agreements are a great help in achieving this balance, and the generous provision of 1 GB over 48 hours in Mecca is one such example. This may be the headline initiative, but it is clear that much more has been done in many other areas to ensure a successful Hajj.

Telco Infrastructure in Mecca

Mecca has an excellent 4G network served by a number of major operators. Building on the improvements made for Hajj 2017, they improved the average download speed by 83% between 2017 and 2018. Speeds should continue to improve as 5G is rolled out, further supported by Saudi Vision 2030.

Saudi Telecom Company (STC) has been at the forefront of this, with investment in FDD and TDD LTE spectrum assets. The rewards of this investment can be seen in our results, which show STC outperforming the other providers in our research.

Zain has also been investing in technologies that extract the best from its infrastructure, and is likewise preparing for a 5G rollout.

Mobily has partnered with Ericsson to deliver 4×4 MIMO and, like STC and Zain, is preparing a 5G rollout.

As the telcos continue to improve installed and available capacity, Internet speeds can be expected to increase.

Speedchecker measurement methodology

Ahead of Hajj, Speedchecker started data collection to gather as many data points in Mecca as possible before, during and after the event. The crowd-sourced data samples were collected in the field using mobile phones carried by pilgrims to Mecca. Measurements were run on the mobile networks of the top providers using Android and iOS devices, against a local CDN PoP based in Riyadh. The recorded results are a good proxy for the Internet quality users experienced in Mecca on their mobile devices. Over the 15 days of data collection, Speedchecker received over 100,000 data samples; the stats and analysis included here are based on this dataset.
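The daily median speeds reported in the results can be computed directly from such samples; this sketch uses invented numbers, not the actual dataset:

```python
from statistics import median
from collections import defaultdict

# Hypothetical (day, provider, download_mbps) samples.
samples = [
    ("2018-08-19", "STC", 30.0), ("2018-08-19", "STC", 29.0),
    ("2018-08-19", "Zain", 14.1), ("2018-08-20", "STC", 33.0),
    ("2018-08-20", "Zain", 15.6), ("2018-08-20", "Zain", 13.9),
]

# Group sample speeds per (day, provider) pair.
by_key = defaultdict(list)
for day, provider, mbps in samples:
    by_key[(day, provider)].append(mbps)

# Median is robust to the outliers common in crowd-sourced data.
daily_median = {k: median(v) for k, v in by_key.items()}
print(daily_median[("2018-08-19", "STC")])  # 29.5
```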

Hajj 2018: The Results

The results show that Mecca not only coped with the extra 2 million visitors but exceeded all expectations. It would have been reasonable to expect speeds to decline by up to 50% during Hajj compared with the week before or the days after. However, the speed test results reveal that the steps taken in Mecca allowed visitors and locals to enjoy an increase in speed that continued throughout the following days. Our analysis stops after 26 August.

The chart below shows the median (middle value) for Mobily Mobile, Zain Mobile, STC Mobile and STC Fixed Broadband. We only have STC data from 21 August (Hajj started on the 19th), and we have separated the STC Mobile tests from the STC Fixed Broadband tests. There is an unexplained drop in speed for STC Mobile on 23 August; we have included STC Fixed Broadband to show that the problem only affected STC Mobile customers. Despite this 50% drop from STC, the overall trend during Hajj was a gradual increase in download speed.

STC mobile download speeds are more than 50% faster than either Mobily's or Zain's, showing that investment in infrastructure yields positive results and benefits for the end user.

[Chart: median download speeds in Mecca by day]

The following graph compares how the average daily median speeds of each provider changed before, during and after Hajj. The averages show a remarkable increase throughout Hajj and into the following days. Zain's speeds after Hajj are faster than before, while Mobily has returned to its pre-Hajj speeds.

[Chart: average daily median speeds by provider, before, during and after Hajj]

Whatever improvements and changes were made to the telco infrastructure during Hajj, the download speed results show that it was a huge success.

Internet speed map of Mecca

Using mobile device GPS data we were able to map Internet speeds in Mecca with high geographic precision. The collected data were normalized and color-coded so that the fastest areas are in red and the slowest in dark blue. The outskirts of Mecca, which are not colored, are out of scope for this study.

 

Mobily

As can be observed, the fastest areas for Mobily are not in the centre, which can be attributed to increased demand from the higher concentration of people there.

[Heat map: Mobily Internet speeds in Mecca]

 

Zain

The Zain speed map is slightly darker, corresponding to slightly slower Internet speeds than Mobily. Yet the centre fares quite well in comparison with Mobily.

[Heat map: Zain Internet speeds in Mecca]

 

STC

The STC Internet speed map looks much better than Mobily's and Zain's, showing that Internet speeds are well distributed across the whole of Mecca.

[Heat map: STC Internet speeds in Mecca]

The Internet quality around the Great Mosque is better illustrated by a more detailed heat map showing individual measurements (color-coded as on the previous maps). The area around the mosque has very good speeds for Zain too, which indicates Zain did not underestimate the capacity needed in the centre.

[Heat maps: individual measurements around the Great Mosque for Mobily and Zain]

Conclusion

The 2 million pilgrims arriving in Mecca in 2018 posed a huge challenge: ensuring that the quality of service visitors and locals expect could be delivered and maintained. We have seen how demand doubled between 2016 and 2017, and this increase was sure to continue in 2018.

The Saudi Arabian initiative and the efforts and investments of the major mobile operators ensured that the quality of service was not only sustained but improved. This improvement continued for at least the few days after Hajj (we have no data beyond this). We don't know how much of the improvement will be permanent but, with a similar commitment in 2019, we can be confident that Hajj will continue to be a telco success.

Looking further forward we can see that the Saudi Vision 2030 has ambitious plans that should sustain this for the foreseeable future.


Speedchecker partners with DD-WRT to build world’s largest monitoring network


Speedchecker, a private company running large-scale software-based monitoring networks, and DD-WRT, the most popular open-source router firmware, announce a partnership that aims to build the world's largest hardware-probe monitoring network.

 

Under the terms of the partnership, DD-WRT has started including the Speedchecker Probe client within the DD-WRT firmware. DD-WRT users can opt in to the Speedchecker network and get new features for their routers in exchange for providing bandwidth for Internet measurements.

 

[Image: Wi-Fi Speedchecker feature for DD-WRT]

 

As Christian Scheele from the DD-WRT development team said:

 

“We are pleased to be part of this partnership to not only help fund the DD-WRT development but also be part of the project which enables Internet research be conducted on a large scale across many countries that are currently not represented in existing measurement networks”.

 

Since the soft launch earlier this year, over 2,000 DD-WRT users have already opted in to the network, enabling Speedchecker to cover over 80 countries with its Internet measurements. Speedchecker offers access to its network to clients such as Microsoft and Oracle, as well as to researchers in organizations such as LACNIC, which publishes Internet topology research.

 

CEO of Speedchecker Ltd, Janusz Jezowicz noted:

 

“Historically, companies always had to make a choice of either running measurements from software probes with its wider coverage but lower accuracy, or rely on hardware probes which had limited coverage. With this partnership we are able to provide global coverage for hardware probes with low costs due to end-users running the tests on their own routers and not expensive custom hardware.”

 

 

 


Brand new shiny and polished Internet measurement API


After a few months of hard work, we are pleased to announce a new version of our Probe API. We decided to completely rewrite the API specification to apply everything we have learned over the last few years, without breaking API access for our existing user base. We don't plan to sunset the old API version yet, but new clients can no longer sign up for it.

The new version is much better; we have made the following improvements:

Easy to use
Our API is well documented including a Quickstart guide which will get you up to speed quickly so you can start running your measurements.

Reliable
We have learned a lot of lessons over the years about how to make the API more scalable. We are pleased to say the new API already supports millions of measurements running every day!

Multi-platform
As part of the new API release, we are offering access to our Android probes to all of our users. API users can leverage the increased coverage by testing on all available platforms, or specifically target mobile probes using Platform source targeting. We will soon have an announcement about hardware probes, which will be supported in the same way, without the need for code changes.

Great level of support
Being a small company, we have always taken personal care of each client and made sure our support team provides expert advice on internet performance measurements to assist clients in fulfilling their goals.

Transparent pricing
Our API access starts from 49 EUR per month.
Please check our pricing here.

API Features

On top of the improvements mentioned above, the API features have also been upgraded. Based on feedback from our users, we have improved the API methods to include:

Improved probe targeting

Our new API offers many more options for selecting probes for measurements. Users can select probes by location (e.g. city, country, or lat/long coordinates), by network (network name, ASN or IP prefix) and more.

More information about probes
Using ProbeInfo properties, API users can specify what information about the probe should be returned with the measurement results. We have added new properties such as DNS resolver IP, screen size (useful for page-load tests), user connection type and more.

Extended tests and new metrics
Our API supports existing measurement types such as Ping, DNS, Traceroute, HTTP and Webpage load. We have also added a new measurement type: a video streaming test.

Further changes to the API endpoints:

Ping
We added an option to run TCP ping.

DNS DIG
The DIG command now responds with full DNS query information.

HTTP
We now expose metrics such as TTFB, TotalLatency, DownloadedBytes and TCP connect time.
An HTTP GET measurement can also return the full HTTP headers and body. This is very useful in many scenarios, such as finding out which CDN PoPs are being accessed, CDN cache HIT/MISS analysis, or keyword monitoring in the HTTP response. The possibilities are endless!

Webpage load
We offer all the web performance metrics you would expect and have added a couple more, such as the number of requests the page loaded as well as the full HAR file. The HAR file is very useful for getting a complete picture of page-load performance and allows you to construct the waterfall model we use in our CloudPerf product.
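As an example of the cache HIT/MISS analysis mentioned above, the returned headers can be inspected for the cache-status conventions many CDNs use; the header names and values here are illustrative, since each CDN exposes this differently:

```python
# Hypothetical response headers returned by an HTTP GET measurement.
headers = {
    "Server": "ExampleCDN",
    "X-Cache": "HIT",          # many CDNs expose cache status this way
    "Content-Length": "102400",
}

def cache_status(h):
    """Classify a CDN response as HIT/MISS from a common header convention."""
    value = h.get("X-Cache", "")
    if "HIT" in value:
        return "HIT"
    if "MISS" in value:
        return "MISS"
    return "UNKNOWN"

print(cache_status(headers))  # HIT
```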

Free trial

We hope all the improvements we have made will encourage you to sign up for our 7-day FREE trial.


How to pick the best server based on Latency and Throughput


Choosing an optimal server location isn't necessarily an easy task. Managing costs and selecting an appropriately sized hosting package is just one part of the deal. As important as server capacity, or more so, is finding out how your services behave from your clients' perspective. Whether you are choosing among hosting providers or deciding in which location to deploy your server, you need to measure latency and throughput. Insight into these two metrics can improve your service and lead to cost-effective solutions.

In this article, we take a look into these two basic aspects of connectivity: latency and throughput. We discuss their behavior and show you how to use CloudPerf to compare and choose an optimal server for deploying a web page.

TCP performance is naturally limited by latency. For this reason, the first aspect to look into when assessing a server's performance is latency, and only then throughput. No matter how big a link may be, if your users experience high latency they cannot achieve high performance. The next graph shows the interdependency of throughput vs. latency.

[Graph: maximum TCP throughput vs. latency]

Here we observe an inverse curve: in practical terms, especially in the 1-30 ms range, every millisecond of latency has a heavy effect on the maximum achievable throughput. With this in mind, the intuitive notion of choosing a server as close as possible to your clients becomes very clear, but even the smallest differences in latency should be taken into account.
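The shape of that curve follows from the basic TCP bound: a single connection cannot move more than one window of data per round trip, so throughput is at most window / RTT, regardless of link capacity. A quick illustration, assuming a classic 64 KB receive window with no window scaling:

```python
WINDOW_BYTES = 65536  # classic 64 KB TCP receive window (assumption)

def max_tcp_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on single-connection TCP throughput: window / RTT."""
    rtt_s = rtt_ms / 1000.0
    return WINDOW_BYTES * 8 / rtt_s / 1e6

# Every extra millisecond hurts most at low latencies.
for rtt in (1, 10, 30, 100):
    print(rtt, "ms ->", round(max_tcp_throughput_mbps(rtt), 1), "Mbps")
```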

Let’s say we have an account with a cloud provider and want to deploy a service for European users. Taking DigitalOcean as an example, we can deploy a VM in Amsterdam, London or Frankfurt, among others. We deploy the same test service in each of those locations, then set up a Static Object measurement in CloudPerf pointing to a 100 KB test file, plus a Ping measurement to each server. We select the countries of interest and measure for one hour, one measurement per minute.

The following table shows the latencies obtained from each location to each of the three servers. The lowest latencies have been highlighted in yellow.

[Table: latencies from each country to each of the three servers]

We can observe that, depending on the countries we are serving, we can expect very different results for each server. Looking at all countries together, Amsterdam and Frankfurt have the lowest latencies in general. Let's confirm that with the graph:

[Graph: latency comparison across the three servers]

That covers latency, but what about throughput? Given similar enough latencies, as in this case, the effective TCP throughput may be affected by other factors, so we look at the download speeds achieved for each server; this time the highest throughput is highlighted in yellow.

[Table: download throughput from each country to each of the three servers]

Here we can clearly observe that clients from Austria and Germany, which showed ping values favourable to the Frankfurt server, actually achieve higher throughput when served from Amsterdam. Let's take a look at the graph:

[Graph: throughput comparison across the three servers]

Now we have confirmed that our Amsterdam server shows the best performance. Of course, results will vary depending on which countries we focus on, but we can clearly see a general advantage to using Amsterdam as a single location for this selection of countries.


From friend to foe: Lessons learned from Google becoming our competitor


Every startup is rightly afraid of new competition, especially when it comes from Internet giants like Google. Stories of Google entering a market and dominating it within a few years are not new (mobile operating systems with Android, or more recently the browser wars with Chrome). In some cases Google gets a slap on the wrist or the occasional $2.7 billion fine. Nevertheless, the situation is not likely to improve, and Google's dominance in search will grow into other areas if Google decides to compete there.

This blog post offers insight into the impact of Google prioritizing a Google-funded initiative over existing players in the market, using real numbers from our own small business (which I am not sure it is still right to call a startup after 10 years) 😊

But before I do so, let me give you a quick overview of what my company, Speedchecker, does: we provide an easy-to-use and accurate speed test of your Internet connection. Over almost 10 years we have run over 300 million tests and provided speed test technology to many other companies.

 

Launching Google speed test

The story begins about a year ago, when Google launched their own speed test featured directly in search results in the USA, followed by other English-speaking markets. We knew that the UK launch would happen eventually, but we did not know when.

Luckily for us, Google picked an open-data solution for running their speed test: M-Lab. M-Lab was founded by Internet visionaries such as Vint Cerf and is funded by a consortium of companies including Google. This choice enabled us to analyze the rollout and provided the real numbers for this blog post.

M-Lab speed test data is available for everyone to download through Google Cloud (of course). By analyzing the volume of data each day, we could produce the following chart:

 

[Chart: number of speed tests from the UK in the M-Lab dataset on random days in May, June and July 2017]

As we can see, Google started the rollout on 15 May. We can also observe that Google did not roll the feature out to all UK users immediately; over the course of several days it was introduced to more and more users in the search results.

 

Impact on visitor numbers to our website

Here is how the search results look in the UK for one of our main keywords:

[Screenshot: Google search results for “broadband speed test” in the UK]

 

 

As we can see, the Google speed test occupies significant space on the first page and pushes all other results down.

Here is a chart plotting user visits from Google (with Bing for comparison) before and after the Google speed test release. We can observe that the drop in visitors begins after Google launched the speed test.

 

 

[Chart: daily visits from Google and Bing search, before and after the launch]

 

 

To better illustrate that the drop is due to the ranking change and not seasonal factors, here is the zoomed-in Bing data, which shows no meaningful change before or after 15 May, when Google launched.

 

 

[Chart: daily visits from Bing search around 15 May]

 

 

Looking at the average drops, we can estimate a loss of about 5,000 visits per day out of 25,000 Google visits. Overall that is about a 20% traffic loss from being moved from position 1 to position 2.

Compared to industry-standard data, e.g. from RankScience, a 20% drop is quite a good result; it could have been worse.

 

 

[Chart: RankScience industry data on traffic loss by ranking position]

 

 

From the M-Lab dataset we can also extract quite interesting insights, as it contains user IP addresses as well. Cross-referencing user IPs seen in the M-Lab data with our internal data shows that about 5% of users use both services. We can only speculate whether that is a good result or not; for the user it is certainly useful to get information from two different sources and decide which is more relevant.
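The cross-referencing itself is a straightforward set intersection; a sketch using RFC 5737 documentation addresses rather than real user IPs:

```python
# Hypothetical sets of user IPs seen in each dataset.
mlab_ips = {"198.51.100.1", "198.51.100.2", "203.0.113.5", "203.0.113.9"}
our_ips  = {"198.51.100.2", "203.0.113.9", "192.0.2.44"}

# Users seen in both datasets, and their share of the M-Lab set.
overlap = mlab_ips & our_ips
share = len(overlap) / len(mlab_ips)
print(len(overlap), round(share, 2))  # 2 0.5
```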

Conclusion

From our perspective we are quite happy the Google threat is not as serious as we originally thought. Losing 10% of our overall traffic (and 20% of our Google traffic) will have an impact on our bottom line, but we will survive. Luckily, we provide other features that users appreciate, such as mobile apps, stored results, mapping, comparisons and more; I believe this contributed heavily to such a small drop. I have no doubt many users will favour the convenience of getting a result with one click directly in the search results over visiting a third-party site such as ours. Unfortunately, there is nothing we can do to compete with that and stay in business at the same time.

With Google favoring their own speed test, M-Lab datasets are growing at a rate of almost 1 million results per day, and in less than a year they will have served as many users as we have in the last 10 years. That is the power of Google's search dominance.


Are ISPs still throttling Netflix?




In the recent past we have seen massive development of online streaming services, with Netflix one of the leading brands in this new market. Netflix has built its own CDN (Netflix Open Connect) to support its worldwide expansion. This resulted in rapid growth of bandwidth consumption by a considerable number of users, intensifying year after year along with Netflix's popularity. Netflix has undergone a massive structural transformation in the way it delivers content, moving from a monolithic application design with some external CDN support to building its own CDN around the world. Currently Netflix Open Connect has 233 server locations across all six continents. Its endpoints are primarily located at IXPs, and within some ISPs as well, a model reminiscent of Google Global Cache: installing caches close to the last mile to deliver specific services.

ISPs have reacted differently across the globe, resulting in some heated discussions about traffic shaping, throttling and service differentiation, which drew considerable criticism from defenders of net neutrality and consumers alike. Two years after our previous look into this topic, we decided to find out what is happening today and whether any ISPs are showing signs of throttling Netflix. We found that the situation has improved noticeably.

We set up an experiment that runs Netflix's speed test (fast.com) from thousands of Speedchecker probes around the world, followed immediately by our own speed test using Akamai endpoints. We compared the results of both tests, using the Akamai test as a reference, to find out which ISPs show noticeable differences when connecting to Netflix. Given Netflix's high bandwidth consumption and rapidly growing popularity, adapting to such changes might pose a challenge for some ISPs.
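A simple way to express that comparison is the ratio of the fast.com result to the Akamai reference per ISP; the ISP names, numbers and the 0.7 flagging threshold below are invented for illustration:

```python
# Hypothetical paired results per ISP: (fast.com Mbps, Akamai reference Mbps).
results = {
    "ISP A": (48.0, 50.0),
    "ISP B": (21.0, 45.0),  # large gap could hint at throttling
    "ISP C": (30.5, 31.0),
}

for isp, (netflix, reference) in results.items():
    ratio = netflix / reference
    # Arbitrary illustrative threshold: flag ISPs well below the reference.
    flag = "check" if ratio < 0.7 else "ok"
    print(isp, round(ratio, 2), flag)
```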

After running the experiment for 24 hours, we found that performance between fast.com and our reference endpoints in Akamai is broadly equivalent, which fortunately tells us that the general rule seems to be not to throttle Netflix.

[Chart: fast.com vs. Akamai reference speeds, peak vs. off-peak]

We can also observe that the situation still varies notably between countries, with Italy showing the worst performance among the countries where our measurements ran.

[Chart: peak vs. off-peak comparison by country]

In the following table, we can see the ISPs with the top 35 highest median speeds we measured.



Looking at ISPs in the USA only, we were able to rank the top 10 providers as follows, comparing off-peak and peak traffic hours.

[Tables: top 10 US providers, peak vs. off-peak hours]

We could not detect other major ISPs showing signs of Netflix throttling in the countries we studied. In the cases of CenturyLink, Charter Communications and possibly Time Warner Cable, we can observe a clear disadvantage for Netflix during peak hours.

In conclusion, we have seen the situation for Netflix evolve positively for the consumer. Our test results show that the global tendency is to respect net neutrality. There are still ISPs worldwide that have not yet joined the Netflix Open Connect CDN and therefore cannot profit from its traffic delivery benefits. Others simply slow down altogether during peak hours, which reveals difficulty in coping with high traffic demand. So far this year, Netflix's global launch seems to have gone peacefully and without major incidents. The market has applied its share of pressure on the industry, pushing it to develop high-performance infrastructure for the end consumer while clearing the path for other bandwidth-hungry applications to enter the market with lower technical and legal barriers.




Speedchecker @ IETF 96




This year the IETF meeting, in its 96th edition, took place in the vibrant city of Berlin. We didn’t want to miss this important gathering and decided to contribute to one of its workshops on network measurement: nmrg.


We presented a study made in cooperation with LACNIC Labs, in which the Latin America and Caribbean region was measured using Speedchecker ProbeAPI over one year, making it possible to map the region in terms of connectivity. This allowed us to identify clusters of countries that were better connected to each other than to the rest of the region. A definition of connectivity had to be formulated in a way that permitted drawing interesting and useful conclusions about the situation in the LAC region.
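The clustering idea can be illustrated with a toy sketch: treat two countries as “well connected” when their median RTT is below a cut-off, then take the connected components. The threshold, the data and the simple union-find approach are illustrative assumptions; the study’s actual connectivity definition was more elaborate:

```python
def latency_clusters(latency, threshold=80.0):
    """Group countries that are mutually well connected.

    latency: {(a, b): rtt_ms} -- symmetric pairwise median RTTs.
    threshold: assumed cut-off (ms) below which a pair counts as linked.
    Returns a list of clusters (sets of country codes).
    """
    parent = {}

    def find(x):  # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), rtt in latency.items():
        find(a)
        find(b)
        if rtt < threshold:
            union(a, b)

    clusters = {}
    for country in parent:
        clusters.setdefault(find(country), set()).add(country)
    return list(clusters.values())
```

On a small made-up matrix, `{("AR","UY"): 20, ("AR","CL"): 40, ("AR","MX"): 150, ("MX","CR"): 60}` splits into a southern cluster `{AR, UY, CL}` and a northern one `{MX, CR}`.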


[Screenshot: slide from the IETF 96 presentation]


The video of our presentation at IETF 96 can be found here; our presentation starts at minute 43:00. The original blog post by Agustín Formoso from LACNIC Labs can be found here.

The study was received with great interest by both the chairs and the audience at the workshop, spawning interesting discussions during and after the session. We were happy to participate and discuss our measurement experiences with the IETF/IRTF community. We collected valuable suggestions and comments which will surely help us not only point our measurement techniques in the right direction, but also develop our products and regional presence in a way that favours coverage and scientific precision.

The IETF 96 meeting was a very positive experience, allowing us to meet important actors of this worldwide community in person, as well as to keep up to date with relevant decisions, norms and standards currently under discussion.


Announcing new Feature: Page-Load Waterfall Analysis


Here at Speedchecker, we are aware of our customers’ requirements and strive to build our products not only robustly and precisely, but also to expand them with features that help everybody diagnose their sites’ performance in greater detail, while retaining the ease of use and clarity our customers love. Today we proudly announce a new feature in CloudPerf: a detailed view of all your measurements. This feature is especially useful for frontend developers, who need to check whether page-load time is unduly impacted by slow external resources such as third-party tracking scripts or assets.

 

Inside your Benchmark’s results page, just move the mouse pointer along the graph and click once at the position you would like to inspect. The pop-up window stays open after your click, and you can open the detailed view by clicking “Show Detail”.

[Screenshot: results graph with “Show Detail” pop-up]

In this view there is a panel on the left side, where you can choose the benchmark you would like to examine, the country of the measurements, the destination website and the time range for which results are shown.

[Screenshot: left-side selection panel]
At the top you can see a timeline where, after selecting the time range, you can pick the precise time of day you want to inspect. Directly underneath, the list of measurements made at that time appears, showing the details of every single one.
[Screenshot: timeline with measurement list]

 

If you are running a Page-Load Test, be sure to check the box “Collect resource timing data” in its configuration before running it. If you did so, this new feature will let you explore a very useful Waterfall Chart, which shows the loading time of every resource of the site you are measuring, for every single measurement in the selected time range. This way you can follow the behaviour of your services in detail over time.
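As an illustration of what you can do with exported resource-timing data, here is a minimal sketch that flags slow third-party resources. The `(url, duration_ms)` tuple format, the function name and the 500 ms budget are assumptions for this example, not CloudPerf’s actual export schema:

```python
from urllib.parse import urlparse

def slow_third_party(entries, own_host, budget_ms=500):
    """Return external resources slower than the budget, slowest first.

    entries: [(url, duration_ms), ...] -- assumed export format.
    own_host: hostname of the measured site; its own resources are skipped.
    """
    flagged = [
        (url, ms) for url, ms in entries
        if urlparse(url).hostname != own_host and ms > budget_ms
    ]
    return sorted(flagged, key=lambda r: -r[1])

entries = [
    ("https://example.com/app.js", 120),
    ("https://tracker.example.net/t.js", 900),
    ("https://cdn.example.org/font.woff", 650),
]
print(slow_third_party(entries, "example.com"))
# → [('https://tracker.example.net/t.js', 900), ('https://cdn.example.org/font.woff', 650)]
```

Scanning a waterfall this way quickly surfaces the tracking scripts or external assets that dominate the page-load time.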
[Screenshot: waterfall chart]
We hope you enjoy this new addition to CloudPerf and expect it to help you gain greater insight into monitoring your resources.

Sign up now!


Cloud vs CDN… Are CDNs always an improvement?


Over the last few years we have witnessed an explosive evolution in how the Internet is structured. Although traditional hosting and content delivery is still the norm in most cases, its basic function has been enhanced by CDNs and by the advantages of cloud computing whenever we need to optimize a service’s presence.


On one hand, CDNs have built themselves a reputation for speed, power, “freedom”, space and air, which is reflected in their brand names as well: Fastly, Highwinds, CloudFlare, Skypark, Cachefly, etc. You get the idea. In many cases the technical results do match the marketing image: in most situations, adding a CDN will probably benefit your site… but is that always the case?


On the other hand, cloud computing services have improved and matured a great deal, and at the same time either offer their own CDN connectivity or make it easy to hook one up. We have found that, even without a CDN, some cloud services work at CDN-like speeds, and adding one may not bring a noticeable speed boost.


We take DigitalOcean as our example cloud provider. Famed for its simple, clean environment and low costs, it showed remarkable speeds in the UK when measured with our tool CloudPerf.


In this graph we can observe how DigitalOcean compares to Google and Amazon (without a CDN) for a single 1 MB object download.

[Graph: DigitalOcean vs Google vs Amazon, 1 MB HTTP GET, UK]
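A single-object timing comparison of this kind can be sketched in a few lines. The endpoint URLs below are placeholders, not real test objects:

```python
import time
import urllib.request

# Placeholder URLs -- point each at a ~1 MB test object hosted
# on the provider you want to compare.
ENDPOINTS = {
    "DigitalOcean": "https://droplet.example/1mb.bin",
    "GCP":          "https://bucket.example/1mb.bin",
    "AWS":          "https://s3-bucket.example/1mb.bin",
}

def time_download(url):
    """Fetch the object once and return the elapsed wall-clock seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()  # drain the body so the full transfer is timed
    return time.monotonic() - start

# for name, url in ENDPOINTS.items():
#     print(f"{name}: {time_download(url):.3f}s")
```

In practice you would repeat each download many times from many vantage points and compare medians, which is essentially what CloudPerf automates.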

We set up a simple web page for comparing the time it takes to load a whole page.

[Graph: DigitalOcean vs Google vs Amazon, page load, UK]
We can already observe a clear difference compared with both giants, Google and Amazon. Both offer their own CDNs, which will speed up their service, even locally. But why not get a cheaper and already fast solution?


Especially for local audiences, as in this UK case, this could be a very cost-effective way to deliver low-latency content. So why isn’t it more common? One factor could be DigitalOcean’s simple, developer-oriented approach, which suits somebody who knows how to do everything by hand. The benefits of a big multi-service cloud provider like Google or Amazon are something you also pay for: interconnected services, easy application-level management, automatic respawning of faulty machines and generally easier operation, among many other interoperable options… especially if you have a relatively complex setup to run.


You could build a very complex network on DigitalOcean as well, but you would need to configure everything yourself, and management ends up being more tedious. In that sense, depending on your budget, paying less on one side can sometimes result in higher operational costs. Bear in mind that we are effectively comparing a VPS solution against full cloud service providers.


On the other hand, DigitalOcean is an example of how the cloud is becoming more and more affordable, to the point where it competes with traditional web-hosting prices and still delivers high performance. You can literally have a new server up and running in under a minute, or perhaps five minutes if you’re new to it. Since there isn’t much to fiddle with, this very straightforward approach spawns one or more new servers instantly, with public IP addresses and pre-installed keys for remote SSH login, starting from $5 a month for a basic VM running in the cloud.


Even if you already use another cloud provider to run your machines, trying this out won’t hurt at all. Hooking up a CDN is very simple, if you even need one: since the performance of the service itself is already on a par with CDNs, it is worth thinking twice about whether one is truly beneficial.


This is a point we would like to stress: a CDN by itself may or may not accelerate a service. Which CDN to use, and whether it fits your own user distribution, is a complex question that requires a well-researched individual answer for each case. Having established that CDN is not synonymous with “higher speed”, we would like to ask again: why isn’t this option more popular? Maybe it is the already established use of a cloud platform, which makes it convenient to just spawn a machine and control everything from one place. Maybe adopting another provider is too much extra paperwork, or maybe the public simply isn’t aware of the option.


Let’s take a look at the following comparisons, where CloudPerf measured DigitalOcean against multiple CDNs in the UK:

[Graph: DigitalOcean vs CDNs, HTTP GET, UK]

If we compare DigitalOcean’s performance in the UK (here labelled “Origin”) directly against CDNs, we find that it is at least on a par with most of them.

[Graph: DigitalOcean vs all CDNs, page load, UK]

And if we filter out most of them and look at the closest-performing CDNs:

[Graph: DigitalOcean vs closest CDNs, HTTP GET, UK]

We find that it performs even better than Highwinds and Skypark, with CloudFlare and CloudFront showing better averages.

[Graph: DigitalOcean vs closest CDNs, page load, UK]

So, looking at the example shown here, if your audience is near a DigitalOcean location, you are probably not going to need a CDN to speed up your service. Nevertheless, depending on the solution you adopt, you could benefit from other CDN features, such as added SSL security and easy scalability without having to resize machines.


When we compare prices, the picture gets even more interesting. Let’s take DigitalOcean’s $20/month package: you get 3 TB of data transfer included and a generous VM. That’s $0.03 per hour and $0.0067 per GB together in one price. A similar configuration in GCP will cost approx. $28.08 just for the VM plus ~$460.80 for 3 TB of traffic, which makes ~$488.88 a month. In AWS a similar config would cost ~$40.32 for the VM plus ~$476.16 for 3 TB of traffic, that is ~$516.48. So DigitalOcean at $20 a month gives you the equivalent of a ~$500 investment on the other platforms.
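The 3 TB arithmetic can be checked in a few lines. The per-GB egress rates are back-derived from the figures above and are point-in-time approximations, not current list prices:

```python
# Back-of-the-envelope check of the 3 TB comparison above.
GB = 3 * 1024                        # 3 TB of monthly traffic, in GB

do_total  = 20.00                    # DigitalOcean: VM + 3 TB included
gcp_total = 28.08 + GB * 0.15        # VM + egress at ~$0.15/GB
aws_total = 40.32 + GB * 0.155       # VM + egress at ~$0.155/GB

print(round(gcp_total, 2), round(aws_total, 2))  # → 488.88 516.48
```

The fixed, bundled price is what makes the DigitalOcean figure so easy to reason about: there is no per-GB term to estimate.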


What about the bigger VMs? The biggest plan DigitalOcean offers is $640 a month for a very capable machine with 9 TB of data transfer included. A similar setup in GCP will cost ~$449.28 ($0.624/hr) plus $1024 for 9 TB of traffic, giving a total of ~$1473.28. In AWS you can choose either a $0.528/hr machine, which is less capable but similar in price, or a $1.056/hr machine closer to those used by the competition. 9 TB of data transfer will cost $1428.48, and running the VMs will cost either $380.16/month or $760.32/month, giving a total of ~$1808.64 for the smaller machine or ~$2188.80 for the bigger one. In any case, expect to pay around $2000 a month. All that without a CDN. To make things simpler, if we assume an approximate CDN price of $0.10 per GB, you would have to put $300 or $900 on top of your existing hosting costs to accelerate those 3 TB or 9 TB of data respectively.


Please take into account that these figures are approximate calculations since, as you may well know, pricing in cloud services and CDNs is extremely dependent on which resources are used, and how and when. In that sense, this also favours DigitalOcean, which gives you a clear, straightforward fixed pricing structure.


Summing up, we have seen that a simple cloud VPS provider like DigitalOcean can achieve very low latencies locally in the UK. We compared its performance and pricing against Google Cloud and Amazon S3 and saw that DigitalOcean generally performs better than both. Then, comparing DigitalOcean’s performance against CDNs serving the same content, CloudPerf reported that the VPS by itself is fast enough in the UK to compete head to head with CDNs. In this light, we can only recommend analysing carefully what kind of solution you need to adopt; making such a decision without measurements would indeed be contradictory. Using our tool CloudPerf, we discovered particular circumstances in which a large, very important market like the UK can be covered simply by a modern high-performance cloud VPS service.


Do you want to discover your best options yourself using CloudPerf? Sign up now!
