Windows Azure Outage – A Considered View

March 1, 2012

Yesterday (29th February 2012) our website at www.kynetix.com was down for most of the day. Our site is hosted in the cloud on the Windows Azure platform, which suffered a major outage.

The outage, which affected the Windows Azure platform across the world, has been attributed to a leap year bug. It’s ironic that, 12 years after the damp squib that was the Year 2000 changeover, the Azure platform suffered from a bug that should have been easily preventable.

The Windows Azure outage has, of course, played into the hands of the cloud computing cynics and will give them plenty of fuel for their arguments. However, I won’t be joining them, even after an outage like this, and we will continue to host our site on the Windows Azure platform.

This outage will certainly knock the credibility of Microsoft as a cloud platform provider, but it should not be seen as proof that cloud computing is doomed to fail. On the contrary, I think this outage will ultimately improve both the Azure platform in particular and cloud computing in general.

Every serious outage like this leads to improvements and tighter processes that make it highly unlikely to happen again, certainly for the same reason. It will also have severely tested Microsoft’s ability to respond to such a critical issue, and you can bet that Steve Ballmer will be ensuring that a full and thorough post-mortem is held and improvements are implemented.

I doubt that other cloud computing vendors will be gloating much about this outage. Every platform vendor knows that they are just one line of code away from having the same problem. In fact, most of Microsoft’s major competitors (Amazon, Google etc.) have suffered outages of their own, so it can happen to anybody.

I would venture that there is not a single large enterprise that hasn’t had some form of internal service outage. From power cuts to loss of internet connectivity to crashed servers, nobody can guarantee 100% uptime.

As further proof you only need to look at that beacon of the financial markets, the London Stock Exchange (LSE), which has had a number of outages over the last few years.

Cloud computing cynics who justify their position on the back of this outage are akin to people who are afraid of flying and feel vindicated after a plane crash. Yes, crashes occasionally happen, but each one improves safety and reliability for the future. A plane crash doesn’t stop people flying, and I don’t think this outage will stop the growth of cloud computing.


SQL Azure Database Backups

January 20, 2011

While Microsoft has always ensured that your SQL Azure database is replicated multiple times for reliability, there is now a way to perform your own ad-hoc backup.  This is known as copying.  You can copy to the same server or to a different one.  The process is asynchronous.  Full details including the T-SQL syntax (CREATE DATABASE…AS COPY OF) can be found here.
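As a rough illustration (not from the documentation), here is how you might start and monitor a copy from Python with pyodbc. The server, database, and credential values are placeholders, and the column names queried from sys.dm_database_copies should be checked against the official documentation; the key points are that the CREATE DATABASE…AS COPY OF statement must run outside a transaction and returns immediately, with the copy progressing in the background.

```python
# Minimal sketch (not production code): start an asynchronous SQL Azure
# database copy and poll its progress. Server names, database names and
# credentials are placeholders; verify the DMV column names against the
# official documentation.

import time
import pyodbc

# Connect to the master database on the destination server.
# CREATE DATABASE cannot run inside a transaction, hence autocommit=True.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:destserver.database.windows.net;"
    "DATABASE=master;UID=admin@destserver;PWD=...;Encrypt=yes",
    autocommit=True,
)

# Kick off the copy; the statement returns as soon as the copy has started.
conn.execute("CREATE DATABASE SalesDb_Copy AS COPY OF srcserver.SalesDb")

# Poll until the copy no longer appears in sys.dm_database_copies
# (entries are removed once the copy completes or fails).
while True:
    row = conn.execute(
        "SELECT c.percent_complete, c.error_code "
        "FROM sys.dm_database_copies c "
        "JOIN sys.databases d ON d.database_id = c.database_id "
        "WHERE d.name = 'SalesDb_Copy'"
    ).fetchone()
    if row is None:
        print("Copy finished - check the new database's state in sys.databases.")
        break
    print(f"Copy in progress: {row.percent_complete or 0:.0f}% complete")
    time.sleep(30)
```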

-Krip


Sharding on SQL Azure

December 24, 2010

Sharding is a technique used to horizontally partition data across physical servers.  It is used to achieve high levels of scalability.  Each database in this architecture is referred to as a shard.

Microsoft has recently published a whitepaper to explain how sharding can be accomplished on SQL Azure today.  The application will need to implement the sharding logic and route calls to individual databases as necessary.  The whitepaper can be found here: http://social.technet.microsoft.com/wiki/contents/articles/sharding-with-sql-azure.aspx
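To make the application-side routing concrete, here is a toy sketch (not taken from the whitepaper): a shard map from customer-id ranges to SQL Azure connection strings, with each query routed to the database that owns the key. The shard names, ranges and schema are invented for the example.

```python
# Toy application-side sharding: route each query to the shard whose key
# range contains the customer id. Connection strings, ranges and the Orders
# schema are invented for illustration.

import bisect
import pyodbc

# Exclusive upper bound of each shard's key range, and its connection string.
SHARD_UPPER_BOUNDS = [10_000, 20_000, 30_000]
SHARD_CONNECTIONS = [
    "DRIVER={SQL Server Native Client 10.0};SERVER=tcp:shard0.database.windows.net;DATABASE=Sales_0;UID=...;PWD=...",
    "DRIVER={SQL Server Native Client 10.0};SERVER=tcp:shard1.database.windows.net;DATABASE=Sales_1;UID=...;PWD=...",
    "DRIVER={SQL Server Native Client 10.0};SERVER=tcp:shard2.database.windows.net;DATABASE=Sales_2;UID=...;PWD=...",
]


def connection_string_for(customer_id: int) -> str:
    """Find the shard whose range [previous bound, bound) contains customer_id."""
    index = bisect.bisect_right(SHARD_UPPER_BOUNDS, customer_id)
    if index >= len(SHARD_CONNECTIONS):
        raise ValueError(f"customer_id {customer_id} falls outside every shard range")
    return SHARD_CONNECTIONS[index]


def get_orders(customer_id: int):
    """Single-shard query: connect to the owning shard and fetch the rows.
    Cross-shard queries would need fan-out to every shard plus a merge here."""
    with pyodbc.connect(connection_string_for(customer_id)) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT OrderId, OrderDate, Total FROM Orders WHERE CustomerId = ?",
            customer_id,
        )
        return cursor.fetchall()
```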

The final section of the paper explains how an upcoming feature of SQL Azure, known as SQL Azure Federations, will make sharding much simpler and, in the long run, possibly completely transparent.  Clients will continue to connect to a single database, known as the root database.  That database will contain definitions for one or more Federations.  Each Federation will include the sharding details (known as the Federation Scheme), including the Federation key(s), which describe how the data is partitioned.  Each Federation consists of one or more Federation Members, which then map to physical databases.

A T-SQL command will be available to split a Federation unit into two, resulting in two physical databases with the data spread across them.  The really cool bit is that SQL Azure will let you do this “online” by keeping the view of the federation unit consistent until the split is complete!  A T-SQL ‘merge’ is also planned which does the reverse.
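For a feel of what this could look like, here is an indicative sketch based on the draft syntax described in the whitepaper. Federations had not shipped at the time of writing, so treat the statement forms, the federation name and the key name as assumptions that may change by release.

```python
# Indicative only: SQL Azure Federations had not been released when this was
# written, so these statements follow the whitepaper's draft syntax and may
# differ in the final product. Federation and key names are invented.

import pyodbc

# Connect to the root database; federation DDL is issued there.
root = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:myserver.database.windows.net;"
    "DATABASE=SalesRoot;UID=admin@myserver;PWD=...",
    autocommit=True,
)

# Define a federation keyed on a BIGINT customer id (the Federation Scheme).
root.execute("CREATE FEDERATION Orders_Federation (cust_id BIGINT RANGE)")

# Later, split one federation member into two at cust_id = 100000. SQL Azure
# performs the split online, keeping the member's view consistent until the
# data has been redistributed across the two new physical databases.
root.execute("ALTER FEDERATION Orders_Federation SPLIT AT (cust_id = 100000)")
```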

I’d also like to refer you to Steve Yi’s post which explains this at a high level: http://blogs.msdn.com/b/sqlazure/archive/2010/12/23/10108670.aspx.  Steve is one of the reviewers of the whitepaper.

-Krip


The economics of the cloud

November 17, 2010

Microsoft has published a new White Paper about the Economics of the Cloud.

The paper argues that, given the economies of scale, large clouds could deliver computing power at up to 80 percent lower cost than small ones. It also claims that private clouds may one day carry a cost as much as 10x that of public clouds.

Read the full paper at http://www.microsoft.com/presspass/presskits/cloud/docs/The-Economics-of-the-Cloud.pdf


At Tech Ed it’s all about the cloud

November 9, 2010

From the massive posters that catch your eye when entering the Messe for Tech Ed, to the theme of the keynote, to the hardware and software on display, it’s clear that Microsoft, along with its partners, is pushing the “cloud”. This includes both on-premises computing (private cloud) and platform as a service (Azure).

With either cloud model the focus is on improving the processes of deploying, monitoring, and scaling. To that end, Microsoft is breathing new life into its package known as System Center Operations Manager (SCOM). I’ve always thought that SCOM fell more into the monitoring space than any other. However, that would be selling the product short, particularly in light of the new features coming in the 2012 version.

Response at the conference to these new features has been very positive. SCOM joins the Office programs in including a task-focused ribbon bar, but that’s just icing on the cake. One click by the IT Pro you’ve authorised in your organisation and you have a new private cloud provisioned, complete with applications. One tweeter referred to this as the “God button”. On the Azure front, SCOM will now keep an eye on the pulse of your systems there with the help of a new Management Pack. And the announcement of SCOM integration with Microsoft’s recently acquired AVIcode means deep integration with .NET applications.

As revealed at PDC and reiterated at Tech Ed, Microsoft is opening up Azure VMs for greater control by those hosting applications on them. You can direct that startup tasks be run to install third-party components. You will have full IIS capability, meaning multiple websites rather than just one. You can RDP onto the VMs for complete visibility of the instance. A much richer portal, along with an MMC snap-in for management, is on the way. And these are just a few of the enhancements coming to Azure. Microsoft is moving at lightning speed in responding to customer requests.

So there’s lots to look forward to in cloud computing!

-Krip


Flying through the clouds on auto-pilot

November 7, 2010

A big benefit of cloud computing is the ease with which you can add resources to meet increasing demand, and remove them again when demand falls. This is known as elasticity.

Windows Azure makes this a snap via its service configuration file: change the instance count for a role and the platform adjusts. Sometimes, though, you may wish to eliminate the manual effort altogether and design a solution that automatically scales based on performance and load.

This is possible via Azure’s REST-based Service Management API. Joseph Fultz explains this approach in his October 2010 MSDN Magazine article, Performance-Based Scaling in Windows Azure: http://msdn.microsoft.com/en-us/magazine/gg232759.aspx
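As a rough sketch of the idea (and not Fultz’s implementation): periodically read an average of the role’s CPU counter, decide whether to step the instance count up or down, and push the change through the management API. The diagnostics query and the management call are shown as labelled placeholders below, and the thresholds and limits are invented.

```python
# A minimal auto-scaling loop, sketched under stated assumptions (this is
# not Fultz's code). Reading the CPU counter from the diagnostics tables and
# changing the deployment configuration via the Service Management REST API
# are reduced to placeholder functions; thresholds and limits are invented.

import time

MIN_INSTANCES = 2
MAX_INSTANCES = 8
SCALE_UP_CPU = 75.0    # average % CPU above which we add an instance
SCALE_DOWN_CPU = 25.0  # average % CPU below which we remove an instance


def get_average_cpu() -> float:
    """Placeholder: average the recent '% Processor Time' samples that
    Windows Azure Diagnostics writes to table storage. Returns a dummy value."""
    return 50.0


def set_instance_count(count: int) -> None:
    """Placeholder: fetch the deployment's .cscfg through the management API,
    update the role's <Instances count="..."/> attribute, and post the new
    configuration back (authenticating with the management certificate)."""
    print(f"Would change the role's instance count to {count}")


def next_instance_count(current: int, avg_cpu: float) -> int:
    """The scaling decision itself: step up or down by one, within limits."""
    if avg_cpu > SCALE_UP_CPU:
        return min(current + 1, MAX_INSTANCES)
    if avg_cpu < SCALE_DOWN_CPU:
        return max(current - 1, MIN_INSTANCES)
    return current


if __name__ == "__main__":
    instances = MIN_INSTANCES
    while True:
        cpu = get_average_cpu()
        target = next_instance_count(instances, cpu)
        if target != instances:
            set_instance_count(target)
            instances = target
        time.sleep(300)  # re-evaluate every five minutes
```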

-Krip


Gartner tips Microsoft as one of two major players in the cloud

October 28, 2010

Gartner predicts that by 2013, only two vendors will be perceived as leaders in both cloud computing and enterprise computing. Those two vendors are likely to be Microsoft and VMware, they said.

They went on to say that Microsoft has “one of the most visionary and complete views of the cloud”.

Read more here http://blogs.pcmag.com/miller/2010/10/gartner_will_microsoft_and_vmw.php


Cloud Computing Dinner

November 19, 2009

This evening we held a cloud computing dinner for a selected group of CIOs and IT Directors from the London financial services sector. We also invited two representatives from Microsoft to provide information about the Windows Azure cloud computing platform.

The dinner, which was held in The Gherkin, proved to be a highly successful event. There is clearly a lot of interest in cloud computing right now. Perhaps the most surprising outcome was that security was not the number one issue for the attendees. There seemed to be an acceptance that cloud computing environments could be made as secure as traditional environments.

What was of more interest to people was the location and jurisdiction of the data. There seemed to be more concern about who would have access to the data and how easily it could be accessed, transferred, controlled etc.

All attendees gave the event high marks and have indicated an interest in reconvening in 2010 to see how everybody has progressed.

If you would like to be on the invite list for future cloud dinners then please contact us at events@kynetix.com or call us at 020 7836 1800.

These dinners are intentionally non-technical in nature. They examine the wider issues of cloud computing and are thus ideally suited to board-level IT executives.

