Why bandwidth monitoring is key to the cloud

PC World published an interesting article last week that explores the relationship between bandwidth, the cloud and business ROI. We’re increasingly finding ourselves pulled into cloud projects (both before they go live and after they’ve gone wrong), so we’ve had a chance to build up our own intelligence on the topic.

The basic premise of the PC World argument is pretty straightforward: cloud computing has the potential to deliver huge operational cost savings to organizations IF (and this is the big IF) they can get a handle on the bandwidth demands. Their argument is that cloud implementations are failing because organizations aren’t listening to the needs of the network and, as a result, are finding that their apps fail to deliver acceptable levels of user experience, their backups time out and their databases get out of sync.

Inside the cloud, bandwidth requirements relate to connectivity within the virtualized environment and to end-user connectivity to the cloud through the access network. For a cloud implementation to be successful both must be in sync, which requires very high levels of network visibility.

The article looks closely at Intercontinental Hotels’ cloud experience, which involved fundamentally re-architecting their network so that data could be reached quickly and their global data centers could stay in sync. For them, bandwidth was, and remains, a critical success factor.

Interestingly, very few enterprise applications are designed to operate in a cloud environment, and most turn out to require significant amounts of bandwidth to accommodate the ‘chat’ that goes on (locally and remotely) between the various database, storage and application servers. As part of the article, Theresa Lanowitz (analyst at Voke) asks why more organizations don’t include bandwidth considerations in their cloud strategies.

So the question is: why aren’t organizations doing more to manage bandwidth, and how can they ensure they don’t get application meltdown when the cloud comes online and critical business apps start contending with Monday morning Facebook video requests?

Here are three insights drawn from our recent experiences in the cloud:

1) Very few organizations know what’s really on their networks to start with and, as a result, are stumbling around in the dark when it comes to planning and forecasting bandwidth requirements. Understanding what users are actually doing, how bandwidth is being consumed and how application use changes by day and hour is key.

There are, of course, tools out there that provide network visibility, but they typically work at the wrong level of the stack (layer 3 or 4), and if your network is carrying traffic at speeds in excess of 2Gbps there’s a good chance they’re only giving you half the picture.

If you’re planning a cloud move and want to know what you’re dealing with before you start, then you need trusted visibility into what’s happening at layer 7 (regardless of line rate), which is exactly where we create value through visibility.
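
To make that concrete, here’s a minimal sketch of the kind of per-application, per-hour baseline we mean. It assumes you can already export flow or usage records to a CSV (the flows.csv file and its timestamp/app/bytes columns are purely illustrative):

```python
# Minimal sketch: roll bandwidth usage up by application and hour from
# exported flow records. Assumes a hypothetical flows.csv with columns
# "timestamp" (epoch seconds), "app" (layer-7 label) and "bytes".
import csv
from collections import defaultdict
from datetime import datetime

usage = defaultdict(int)  # (hour, app) -> total bytes

with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        hour = datetime.fromtimestamp(float(row["timestamp"])).strftime("%Y-%m-%d %H:00")
        usage[(hour, row["app"])] += int(row["bytes"])

# Print per-hour, per-application totals so you can see how the
# usage profile shifts through the day.
for (hour, app), total in sorted(usage.items()):
    print(f"{hour}  {app:<20} {total / 1e6:8.1f} MB")
```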

2) Figuring out what’s going to happen once the cloud goes live is something of an art. It’s also absolutely critical. Get it wrong and it all grinds to a halt; get it right and nobody notices you’ve just cut the IT budget in half.

Sure, there are a number of specialist test and measurement tools on the market that can help to artificially create peak traffic loads (the article notes vendors Ixia and Spirent in this regard), but in our experience the only way to figure out what’s really going to happen is to take a recorded copy of your own network traffic and replay it at different rates to find out how things are going to interact. Your network is as unique as your fingerprint, and there’s no substitute for testing with real network traffic.
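
For illustration only, here’s a minimal sketch of the replay idea in Python using Scapy. The capture file, interface name and speed-up factor are assumptions, and at multi-gigabit rates a dedicated replay tool is a better fit, but the principle is the same: keep your traffic’s own timing profile and scale the rate.

```python
# Minimal sketch: replay a recorded capture at a scaled rate to see how the
# environment copes under heavier load. Assumes capture.pcap and eth0 exist.
import time
from scapy.all import rdpcap, sendp

SPEEDUP = 2.0  # replay at twice the original rate

packets = rdpcap("capture.pcap")
previous_ts = None

for pkt in packets:
    if previous_ts is not None:
        # Preserve the original inter-packet gaps, compressed by SPEEDUP.
        time.sleep(max(float(pkt.time) - previous_ts, 0) / SPEEDUP)
    previous_ts = float(pkt.time)
    sendp(pkt, iface="eth0", verbose=False)
```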

3) Once your cloud is up and running, it’s essential to have highly accurate visibility tools in place at strategic points across the infrastructure and access network to monitor bandwidth and application usage, and to ensure that usage doesn’t exceed known performance parameters. With the right tools, the time taken to troubleshoot problems can be slashed from hours or days to minutes. In our experience it only takes a relatively subtle shift in the application usage profile to have a substantial impact on bandwidth requirements.
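
As a rough sketch of what such a watchdog might look like (using psutil’s interface counters; the 2Gbps threshold and polling interval are illustrative only):

```python
# Minimal sketch: poll interface counters and flag when utilisation exceeds
# a known performance parameter. Threshold and interval are illustrative.
import time
import psutil

THRESHOLD_BPS = 2e9   # alert above roughly 2 Gbps
INTERVAL = 10         # seconds between samples

previous = psutil.net_io_counters(pernic=True)
while True:  # runs until interrupted; a real tool would daemonize this
    time.sleep(INTERVAL)
    current = psutil.net_io_counters(pernic=True)
    for nic, counters in current.items():
        prior = previous.get(nic, counters)
        delta = (counters.bytes_sent + counters.bytes_recv) - \
                (prior.bytes_sent + prior.bytes_recv)
        bps = delta * 8 / INTERVAL
        if bps > THRESHOLD_BPS:
            print(f"WARNING: {nic} averaging {bps / 1e9:.2f} Gbps over the last {INTERVAL}s")
    previous = current
```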

On the whole, we’re in violent agreement. However, what PC World has failed to acknowledge is just how hard it is for organizations to get the necessary levels of visibility at speeds exceeding 2Gbps, which is exactly where most large organizations are today.
