Vast compute power at low cost through utility-based pricing may be changing the economics, as well as the look and feel, of the modern data centre, but new levels of management complexity in core IT infrastructure and applications will continue to drive many firms to investigate services offered by third-party hosts.

The use of third-party data centres for disaster recovery (DR) has received plenty of publicity in the last five years in the wake of power outages, network failures and virus attacks. However, the focus on DR has disguised a more significant trend towards broader use of data centre hosts. Even the term disaster recovery is falling into disuse in favour of business continuity, as firms seek not only to retrieve data but also to use failover capabilities to avoid any significant disruption to processes, even in the event of extraordinary problems. And having achieved that strategic objective, many are now going a step further, using their relationships with hosts to improve and add services.

Data centre providers have evolved to offer not just colocation facilities, where cages of server equipment run by various customers sit side by side, but also managed services that reduce the burden on IT departments. As many firms struggle to maintain best practice and stay on the right side of a tide of corporate governance compliance mandates -- all on strictly rationed IT budgets -- they are increasingly leaning on third parties to help them eliminate pain points that chew up valuable admin time. These include:

- Remote storage management: ensuring that storage performance at the primary data centre is optimal
- Systems monitoring: maintaining an audit of computer health, and anticipating and fixing problems on the fly
- Security: scanning incoming email for viruses, blocking dubious web sites and protecting against malicious attacks

A large part of the reason for adopting third-party data centre services is that IT departments face being overwhelmed by complexity, and this extends to data centres themselves, which are beginning to look very different to the server rooms of old. Some key changes now under way include:

- A move to blade servers and other ultra-dense equipment
- Availability of utility-like tariffs for servers and storage
- The arrival of virtualisation in the volume server segment
- Multi-core processors affording huge compute power
- The growing importance of Linux and open-source software
- Faster networking capabilities becoming available at lower cost
- Grid networks offering supercomputing on volume systems
Rather than big-iron monolithic mainframe and Unix boxes, many firms are turning to ultra-dense equipment, notably blade servers that can be racked to add incremental scale-out computing power, or new capabilities such as firewall security or caching, at very competitive prices. The downside of this approach is that, without intervention, this Lego-like accumulation of equipment can lead to heat dissipation problems thanks to the huge amount of power required. Understanding how to provide that power -- and then cool the equipment racks -- is one of the most significant and unwanted challenges for IT chiefs today. Turning to third parties to handle such challenges leaves these leaders free to focus on strategic rather than operational issues.

Other changes that have filtered down from the mainframe and Unix server world to lower-cost systems also play to the strengths of data centre hosts. Utility-like tariffs can provide charges analogous to an energy bill, allowing buyers to pay only for what they use. These tariffs can also flex to accommodate usage spikes, so that peak loads can be tolerated without requiring buyers to pay for a higher software licence or a higher server specification.

Virtualisation software, meanwhile, is set to revolutionise the way data centre servers provision platforms, resources and data by partitioning operating systems, user environments and tasks. For example, a single server could run a payroll application on a Unix partition, an email server on Linux and a file-and-print server on Windows, with security scanning running as a background operation. Resources can be allocated dynamically to make full use of the hardware, rather than one application or environment unnecessarily hogging power.
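The utility-like tariff described above works much like a metered energy bill: usage is sampled, each sample is billed at a base rate, and short spikes within an agreed burst allowance do not push the buyer onto a higher fixed tier. A minimal sketch of that billing logic follows; every rate, commitment level and allowance here is a made-up illustration, not any provider's actual pricing.

```python
def utility_bill(hourly_units, rate=0.12, committed=100, burst=0.2):
    """Toy metered-tariff calculation; all figures are illustrative.

    Each hour's usage is billed per unit consumed. Spikes are tolerated
    up to a burst allowance above the committed level (here 20%), so a
    brief peak does not force the buyer onto a higher fixed tier.
    """
    cap = committed * (1 + burst)  # spike-tolerance ceiling in units
    return round(sum(min(used, cap) * rate for used in hourly_units), 2)
```

A flat-rate buyer would in effect pay for peak capacity around the clock; under a metered tariff, quiet hours cost very little and a one-hour spike to 120 units is simply billed as 120 units at the base rate.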
The compute power to serve such capabilities will be provided by a new generation of volume processors that pack multiple chip cores into one CPU socket. Dual-core chips will arrive this year from Intel and AMD, and in future four-, eight- and higher-core configurations will provide immense power. At the same time, software is changing as Linux and other open-source software offer an affordable, scalable alternative to Windows and vendor-specific flavours of Unix. Network connectivity is also cheaper, as service providers rid themselves of a glut of bandwidth capacity and gigabit-plus Ethernet becomes common on local-area networks. Some companies are taking advantage of such trends to offer computing grids -- networks connecting huge numbers of servers that allow customers to run complex scientific, financial and other routines on a rental basis.

This perfect storm of changes is leading some observers to predict that the corporate data centre will eventually disappear altogether. Nicholas G. Carr, in his recent MIT Sloan Management Review article "The End of Corporate Computing", suggests that just as private power generators were replaced by electric utilities, the trend towards centralised IT utilities is inexorable. "As a business resource, information technology today looks a lot like electric power did at the start of the last century [when manufacturers built and maintained their own generators]," Carr writes. "Companies go to vendors to purchase various components -- computers, storage drives, network switches and all sorts of software -- and cobble them together into complex information-processing plants, or data centres, that they house within their own walls. They hire specialists to maintain the plants, and they often bring in outside consultants to solve particularly thorny problems. Their executives are routinely sidetracked from their real business -- manufacturing automobiles, for instance, and selling them at a profit -- by the need to keep their company's private IT infrastructure running smoothly."

While Carr may overstate his argument to provoke, it is indisputable that the wasteful situation he describes in corporate data centres -- where most servers and storage systems operate at very low capacity levels while others are overloaded -- is common. There are signs that some firms now recognise this and are voting with their feet, moving away from dependence on premises-based equipment. Some of the fastest-growing technology companies today are application service providers such as Salesforce.com, which provides online salesforce automation software, while others, such as Symantec and Message Labs, are thriving by offering managed email-scanning services.

Many firms will still want to retain an internal data centre, for reasons such as security and a sense of control and ownership, but many others are gradually realising that they do not need an internal server room to stay on top of information management. As technology complexity continues unabated, and the requirement for IT to serve the business and minimise risk is accentuated, the opportunity to parcel out more services to data centre hosts is likely to appeal.

- Ends -

About Interxion: Interxion is Europe's leading provider of carrier-neutral data centre and managed services. With 20 data centres across Europe, it has the largest footprint and currently supports 700 customers, including enterprises, systems integrators, internet service providers, hosting and telecommunications companies. For more information see www.interxion.com
Press contacts:

Konstantin Borman
Interxion
Tel: 00 31 (0) 208 807 600
konstantinb@interxion.com

Ali Moinuddin
Xion Ltd
Ph: +44 (0) 20 8956 2800
Fax: +44 (0) 20 8956 2801
a.moinuddin@xion.org.uk
http://www.xion.org.uk