DCA Member's Blog
Please post data centre industry comments, experiences, ideas, questions, queries and goings-on here - to stay informed with updates, ensure you are subscribed by clicking the "subscribe" button.



Why is Data Centre Room Integrity Testing so important?

Posted By Richard Warren, Workspace Technology Limited, Friday 17 February 2017

Why is Room Integrity Testing so important?

If you have a gas suppression system, a room integrity test is of fundamental importance in order to check the ability of the enclosure to retain an effective concentration of gas, which is critical to the safe operation of the system.

Companies risk losing millions of pounds from a single data centre glitch. In the event of a catastrophic data centre fire, a company may not only lose operability but consequently go out of business.

  • How would a loss of service affect you?
  • Do you have safeguards in place?
  • Would you be able to recover?

Workspace Technology offer a selection of Planned Preventative Maintenance (PPM) services including Room Integrity Testing to protect your business.

Once a Room Integrity Test has been completed, Workspace Technology will provide either a Pass Certificate, which can be used as a reference for insurance purposes, or a detailed report indicating why the test failed and what needs to be done to ensure a pass.

Click here to find out more about Workspace Technology's Room Integrity Testing Services >>

When should integrity tests be done?

1) An integrity test should be performed immediately after a system has been installed and routinely every year thereafter.

2) Any alteration such as changing a door, putting in cables or even replacing equipment can affect the room's gas-holding ability (integrity). Routine servicing of equipment will not reveal this, so an Integrity Test should be carried out after any alteration.

3) As part of an Annual Planned Preventative Maintenance scheduled test.


For more information on our Data Centre & Server Room Solutions, visit our website or call us on 0121 354 4894


Tags:  data centre  fire suppression  Planned Preventative Maintenance  PPM  room integrity 


Don’t be held back by your legacy data centre

Posted By Felicia Asiedu, Infinity SDC Ltd, Monday 21 September 2015
Updated: Monday 21 September 2015

We live in the age of the nimble and agile challenger. Or to put it another way, companies using the latest digital technology to disrupt markets, deliver better services and steal market share from larger and more established rivals.

Equally, though, we live in the era of hugely successful business giants who dominate their chosen markets at home and abroad and many of them have a dark secret. Hidden away behind closed doors, in locked rooms they run their operations using “legacy” IT systems, many of which date back thirty years or more to the days when mainframe computers ruled the roost.

For the most part, these venerable systems run anonymously in the background, processing payroll for government departments, running record systems for health providers, managing procurement for manufacturers and handling billions of pounds worth of transactions for City institutions. Then one day something goes wrong. A bank fails to process direct debits, or benefit payments are delayed. At that point the legacy IT systems (still used by a surprising number of major organizations) step to the front of the stage as critics round on hardware and software combinations that are no longer fit for purpose.

Legacy systems are firmly embedded in the world’s business eco-system. For instance, here in the UK, companies such as John Lewis and Tesco run their operations using old mainframes. Perhaps more surprisingly, according to a recent report in Information Age, mainframes process about 30 billion transactions for the world’s banks, each and every day.


The bigger picture – escaping the legacy mindset

But it’s not just about mainframes. Arguably you can define legacy IT as any aging system that is being pushed beyond its originally intended design limits in order to cope with the transactional demands of the modern world. Perhaps it is also difficult and expensive to maintain due to a shortage of suitably experienced engineers and programmers. Certainly it will hold the organization back.

And that last point is key. A legacy system might be based around a 30 year old mainframe, it could be an aging ERP solution, or it could be a data centre that is no longer delivering what the organization needs. The common factor is that its limitations are feeding through to impaired performance.


Caught in the headlights

So why do we still see so many legacy systems fulfilling business critical roles?

Well the short answer is that change is difficult. There are certainly risks associated with maintaining legacy systems. In recent times we’ve seen flight disruption thanks to problems with an aging air traffic control system and a large financial institution’s reputation damaged by a computer failure that locked down customer accounts. But there are also risks in upgrading. Transferring data from one system to another is a huge task for a major company. There is a fear that data will be lost and that services will be disrupted. Best to stick to the stable and reliable devil you know then.

Is there really a choice?

Take the banking sector.  Analysts are already warning that incumbent banks will lose out to challengers with better systems. To remain competitive, major institutions must replace their legacy systems. To a greater or lesser extent the same is true in all sectors. What’s required is a strategy and a willingness to take some short-term pain.


An opportunity

And a strategic rethink creates opportunities not just to improve legacy systems but to also review and upgrade every aspect of IT.

Data centres provide a case in point. Ten years ago it was common to provision a vast number of racks, each supplying low-level power (around 1kW), to run old machines that needed little more than a steady supply of electricity. Fast forward to 2015. To replace a large quantity of legacy equipment, an IT manager can now purchase a single piece of hardware, usually in the form of a blade server, that does a better job than all the old equipment combined. The only snag is that those low-powered racks will no longer do, and all of a sudden the IT manager needs perhaps 20 times more power and cooling in a tenth of the space required 10 years earlier.

Another issue that can be slightly daunting is the need to forecast for years of business activity to buy the right equipment and take the right amount of space and power from day one.  The lack of flexible space, power and contract terms afforded by many data centre providers can often deter major businesses from taking steps forward with their IT.  But with the right partner, it shouldn’t be a worry.

For a major company, a change from one solution to another is not something to be undertaken lightly. But within the context of a much broader strategy that will deliver significant increases in efficiency, there is a real opportunity to work together to plan a move to a data centre that can provide improvements in cost, reliability, resilience, security and flexibility.

In a competitive marketplace, all aspects of IT should be fit for purpose. Making the necessary changes requires an escape from the legacy mindset.


Tags:  data centre  server hosting  upgrade 


5 Best Reasons to Choose an Online Backup on a VPS or Cloud

Posted By Suhaib Logde, Saturday 12 September 2015

Cloud backup is an online data storage option. If data is lost or damaged through some misfortune, it can easily be retrieved through a cloud hosting service. The data is stored on cloud servers in remote data centres and can be accessed from anywhere using a secure user login. It makes backups more secure, as users no longer need to store their data on physical disks or drives.

5 reasons to choose an online backup on the cloud:

Secured: Online backup on the cloud is highly secure because the data is encrypted before it is stored in the cloud vault, and only the user holds the key needed to decrypt and access it. Online cloud backup therefore protects mission-critical business data.

Agile: Accessing data backed up on a cloud server is very convenient. External storage devices came with many hassles: carrying them around, careful handling, the risk of virus infection and physical damage. Online cloud backup lets you access your data simply by logging into your account from anywhere, at any time, by connecting to the remote cloud data centres.

Affordable: Online backup through cloud hosting services incurs lower costs than traditional backup and storage media such as CD-ROMs and external disks and drives.

Eliminates the limits of traditional backup media: Traditional CDs, DVDs, pen drives and other external drives were easily lost, and frequently corrupted through virus infection or physical mishandling. Besides, as they became obsolete they could become unreadable, leaving data damaged beyond repair.

Reliable: Agile, affordable, with encrypted data and secure logins, online cloud backup is extremely reliable.

Reasons to choose an online backup on a Virtual Private Server:

Self-managed: The FTP backup feature on a VPS also enables self-managed, scheduled backups to secure users' data.

Secured: Online backup on a Virtual Private Server comes with a secure FTP login, which makes it highly secure as data is encrypted in transit.

Affordable: The costs involved in online backup on a VPS are very low, especially compared with traditional backup devices.

Scalable: Online backup on a VPS is also easily scalable; as storage needs grow, the backup service can be expanded with sufficient bandwidth.

High Performance: Reliable, secure online data backup on a VPS allows users and clients to focus on their core business, resulting in better performance and better results.

Tags:  cloud hosting  Data Centre  dedicated servers  server hosting  vps server 


Your area of expertise and interests

Posted By Administration, Wednesday 19 August 2015

Dear all members, 

We have created two new tabs for your profile - 'Area of expertise' and 'Area of interest'. 

We have created these for us as the DCA to better understand and serve our members, but also to enable you to connect with other members who share similar interests and needs. This will also enable end users to find you and vice-versa.

It is a multi-select box that can be accessed when you log into your profile, under 'Manage Profile > Edit Bio'.

If you have any questions please feel free to contact our Membership Executive at

Tags:  data centre  end users  expertise  interests 


Why roll over? Migrating data centres might be hard work, but the benefits are worth it

Posted By Ian Bryant, VP of Advanced Services at CenturyLink EMEA, Thursday 25 June 2015
Updated: Thursday 25 June 2015

How often do you get a letter from your car insurance company reminding you that your policy is up for renewal, and rather than shop around you just let it roll over? We’ve all been guilty of this at one time or another, even when the premium is higher, so it’s no wonder that when it comes to renewing a data centre contract, people just continue with their existing supplier. 

And there’s no getting around the fact that many CIOs and infrastructure directors are either reluctant, or not in a position, to migrate to the cloud. Indeed, in a recent global study of 1,600 ICT decision makers, 41% of respondents said migrating complex apps to the cloud is “more trouble than it’s worth”.

Part of the reason for this is that many cloud vendors don’t provide a clearly defined journey to the cloud. We all know that almost everything will be hosted in the cloud eventually, but not all applications are cloud-ready, and many are quite expensive to migrate. The cloud-only vendors make light of this, but our take – as a provider of cloud, colo and managed services – is to provide plenty of options, all under one data centre roof.

If you’re in a situation where you’re thinking, “I want to keep my data and applications in a physical environment, but need to cut costs, shorten my contract length and benefit from better availability,” then migrating your data centre is definitely worth investigating. This is especially the case if you have your eye on public or private cloud hosting in the future. 

The flexibility aspect is an interesting one. The cloud business model has influenced people’s expectations of data centre contracts. In the old days contracts were three, five and even seven years long. Those days are over – 12 to 24 months is more common for a data centre contract, and six to 12 months isn’t out of the question. 

It means that data centre migration needs to be less of a big deal from the customers’ perspective. It also presents commercial challenges to data centre providers as they can’t be certain of longer term revenues. 

Providers have traditionally relied on big, annuity-based contracts, allowing them to reinvest that five-year income into other services. All of a sudden there’s a utility-based model with no guaranteed income stream.

Companies like ours, which were originally based on a data centre business already, have the infrastructure in place and we’re always building more because we know that demand is constantly increasing. We’re opening new data centres all over the world and expanding those we have every year.

Where do businesses typically go wrong with a data centre migration?

A data centre migration goes wrong when there’s not enough planning. 

The term “lift and shift” is used within the industry, but that’s a bit of a misnomer and under-sells the size of the task. Luckily this is our bread and butter. One example: we recently completed a migration for a customer involving more than a thousand devices, over a series of physical moves. We couldn’t have done this without planning every phase thoroughly.

As part of our methodology, we consider a multitude of things, such as looking closely at the business model, how IT affects the business, the applications that are run, and whether there is any resilience built into the application.

We review the technology underneath it all and ask lots of questions. Is it all up to date? (It’s not uncommon for the answer here to be “no”.) What about support contracts? What about your escalation process, and the skills and the staff?

In any migration the discovery phase is the most important.

When migrating a data centre you need to understand every piece of hardware and every part of the network connectivity. You also need to understand every dependency between platforms and those between applications. 

Because it’s worth it

As the colocation market continues to become more competitive and businesses expect the flexibility they can see in public cloud contracts, we’ll see more and more data centre migrations. For businesses that are looking at a bimodal IT strategy, it also makes sense to migrate. This way legacy applications can be migrated to colo and new applications can be run on the cloud – all with one provider and all managed from one dashboard. It’s when companies have achieved this that they look back at the hard work in the planning phase and see that it’s all worthwhile. 

Tags:  cloud  data  data centre  decision makers  ICT  migration 


DCA Relocations & Decommissioning Workshop Reminder

Posted By Kelly Edmond, Monday 5 May 2014
Updated: Thursday 1 May 2014
Here is a reminder to all who may be interested in attending the DCA Relocations & Decommissioning Workshop on Friday 16th May

This group is dedicated to best practice and collaboration on migrating, relocating and decommissioning data centres. With much media focus on the responsible disposal of data centre equipment and the risks involved in moving and removing data centres, members have asked the DCA to establish a Steering Committee for data centre "Relocations and Decommissioning". We have therefore arranged a workshop session, to be held at the University of East London, to gain your views on what collaborative actions, if any, are required by the DCA. All members are welcome to contribute.

All the details of this workshop are in the Event Calendar. Don't forget to RSVP!

Tags:  data centre  decommissioning  relocations 


It’s all about the network – data centre connectivity

Posted By Tanya Passi, Geo Networks, Wednesday 30 April 2014
Data centres cannot operate in isolation, so it is surprising that many operators continue to invest so much time and effort in the physical building, with little thought about the connectivity requirements of their target customers. To maximise investment, connectivity must go hand in hand with the infrastructure build, rather than being bolted on as an afterthought.

As demand for data centre space continues to grow rapidly, spurred by the increasing move to cloud computing and big data initiatives, huge levels of investment are being poured into this sector. New facilities are springing up in the capital and beyond, as enterprise customers realise that they can store less critical information slightly further afield. Significant focus is placed on the construction and interior of data centres to ensure maximum levels of security, power and operational efficiency, but what about connectivity? Whilst SMEs may be satisfied with only one carrier at a data centre site, large enterprise customers are looking for diversity.

An open access connectivity model can enhance the data centre operator’s proposition by offering its customers a choice of telecom service providers. Open access involves deploying fibre to the data centre and making the underlying infrastructure available to any other network operator or end user so that they can connect directly from the site to the network, or networks, of their choice. This approach is a long term investment rather than a short term remedy, requiring data centre owners and network operators to work in harmony, offering control to the site owners and choice to the end user.

Well-connected data centres present an attractive proposition to end users because as well as choice, they ensure a strong level of competition between different network providers, guaranteeing that connectivity is competitively priced. For operators, there is no need to make multiple investments with different operators, paying each to build a bespoke link to the site, when it’s possible to contract with a single provider and achieve the same goal. It ensures optimum delivery of fibre to the site, so that customers can get access to their data everywhere and anywhere once connected, and the best return on investment for owners.

An open access model, if undertaken correctly has the potential to benefit all: the data centre operator, network provider and most crucially the end user. Connectivity can no longer be an added extra. For the most successful, innovative operators it must be a fundamental part of the data centre proposition from the get go.

Tags:  connectivity  data centre  infrastructure 


A guide to data centre metrics and standards for start-ups and SMBs

Posted By Anne-Marie Lavelle, London Data Exchange, Thursday 27 March 2014
Updated: Thursday 27 March 2014


Once you have made the choice to co-locate your organisation’s servers and infrastructure with a trusted data centre provider, you need to understand the key metrics and standards used to evaluate and benchmark each operator. With so many terms to get to grips with, we felt it necessary to address the most prevalent ones for data centres.

The Green Grid has developed a series of metrics to encourage greater energy efficiency within the data centre. Here are the terms we think you’ll find most useful.

PUE: The most common metric used to show how efficiently data centres use their energy is Power Usage Effectiveness. Essentially, it is the ratio of the total energy used to run the data centre as a whole to the energy used by the IT equipment alone. The total incorporates overheads such as UPS systems, cooling systems, chillers, HVAC for the computer room, air handlers and data centre lighting, on top of the energy consumed by the servers, storage and switches themselves.

Ideally a data centre’s PUE would be 1.0, which means 100% of energy is used by the computing devices in the data centre – and not on things like lighting, cooling or workstations. LDeX for instance uses below 1.35 which means that for every watt of energy used by the servers, .30 of a watt is being used for data centre cooling and lighting making very little of its energy being used for cooling and power conversion.

CUE: Carbon Usage Effectiveness, also developed by The Green Grid, complements PUE and looks at the carbon emissions associated with operating a data centre. It is calculated by taking the total carbon emissions due to the energy consumption of the data centre and dividing by the energy consumption of the data centre's servers and IT equipment. The metric is expressed in kilograms of carbon dioxide equivalent (kgCO2eq) per kilowatt-hour (kWh); a data centre powered 100% by clean energy will have a CUE of zero. It provides a great way to identify improvements to a data centre's sustainability and to track how operators improve designs and processes over time. LDeX is run on 100% renewable electricity from Scottish Power.

WUE: Water Usage Effectiveness calculates how efficiently a data centre uses water within its facilities. It is the ratio of annual water usage to the energy consumed by the IT equipment and servers, expressed in litres per kilowatt-hour (L/kWh). Like CUE, the ideal value of WUE is zero, meaning no water was used to operate the data centre. LDeX does not operate chilled-water cooling, meaning we do not use water to run our data centre facilities.
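To see how these three Green Grid ratios fit together, here is a minimal sketch; the annual figures are invented for illustration and are not LDeX's actual meter readings:

```python
# Illustrative Green Grid metric calculations (example figures, not real data).

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    """Carbon Usage Effectiveness: kgCO2eq emitted / IT energy (kWh)."""
    return total_co2_kg / it_kwh

def wue(annual_water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water / IT energy (kWh)."""
    return annual_water_litres / it_kwh

# Hypothetical annual figures for a small facility:
it_kwh = 1_000_000        # energy drawn by servers and other IT equipment
facility_kwh = 1_350_000  # total, including cooling, UPS losses and lighting
co2_kg = 0                # 100% renewable supply gives zero operational carbon
water_l = 0               # no chilled-water cooling, so no water consumed

print(pue(facility_kwh, it_kwh))  # 1.35 -> 0.35 W of overhead per IT watt
print(cue(co2_kg, it_kwh))        # 0.0
print(wue(water_l, it_kwh))       # 0.0
```

The arithmetic mirrors the definitions above: a PUE of 1.35 with zero CUE and WUE corresponds to the renewable-powered, water-free facility described in the post.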

Power SLAs: A power Service Level Agreement sets out the compensation offered in the unlikely event that the power a data centre operator has agreed to provide to a client is lost and service is interrupted, affecting your company's business. The last thing your business wants is for people to be unable to access your company's website, so if power to your rack is cut for some reason, make sure you have measures in place.

Data centres refer to the Uptime Institute for guidance on meeting standards for downtime. The difference between 99.671%, 99.741%, 99.982% and 99.995%, while seemingly nominal, can be significant depending on the application. Whilst no downtime is ideal, the tier system allows the durations below for services to be unavailable within one year (525,600 minutes):

  • Tier 1 (99.671%) status would allow 1729.224 minutes
  • Tier 2 (99.741%) status would allow 1361.304 minutes
  • Tier 3 (99.982%) status would allow 94.608 minutes
  • Tier 4 (99.995%) status would allow 26.28 minutes
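The downtime figures in the list above follow directly from the availability percentages; a quick sketch of the arithmetic:

```python
# Allowed annual downtime for each Uptime Institute tier availability figure.
MINUTES_PER_YEAR = 525_600  # 365 days * 24 hours * 60 minutes

def allowed_downtime_minutes(availability_percent: float) -> float:
    """Minutes per year a service may be unavailable at a given availability."""
    return (100 - availability_percent) / 100 * MINUTES_PER_YEAR

for tier, availability in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    # Prints e.g. "Tier 1: 1729.224 minutes", matching the list above.
    print(f"Tier {tier}: {allowed_downtime_minutes(availability):.3f} minutes")
```

Multiplying the unavailable fraction (100% minus the availability) by the 525,600 minutes in a year reproduces each figure, from 1729.224 minutes at Tier 1 down to 26.28 minutes at Tier 4.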

LDeX has infrastructure resilience rated at Tier 3 status, offering customers peace of mind that their business is protected in the unlikely event of an outage. We operate closed control cooling systems in our facilities, enabling us to offer tight environmental parameter SLAs. LDeX operates cold aisle temperature SLA parameters of 23°C ±3°C and relative humidity (RH) of 35% – 60%.

Some data centres run fresh air cooling systems, which make it hard to regulate RH; quite often their RH parameters are 20% – 80% or beyond. High humidity in the data hall has on occasion resulted in rust on server components, while low RH can produce static electricity. Make sure you look into this and ask about it.

Understand the ISO standards that matter to your business

ISO 50001 – Energy management

Using energy efficiently helps organisations save money as well as helping to conserve resources and tackle climate change. ISO 50001 supports organisations in all sectors to use energy more efficiently, through the development of an energy management system (EnMS).

ISO 50001:2011 provides a framework of requirements for organizations to:

  • Develop a policy for more efficient use of energy
  • Fix targets and objectives to meet the policy
  • Use data to better understand and make decisions about energy use
  • Measure the results
  • Review how well the policy works, and
  • Continually improve energy management

ISO 27001 – Information Security Management

Keeping your company’s intellectual property secure should be a top priority for your business, and ensuring that your data centre provider offers this sort of resilience is imperative. The ISO 27000 family of standards helps organizations keep information assets secure.

Using this will help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties.

ISO/IEC 27001 is the best-known standard in the family providing requirements for an information security management system (ISMS). An ISMS is a systematic approach to managing sensitive company information so that it remains secure. It includes people, processes and IT systems by applying a risk management process.

It can help small, medium and large businesses in any sector keep information assets secure.

Like other ISO management system standards, certification to ISO/IEC 27001 is possible but not obligatory. Some organizations choose to implement the standard in order to benefit from the best practice it contains while others decide they also want to get certified to reassure customers and clients that its recommendations have been followed. ISO does not perform certification.

PCI DSS – Banks and businesses alike conduct a lot of transactions over the internet. With this in mind, the PCI Security Standards Council (SSC) developed a set of international security standards to ensure that service providers and merchants protect payments, whether made with a debit, credit or company purchasing card. As of 1st January 2015, PCI DSS 3.0 becomes mandatory. The standard is broken down into 12 requirements, ranging from vulnerability assessments to encrypting data. Make sure to ask whether your data centre operator meets this standard.

With the increased stakeholder scrutiny placed on data centres, steps need to be taken to make sure that the data centre operator you are considering is aligning its strategy not only with some of the metrics and standards mentioned here, but also with the other security, environmental and governmental regulations that have been introduced.

Working for a successful data centre and network services provider like LDeX has enabled me, as a relative newbie to the data centre industry, to get to grips with these terms and to help clients understand where LDeX sits in comparison with our competitors.

Anne-Marie Lavelle, Group Marketing Executive at LDeX Group

Tags:  connectivity  Cooling  CUE  data centre  Datacentre  Date Centre  efficiency  ISO standards  operational best practice  PCI DSS  PUE  WUE 


Tender Opportunity - Royal Holloway, University of London, Surrey

Posted By Kim Cooper, Wednesday 26 February 2014

Monitoring, Maintenance Repair and Testing of Data Centre Support Infrastructure

Reference number: RHUL/MSC017/14

Closing Date for accessing documents: Tuesday 18th March 2014

Pre-qualification Questionnaire Submission Date: Midday on Wednesday 19th March 2014

We have today published an opportunity that may be of interest to your company under the ‘Current Tenders’ section of the College’s In-tend electronic tendering system. If you would be interested in bidding for this work, please go to the College website via the link below, and after reading the terms and conditions of submission for electronic tendering, progress (via the link at the top of the page) to the Intend website, to register your company details and view the tender documents. The system will email you once the registration process is complete. The Intend system is widely used throughout the university, and local government sector – so you may have come across it before during other tenders. If you are new to the system, please visit the ‘Help’ and ‘User Guidance’ information on the website, which goes through the process in detail.

We will be using the online ‘Correspondence’ and ‘Clarification’ tools for distributing any further updates/information during the tender process, so that any information is retained within the system for audit purposes.

If you have any questions regarding the registration process or any difficulties in registering your company on the system, please contact the Central Procurement Unit at the College on the number below.

If you have any questions regarding the tender documentation or the tender process itself, please send them via the ‘Correspondence’ section of Intend.

Kim Cooper - Purchasing Manager, Central Procurement Unit

Tel: +44 (01784) 414181

Tags:  data centre  maintenance  monitoring  support infrastructure  tender 
