Thursday, December 19, 2013

IBM Storwize - 'Taking a Technical View'

When IBM launched its all-new, virtualised "Storwize" storage portfolio a few years back, it consisted of a single product: the V7000. Today this is still the top-of-range model, but the portfolio has expanded to include a couple of other systems.
 
The second model launched was the V3700. Like all in the range, this system runs the same SVC software as its V7000 big brother and so benefits from the same highly intuitive GUI and ease of management, but it has less hardware grunt inside and so sacrifices scalability and a couple of features. Scalability on the V3700 is considerably reduced: it supports only half the internal storage per controller, and controllers cannot be clustered. Additionally, external storage virtualisation is lost, meaning that the V3700 cannot manage external storage systems as if they were its own storage.
 
This is where the V5000 comes in. It fills the gap between the top-of-range V7000 and the baby V3700. With more powerful processors and more cache in the controller's two nodes, more storage can be attached to the V5000, and two V5000s can be clustered together. More importantly, SVC-style external storage virtualisation is supported, meaning that, like the V7000, it can manage external storage. This brings one of the key benefits of the SVC and V7000 code to the market at an even lower price point.
 
 
The V5000 offers a range of connectivity options: 10Gb iSCSI, FCoE or 8Gb FCP. As illustrated above, the appropriate I/O daughter card is installed, meaning that while the V5000 does not offer the connection versatility of its bigger brother (the V7000 natively offers all these protocols simultaneously), it can be configured to suit either type of deployment.
A couple of other features are missing, such as compression, and extra-cost licences are required to activate some features that are included in the V7000 base code. Nevertheless, the V5000 is still a very powerful storage system that delivers everything many SMBs require, at a price point that makes the technology even more competitive and affordable for more businesses.
 
The table below summarises some of the differences between the products in the Storwize family:
 
(Values are per system, so clustered models offer a multiple of this value.)
Should you require further information regarding the IBM Storwize V5000 or have any storage requirements please do not hesitate to contact your Celerity Representative.
 
Edward Yates - Technical Consultant - Celerity Limited
 
 To view this article on the Celerity Limited website click here

Tuesday, December 3, 2013

Benefits of Virtualising your Storage

The main benefit of virtualising your server estate – maximising the utilisation of the hardware and reducing the physical server footprint – has been known and proven to I.T. professionals for many years and adopted in some way by most.
 
In addition to massively increasing the efficiency of the server estate and reducing the procurement and running costs of each (virtual) server, new functionality becomes available that would not otherwise be possible. This includes things like migrating the server from one piece of hardware to another, taking instantaneous backups of the server as a whole and moving it from one storage system to another, all without any disruption.
 
Typically, however, many organisations stop there, even though most of the benefits of virtualisation can be brought to the storage too. By virtualising the storage platform you gain improved functionality and more advanced storage features. Not only does storage virtualisation enable more common features, such as instantaneous point-in-time snapshots for backup, but it can also bring new features such as:
• Instantaneous cloning of volumes to allow testing and development of production data without any impact on the live services
• Manual and automated migration of volumes between different tiers of storage within and sometimes across storage systems
• New ways of protecting data and restoring system resilience after hardware failures allowing for faster rebuilds while reducing or eliminating the performance impact on the system
• Advanced functionality such as compression or de-duplication of tier 1 production data can be done at the storage level and is transparent to the applications and servers
• Offering both NAS and SAN connectivity protocols from a single, unified storage system to streamline system management
 
Not only does storage virtualisation bring these features to your environment along with a performance boost, but often does so in a smaller footprint which can dramatically reduce the total cost of ownership.
 
So we know what storage virtualisation can bring, but how is it achieved? Typically, virtualisation of storage "achieves location independence by abstracting the physical location of the data": a new virtualisation layer (or pool) of storage is created, and all the magic happens at that layer rather than at the actual physical disk block level.
 
As opposed to data volumes residing on dedicated RAID arrays with specific characteristics chosen to meet the application's demands, volumes are created at this storage-pool layer. A pool can be made up of multiple arrays, so the large number of disks behind it delivers more performance and greater disk utilisation, while still offering ease of storage design and management.
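The pool concept described above can be sketched in a few lines of code. This is a minimal illustration only, with invented names and an invented extent size, not any vendor's actual implementation: a virtual volume is simply a list of fixed-size extents, each mapped onto whichever physical array in the pool has free space, striped round-robin so every volume benefits from all the spindles.

```python
# Illustrative sketch of a storage virtualisation layer (hypothetical names,
# not any vendor's actual implementation): volumes are lists of extents,
# each extent mapped onto any array in the pool.
from itertools import cycle

EXTENT_MB = 256  # extent size; real systems vary

class StoragePool:
    def __init__(self, arrays):
        # arrays: dict of array name -> capacity in extents
        self.free = {name: list(range(n)) for name, n in arrays.items()}
        self.volumes = {}  # volume name -> [(array, physical extent), ...]
        self._rr = cycle(sorted(self.free))  # round-robin striping

    def create_volume(self, name, size_mb):
        extents_needed = -(-size_mb // EXTENT_MB)  # ceiling division
        mapping = []
        for _ in range(extents_needed):
            # stripe across arrays so every volume uses all spindles
            for _ in range(len(self.free)):
                array = next(self._rr)
                if self.free[array]:
                    mapping.append((array, self.free[array].pop(0)))
                    break
            else:
                raise RuntimeError("pool exhausted")
        self.volumes[name] = mapping
        return mapping

pool = StoragePool({"array_A": 100, "array_B": 100})
vol = pool.create_volume("db_vol", 1024)  # 1 GB -> 4 extents
print(vol)  # [('array_A', 0), ('array_B', 0), ('array_A', 1), ('array_B', 1)]
```

Because the host only ever sees the volume, the mapping table can be changed underneath it, which is what makes non-disruptive migration between arrays possible.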
 
Should you require more information on Virtualising your Storage, please do not hesitate to contact your Celerity Representative.
 
 
Edward Yates - Technical Consultant - Celerity Limited

To read this article on our webpage please click here

Thursday, November 14, 2013

SVC and Flash Systems

The IBM SAN Volume Controller (SVC) is a functionally rich storage virtualisation appliance. Its unique abilities allow it to sit in front of many of the common storage systems from the world's largest storage vendors and virtualise the disk arrays behind it. To the host, it appears as if nothing has changed, but to the administrator it provides a great amount of flexibility that can streamline many common tasks. There are many benefits to this technology and the approach it takes. It can introduce cost savings by negating host-attachment licence costs and reducing functionality licensing costs, especially in multi-vendor scenarios, while improving performance across the whole storage estate and easing migrations from one storage device to another. With the addition of the new IBM Flash Systems family of arrays, an extra dimension is added to the capabilities of the IBM SAN Volume Controller.
 

 
IBM's Flash Systems are high-performance, resilient flash memory arrays, designed to provide high IOPS and accelerate applications and business functions beyond the traditional limitations of spinning disks. One of the benefits of an all-flash system is that any data moved onto it will benefit from the boost in performance. Less commonly accessed data would traditionally reside on slower, 'bronze' tier disks and be infrequently loaded into cache; keeping it on flash would therefore be an expensive use of flash storage space. By pairing the IBM Flash System with an IBM SVC (or IBM V7000) and using the 'Easy Tier' function, the IBM Flash System can become a large cache of persistent high-speed memory that benefits all disk storage within a storage pool by moving hot data extents into flash memory. Application, and therefore business-critical, response times can be decreased dramatically.
By analysing the data over twenty-four hours, hot data naturally migrates onto the faster disk layer. The effect is to smooth out what would otherwise be traditional data hot-spots on disks or RAID arrays. Depending on the characteristics of the application and data, the IBM Flash System can be an extension of the SVC cache memory, or the cache can be disabled for write-through. This can be dictated on a LUN-by-LUN basis and can be advantageous in reducing the data path to what is, essentially, memory anyway, in certain circumstances improving performance further by removing a 'hop' into the traditional memory cache.
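The promotion decision described above can be sketched very simply. This is a deliberately simplified illustration of tiered-storage promotion, not IBM's actual Easy Tier algorithm: count I/Os per extent over the monitoring window, then place the hottest extents on flash, up to the flash tier's capacity.

```python
# Simplified sketch of tiered-storage promotion (not IBM's actual Easy Tier
# algorithm): promote the most-accessed extents into a limited flash tier.
def plan_promotions(io_counts, flash_capacity):
    """io_counts: dict of extent id -> I/Os observed over the window.
    Returns the extent ids to place on flash, hottest first."""
    ranked = sorted(io_counts, key=io_counts.get, reverse=True)
    return ranked[:flash_capacity]

# 24 hours of (invented) per-extent I/O counters
counters = {"e1": 12, "e2": 9100, "e3": 45, "e4": 8800, "e5": 3}
print(plan_promotions(counters, flash_capacity=2))  # ['e2', 'e4']
```

Running this over a daily window is what smooths out hot-spots: the handful of extents doing most of the I/O ends up on flash while the cold bulk stays on cheaper disk.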

The IBM Flash System can also present storage to a host system in the traditional way. Disk LUNs can still be provisioned and attached to a server as any other storage array can do. The advantage is that specific applications, particularly heavily utilised databases (e.g. large TSM databases) or batch processing routines can have true flash-based performance.
The flexibility of an SVC and IBM Flash System pairing can be used to virtualise and enhance the capability of many storage arrays from many vendors, not just IBM storage, bringing tiered storage and a wealth of functionality to disk subsystems that do not have some of these capabilities natively. The SVC allows many vendors' disk technologies to take advantage of the IBM Flash Systems concurrently, thus accelerating the whole storage estate.
 
 
Steve Laidler - Technical Consultant - Celerity Limited
 
For more information on IBM SAN Volume Controller (SVC) and IBM Flash Systems please contact your Celerity Representative or contact us at Celerity.
 
 To view this article on our website please click here.

Thursday, October 31, 2013

The Importance of BCP (Business Continuity Planning)

(To the tune of "American Pie")

"A few short hours ago
I can still remember how my heart sank when I clicked that file
But I knew that this was the chance
To show we'd planned this in advance
And we'd be up and running in a while"
 
"Though monitoring made me shiver
With every alert it delivered
Bad news on the wire
Some rumours of a fire"
 
"I can't remember if we tried
To contact kit that was inside
But BCP got justified
The day the DC died"
 
"So fail back to the secondary site
All the data replicated through the hours of night
Mirrored storage holds it all secure and tight
Proving planning for disaster was right
Planning for disaster was right"
 
My deepest apologies to the good Mr D. McLean, but the topic of Business Continuity Planning is not usually one that makes people's hearts leap with joy. In fact it quite often fills them with dread. But, like an insurance policy, you only realise how important a proper BCP is when you actually need it. By then, it is usually too late.
 
So, what is a Business Continuity Plan?
 
Quite simply, it is a plan to show, and detail, how you plan to continue running your business in the event of a disaster. It covers everything from where staff would work if they were unable to access their normal offices, how they would communicate (telephones, fixed or mobile), and which aspects of the business need to be working first, to, of course, access to, or recovery of, computer-held data and systems vital to the business function. The data aspect is what I intend to concentrate on here.
 
What constitutes a disaster?
 
This varies, depending on your business, but could be anything from a burst water pipe flooding your offices, to a fire in your data centre or even terrorist activity (on July 7, 2005 a number of businesses in The City invoked their BCP plans in response to the sad events of that day elsewhere in London).
 
What can you do to ensure your business can survive a disaster?
 
That depends on how long your business can function without access to its most valuable resource: data. If you can keep going with manual systems whilst your IT team source, build and recover replacement servers and storage then that is great, but you will be in the minority. Most organisations would be hard-pressed to run for a day without their IT systems and many would be in trouble after a few hours.
 
The first step in protecting your data is to ensure regular backups are taken. These backups could be to physical tape, or to a virtual tape library. If data is backed up to tape then you will need to ensure that the tapes are stored somewhere safe and secure. It is no good just putting them on a shelf next to the system they are intended to restore one day. Tapes should be stored off-site, either with a specialist third party, of which there are many, or at your disaster recovery site, where they will be readily available in the event of DR.
 
If you back up to VTL or use some other disk-based backup process then consider mirroring or replicating this to your DR site. Tivoli Storage Manager V6.3 offers node replication to a secondary server, thus ensuring backup data is available from more than one source in the event of a disaster. IBM's ProtecTIER can be clustered, with nodes at multiple sites replicating their data within the cluster.
 
So, you have all that in place: all your data gets backed up to multiple, separate locations overnight, critical data is replicated in real time over redundant fibre links, and you have got all the bases covered. Congratulations! Now, have you tried restoring some of that data? Do you know for certain that your finance database can be recovered? Have you tried pulling the connection on your fibre switch to make sure it fails over to the secondary link?
 
It is all very well investing the time and effort to build resilience, but you need to know it works, and that means testing it.
 
Schedule a specific date (or dates) to fail over specific, critical systems to the DR site and make sure you can recover them. Make sure all the people involved know what is expected of them. Make sure everything is documented! I cannot stress the importance of documentation too heavily. If your one and only SAP guru is in hospital after a skiing accident when your disaster strikes, you are going to be hard pushed to get your CRM system up and running without detailed documentation.
 
Testing lets you find the bugs, omissions and plain, simple mistakes in your processes, without the pressure of a CEO breathing down your neck. It gives you time to perfect the procedures and build confidence in both your staff and your systems.
 
If you do not test, then you might be lucky and get away with it, but chances are you will be digging a hole for yourself and your business. If you do not even have a plan then that hole will be about 6 feet deep, with a headstone at one end!
 
So set up that plan, make sure everyone understands their role, and most importantly, test it regularly. That way you will not end up like Mr McLean's "good old boys"; drinking whisky and lamenting the death of the "music".
 
Should you require more information on backup and continuity plans, please do not hesitate to contact Celerity.
 
Tony Lloyd - Technical Consultant - Celerity Limited

 To read this article on Celerity's website please click here

Once Again Celerity Supports 'Movember' 2013 Campaign

Once again this year Chris Roche, MD of Celerity Limited, together with three members of the Celerity Team (Darren Sanders, Phil Reeves and Scott Deuchar), is taking part in the Movember Fundraising Campaign; they are to be known as 'Mo Bros'.
 
 
 
 
Movember challenges men to grow a moustache for the 30 days of November, thereby changing their appearance and the face of men's health.
 
For the entire month of November each ‘Mo Bro’ must grow and groom a moustache. Strict rules apply and must be adhered to throughout the month – for example there is to be no joining of the mo to the sideburns - as that is considered a beard, and there is certainly no joining of the handlebars to the chin - as that is considered a goatee.
 
'Mo Bros' effectively become walking, talking billboards for the 30 days of November and, through their actions and words, raise awareness by prompting private and public conversation around the often-ignored issue of men's health.
 
The funds raised in the UK are directed to programmes run directly by Movember and their men’s health partners, Prostate Cancer UK and the Institute of Cancer Research. Movember work with these partners to ensure that funds collected are supporting a broad range of innovative, world-class programmes in line with their strategic goals in the areas of awareness and education, survivorship and research.
 
In 2012, over 1.1 million Movember members raised £92 million globally.
 
If you would like to donate please click here and either sponsor Chris, Darren, Phil or Scott individually or make a team donation following the on-line instructions.
 
We will be updating this page regularly with photographs showing how Celerity's 'Mo Bros' are progressing as the weeks go by. So ...
 
Please support our Celerity Team and help us make a difference by clicking the Donate Now! button below
 
http://uk.movember.com/mospace/6664977  Click to Donate Now

Many thanks!

 To view this article on our website click here

Thursday, October 24, 2013

AIX and PowerVM - The Best Fit For Your Business

 
OK, so why is AIX so popular?

No, I am not referring to the French city-commune, located about 30 km north of Marseille, of the same name – but IBM’s Unix Operating System (OS) offering. 
 
As a Unix professional, I sometimes get asked: “…how do I select the right platform…”
 
While there is no fixed answer, usually, when you dig deeper, the picture becomes clearer. More often than not, the answer is AIX.
 
A major factor in its success is how extremely well AIX is intertwined with IBM's virtualisation technology, PowerVM.
 
Companies nowadays make it a priority to ensure they are able to maximise their return on investment in IT – and rightly so. The association between AIX and PowerVM allows businesses to do just that by reducing their Total Cost of Ownership (TCO).
 
Lower TCO is achieved by consolidating many workloads onto a smaller number of physical systems, slashing overall hardware maintenance costs, data centre footprint, power and cooling costs, database licence fees (through a reduced number of CPU cores), physical cabling and human capital outlays. Overall performance is increased and, at the same time, it can facilitate standardisation.
 
Whilst we have seen a big growth spurt in Linux – which is fine by IBM as it is supported on POWER systems – AIX is still the more mature OS. It has better vendor support and a tightly integrated hardware set. The thing that sets AIX apart is that it is the only Unix OS that has fully harnessed decades of IBM technology innovation designed to provide the highest level of performance and reliability. The hardware is optimised and heavily integrated with the OS. It also helps that IBM Power Systems are easily the most powerful of midrange Unix servers ... AIX has grown up side by side with PowerVM and POWER hardware.
 
A Quick Look at AIX 7
 
AIX’s latest guise boasts many great features:-
  • It can run applications that operated on AIX 5L and earlier. In other words, it is binary compatible with many previous versions.
  • It has built upon its previous scalability, supporting partitions with up to 256 processor cores and 1,024 threads, handling the largest of workloads.
  • Along with new security features and manageability improvements, Terabyte Segment Support is automatically enabled, meaning enhanced memory management.
  • Virtualisation enhancements in the OS allow for simpler consolidation of older AIX environments, by allowing the backup of an existing LPAR running AIX 5.3 and restoring it into an AIX 7 Workload Partition.
  • A really interesting feature is the 'Cluster Aware AIX' aspect, bringing elements of clustering technology into the base OS. Vendors have long acknowledged the importance of clustering and this move underlines it. Cluster Aware AIX simplifies the configuration and management of high availability clusters, which is great news for us Systems Administrators.
Conclusion

For the enterprise company running mission-critical applications, nothing is more important than having an OS that is robust and reliable, along with a top, vendor-supported virtualisation platform. Evolving business requirements dictate that infrastructures need to be flexible and capable of adjusting accordingly.
It is for these reasons that more and more organisations are choosing AIX.
Furthermore, unlike other Unix hardware vendors, IBM provides a clear road map for AIX.
AIX continues to grow market share…. The future of AIX is strong!
If you require any further information regarding IBM's Unix Operating System offering please do not hesitate to contact your Celerity Representative or email info@celerity-uk.com
 
Chris Lang - Technical Consultant - Celerity Limited
 
Click here to view this article on Celerity's website
 

Thursday, October 17, 2013

IBM Announces Ultra-Dense NeXtScale Server Systems

On 10th September 2013 IBM made a number of big announcements related to System x. These offerings include a brand new dense computing platform named IBM NeXtScale System, which is aimed at scale-out datacentres and targets cloud, technical computing, analytics and social media computing.

NeXtScale has been developed from IBM technologies including iDataPlex and BladeCenter and provides an ultra-dense platform which can squeeze up to 84 dual processor servers into a single 42U rack. Effectively, this equates to up to 2016 processing cores per rack. Unlike iDataPlex, NeXtScale infrastructure uses industry standard 19 inch racks. With a building block approach to scalability, it allows organisations of all sizes and budgets to start small but scale rapidly and easily if required.
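The quoted rack density is simple arithmetic, and it checks out if one assumes the 12-core top-end Xeon E5-2600 v2 parts (smaller core counts were also available in that family):

```python
# Quick check of the quoted NeXtScale rack density (assumes the 12-core
# top-end Xeon E5-2600 v2 parts; lower core-count SKUs also existed).
servers_per_rack = 84        # dual-processor servers in a 42U rack
sockets_per_server = 2       # "dual processor" nodes
cores_per_socket = 12        # Xeon E5-2600 v2 maximum
cores_per_rack = servers_per_rack * sockets_per_server * cores_per_socket
print(cores_per_rack)  # 2016
```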

The idea behind modern applications is that they scale across multiple nodes and have their RAS features inherent in the software layer. Thus there is no technical reason to have so many management controllers, power supplies, fans, and other components in the box. The machine is stripped down to the bare essentials to be a node in a cluster with nothing unnecessary that adds cost. The kinds of parallel applications run by HPC datacentres, public cloud operators, and enterprises setting up private clouds do not always need all of the reliability and management features of full-on, enterprise-class servers. IBM were quoted as saying "We tried to stay away from building a luxury car. This is a high performance race car."


The new servers have been built around Intel’s new Xeon E5-2600 v2 processor family, support ultra-fast 1866 MHz RAM and feature industry-standard components such as I/O cards and top-of-rack networking switches, fitting neatly into existing infrastructure. NeXtScale can operate in temperatures of up to 40C, reducing cooling requirements and further lowering energy expenditure.


A nice feature of IBM NeXtScale is that it is pre-built by IBM and arrives at the client location racked, cabled, and labelled, ready to power on. It is suggested that this can reduce time from arrival to production readiness by up to 75 percent.

If you require any further information regarding the new IBM NeXtScale System please do not hesitate to contact your Celerity Representative or email info@celerity-uk.com

Malcolm Smith - Technical Consultant - Celerity Limited


To view this article on the Celerity Website Click here

Thursday, October 10, 2013

VMware vSphere 5.5 - What's New?

At VMworld in August VMware released vSphere 5.5. The following is a quick rundown of the main features and enhancements VMware have announced.
 
Single Sign-On (SSO)
 
One of the biggest issues with 5.1 was SSO; it has been well documented online and even VMware have admitted it was not great. With this in mind they have listened to the feedback from customers and re-written the code from the ground up to hopefully resolve the issues. Those with access to the beta have been reporting a huge improvement in ease of installation/upgrade using 5.5.
 
• Removed requirement for a database
• Built-in replication
• Support for one and two-way trusts
 
vSphere Web Client
 
We have a small number of additions to the web client this time around but on the whole the functionality has remained the same. We have noticed an improvement in responsiveness and this can only be a good thing if the Web Client is to eventually become the primary way to manage the vSphere infrastructure.
 
• Full client support for Mac OS X
• Drag and drop
• Recent items
• Improved UI responsiveness
 
Storage
 
One of the big changes to storage is support for 62TB VMDKs, up from the previous 2TB. These are now supported on NFS or VMFS-5 datastores and hosts running ESXi 5.5. Other features such as vMotion and snapshots are supported but will take longer to complete, for obvious reasons.
 
• Support for 62TB VMDK
• Microsoft Cluster Service – Support for Server 2012 and the FCoE and iSCSI protocols
• 16Gb end-to-end FC support
• VMFS heap improvements
• vSphere flash read cache
 
Networking
 
There are a lot of new network features included with 5.5, including LACP enhancements with over 20 different choices for load balancing, 40Gb NIC support and QoS tagging. These new features will require the use of a distributed switch.
 
• LACP enhancements
• 40Gb NIC support
 
Other changes
 
We have had a nice bump in functionality for the vCenter Appliance, bringing it more in line with its Windows counterpart, and this will certainly be of interest to those looking to save money on Microsoft licences. We also see expanded vGPU support, now including Intel and AMD GPUs.
 
• vCenter Server Appliance supports 100 hosts and 3000 VMs
• Improved power management by leveraging CPU C-States
• Expanded vGPU support
 
If you would like to know more about the changes in the VMware vSphere 5.5 platform please contact Celerity or download the VMware PDF here.

Barry Knox - Celerity Limited - Technical Support
 
To view this article on Celerity's website click here

 
 

Thursday, October 3, 2013

How to Implement A Service Catalogue

IT departments have never before had so many competitors for their role (e.g. outsourced services), serving a business that expects more for less and believes it is more educated than ever before on what IT can do for it. As business change continues to accelerate, and reliance on IT services becomes more and more a prerequisite for any organisation to stay in business, the challenges on the IT department keep growing.
 
To counter this, there are two goals that IT needs to satisfy:
 
• IT needs to provide business value to the organisation for outcomes it wants to achieve
• IT must demonstrate its value to the business, in order to be seen by the business as an enabler of desired outcomes
Delivering a Service Catalogue for an IT organisation will help to:
• Build a clearer picture of the business services required
• Report and manage performance, quality and efficiency of the services delivered
• Apply an appropriate commercial model for the management of the services
• Focus and motivate IT staff to achieve real business success for customers
• Improve the relationship between the business and IT by being service focused
 
A service catalogue, as defined by ITIL, is an exhaustive list of the IT services that an organisation provides or offers to its employees or customers. Although it is quite acceptable to create an environment that is asset orientated rather than service orientated, from a business perspective it is more logical to support the community from a services point of view.
 
There are basically three views of a service catalogue:
 
User – generally the most common type of service, e.g. a request for folder access.
Business – available to a more senior user, e.g. an email service.
Technical – the services that underpin the above two, containing technical details relevant to the service required.
 
 
The catalogue is the only part of the Service Portfolio that would be published to customers and is used to support the sale and/or delivery of IT services. Service Portfolio Management defines and describes all of the services provided by IT. A service portfolio is the complete set of services that is managed by a service provider and is used to manage the entire lifecycle of all services, which includes three categories:
 
1. Service Pipeline (proposed or in development)

2. Service Catalogue (live or available for deployment)

3. Retired Services
 
Each service within the catalogue would typically include:
 
• A description of the service
• A categorisation or service type
• Any supporting or underpinning services
• Timeframes or service level agreement (SLA) for fulfilling the service
• Costs (if any)
• How to request the service and how its delivery is fulfilled
• Escalation points and key contacts
• Hours of service availability
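As a concrete illustration of the fields listed above, a single catalogue entry could be recorded as a simple structure. The service details below are entirely invented, and the field names are illustrative rather than taken from any catalogue tool:

```python
# An invented example of one service-catalogue entry holding the fields
# listed above (field names and values are illustrative only).
service_entry = {
    "name": "Email",
    "description": "Corporate email mailbox provision and support",
    "category": "Business service",
    "underpinning_services": ["Directory service", "Backup"],
    "sla": "New mailbox within 1 business day; restore within 4 hours",
    "cost": "No charge for a standard mailbox",
    "request_via": "Service Desk portal",
    "escalation_contact": "Service Desk manager",
    "availability": "24x7 (supported 08:00-18:00 weekdays)",
}
print(service_entry["name"])  # Email
```

Keeping entries in a regular structure like this is what makes the later steps (review, publication and change control) manageable as the catalogue grows.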
 
The following six basic steps are proposed to develop an initial service catalogue:
 
1. Obtain Management Support – This is not only an authorisation from IT: it is imperative to involve the business in the process as well. Working with management, choose a person (or persons) to build the initial catalogue and identify the Customer(s) or Business Unit(s) you wish to involve in the process. Make a formal presentation about the benefits of the Service Catalogue, how you plan to use it and why you need business participation.
 
2. Establish a Service Catalogue Team – The initial service catalogue team should represent various viewpoints from within IT and from the business. Choose members from IT at all levels and functions; invite members from the business unit as well. Often, the business perceives things very differently from IT.
 
3. Define IT Services – The team should examine IT and business activities in an effort to document the major IT services in production, for example "email", "SAP", "Internet", etc. Be aware that business and IT could have different names for the same service. Using the inventory of services, the team must work to achieve consensus on the services and their names. Once the names are clear, document what the users of the service perceive as their needs.
 
4. The Dry Run – After the completion of an initial catalogue, review it to ensure that it is clear and easy to understand. The catalogue may cause a change for some – the old service name that IT uses, for example "email", may differ from the new name, for example "Exchange". It is important to handle these changes through engagement and not through edict.
 
5. Implement – Ensure all access points to IT (e.g. the Service Desk) understand and have implemented the new service names and associated processes.
 
6. Publish – Publish the Service Catalogue to the business by posting it to the company intranet, if available, and solicit business input about its contents.
 
One of the most important threads through the above steps is to ensure appropriate change controls are in place, as the service catalogue is a moving target. Depending on the size of the business, it may be worth phasing in the changes across the business units to reduce the risk, and introducing a pilot scenario.
 
Should you require any further information on Implementing a Service Catalogue please contact Celerity Limited.
 
Kevan Dix - Project Manager - Celerity Limited
 
To view this article on Celerity Limited's website <<click here>>

Thursday, September 26, 2013

Benefits of a Service Desk

There are many benefits and advantages of using a service desk in business today. Many large companies are now using service desk solutions to help reduce their overall service costs. The main types of service desk are:
 
• Call Centre: Only call dispatching, no other services.
• Unskilled Service Desk: Call dispatching, incident tracking, feedback to clients.
• Skilled Service Desk: Large numbers of incidents are solved at the Service Desk.
• Expert Service Desk: Incorporates Incident Management and Problem Management. Most incidents are solved at the Service Desk.
 
The benefits to the business can affect many areas including the end user, staff and management as well as the overall company.
 
The service desk can provide the end user with a single point of contact (SPOC); this can be in the form of an email, a phone call or a web portal. This gives users the facility to record an issue or request a service, depending on the business needs. If available, users could also make use of an online knowledge base and FAQs for self-help, and keep track of issues and requests that they have placed earlier.
 
Staff who are employed on a service desk have the benefit of a centralised database in which to record incidents and requests, classify them, and escalate to a higher level or third party if needed. Staff can also make use of a built-in knowledge base to help resolve issues as quickly as possible, thus increasing client/customer satisfaction. Depending on the systems in use, automated escalation and SLA tracking can also help staff keep track of incident and resolution timescales.
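As a minimal sketch of what automated SLA tracking and escalation looks like in practice (the priority bands, resolution targets and escalation threshold below are illustrative assumptions, not taken from any particular service desk product):

```python
from datetime import datetime, timedelta

# Illustrative SLA resolution targets per priority band (assumptions).
SLA_TARGETS = {
    "P1": timedelta(hours=4),
    "P2": timedelta(hours=8),
    "P3": timedelta(days=3),
}

def resolution_deadline(logged_at, priority):
    """Time by which an incident of this priority must be resolved."""
    return logged_at + SLA_TARGETS[priority]

def needs_escalation(logged_at, priority, now, threshold=0.75):
    """Escalate once 75% of the SLA window has elapsed unresolved."""
    elapsed = (now - logged_at).total_seconds()
    allowed = SLA_TARGETS[priority].total_seconds()
    return elapsed / allowed >= threshold

logged = datetime(2013, 9, 26, 9, 0)
print(resolution_deadline(logged, "P1"))                                  # 2013-09-26 13:00:00
print(needs_escalation(logged, "P1", now=datetime(2013, 9, 26, 12, 30)))  # True
```

A real system would also pause the clock outside service hours and record who the incident was escalated to, but the core of automated escalation is exactly this kind of deadline arithmetic.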
 
Management will have the ability to produce statistical information to report on performance, resolution rates, and overall customer satisfaction.
 
Businesses need technical service desks to resolve service management issues consistently and efficiently. By centralising incident resolution, request management and reporting, infrastructure support becomes easier and more cost-effective for the business.
 
A Service Desk provides:
• Increased customer service perception and satisfaction.
• Increased access to assistance through a single point of contact.
• Improved quality and quicker resolution of customer incidents and requests.
• Improved staff teamwork and communication.
• Enhanced focus and a proactive approach to service provision.
• Improved usage of IT support resources and increased productivity of business personnel.
• Significantly reduced downtime and fewer service-impacting events, assuring a higher quality of service.
• Reduced costs, e.g. through streamlined staff training, user self-service and minimised on-site support.
 
Companies from all over the world are now using service desk solutions simply because they benefit in so many areas, no matter what their business needs are.
 
Should you wish to learn more information about Celerity's Service Desk offerings please contact Celerity Limited
 
Gary Eckman - Technical Support - Celerity Limited
 
 

Tuesday, September 24, 2013

Celerity Launches 'PowerForce' Program

Celerity launches its Power Force program, aimed at existing IBM POWER customers with older POWER servers, to demonstrate how they can drastically reduce their operating costs by running workloads unchanged on current POWER servers.
 
With IT budgets under constant strain Celerity has developed the Power Force program to allow customers to exploit both the short and medium term savings driven by refreshing their AIX infrastructure, usually delivering some in year savings with significant savings from year two onwards. These savings are driven by a reduction in hardware and software maintenance costs, reduced power and cooling costs, the reduction of data centre footprint, and significantly reduced software licensing costs driven by the reduction in the number of processors needed to provide equivalent performance.
 
The integrated program developed by Celerity’s Business Development Manager Alan Mackenzie-Wintle and Technical Director Chris Hall is designed to be self-funding, quick to implement, and non-disruptive as it requires very little technical input from the customer.
 
Using our technology model, built up from Celerity’s experience with POWER systems over the last 10 years, we are able to give customers an estimate of the typical savings achievable over a three-year period within 24 hours of engagement. This is followed by a detailed study, taking approximately two weeks to carry out, which results in a report containing both detailed information on the savings available on the customer’s specific estate and details of a costed target server infrastructure. In many cases we are able to provide this study at limited or no cost to the customer.



Celerity started the program earlier in the year but wanted to extend the service to a wider audience because of its success. Peter Reakes, Celerity Sales Director commented, "We have been able to show major savings sometimes in seven figures across a variety of customers and wanted to broaden the reach of service. In many instances we can show savings in year one, even with the capital costs of new servers, then greater savings in years two and three. Even customers who consider their Power AIX estate as legacy and have plans to migrate to different platforms can often benefit due to the scale and immediacy of the savings".
 
For more details on the Celerity Power Force Program please contact your Celerity Representative or email: marketing@celerity-uk.com

Thursday, September 19, 2013

Is The Time Right to Move to LTO-6?

 
Most customers who have invested in tape backup are using one generation or another of LTO drives. In particular, those who were using LTO-3/4 in a library and didn’t require the extra capacity of LTO-5, or features such as encryption, are still using that technology and are wondering whether the time is right to look at the new generation of LTO-6 Ultrium tape drives (or at least that’s what I was asked recently).
The tapes now come in capacities of up to 6.25TB of compressed data, or 2.5TB uncompressed. As with all generations of LTO tape, the drive can read and write the previous generation’s tapes (in this case LTO-5) and read tapes two generations back (LTO-4). The Ultrium 6250 also now interfaces via 6Gb/s SAS as well as Fibre Channel; which you choose largely depends on whether you opt for a standalone drive or a library. As with LTO-5, it also supports encryption at the drive level, meaning encryption is fast and doesn’t add overhead to the backup server. I’ve also seen more and more that data encryption is becoming a mandatory requirement for organisations to ensure the secure handling of customers’ sensitive data.
They’ve also improved how the hardware compression works, so in terms of additional capacity LTO-6 features notable improvements over LTO-5 that shouldn’t be overlooked. LTO-5 drives compressed at a ratio of 2:1, whereas LTO-6 is capable of 2.5:1; combined with the rise in native capacity from 1.5TB to 2.5TB, that takes effective capacity from 3TB to 6.25TB. LTO-6 also features improved data transfer speeds, stretching up to 160MB/s native from 140MB/s on LTO-5, so not only will you get more data on a tape, it will also be written more quickly.
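The effective figures quoted above are simple arithmetic on the native LTO specifications and each generation's assumed compression ratio:

```python
def effective(native, compression_ratio):
    """Effective (compressed) figure from a native capacity or speed."""
    return native * compression_ratio

lto5_capacity_tb = effective(1.5, 2.0)   # 3.0 TB with 2:1 compression
lto6_capacity_tb = effective(2.5, 2.5)   # 6.25 TB with 2.5:1 compression
lto6_speed_mbs = effective(160, 2.5)     # 400 MB/s with 2.5:1 compression
print(lto5_capacity_tb, lto6_capacity_tb, lto6_speed_mbs)
```

Bear in mind these ratios assume compressible data; already-compressed or encrypted data will land much closer to the native figures.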
 
So back to the original question - is the time right to move to LTO-6? As always this largely depends on requirements. Do you need the additional capacity and performance or features such as data encryption?
 
One thing that’s always important to factor in is the cost of the tapes you are presently using; if you are currently using LTO-3 or older you will definitely need to replace all of your current tape cartridges, as LTO-6 drives can only read back as far as LTO-4. For some, this is a bigger cost than purchasing the new drives.
Another option worth considering is introducing SAN-attached data de-duplication technologies in conjunction with tape backup/archive, rather than refreshing the tape backup. This gives greater scope and flexibility with the technology being used and could reduce your costs in the long term; however, that is a topic of discussion for another day.
 
If you wish to discuss other options or require further information please contact Celerity Limited.
 
Neil Hulme, Technical Consultant, Celerity Limited
 

Thursday, September 12, 2013

Why is the IBM Storwize V7000 So Successful?

 
 
IBM has a great mid-range storage system offering that fits most business needs and has been successfully adopted by many clients worldwide for a few years now, and it is still as strong as ever. It would not even look out of place in the Enterprise arena based on its scalability. But how did it gain such market adoption and momentum from its debut, and how has it managed to remain a key player ever since?

When the product was launched it was advertised as a new product that was leading edge and not bleeding edge. But what does this mean? This requires a little history lesson. Years back IBM launched an enterprise class storage virtualisation device called the SAN Volume Controller (SVC) that promised to increase performance and add functionality to both your current and future storage systems. And it delivered.
 
Unfortunately it had an equally impressive price tag. Due to the success and capabilities of the SVC, an Entry Edition was launched which offered similar hardware but with a revised pricing structure to make it affordable in the mid-market sector.
 
Because of the increased demand for the SVC portfolio, a package built around the IBM SVC and IBM DS5000 storage was launched known as the IBM Virtual Disk Solution (VDS). This brought with it the values of the SVC’s storage virtualisation along with actual storage capacity meaning it was suited to both current and new storage implementations.
 
But it turns out this product was just to ‘test the water’ so to speak. Having proven a demand for a new storage system that not only offered its own virtualised storage but could virtualise currently deployed IBM and non-IBM storage, the Storwize V7000 was born.
 
As opposed to the VDS package, which comprised two separate SVC nodes, associated UPSs and a DS5000 storage controller (and expansions), the Storwize V7000 brought all of this into one physical 2U package. It runs the same code as the current enterprise-class SVC, has all the resilience of a clustered SVC, and supports both its own internal storage and any other external storage, which it virtualises and extends its rich feature set to. History lesson over.
 
The latest release of the Storwize V7000 code offers these key features:
 
• Scale-up with support for 240 internal disks, and scale-out allowing four dual-node controllers to be clustered as one logical V7000 supporting 960 internal disks and petabytes of externally virtualised storage, all managed through a single, intuitive web-based GUI.
• Real-time compression creating up to 80% space savings and increased performance for tier 1+ production applications and databases.
• Easy Tier support, which automatically and dynamically places compressed and uncompressed hot data from normal hard disks onto SSDs, bringing massive performance benefits with just a small amount of SSD.
• Synchronous and asynchronous replication to another V7000 or SVC for disaster recovery.
• Unified version available adding NAS protocol support such as CIFS and NFS to the system.
• Typical virtualisation features included such as snapshots, thin-provisioning, non-disruptive volume migration etc.
• Migration wizard to massively simplify and minimise the risk associated with migrating data from one storage system to another (itself or otherwise).
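As a rough illustration of what a headline "up to 80%" compression figure means for usable capacity (the numbers below are illustrative; actual savings vary considerably by workload, and pre-compressed data sees far less benefit):

```python
def effective_capacity(physical_tb, savings):
    """Logical data a pool can hold if compression saves `savings` of the space."""
    return physical_tb / (1 - savings)

print(round(effective_capacity(100, 0.80)))  # 500 - 100 TB physical holds ~500 TB logical
print(round(effective_capacity(100, 0.50)))  # 200 - a more conservative 50% saving
```

This is why compression ratios compound so dramatically: the gain is 1/(1 - savings), so the difference between 50% and 80% savings is not 1.6x but 2.5x the usable capacity.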
 
So, to answer the question as to why the V7000 is so successful ... there is little like it in the market place and by being cost competitive, incredibly simple and intuitive to manage, and offering everything and more than its competitors, it is difficult to find a storage infrastructure where it does not fit with a client’s business needs.
 
For more information on the IBM Storwize V7000 please contact your Celerity Representative.
Edward Yates, Technical Consultant, Celerity Limited
 

Tuesday, September 3, 2013

Celerity Achieves ISO 14001 Accreditation

Celerity is delighted to announce that, in addition to retaining the ISO 9001 Quality Management Standard for a further year, it has now also achieved the ISO 14001 Environmental Management Standard, strengthening the organisation’s continued commitment to quality delivery whilst managing and reducing the business’s impact on the environment.
This independent assessment was conducted by a leading certification body, the British Assessment Bureau, and recognises Celerity as an environmentally responsible business, committed to reducing environmental impacts and meeting expectations for sustainable success.

ISO 14001:2004 was first introduced in 1996 as a British Standard and requires organisations to adopt an environmental policy and action plan to manage their impact on their environment. Certified organisations are committed to continuous improvement and are assessed annually to ensure progress is being maintained.

Now an internationally accepted standard, ISO 14001 acknowledges that Celerity has implemented an effective environmental management system and as a business can remain commercially successful without overlooking its environmental responsibilities. The standard also ensures that as the company grows, environmental impact will not grow alongside it. ISO 14001 provides the framework to allow Celerity to meet increasingly high customer expectations and demands of corporate responsibility as well as legal or regulatory requirements.

During a time when virtualisation, consolidation, energy efficiency and sustainability are all key priorities on the IT agenda as organisations strive to achieve environmental excellence, the ISO 14001 certification proves Celerity’s long term commitment to the environment together with good practice and the ability to help other companies meet their own environmental objectives. Celerity commits to continue encouragement of reducing the company’s carbon footprint which will contribute positively to the natural environment.

Chris Wilson, Commercial Director said,

"We are delighted to have achieved ISO 14001 accreditation as we feel we have a moral duty to help ease pressure on the environment. To be recognised by this international standard confirms our commitment to contribute positively to our natural environment. This goes hand-in-hand with a lot of the work we do for our clients. Consolidating and virtualising their datacentre infrastructures provides them with huge cost and efficiency savings as well as reducing their carbon footprints by dramatically cutting their power usage."

To see more information please visit http://www.celerity-uk.com/news/214/celerity-achieves-iso-14001-accreditation

Thursday, August 29, 2013

IBM FlashSystem

Celerity - [suh-ler-i-tee]. Noun. Swiftness; speed.

So, what would happen if Celerity were to be coupled with IBM's new FlashSystems?

Our Technical Headquarters excitedly took delivery of some new toys last month and has been playing with, training on and testing two IBM FlashSystem boxes ever since. If all goes to plan then next month we aim to take over the world! Well, OK, these FlashSystem boxes might not be quite that good, but they are certainly making us explore the new opportunities that are available to clients now that we have them in our possession.
 
 
HDDs may be getting bigger but they are not getting faster. We all know that SSDs are much faster than HDDs, but they cannot realise their potential because they are stuck behind a slow disk interface. Late last year IBM acquired Texas Memory Systems, solid-state veterans of 34 years, for their flash-memory-based systems, later re-named IBM FlashSystem as part of the IBM System Storage portfolio. IBM’s flash memory storage arrays remove the bottlenecks of HDDs and SSDs, greatly speeding up access to your data, which is held on flashcards containing the fastest 32nm Toshiba flash chips. The flashcards use DRAM as a buffer to help achieve up to 570,000 read IOPS with less than 100 microseconds of latency. With four models offering up to 24TB of storage in a 1U enclosure using less than 400 watts, it offers one of the industry’s best IOPS-per-watt ratios.
 
IBM FlashSystem arrays offer reliability, a small footprint, low power consumption and low latency, which makes them perfect for accelerating Oracle, DB2 and SQL databases, VDI and critical applications on Windows, Linux and AIX. Reduced I/O wait time can increase CPU efficiency, allowing more to be done in less time and resulting in a lower total cost of ownership. To get the performance of a FlashSystem using traditional HDD arrays would need the equivalent of up to 750 HDDs, with all the power, cooling and space requirements to go along with them!
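The "up to 750 HDDs" comparison is essentially IOPS arithmetic. The figures below are assumptions for illustration, not vendor specifications: a target workload of 150,000 random IOPS and roughly 200 random IOPS per 15K RPM drive (a common planning figure):

```python
# Assumed figures: a target workload of 150,000 random IOPS and ~200
# random IOPS per 15K RPM HDD (typical planning figures, not vendor specs).
workload_iops = 150_000
hdd_iops = 200

drives_needed = workload_iops / hdd_iops
print(drives_needed)  # 750.0 - hence hundreds of spindles, plus their power, cooling and space
```

The point of the arithmetic is that spinning disk scales random IOPS only by adding spindles, while a single flash enclosure delivers the same figure in 1U.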
 
IBM FlashSystems are built for microsecond latency. They let you access your data fast, but they can also integrate with IBM’s SAN Volume Controller (SVC) and IBM Storwize V7000 to add features such as Easy Tier, preferred read and manual data placement, giving a huge boost in performance by storing hot data on the faster flashcards and cool data on slower, cheaper HDDs and SSDs. This means they can be dropped into your existing data centre to give you that increase in performance whilst keeping your existing storage.
 
 
 
The lower capacity (1–10TB) 710 and 720 arrays use single-level cell (SLC) chips, which offer a 33x improvement in endurance over some vendors’ multi-level cell (MLC) chips. The higher capacity (6–24TB) 810 and 820 arrays use enterprise-grade eMLC chips, which typically offer 10x greater chip longevity on writes than standard MLC. Endurance is improved and systems are protected with ECC at chip level, variable stripe RAID (VSR, to protect against chip failure), 2D Flash RAID (eliminating single points of failure), wear levelling and over-provisioning. Hot-swappable parts and the two management NICs on the 720 and 820 give them enterprise reliability, and with the on-board batteries data can still be written to the flash chips in the event of a power outage.
 
The low TCO and high ROI of IBM FlashSystems means that your business will perform better and allow you to pursue new opportunities using your existing hardware and software - you’re undoubtedly missing out if you do not at least consider flash memory technology to enhance your storage environment.
 
Please contact a Celerity Representative for more information on IBM FlashSystems.
IBM FLASHSYSTEM ASSESSMENT FOR ORACLE

Submit a one-hour Oracle AWR or Statspack report and we will provide you with a FREE detailed performance assessment to demonstrate how much of an improvement FlashSystems can bring to your organisation. You can then actually experience these benefits in your own environment with a two-week on-site trial of the equipment – what have you got to lose?
 
John Carson - Technical Consultant - Celerity Limited


Thursday, August 8, 2013

Veeam Backups with Data Domain

 
Celerity recently implemented an EMC Data Domain in our own environment for use as our main backup storage device. Because our infrastructure is primarily virtualised, we use Veeam as our backup solution of choice. After some initial testing we have found that using Data Domain as your backup storage in combination with Veeam can help you cut costs and improve data retention.
 
Veeam, as a standalone product, is extremely good which is why it is the #1 VM backup solution on the market, but when combined with the power of Data Domain it can be even better.
 
The major limitation with Veeam is that deduplication is limited to the virtual machines within each individual backup job. Veeam suggest that you keep similar machines in the same backup job, which works well for the OS files, but what about all that data spread across multiple backup jobs? And, what happens if you want to run an ad-hoc backup on a single machine?
 
Limitations within Veeam mean you must run a full job, which could contain multiple virtual machines, just to get a backup of the single virtual machine you want.
 
And so, this is where a deduplication storage system such as Data Domain comes into its own. It allows you to deduplicate globally across all virtual machine backups regardless of the job they are in. This means you can group your backups into smaller sub-jobs, or have singular backups for certain machines, without impacting on the amount of storage that would otherwise be required.
In essence, this means you can back up more machines, more of the time, while using less space.
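The difference between per-job and global deduplication can be illustrated with a toy model that hashes fixed-size blocks. Real products use far more sophisticated variable-length chunking, so this is only a sketch of the scoping effect, not of either product's algorithm:

```python
import hashlib

def chunks(data, size=4096):
    """Split a byte string into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def stored_blocks(jobs, global_dedup):
    """Count unique blocks stored, deduplicated per job or across all jobs."""
    if global_dedup:
        return len({hashlib.sha256(c).hexdigest() for job in jobs for c in chunks(job)})
    # Per-job dedup: duplicates are only eliminated within each job.
    return sum(len({hashlib.sha256(c).hexdigest() for c in chunks(job)}) for job in jobs)

# Two jobs backing up VMs that share the same 8 KB of OS data.
shared_os = b"A" * 8192
jobs = [shared_os + b"job1-unique" + b"B" * 4096,
        shared_os + b"job2-unique" + b"C" * 4096]
print(stored_blocks(jobs, global_dedup=False) > stored_blocks(jobs, global_dedup=True))  # True
```

With per-job scoping the shared OS blocks are stored once per job; with a global store they are stored once, full stop, which is what lets you split jobs up without paying a storage penalty.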

 
A Data Domain storage solution does not just benefit Veeam; it can also be used to back up physical machines to further reduce the storage required for backups. You also have access to all of the other features Data Domain has to offer, such as replication to another Data Domain device for offsite backups, or use as a Virtual Tape Library, which may be of interest with the release of Veeam Backup and Replication v7.
 
When time allows, I intend to do some further testing on the difference between regular Veeam backups and Data Domain to obtain the required information to confirm my initial findings, the results of which I will include in a future article.
 
Should you wish to discuss your requirements please do not hesitate to contact a Celerity representative.
 
Barry Knox - Technical Consultant - Celerity Limited

Thursday, August 1, 2013

Endpoint Manager for Core Protection

Antivirus and anti-malware software is a necessary evil in today’s world and a standard deployment in all organisations. IBM’s Endpoint Manager for Core Protection offers more than standard AV and anti-malware products while using fewer resources than many of its competitors.

Some bloated AV products consume more CPU and network bandwidth, and may need many dedicated servers, to achieve less than Endpoint Manager for Core Protection. It manages this by using IBM Endpoint Manager’s powerful and versatile, yet lightweight, delivery and management infrastructure. This also includes patch management to keep the installed software on all of your protected computers up to date. At its heart is a version of Trend Micro’s OfficeScan, whose cloud-based database gives up-to-date, real-time protection using a variety of methods to keep all of your Windows and Mac platforms protected.


IBM Endpoint Manager for Core Protection stops threats before they arrive by checking files, URLs and emails for malicious potential in real time.
Source: ibm.com

 

Features include:

• Anti-Virus and Anti-Malware, Endpoint Firewall, Patch Management, Asset Discovery, Compliance Management and Optional Data Loss Prevention and Device Control to protect your network from all angles.

• File, web and email reputation along with behaviour monitoring for anti-virus and anti-malware. Provided by Trend Micro, who have 25 years’ experience in business security.

• Virtualisation aware. Scanned and certified gold images only need changes rescanned during duplication to speed up scanning time in VDI environments. Serialises scans of Citrix XenDesktop and VMware View virtual endpoints to avoid antivirus storms.

• Lightweight threat protection. Using the cloud to reduce the amount of data held on endpoints. Putting less strain on endpoints and networks whilst giving them the latest threat information.

Optional Data Protection

To help enforce organisational security policies, IBM’s Endpoint Manager for Core Protection offers an optional Data Loss Prevention plug-in. This adds DLP and device control to safeguard your data against accidental or deliberate loss.

Not only can it regulate and log access to drives, USB devices and ports based on security and user policies, it can also identify files based on attributes, keywords and patterns. This ensures that sensitive data is controlled, allowing organisations to comply with data privacy laws.

Please contact a Celerity Representative for more information on Endpoint Manager for Core Protection.

John Carson - Technical Consultant - Celerity Limited

Thursday, July 18, 2013

What's New in Veeam Backup and Replication Version 7

 
 
Nearly two years ago I wrote an article evangelising about Veeam Backup and Replication, having discovered what a powerful and simple to use product it was. Since then as a company we have used it in anger and tested it to the point of destruction and it has still never let us down.
 
Veeam has recently announced some enhancements and new features for Version 7, which is expected to be available sometime in Q3 of this year. Some of these new features are going to allow us to protect our data more thoroughly than ever before.
 

It is recommended that a reliable backup strategy should include at least three copies of your data on at least two different types of media, with at least one copy off-site. Veeam Version 7 has now made it much easier to get backups offsite:
 
1. Native tape support has been added in Version 7 with support for virtual tape libraries (VTLs), tape libraries and standalone drives. Most people would probably agree that tape is still a reassuringly essential part of their backup strategy so this added feature will most probably be well received.
 
2. Backup copy jobs will allow backup files to be copied to other locations without the need for additional backup jobs, copy scripts or storage replication.
 
3. Built-in WAN acceleration is a feature included with Version 7 which overcomes the obstacles of limited bandwidth when copying backups offsite across WAN links. With caching, variable-length deduplication and optimisations for transferring Veeam backups across the WAN, Veeam claim that it is up to 50x faster than a standard file copy and easy to use. There are no agents to install and no network setup.
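Variable-length deduplication of the kind described in point 3 is typically built on content-defined chunking, where boundaries follow the data rather than fixed offsets, so an insertion early in a file does not shift every subsequent chunk. The sketch below is a toy illustration of the idea, not Veeam's implementation; real accelerators use a proper rolling hash such as Rabin fingerprinting rather than re-hashing a window each byte:

```python
import hashlib

def variable_chunks(data, mask=0x3FF):
    """Toy content-defined chunking: cut wherever a hash of the trailing
    4-byte window matches a boundary pattern (on average every ~1 KB)."""
    chunks, start = [], 0
    for i in range(4, len(data)):
        window = int.from_bytes(hashlib.md5(data[i - 4:i]).digest()[:4], "big")
        if window & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def bytes_to_send(data, remote_chunk_hashes):
    """Only transfer the chunks the remote side has not already seen."""
    return sum(len(c) for c in variable_chunks(data)
               if hashlib.sha256(c).hexdigest() not in remote_chunk_hashes)
```

Because backup files from one run to the next are mostly identical, the remote cache already holds nearly all chunk hashes, so only the changed chunks cross the WAN link; that, plus compression of what remains, is where the large speed-up over a plain file copy comes from.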
 
These are only a few of the new features included with Veeam Backup and Replication Version 7. There will be a total of seven new features and over fifty enhancements. Some of the new features will only be available in the new Enterprise Plus edition, but if you purchased Enterprise edition licenses before 1st July, you will get all the Version 7 features for free when it becomes available.
 
To find out more about the enhancements in Version 7, go to http://go.veeam.com/v7
 
Should you require more information on Veeam please contact a Celerity Representative.
 
Malcolm Smith - Technical Consultant - Celerity Limited
 
See more at: http://www.celerity-uk.com/news/204/whats-new-in-veeam-backup-and-replicate-version-7

Thursday, July 11, 2013

Tape-v-Disk Technology By Neil Murphy, Celerity Limited

Ever since I have been working in the storage and backup sector of the IT industry, some 10 years now, people have been stating that “tape is dead”. Some firms have even gone as far as naming their companies along those lines, such as Sepaton, an American company that deals in disk-based solutions; read it backwards: ‘sepaton = notapes’. There has been an increasing move to disk-based solutions, especially those working alongside some form of de-duplication and compression technology, meaning that you can get a lot more of your data on far fewer disks; and with the cost of disk constantly falling, it becomes an ever more viable and cost-effective option. Why then would large OEMs such as IBM, HP, Quantum, Sony and Fujifilm, to name but a few, continue to invest huge sums of money in the research and development of a technology that is supposedly no longer practical? The answer is quite simple: tape-based backup solutions are still, and will continue to be, a huge factor in a corporation’s backup and recovery infrastructure.

Evolving Storage Needs
IT storage managers are expected to manage and protect data with constrained resources while dealing with increased expectations, tighter budgets, increased regulations and heightened security concerns. Businesses are also increasingly focused on total cost of ownership and rising energy costs.

• Data is at risk and must be protected – there is a myriad of potential data destructors: system error, theft, hackers, viruses, sabotage and natural disaster
• Data is growing exponentially - some say by 50% or more each year
• Business environment is constantly changing - increasing budget challenges and customer demands
• Need to store more data for longer periods - information is key to an organisation's success

Tape and Disk Together
The ideal solution is an infrastructure that incorporates both disk and tape formats, each working alongside one another. Tape works well with disk solutions to address different needs. Disk can help with fast backup and retrieval for high performance application needs. However, according to a University of California-Santa Cruz three month study, more than 90% of disk stored data was typically never accessed again, and another 6.5% was only accessed once. This data could be stored on cost-effective tape. Tape is well-suited for this type of data as it is a less expensive and less energy-consuming storage medium. Once data becomes infrequently accessed it should be moved to tape.
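A tiering policy based on that kind of access pattern can be as simple as flagging anything untouched beyond a threshold as a candidate for tape. The 90-day cut-off and file names below are illustrative assumptions:

```python
from datetime import datetime, timedelta

def tape_candidates(files, now, cold_after=timedelta(days=90)):
    """Return the files whose last access is older than the cold threshold."""
    return [name for name, last_access in files.items()
            if now - last_access > cold_after]

# Hypothetical last-access times for two files.
files = {
    "quarterly_report.pdf": datetime(2013, 1, 5),
    "current_project.docx": datetime(2013, 7, 1),
}
print(tape_candidates(files, now=datetime(2013, 7, 11)))  # ['quarterly_report.pdf']
```

Real hierarchical storage management products apply exactly this sort of rule continuously, demoting cold data to tape while leaving a stub or catalogue entry behind so it can still be recalled.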

Technology Diversification
It is important to have copies on different forms of media to avoid a media or system process disaster. In this case, a mixture of disk and tape, perhaps in a disk-to-disk-to-tape environment. A data protection plan must incorporate a copy of critical data that is stored offline and offsite. Offline data can protect from system errors, hackers and viruses. The data should also be offsite. That way, in the event of a site-wide disaster, the offsite copy of data can be used to recover. These are part of backup and data protection best practices. So, it seems that despite what some might say, tape is still going to be with us for some time. In fact, if you do a generic search on LTO technology, the results show that LTO-7 and LTO-8 are already in the pipeline:

 
With this in mind, what does LTO-6 offer that earlier incarnations couldn’t?

HUGE TAPE CAPACITY
Up to 6.25TB (assuming 2.5:1 compression). One LTO-6 tape can hold the data of more than three LTO-4 tapes

BLAZING SPEED
LTO Ultrium-6 technology has data transfer rates of up to 400MB/s (assuming 2.5:1 compression), which is over 1.4TB per hour of blazing backup performance per drive.

COMPATIBILITY
LTO-6 drives are designed with backwards-compatible read-and-write capability with LTO-5 cartridges, and backward read capabilities with LTO-4 cartridges, protecting investments and easing implementation.

WORM (Write Once Read Many)
LTO WORM tape support helps address compliance requirements

DATA SECURITY
Tape drive-based 256-bit AES encryption helps protect sensitive information.

LTO-6 WITH LTFS
One of the exciting features available with the LTO-5 and LTO-6 tape drives is the Linear Tape File System (LTFS). LTFS gives LTO-5 and LTO-6 users the ability to use tape much like disk or other removable storage media, for outstanding management and usability. It is the first file system to work in conjunction with LTO tape technology, setting a new standard for ease of use and portability in open systems tape storage. With LTFS, accessing data stored on an LTO tape cartridge is as easy and intuitive as using a USB flash drive; with the operating system's graphical file manager and directory tree, utilising data on a cartridge is as easy as dragging and dropping.

• LTO-5 and LTO-6 specifications enable the capability for two media partitions which can be independently accessed to help provide faster data access and improved data management
• With LTFS, one partition holds the content and the other holds the content’s index; the tape can be self-describing to improve archive management
• Enables capabilities that manage files directly on tape allowing for easy sharing of the tape cartridge across platforms
• Makes viewing and accessing tape files easier than ever before. Explore tape content with directory tree structures and drag and drop files to and from the tape
• Addresses the growing needs of a variety of marketplace segments with rich media such as Media and Entertainment, Medical, Digital Surveillance, Seismic Exploration, Government, Cloud and more!

You can drag and drop files from your server to the tape, see the list of saved files using a standard operating system directory (no backup software catalogue needed), and use point and click to restore. To implement this feature, you simply need to download and install the LTFS software on your host machine, usually provided by the tape drive vendor of your choice. So, with these types of technological advances being available with LTO-5 and LTO-6 media, it is very interesting to see what features might be available with the next generation of LTO technology. Watch this space!
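Because LTFS exposes the cartridge as an ordinary mounted filesystem, standard file APIs work against it unchanged. A minimal sketch of archiving a directory to the tape and listing it back, where the mount point is whatever path the vendor's LTFS software has mounted the cartridge at (a hypothetical example, not a fixed LTFS convention):

```python
import shutil
from pathlib import Path

def archive_to_tape(source_dir, mount):
    """Copy a directory tree onto the mounted cartridge, then list it back
    just as you would with any other filesystem."""
    dest = Path(mount) / Path(source_dir).name
    shutil.copytree(source_dir, dest)
    return sorted(p.name for p in dest.iterdir())
```

No backup software catalogue is involved: the listing comes straight from the tape's own index partition, which is what makes LTFS cartridges self-describing and portable between platforms.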

Should you have any storage requirements please contact a Celerity Representative. www.celerity-uk.com

Neil Murphy - Principal Consultant - Celerity Limited