I think it’s now safe to assume that most organisations and businesses are consuming cloud services in some way, shape or form. Whilst this is good news, many seem behind the curve in their approach. Cloud services have been mainstream for quite some time, but simply moving your current solutions to the cloud and running them the same way isn’t really a transformative approach.
So strategically, where should you start? The first port of call should be a thorough discovery exercise. What do you need to know? Quite a lot. It’s important to capture items such as: app name, type, business and IT owners, user base, operating system, SLAs/OLAs, maintenance windows, vendor(s), IPs, DNS, SRV records, firewalls, category (i.e. CoTS, LoB, etc.) and many more. Overall there are over 30 different areas of data to gather and populate per service (some aren’t needed for decision making but will be later on, when implementing the plan). I’d also advise putting this data into a format where you can run BI over it to better visualise the services. Sometimes the visualisations are quite surprising and offer a perspective you perhaps hadn’t considered.
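To make that BI step easy, it helps to capture discovery data in a consistent, structured format from the outset. A minimal sketch of the idea (the field names below are just an illustrative subset of the 30+ areas, not a prescribed schema):

```python
from dataclasses import dataclass, field, asdict
import csv, io

@dataclass
class ServiceRecord:
    # Illustrative subset of discovery fields; names are assumptions.
    app_name: str
    app_type: str
    business_owner: str
    it_owner: str
    operating_system: str
    sla: str
    maintenance_window: str
    category: str                      # e.g. CoTS, LoB
    ip_addresses: list = field(default_factory=list)

def to_csv(records):
    """Flatten records to CSV so a BI tool can ingest them."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    for r in records:
        row = asdict(r)
        row["ip_addresses"] = ";".join(row["ip_addresses"])
        writer.writerow(row)
    return buf.getvalue()

records = [ServiceRecord("Payroll", "LoB", "Finance", "Ops",
                         "Windows Server 2016", "99.9%",
                         "Sun 02:00-06:00", "LoB", ["10.0.1.5"])]
print(to_csv(records).splitlines()[0])  # header row
```

One flat file per service, kept up to date, is usually enough to feed the dashboards, and it keeps the later planning stages honest.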
One of the most vital areas of focus should be mapping dependencies. There is an array of tools available to do this, such as Microsoft’s MAP toolkit, Azure Migrate and OMS Service Map. Some of these are a little limited on the non-MS side of the estate, whilst others can cover everything, and there are other vendor tools out there too, such as SolarWinds, Movere and ADDM.
It’s imperative that you invest in doing this exercise properly, as the number one problem in moving or transforming pre-existing services to the cloud isn’t getting them there; it’s the connectivity that gets broken in the process. I’ve seen so many on-premises services taken down by the migration of others because no one realised there was a link between them. Maybe one is pulling data from the other and can no longer communicate with it. Now, imagine the one that broke was a finance system, and it only pulled data through once a month. Everything runs fine for a few weeks and then suddenly a vital business process doesn’t work and everyone loses the plot.
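Whichever tool produces the connection data, the underlying idea is simple: fold the observed connections into a dependency map and query it before moving anything. A minimal sketch, assuming a hypothetical export of (source, destination, port) tuples of the kind these tools can produce:

```python
from collections import defaultdict

# Hypothetical export of observed TCP connections; host names are made up.
observed_connections = [
    ("web01", "sql01", 1433),
    ("finance01", "sql01", 1433),   # the monthly pull that is easy to miss
    ("web01", "cache01", 6379),
]

def build_dependency_map(connections):
    """Map each host to the hosts it depends on (outbound connections)."""
    deps = defaultdict(set)
    for src, dst, _port in connections:
        deps[src].add(dst)
    return deps

deps = build_dependency_map(observed_connections)
# Before migrating sql01, check who depends on it:
dependants = {src for src, targets in deps.items() if "sql01" in targets}
print(dependants)
```

The point is to run that check for every server in scope, over a window long enough to catch infrequent jobs like the monthly finance pull.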
Once you have all your data, it’s logical to ensure it’s all grouped together correctly. You don’t just want a list of all your servers; you’d ideally want them grouped into the relevant services they function as, including all their applicable databases and so on. As you do this, you’ll start to notice that some services share resources, such as databases, which helps you plan whether some of these services therefore need to be migrated together, so that the shared resources can move at the same time.
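That grouping step is essentially finding connected components over shared resources. A rough sketch, with made-up service and database names:

```python
def migration_groups(services):
    """Group services that share any resource (e.g. a database) into one
    migration wave, via connected components over shared resources.
    `services` maps service name -> set of resources it uses."""
    groups = []
    remaining = dict(services)
    while remaining:
        name, res = remaining.popitem()
        group, pool = {name}, set(res)
        changed = True
        while changed:
            changed = False
            for other, other_res in list(remaining.items()):
                if pool & other_res:       # overlap: must move together
                    group.add(other)
                    pool |= other_res
                    del remaining[other]
                    changed = True
        groups.append(group)
    return groups

# Illustrative data: HR and Payroll share a database, so they move together.
services = {
    "HR": {"db_people"},
    "Payroll": {"db_people", "db_ledger"},
    "Intranet": {"db_cms"},
}
print(migration_groups(services))
```

Run over the real discovery data, each resulting group is a candidate migration wave.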
After everything is detailed and your data grouped appropriately, it’s time to start forecasting running costs in the target platforms. Azure, AWS and the rest all have calculators available which can help work this out for you. What is important, though, is to ensure the current operational costs are articulated accurately, which is very rarely done well. If you aren’t factoring in staff salaries, licensing, hardware, power, DC rent/lease, cabling and so on, then you aren’t producing a true cost of ownership. In addition, those historical and future Capex upgrade projects need to be split out and allocated proportionally; after all, you may have been paying third-party consultancy fees to upgrade in waves over the past decade or two. You likely won’t be embarking upon the same Capex-led infrastructure upgrades once in the cloud: keeping the infrastructure or service up to date is all part of the service.
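The amortisation point can be sketched with simple arithmetic; all the figures below are hypothetical:

```python
def annual_tco(server_costs, staff_salaries, licensing, power, dc_rent,
               capex_total, capex_years):
    """True annual cost of ownership: run costs plus Capex upgrade spend
    amortised over the years it covers. All inputs are annual figures
    except capex_total, which is spread over capex_years."""
    amortised_capex = capex_total / capex_years
    return (server_costs + staff_salaries + licensing
            + power + dc_rent + amortised_capex)

# Hypothetical figures for one service (GBP):
current = annual_tco(server_costs=12_000, staff_salaries=30_000,
                     licensing=8_000, power=3_000, dc_rent=5_000,
                     capex_total=100_000, capex_years=10)
print(f"Current annual TCO: £{current:,.0f}")  # £68,000
```

It’s only once every line like this is on the table that the cloud calculators’ output becomes a fair comparison.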
So, the final evaluation points need to centre around your choice of strategy. Using all the data that has been gathered, decisions can now be made regarding the future architecture and target state for each application or service. What is key here is ensuring that business drivers and requirements are well documented and repeatedly challenged. There may be many services that could potentially be retired or replaced but tend not to be, simply because individuals in an organisation are resistant to change, or perhaps sentimental about a service they have owned, managed or developed.
Now, the first strategic approach to this was something I worked on a few years back with a former colleague of mine, Steve Harwood, who now works as an architect at Microsoft. Steve put together some brilliant approaches which I’ve seen pop up in the years since across many organisations, in various formats, but with the same underlying themes and concepts. So what options do you have?
- Remain / Keep Put – There may be a logical reason as to why a service can’t move (perhaps security). Maybe the solution just isn’t compatible and more time is needed to work out how to finally move away from the reliance upon it.
- Retire / Decommission – There will be services that meet the same requirements as others. You only need one, so theoretically you can retire the rest. Also, some just aren’t needed anymore: they may have served a purpose at some point in time, but the requirement to have them is no longer justifiable and they have become obsolete.
- Re-host / Lift & Shift – Typical data centre migration approach. Old school “move it as-is”. This can be done using various tools and methods but commonly you’d replicate the service to your preferred destination, eventually switching the primary service over and decommissioning the legacy one.
- Re-factor / Optimise – If you’re moving to the cloud, then you’ll be aware of the scalability and flexibility of the platforms. Moving a service over by re-hosting can be improved upon by re-factoring its resources, for example, using the database platforms available in the cloud. Pay only for what you consume.
- Re-build – Technically a type of re-hosting, however, in this method you’d rebuild the same service in the cloud and then move over relevant data etc. This way, you can perhaps upgrade the version in the process or make design improvements and optimisations to how the individual application architecture is structured.
- Replace / Commoditise – Every boardroom’s favourite. Get rid of practically everything associated with the service and replace it with Software-as-a-Service (SaaS), commoditised and ideally paid for on a per-user basis.
- Reinvent / Transform – Commonly the most difficult option, and not a logical choice if you’re constrained by time (i.e. exiting a hosting contract). This means moving a service to the cloud but reinventing it in the process. Perhaps converting to a web app offering and then developing this to re-engineer how it works, what functionality it delivers, adding content in and so on. Most services in the future will run through constant development cycles under this approach, as that’s where IT is shifting, but there’s no reason, time and cost permitting, why you can’t start now. There’s always some low-hanging fruit and quick wins.
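A first pass over these options can even be encoded as simple rules against the discovery data. The rules and field names below are purely illustrative; real decisions need the full data set and business input:

```python
def suggest_strategy(service):
    """First-pass suggestion only; field names are hypothetical."""
    if service.get("duplicate_of") or not service.get("still_required", True):
        return "Retire / Decommission"
    if service.get("security_constrained"):
        return "Remain / Keep Put"
    if service.get("saas_alternative"):
        return "Replace / Commoditise"
    if service.get("time_constrained"):
        return "Re-host / Lift & Shift"
    return "Re-factor / Optimise"

print(suggest_strategy({"saas_alternative": True}))   # Replace / Commoditise
print(suggest_strategy({"still_required": False}))    # Retire / Decommission
```

Even a crude triage like this is useful for forcing the “why are we keeping this?” conversation with the business, service by service.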
Typically, as part of a strategic initiative like this, it’d be wise to take the opportunity to develop a service catalogue. The more you can commoditise your services and applications the better you can articulate the cost per user/department/function. Using the dashboards and analytical tools available (and there’s no bias here, all the market leaders provide these), you can start to work out true costs for meeting business requests through IT. In addition to this, the service catalogue will enable you to build out processes for control of the estate, i.e. if the requirement from a user or department isn’t met through a current service, a robust but efficient request and approval process can be put in place. This should prevent the degradation of the estate, give more spend control and also ensure that services are only provided for needs and requirements suitably justified by the organisation.
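Working out cost per user or department from a catalogue is then straightforward aggregation; the entries below are made up:

```python
from collections import defaultdict

# Hypothetical catalogue entries: (service, department, annual_cost, users)
catalogue = [
    ("CRM", "Sales", 24_000, 40),
    ("CRM", "Marketing", 6_000, 10),
    ("Payroll", "Finance", 15_000, 5),
]

totals = defaultdict(lambda: {"cost": 0, "users": 0})
for service, dept, cost, users in catalogue:
    totals[service]["cost"] += cost
    totals[service]["users"] += users

for service, t in totals.items():
    print(service, "per user:", round(t["cost"] / t["users"], 2))
```

Numbers like these are exactly what the request-and-approval process can be priced against.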
To finish, I thought it might be useful to just list what I feel are a few key guiding principles for this sort of change initiative or programme:
- Try to avoid moving services as-is. Maybe that finance server only gets hammered once a month. Maybe some services never run at more than 65% capacity. Take the opportunity to scale down and take advantage of the tech available.
- Be aggressive with decision making and challenge every business requirement. As mentioned earlier, many services are kept for the wrong reasons and actually, upon challenge can be retired or replaced.
- In line with the above, communicate heavily and involve the business in the process. If they are resistant to change, being assertive over a service that isn’t required will only be met with more negativity. Be aggressive in the decision making but work closely and support the business throughout the process. Involve them and guide them towards the same decision. Articulate the benefit of what the change is achieving.
- Don’t retain current staff in the same roles. Your whole operating model will likely need to change and whilst optimisations can be made to ITSM and resourcing, it’s best to look at the future technology roadmap and the operating model and retrain staff. Create new roles that can focus on business benefit as opposed to keeping the lights on and running operations, much of which the cloud service provider will now do for you.
- Automate and standardise the process. This means using code to deploy infrastructure and various other elements. That DevOps team you’ve probably been trying to build out? Get them involved.
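To illustrate the principle rather than any particular tool: infrastructure-as-code boils down to describing the desired estate as data, diffing it against what exists, and applying only the differences. Nothing below calls a real cloud API; it’s just a sketch of the idea that tools like ARM/Bicep, Terraform and CloudFormation implement robustly:

```python
# Desired and current estate as data; names and attributes are made up.
desired = {
    "web01": {"size": "small", "os": "linux"},
    "sql01": {"size": "medium", "os": "linux"},
}
current = {
    "web01": {"size": "small", "os": "linux"},
}

def plan(desired, current):
    """Return which resources to create, change, or remove."""
    create = {k: v for k, v in desired.items() if k not in current}
    change = {k: v for k, v in desired.items()
              if k in current and current[k] != v}
    remove = [k for k in current if k not in desired]
    return create, change, remove

create, change, remove = plan(desired, current)
print("create:", sorted(create))  # create: ['sql01']
```

Because the desired state lives in version control, every deployment is repeatable and reviewable, which is the standardisation the principle is after.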
Hopefully some of this is useful, despite being a tad long, and gives a little more strategic business perspective to an approach containing so much technology change.
Image Credit: Microsoft.