Tags: Cloud, Cloud computing, cloud-services, Data center, devops, IBM, Information technology, infrastructure, paas, Platform as a service, private cloud, public cloud, softlayer, Software as a service, United States
This post was originally published on ThoughtsOnCloud on June 17th, 2014.
I’ve been a cloud architect for the last three years or so and have seen dramatic changes in the IT industry and its view of cloud. I’ve also observed different attitudes to cloud in different industries and countries.
I took on the cloud architect role because I saw that customers were asking about cloud, but they all had different ideas of what this meant. Depending on whom they spoke to first, they could think it was hosted managed software as a service, or on-premise dynamic infrastructure, or many other permutations in between. My job was created to talk to them at the early stage, explain the full scope of what cloud means, look at their business requirements and workloads, and align them to the most appropriate solution.
Three years later you would hope that it’s all much clearer, and in many ways it is, but there are still preconceptions that need to be addressed. The cloud technologies themselves are also moving so rapidly that it’s hard enough for people like me to stay abreast of them, let alone the customers.
To begin, I noticed some fairly obvious differences, many of which still hold. The large financial institutions wanted to keep their data on premise, and they had large enough IT departments that it made sense for them to buy the hardware and software to effectively act as cloud service providers to their lines of business. Some investment banks saw their technology as a key differentiator and asked that I treat them as a technology company rather than a bank. They didn’t want to give away the ownership of IT; the attributes of cloud that they were looking for were standardisation, optimisation and virtualisation.
On the other hand I was working with retail companies and startups who saw IT as an unnecessary cost, a barrier to their innovation. They saw cloud as a form of outsourcing, where a service provider could take on the responsibility of looking after commodity IT and let them focus on their core business.
A third industry is government and public sector. This is very different in the UK to other countries. In the United States, the government is investing in large on-premise cloud solutions, and this avoids many of the security and scalability issues. In the UK, with a new government following the global financial crisis, there is an austerity programme, which led to the Government ICT Strategy and Government Digital Strategy and the announcement of the Cloud First Policy. This requires that government bodies use hosted, managed cloud offerings, as well as encouraging the use of open source and small British providers.
Our health sector is also very different to the U.S., with our public sector National Health Service being one of the largest employers in the world, whereas in the U.S. health has much more of an insurance focus.
Over the years in all industries there has been a lot of fear, uncertainty and doubt about the location of data and whether or not there are regulations that make this an issue. I’m glad to say that we’ve now worked through a lot of this and it’s a lot clearer to both the providers and the consumers.
In practice most of the cloud investment that happened was infrastructure as a service (IaaS). Much of this was private cloud, with some usage of public cloud IaaS.
We used to have a lot of interest from customers, whether in meteorology or academic research, looking for high performance computing (HPC) clouds. This made a lot of sense, as the hardware required is very expensive and some customers only need it for short periods of time, so having it available on a pay-as-you-go basis was very attractive. Last year, IBM acquired SoftLayer, which offers bare metal IaaS as well as virtualised servers. This makes HPC cloud more attainable, and with it has come a shift in the perception of cloud away from virtualisation and towards the hosted, utility-based pricing view.
The big change this year is the move from IaaS to platform as a service (PaaS). With the nexus of forces of mobile devices (phones, tablets, wearable devices, internet of things), social media generating large amounts of unstructured data, and high performance broadband, there is a new demand and ability to deliver cloud based mobile apps connecting and exploiting data from multiple sources. This reflects a shift in the IT industry from the systems of record, which store the traditional, fairly static, structured data, to the new systems of engagement, which are much more about the dynamic customer interface and access to the fast changing data.
Developers are becoming key decision makers. They often work in the line of business and want to create business solutions quickly, without the blocker of the traditional IT department. Using IaaS and DevOps to optimise the speed to market of business solutions has been the first step. Now customers are looking to PaaS to give them immediate access to the whole software development environment: the infrastructure as well as the middleware needed for developing, testing and delivering solutions quickly and reliably with minimal investment. This also includes the new open source middleware and polyglot languages.
Finally, SaaS. We are talking to many companies, public sector bodies, and education establishments, who want to become entirely IT free. They don’t want a data centre and they don’t want developers. This requirement is now becoming achievable as IBM and others are committed to making a significant proportion of their offerings available as SaaS solutions. Of course, this brings new challenges around hybrid cloud integration and federated security.
Do my views of the trends in UK cloud align to yours? I’d love to see your comments on this.
Tags: Citigroup, Cloud, Cloud computing, continuous delivery, devops, governance, iaas, IBM, Information technology operations, IT, paas, Platform as a service, services, software development, UrbanCode, cloud
This post was originally published on ThoughtsOnCloud on April 29th, 2014.
As I explained in my earlier blog post, “Cloud accelerates the evolution of mankind,” I believe that cloud has the power to change the world. It achieves this by giving us speed—and this has an exponential effect.
When I first became a professional software developer in the late 1980s, we spent two years designing and writing a software product that our team thought was what the market wanted. We went through a very thorough waterfall process of design, code, unit test, functional verification test, system test and quality assurance, and eventually released the product through sales and marketing. This cost us millions and eventually sold 14 copies.
More recently we’ve begun to adopt lean principles in software innovation and delivery to create a continuous feedback loop with customers. The thought is to get ideas into production fast, get people to use the product, get feedback, make changes based on the feedback and deliver the changes to the user. We need to eliminate any activity that is not necessary for learning what the customers want.
Speed is key. The first step was to move from waterfall to a more iterative and incremental agile software development framework. After that, the biggest delay was the provisioning of the development and test environments. Citigroup found that it took an average of 45 days to obtain space, power and cooling in the data center, have the hardware delivered and installed, have the operating system and middleware installed and begin development. Today, we replace that step with infrastructure as a service (IaaS).
The next biggest delays in the software development lifecycle are the handovers. Typically a developer will work in the line of business. They will write, build and package their code and unit test it. They then need to hand it over to the IT operations department to provide a production-like environment for integration, load and acceptance testing. Once this is complete the developer hands it over completely to the operations department to deploy and manage in the production environment. These handovers inevitably introduce delay, and also introduce the chance of errors, as the operations team cannot have as complete an understanding of the solution as the developer. There will also be differences between each environment, so problems can still arise with the solution in production.
By introducing a DevOps process, we begin to merge the development and operations teams and give the power to the developer to build solutions that are stable and easy for IT operations to deliver and maintain. Delivery tasks are tracked in one place, continuous integration and official builds are unified and the same deployment tool is used for all development and test environments so that any errors are detected and fixed early. With good management of the deployable releases, development can be performed on the cloud for provisioning to an on-premises production environment or the reverse; solutions can quickly be up and running in the cloud and as their usage takes off it may prove economical to move them on premises.
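The core idea above can be shown in a minimal sketch: one deployment routine, driven by an immutable versioned release, applied identically to every environment so that drift and errors are caught early. This is purely illustrative (all names here are hypothetical, and it is not the IBM UrbanCode API):

```python
from dataclasses import dataclass, field

@dataclass
class Release:
    """An immutable, versioned package produced by the CI build."""
    version: str
    artifacts: tuple  # e.g. ("app.war", "schema.sql")

@dataclass
class Environment:
    name: str                           # "dev", "test", "prod"
    deployed: dict = field(default_factory=dict)

def deploy(release: Release, env: Environment) -> None:
    # The same steps run everywhere; only the target differs.
    for artifact in release.artifacts:
        env.deployed[artifact] = release.version

# Promote the identical release through the pipeline.
release = Release("1.4.2", ("app.war", "schema.sql"))
pipeline = [Environment("dev"), Environment("test"), Environment("prod")]
for env in pipeline:
    deploy(release, env)

# Every environment now holds exactly the same artifact versions, so a
# problem found in test is meaningful for prod.
assert all(e.deployed == pipeline[0].deployed for e in pipeline)
```

The point of the design is that there is no separate, hand-crafted path to production: the release either deploys everywhere the same way, or it fails early where it is cheap to fix.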
Of course there is risk in giving the power to the developer. The handover delays happen for a reason—to ensure that the solution is of sufficient quality to not break the existing environment. This is why the choice of tooling is so crucial. The IBM UrbanCode solution not only automates the process but provides the necessary governance.
As I discussed in my blog post “Cloud’s impact on the IT team job descriptions,” introducing IaaS to this means that the role of the IT operations department is reduced. They may still be needed to support the environment, especially if it is a private cloud, but the cloud gives self service to the developer and tester to create and access production-like environments directly. It brings patterns to automatically create and recreate reproducible infrastructure and middleware environments.
In my next post I will discuss the next step that can be taken to increase the speed of the development cycle: platform as a service (PaaS). I’d love to hear what you think the benefits of DevOps with the cloud are, and what other ways there are to accelerate delivery. Please leave a comment below.
Tags: bluemix, Cloud, Cloud computing, development, devops, iaas, IBM, infrastructure as a service, operations, orchestrator, paas, Platform as a service, private cloud, public cloud, smartcloud, softlayer, software, urbancode
My point of view on accelerating business development with improved time to market by using lean principles enabled by devops and cloud.
Tags: Application programming interface, Business, Cloud, Cloud computing, deployment strategy, IBM, Information technology management, infrastructure, integration challenges, journey, KPMG, middlesex university, monolithic applications, Platform as a service, roadmap, Service-oriented architecture, software, technology, transformation, transition
An abridged version of this was first published on Cloud Services World as “Transition vs Transformation: 8 factors to consider when choosing the best route to cloud” in July 2013.
Many companies have arrived at cloud naturally over the years through data centre optimisation. Traditional IT had separate servers for each project and application. We then had consolidation of the physical infrastructure and virtualisation to allow multiple operating systems and application stacks to run on the same server, plus shared application servers and middleware. Standardisation and SOA allowed the monolithic applications to be broken up into functions that could be reused, shared and maintained independently. By using patterns and putting the application stacks into templates, automation enabled the self-service, elasticity and the dynamic workload provisioning benefits of private cloud.
Now that the functions have been sufficiently encapsulated and commoditised we are seeing more and more customers moving them onto a public cloud and perhaps linking them back to their core existing business as a hybrid cloud. Some companies are teaming together and sharing services in a club cloud.
However, you don’t have to go through all these steps in order. It is possible to migrate workloads straight to the cloud, but it’s important to do this using a carefully considered methodology.
According to KPMG’s Global Cloud Provider’s Survey of 650 senior executives, the number one challenge of their adoption of cloud is the high cost of implementation/transition/integration. Steve Salmon of KPMG said “Implementation and integration challenges are critically important to overcome and can threaten both the ROI and business benefits of cloud.”
To get the most benefit from moving to the cloud it is critical that you understand your current portfolio of applications and services and align them with a cloud deployment strategy. Begin by looking at the strategic direction of your company. Next, analyse the business applications and infrastructure components and create a prioritised list of suitable workloads for migration to the cloud as well as an analysis of the potential costs and migration impacts. Then look at your existing environment and determine an appropriate cloud computing delivery model (private, public, hybrid, community and so on). Define the architectural model and perform a gap analysis, build the business case and then implement based on a roadmap.
Application Migration Assessment
An assessment needs to be carried out against each application, or group of applications, assessing the benefit and effort of moving to the cloud. This forms a roadmap prioritising which applications to move in which order. A typical application migration roadmap would be based on a chart plotting risk and ease of migration against gain. In terms of time it is recommended to start by migrating the apps in the top right corner of the chart and end with the ones in the bottom right.
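As a rough illustration of this kind of assessment (the applications, scores and the simple additive scoring here are invented, not a prescribed method), each application can be scored for gain and for ease of migration, and the roadmap ordered so the easy, high-gain moves come first:

```python
def roadmap(apps):
    """apps: list of (name, gain, ease_of_migration), each scored 1-10.
    Returns application names ordered from first to migrate to last."""
    # Easy, high-gain applications first; hard, low-gain ones last.
    return [name for name, gain, ease in
            sorted(apps, key=lambda a: a[1] + a[2], reverse=True)]

portfolio = [
    ("intranet site",    7, 9),  # high gain, easy: an early win
    ("core billing",     9, 2),  # high gain but risky: tackle later
    ("legacy reporting", 2, 3),  # low gain, hard: migrate last, if at all
]

print(roadmap(portfolio))
# → ['intranet site', 'core billing', 'legacy reporting']
```

In practice the scores would come from the assessment criteria discussed below (architecture, security, integration requirements and so on), and the weighting would reflect the organisation’s priorities rather than a simple sum.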
Transition vs. Transformation
When considering whether to move an application to the cloud, it is important to consider both the business purpose of the application and the role that application plays in supporting business and IT strategies. This context is important for considering whether to transition an application to a cloud environment, or whether to rearchitect or “transform” the application – and if so, how to do it.
Transition, commonly referred to as the “lift and shift model,” is applied to situations when the application runs as-is or with minimal changes to the architecture, design or delivery model necessary to accommodate cloud delivery. For example, an application with no duplication of functionality and that supports current performance and security requirements would be a good candidate to transition to a cloud. The transition of such an application typically includes:
- Selecting a private or public cloud environment to host the application.
- Provisioning and configuring the Infrastructure as a Service and the Platform as a Service needed to deploy the application.
- Provisioning the application to deliver built-in cloud characteristics such as monitoring, metering and scaling.
When identifying enterprise applications for transition, there are a number of factors to consider:
- Business model – Business services and capabilities should be separated from the underlying IT infrastructure on which they run.
- Organisation – Enterprise IT governance should be well established. Service usage is tracked and the service owner and provider are able to reap paybacks for the services developed and exposed by them.
- Methods and architecture – The application architecture should support service-oriented principles and dynamically configurable services for consumption within the enterprise, by its partners and throughout the services ecosystem.
- Applications – The application portfolio should be structured so that key activities or steps of business processes are represented as services across the enterprise.
- Information – The application should have a well-defined business data vocabulary. This enables integration with external partners, as well as efficient business process reconfiguration.
- IT infrastructure – Services can be virtualised such that any given instance may run on a different and/or shared set of resources. Services can be discovered and reused in new, dynamic ways without a negative impact on the infrastructure.
- Operational management – Service management incorporated into the application design addresses demand, performance, recoverability and availability. It also tracks and predicts changes to reduce costs and mitigate the impact on quality of service.
- Security – Good application and network security design supports both current and future enterprise security requirements.
In some cases, business and IT objectives and conditions warrant larger, more comprehensive changes to an application that is moving to the cloud than are possible under the transition approach. Transforming existing applications involves rearchitecting and redesigning the application to be deployed in either a private or public cloud environment. This path involves the application being redesigned to fit a more open computing model, for example to accommodate service-oriented architecture (SOA), exposed APIs or multi-tenancy. An SOA application model is valuable in that it allows for integration across custom and packaged applications as well as data stores, while being able to easily incorporate business support services that are needed for cloud deployment.
Typically, transforming applications for a cloud environment includes the same set of criteria as transitioning applications to a cloud environment, but with different conditions. Often, applications targeted for transformation are tightly coupled with enterprise legacy systems and do not meet current security, availability and scalability requirements. The situational factors that support the transformation decision include:
- Business model – In an application that is a candidate for transformation, business services tend to be isolated with each line of business maintaining its own siloed applications. Also, there is minimal automated data interaction or process integration between the silos. By transforming the application for cloud delivery, the organisation can extend the business value of the service or application to other lines of business (LOBs) and partners.
- Organisation – When each business unit owns its own siloed applications, it defines its own approach, standards and guidelines for implementing, consuming and maintaining application-delivered services. These may not align well with the needs of the organisation as a whole.
- Methods and architecture – In siloed applications there is no consistent approach for developing components or services. LOBs tend to throw requirements “over the fence” to the IT organisation, which then develops solutions without feedback from the business. The application architecture is typically monolithic and tightly coupled, with minimal separation between the presentation, business logic and database tiers. Often there is mostly – or in some cases only – point-to-point integration.
- Applications – Usually, portfolios of discrete applications take minimal advantage of service-oriented architecture concepts, and business processes are locked in application silos.
- Information – Information sharing tends to be limited across separated applications. Data formats are often application-specific and the relatively inefficient extract-transform-load process is the primary means for information sharing between applications.
- IT infrastructure – Platform-specific infrastructures are maintained for each application, and infrastructure changes have had to be put in place to accommodate service orientation, where it exists.
- Operational management – Service management functionalities such as monitoring and metering to manage enterprise business applications and/or services are either not supported at all, or only to a limited extent.
- Security – Enterprise application and network security enhancements are required in transformation candidate applications to meet current and future security requirements.
After selecting the appropriate cloud delivery model (private or public), the decision to transition or transform an existing enterprise application is important in order to help ensure a successful move to cloud. When resources are limited, it is possible for an enterprise to choose to run the transformation in parallel with transition to meet short-term needs, while planning for longer-term, best-in-class performance through application transformation.
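As an entirely hypothetical sketch of this decision (the factor labels and the threshold of six are invented for illustration, not a prescribed rule), the eight transition factors above can be treated as a first-pass checklist:

```python
# Short labels summarising the eight transition factors discussed above.
TRANSITION_FACTORS = frozenset({
    "business services separated from infrastructure",   # business model
    "IT governance established",                         # organisation
    "service-oriented architecture in place",            # methods and architecture
    "processes exposed as services",                     # applications
    "defined business data vocabulary",                  # information
    "virtualisable, shareable infrastructure",           # IT infrastructure
    "service management built into the design",          # operational management
    "security design meets current and future needs",    # security
})

def recommend(satisfied, threshold=6):
    """satisfied: the factors from TRANSITION_FACTORS the application meets.
    Applications meeting most factors are candidates for straight
    transition; the rest are candidates for transformation."""
    met = len(set(satisfied) & TRANSITION_FACTORS)
    return "transition" if met >= threshold else "transform"

# An application meeting only two of the eight factors is a
# transformation candidate.
app_meets = {"IT governance established",
             "security design meets current and future needs"}
print(recommend(app_meets))  # → transform
```

A real assessment would of course weigh the factors qualitatively rather than counting them, but the shape of the decision is the same: the more of these conditions already hold, the stronger the case for lift and shift.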
Consider Engaging a Third Party
Enterprise-wide cloud implementation can be a challenging process. Operating under ever-tightening budgets, IT staffs typically spend most of their resources to simply maintain existing server environments. Even those organisations capable of building their own clouds often find, emerging from the testing stage, that they would benefit from outside management support. Because of these challenges, organisations must carefully think about how to best source their cloud technologies. In making a sourcing decision, they should keep in mind business design, necessary service levels and deployment models. The question of who will migrate, integrate and manage applications must also be addressed. After considering these issues, many organisations choose to turn to third-party technology partners for help with enterprise cloud endeavours. Find a third-party provider that offers its clients a choice in the areas of scope, service levels and deployment. The partner should offer deep expertise in strategic visioning and in cloud migration.
To protect the client’s cloud infrastructure, technology partners should provide multiple security and isolation choices, robust security practices and hardened security policies proven at the enterprise level. Security procedures should be transparent, providing visibility into cloud activities and threats. Data monitoring and reporting must be offered. Since business needs are ever-evolving, the best cloud partners also offer a full portfolio of associated services in the fields of security, business resiliency and business continuity. Finally, the client organisation should be able to maintain control over its cloud environment. Technologies that provide this type of control include online dashboards, which can allow the client organisation to manage the cloud environment from virtually anywhere in the world.
An enterprise can choose to run the two approaches in parallel, transitioning some applications to meet short-term needs while planning for longer-term application transformation.
Middlesex University has embarked on the journey to cloud. They needed to reduce the number of machines from around 250 to 25, electricity usage by 40%, and physical space requirements from approximately 1,000 square feet to 400 square feet. Steve Knight, Deputy Vice-Chancellor, explained, “We need a system that allows flexibility according to our changing requirements. We were looking for a platform solution to complement our longer term plans to achieve a dynamic infrastructure.”
A popular roadmap, similar to the journey Middlesex is on, is:
- Begin to consolidate, rationalise and virtualise the existing on-premises infrastructure into a private cloud. Perhaps install a modern expert integrated system which allows you to start consolidating key applications onto a smaller and more economical estate. This should be done gradually so as not to affect current service delivery. Applications and images on the private cloud should be portable to a public cloud at a later date.
- Start “experimenting” with hosted unmanaged cloud running new development and test workloads.
- Look to a managed service provider (MSP) to manage your on-premises infrastructure and/or private cloud. Start using the cloud provider for disaster recovery and backup.
- Eventually look to move everything to a flexible hosted managed cloud.
So in summary, a blended approach is needed, choosing whether to cloud-enable existing applications or to write new cloud native applications, depending on the characteristics of the applications and their integration requirements.
Tags: Business, Cloud computing, IBM, Information technology, LondonMetropolitanUniversity, Metro Bank, Personal computer, Raspberry Pi
This was first posted on businesscloud9 in November 2012
In my previous blog post here I discussed the potential impact of Cloud on the roles performed by a traditional IT department. I discussed that the build, run and operational roles will be significantly reduced as they will move to the cloud to be performed by automated systems or by managed outsourcing companies. However, there will be an increase in the importance of IT strategy, governance, and relationship management. Commercial and financial skills will be key in aligning the relationships with the different cloud providers, and the internal and external customers. The IT department will be more closely linked to the business, and technical skills will still be needed to integrate the services and provide first line help desk support. I suggested that there is a potential challenge with skills. With many of the traditional junior roles in development and operations moving outside the enterprise, what can be done to ensure that candidates for these new strategy and coordination roles will gain the experience they need?
I think that for most roles there won’t be a big change in training. Currently, project management, relationship management, and financial roles in the IT industry are not typically staffed by people who studied Computer Science. The skills were taught as part of industry-agnostic courses or even after being recruited into work. There is no need for the content of the training itself to change. Although there have been many reports of a decline in the number of applicants for traditional university science courses, this may not be an issue, as the balance of skills in the IT department will change to include more of these less technical, more business-aligned roles.
For the IT strategy, governance and control skills, some of these will need to be taught as dedicated subjects within Computer Science courses. Some of the lower level technical skills that are being outsourced will need to be taught so that they can be appreciated and understood even though they will not be used directly in industry.
E-skills research states that employment in the IT industry is forecast to grow at 5 times the national average over the next decade. With the advent of devices such as the Raspberry Pi, I believe that there will be a resurgence of people gaining technical skills in their home life which hasn’t been seen since the home PCs of the 80s were replaced by games consoles.
Where IT roles have been replaced by automated systems, there is a need for training on how these systems work, how to choose them and how to use them. Students will need to understand the concept of the Cloud Service Provider and how to engage with them. It would be very valuable for students and new hires to spend a period of time on secondment to a Cloud Service Provider. This way, they could develop the technical hands on expertise that has been outsourced, forming a basis of the required governance, strategy and control skills.
Universities are changing rapidly. With the introduction of student tuition fees this year students are now customers and universities are more competitive and more focused on attracting students than ever before. They are starting to work closely with businesses. Companies like IBM are working closely with industry and academia, including schools and universities, to develop smarter skills that will prepare students for the jobs of tomorrow – whatever they might be.
Through the IBM Academic Initiative, IBM is partnering with a number of universities to expand the resources and experiences offered to students, better preparing them for the careers of tomorrow. For example, IBM collaborates with the University of the West of Scotland giving students access to software and technology training to gain skills in business analytics and business modelling.
IBM is also piloting an Academic Skills Cloud to make its software available in a cloud computing environment to more easily allow universities to incorporate technology into their curricula, enabling universities to be more agile and nimble in keeping students up to date with the latest technologies. London Metropolitan University is the first in the UK to use this. Anthony Thomson, Chairman of Metro Bank, said, “Metro Bank hires for attitude and trains for skill, but can only recruit among the select number of graduates with strong aptitude for IT. Academic initiatives, such as the one set up by IBM and London Metropolitan University, are extremely useful in helping to build a level of graduates who have the suitable skills set that is required by employers.”
In summary, the future looks bright. We have most of the skills that we need and good progress is being made to establish a pipeline of the right skills for the future.
Tags: Accenture, BT Group, carbon, Cloud, Cloud computing, CloudStore, Data center, data centre, ecology, Efficient energy use, energy, environment, Environment Agency, green, IBM, Iceland, Microsoft, Reuven Cohen, sc_community, sustainability, United-Kingdom, Virtustream, g-cloud
This post was originally published on ThoughtsOnCloud on May 3rd, 2013
Today, the world is focused on green – from technology to transportation, the environment is on everyone’s minds. You would think it’s self evident that the cloud is green. Common sense tells us that a shared service will be more economical and more ecological than everyone using their own separate, non-optimum infrastructures. It’s like how it seems obvious that travelling by bus is better for the environment than travelling by car. However, statistics show that people who travel by bus burn more energy than people who travel by car because the buses often only have a few people on them and they’re big and heavy and use a lot of diesel.
It’s obvious that instead of everyone running their own old PC that is not properly maintained and only used infrequently, it’s better to share a server hosted by someone who knows how to look after it. It would be a modern ecological server, probably based in Iceland or somewhere where keeping the system cool is not so difficult. It would be a shared elastic environment with everyone using partitions on the same system so they maximize the capacity and they can hibernate the environment when it’s not in use so as to not waste any power or wear out the disks.
However, the more capacity you give to people, the more they seem to use it. If you add an extra lane to a motorway it doesn’t ease congestion, it just gets used by more people. You’d think that with the advent of new technology, such as cars, people’s journey time to work would reduce but actually commuting time now is roughly the same as it was 100 years ago, it’s just that people travel further. A kitchen bin is always full and you carry on trying to cram more into it. If you put another bin next to it they wouldn’t both be half full, you’d be trying to cram waste into the top of both. People increase their usage to use up capacity.
So if we give people access to more computing power through the cloud then they’ll use it. Individuals may be able to find a cure for cancer during their lunch break and make the world a better place but they’ll use a lot more carbon than they did before cloud.
A study by Accenture found that “cloud solutions can reduce energy use and carbon emissions by more than 30 percent when compared to their corresponding Microsoft business applications installed on-premise.”
Reuven Cohen, SVP of Virtustream, said, “I’m sure that if you were to compare a traditional data center deployment to a near exact replication in the cloud you’d find the cloud to be more efficient, but the problem is there currently is no way to justify this statement without some kind of data to support it.”
By moving your processing to the cloud, you’re moving it to a generally available resilient environment with multiple instances of power, network and cooling. But how much energy is actually used by the network? In the UK the Environment Agency publishes a CRC (Carbon Reduction Commitment) Energy Efficiency Scheme Performance League Table. In the 2011/2012 results BT Group (UK network provider formerly known as British Telecom) ranked third in terms of absolute carbon emissions, with two other network providers and seven data centres also appearing in the top 100.
The UK Government’s CloudStore includes the question of whether the data centre adheres to the EU Code of Conduct in each submission. It will be interesting to see whether this is used by customers as part of their search criteria and whether this results in an increase in data center tracking and reporting of their energy usage.
So, is the cloud green? The answer depends on your definitions of the cloud and of green. I think the cloud is green. I think that running a workload in a shared modern data centre will use less carbon than running it in a traditional on-premise environment. However, if the data centre is not efficient, if the infrastructure is not already in place, if an excessive amount of network is used, if you change the service level requirements or if you increase your usage, then this could have negative ecological consequences. But in other ways the world will become a better place.