Sending SMS messages using Twilio and Bluemix


Here’s an excellent post on setting up a Bluemix app to send SMS messages.

Originally posted on Martin Gale's blog:

I’ve been tinkering with an Internet of Things project at home for a while which I’ll write up in due course, but in the course of doing so have knocked up a few useful fragments of function that I thought I’d share in case other people need them. The first of these is a simple Node.js app to send an SMS message via Twilio using IBM Bluemix.

There’s lots of material on Twilio and Bluemix but by way of a very rapid summary, Twilio provide nice, friendly APIs over telephony-type services (such as sending SMS messages), and Bluemix is IBM’s Cloud Foundry-based Platform-as-a-Service offering to enable developers to build applications rapidly in the cloud. Twilio have created a service within Bluemix that developers can pick up and use to enable their applications with the Twilio services. One of the things I wanted for my application was a way of notifying me that…
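By way of a hedged sketch of what such an app might contain (the VCAP_SERVICES key name, the credential fields `accountSID`, `authToken` and `from`, and the twilio client call are assumptions about how the service is bound, not taken from Martin's post):

```javascript
// A minimal sketch, not the actual app from the post. The service key
// name, the credential fields and the twilio client API are assumptions
// and may differ in your Bluemix space.

// Extract the first Twilio-like entry from a parsed VCAP_SERVICES object.
function twilioCredentials(vcap) {
  var key = Object.keys(vcap).filter(function (k) {
    return k.toLowerCase().indexOf('twilio') !== -1;
  })[0];
  return key ? vcap[key][0].credentials : null;
}

// Send an SMS if a Twilio service is bound; otherwise do nothing.
function sendSms(to, body) {
  var creds = twilioCredentials(JSON.parse(process.env.VCAP_SERVICES || '{}'));
  if (!creds) {
    console.log('No Twilio service bound; skipping send.');
    return;
  }
  var client = require('twilio')(creds.accountSID, creds.authToken);
  client.messages.create({ to: to, from: creds.from, body: body },
    function (err, message) {
      if (err) { console.error(err); }
      else { console.log('Sent message', message.sid); }
    });
}
```

Locally, with no VCAP_SERVICES set, `sendSms` simply logs and returns, which makes the app safe to run on a PC before pushing to Bluemix.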


Sam’s Views on Cloud for Government Policy Makers

I was honoured to be asked to present yesterday on “Cloud Skills, Flexibility and Strategy” at the Westminster eForum Keynote Seminar: Next steps for cloud computing.

The Palace of Westminster from Whitehall. (Photo credit: Wikipedia)

As explained on its website, Westminster Forum Projects enjoys substantial support and involvement from key policymakers within the UK and devolved legislatures, governments and regulatory bodies, and from stakeholders in professional bodies, businesses and their advisors, consumer organisations, local government representatives and other interested groups. The forum is structured to facilitate the formulation of ‘best’ public policy by giving policymakers and implementers a sense of how different stakeholder perspectives interrelate, with the aim of providing policymakers with context for arriving at whatever decisions they see fit.

The abstract to the session asked about the extent to which Government departments are embracing the cloud, what progress is being made in achieving the UK’s Data Capability Strategy on skills and infrastructure development, and whether organisations are doing enough to address the emerging shortfall in skills. It also asked about the apparent disconnect between mobile device power and cloud.

I was part of a panel and the following was my five minute introduction.

In my five minutes I’d like to talk about the power of cloud and within that to address three areas raised in the abstract to this session – shared services and shared data; mobile; and skills.

We see cloud as being used in three different ways – optimisation, innovation and disruption. Most of what I’ve seen so far in cloud adoption is about optimisation or cost saving. How to use standardisation, automation, virtualisation and self service to do the same things cheaper and faster.

What’s more interesting is the new things that can be achieved with the innovation and disruption that this can provide.

I’ve been working with various groups (local authorities, police forces and universities) discussing consolidating their data centres. Instead of each one managing its own IT environment, they can share it in a cloud. They justify this with the cost-saving argument, but the important things are, firstly, that they can stop worrying about IT and focus on their real role, and secondly, that by putting their data together in a shared environment they can achieve things they’ve never done before.

The road to Welton, East Riding of Yorkshire, just south of Riplingham. Taken on the Riplingham to Welton road at MR: SE96293086 looking due south. This is typical south Yorkshire Wolds country. (Photo credit: Wikipedia)

For example, Ian Huntley would never have been hired as a caretaker, and so the Soham murders would have been less likely to happen, if the police force had had access to the data showing that he was known to a different force.

And we wouldn’t have issues with burglars crossing the border between West and North Yorkshire to avoid detection if data was shared.

In Sunderland we predict £1.4m per year in cost savings from optimising their IT environment, but what’s more important is that this has helped to create a shared environment where start-up companies can get up and running quickly, stimulating economic growth in the area.

Another example is Madeleine McCann. After her disappearance it was important to collect holiday photos from members of the public as quickly as possible. Creating a website for this before cloud would have taken far too long. Nowadays it can be spun up very quickly. This isn’t about cost saving and optimisation, it’s about achieving things that could never have been done before.

This brings me to the question in the abstract about mobile: “As device processing power increases, yet cloud solutions rely less and less on that power, is there a disconnect between hardware manufacturers and app and software developers”. I think this is missing the point. Cloud isn’t about shifting the processing power from one place to another, it’s about doing the right processing in the right place.

GPS navigation solution running on a smartphone (iPhone) mounted to a road bike. GPS is gaining wide usage with the integration of GPS sensors in many mobile phones. (Photo credit: Wikipedia)

In IBM we talk about CAMS – the nexus of forces of Cloud, Analytics, Mobile and Social, and we split the IT into Systems of Record and Systems of Engagement. The Systems of Record are the traditional IT – the databases that we’re talking about moving from the legacy data centres to the cloud. And, as we’ve discussed, putting it into the cloud means that a lot of new analytics can happen here. With mobile and social we now have Systems of Engagement. The devices that interact with people and the world. The devices that, because of their fantastic processing power, can gather data that we’ve never had access to before. These devices mean that it’s really easy to take a photo of graffiti or a hole in the road and send it to the local council through FixMyStreet and have it fixed. It’s not just the processing power, it’s the instrumentation that this brings. We now have a GPS location so the council know exactly where the hole is. And of course this makes it a lot easier to send photos and even videos of Madeleine McCann to a photo analytics site.

We’re also working with Westminster council to optimise their parking. The instrumentation and communication from phones helps us do things we’ve never done before, but then we move onto the Internet of Things and putting connected sensors in parking spaces.

With connected cars we have even more instrumentation and possibilities. We have millions of cars with thermometers, rain detection, GPS and connectivity that can tell the Met Office exactly what the weather is with incredible granularity, as well as the more obvious solutions like traffic optimisation.

Moving on to skills. IBM has an Academic Initiative where we give free software to universities, work with them on the curriculum and even act as guest lecturers. With Imperial College we’re providing cloud-based marketing analytics software as well as data sets and skills, so that they can focus on teaching the subject rather than worrying about the IT. With computer science in school curriculums changing to be more about programming skills, we can offer cloud-based development environments like IBM Bluemix. We’re also working with the Oxford and Cambridge examination board on their modules for cloud, big data and security.

Classroom 010 (Photo credit: Wikipedia)

To be honest, it’s still hard. Universities are a competitive environment and they have to offer courses that students are interested in rather than ones that industry and the country need. IT is changing so fast that we can’t keep up. Lecturers will teach subjects that they’re comfortable with and students will apply for courses that they understand or that their parents are familiar with. A university recently offered a course on social media analytics, which you’d think would be quite trendy and attractive but they only had two attendees. It used to be that universities would teach theory and the ability to learn and then industry would hire them and give them the skills, but now things are moving so fast that industry doesn’t have the skills and is looking for the graduates to bring them.

Looking at the strategy of moving to the cloud, and the changing role of the IT department, we’re finding that by outsourcing the day to day running of the technology there is a change in skills needed. It’s less about hands on IT and more about architecture, governance, and managing relationships with third party providers. A lot of this is typically offered by the business faculty of a university, rather than the computing part. We need these groups to work closer together.

To a certain extent we’re addressing this with apprenticeships. IBM has been running an apprenticeship scheme for the last four years. This on-the-job training means that industry can give hands-on training with the best blend of up-to-the-minute technical, business and personal skills, and it has been very effective, with IBM winning Best Apprenticeship Scheme awards from Target National Recruitment Awards, National Apprenticeship Services and Everywoman in Technology.

In summary, we need to be looking at the new things that can be achieved by moving to cloud and shared services; exploiting mobile and the internet of things; and training for the most appropriate skills in the most appropriate way.

Using a Cloudant database with a BlueMix application

I wanted to learn how to use the Cloudant database with a BlueMix application. I found this great blog post Build a simple word game app using Cloudant on Bluemix by Mattias Mohlin. I’ve been working through it.


I’ve learned a lot from it – as the writer says “I’ll cover aspects that are important when developing larger applications, such as setting up a good development environment to enable local debugging. My goal is to walk you through the development of a small Bluemix application using an approach that is also applicable to development of large Bluemix applications.” So it includes developing on a PC and also setting up Cloudant outside of BlueMix.

So here’s my simplified version focusing purely on getting an application up and running using a Cloudant BlueMix service and staying in DevOps Services as much as possible.

The first step is to take a copy of Mattias’s code, so go to the GuessTheWord DevOps Services project.

Click on “Edit Code” and then “Fork”.


I chose to use the same project name GuessTheWord – in DevOps Services it will be unique as it’s in my project space.


This takes me into my own copy of the project so I can start editing it.

I need to update the host in the manifest file, otherwise the deployment will conflict with Mattias’s. In my case I change it to GuessTheWordGarforth, but you’ll need to change it to something else otherwise yours will clash with mine. Don’t forget to save the file with Ctrl-S or File/Save, or at least by switching to another file.


Now I need to set up the app and bind the database on BlueMix so I click on “deploy”. I know it won’t run but it will start to set things up.

At this point I logged onto BlueMix itself for the first time and located the new GuessTheWord in the dashboard.


I clicked on it and selected “add a service” and then scrolled down to the Cloudant NoSQL DB


and clicked on it. I clicked on “create” and then allowed it to restart the application. Unsurprisingly it still did not start, as there is more coding to do. However, the Cloudant service is there, so I clicked on “Show Credentials” and saw that the database has a username, password, URL and so on, so registration on the Cloudant site is not necessary as this is all handled by BlueMix.

Clicking on Runtime on the left and then scrolling down to Environment variables, I can see that these Cloudant credentials have been set up as VCAP_SERVICES environment variables for my app. So I just need to change the code to use these.

I switch back to DevOps Services and go to the server.js file to modify the code for accessing this database.

I change line 27 from
Cloudant = env['user-provided'][0].credentials;
to
Cloudant = env['CloudantNoSQLDB'][0].credentials;

So we’re referencing the service by its top-level key in VCAP_SERVICES, not by its name or label.
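For robustness, the lookup could handle both cases. This is a sketch of my own (not Mattias's code): try the key BlueMix uses for its provisioned Cloudant service first, then fall back to the user-provided key, so the same server.js runs whichever way the service was bound.

```javascript
// Defensive credentials lookup: prefer the Bluemix-provisioned key,
// fall back to 'user-provided', return null if neither is present.
function cloudantCredentials(env) {
  var services = env['CloudantNoSQLDB'] || env['user-provided'] || [];
  return services.length > 0 ? services[0].credentials : null;
}

var env = JSON.parse(process.env.VCAP_SERVICES || '{}');
var Cloudant = cloudantCredentials(env);
```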

Unfortunately there is also an error in Mattias’s code. I don’t know whether the BlueMix Cloudant service has changed since he wrote it, but he builds the URL for the database by adding the user ID and password to it, when actually these are already included in the url field of my environment variable.

so I change line 30 from

var nano = require('nano')('https://' + Cloudant.username + ':' + Cloudant.password + '@' + Cloudant.url.substring(8));
to simply
var nano = require('nano')(Cloudant.url);

Now save the file and click deploy. When it’s finished, a message pops up telling you to see the manual deployment information on the root folder page.


So I click on that and hopefully see a green traffic light in the middle.


Click on the GuessTheWord hyperlink and it should take you to the working game, which in my case is running at


However, there are still no scores displayed, as the database and its entries don’t exist yet.

I spent a long time trying to do this next part in the code but eventually ran out of time and had to go through the Cloudant website. If anyone can show me how to do this part in code I’d really appreciate it.
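For what it's worth, here is a hedged, untested sketch of how the setup might be done in code with nano. The design document mirrors the index built through the Cloudant dashboard below; `nano.db.create` and `db.insert` with a `_design/...` id are nano's documented API, but I haven't verified this end to end.

```javascript
// Build the design document for the top_scores secondary index.
function topScoresDesignDoc() {
  return {
    views: {
      top_scores_index: {
        // Serialise the map function so Cloudant can store and run it.
        map: function (doc) {
          emit(doc.score, { score: doc.score, name: doc.name, date: doc.date });
        }.toString()
      }
    }
  };
}

// Create the database (if needed) and install the design document.
function setupDatabase(nano, callback) {
  nano.db.create('guess_the_word_hiscores', function (err) {
    // Ignore errors here: the database may already exist, which is fine.
    var db = nano.use('guess_the_word_hiscores');
    db.insert(topScoresDesignDoc(), '_design/top_scores', callback);
  });
}
```

Calling `setupDatabase(nano, ...)` once at app startup would make the deployment self-contained.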

So for now, go to the GuessTheWord app on BlueMix and click on the running Cloudant service


From here you get to a Launch button


Pressing this logs you on to the Cloudant site using single sign-on.


Create a new database named guess_the_word_hiscores. Then click the button to create a new secondary index. Store it in a document named top_scores and name the index top_scores_index. As Mattias says, the map function defines which objects in the database are categorised by the index and what information we want to retrieve for those objects. We use the score as the index key (the first argument to emit), then emit an object containing the score, the name of the player, and the date the score was achieved. Following is the JavaScript implementation of the map function, which we need to add before saving and building the index.

function(doc) {
    emit(doc.score, {score : doc.score, name : doc.name, date : doc.date});
}

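Once the index exists, the app can read it back. Here's a hedged sketch (not taken from Mattias's server.js) of querying it with nano; `descending` and `limit` are standard CouchDB view options.

```javascript
// Fetch up to `limit` scores from the secondary index, highest first.
function topScores(db, limit, callback) {
  db.view('top_scores', 'top_scores_index', { descending: true, limit: limit },
    function (err, body) {
      if (err) { return callback(err); }
      // Each row's value is the object emitted by the map function.
      callback(null, body.rows.map(function (row) { return row.value; }));
    });
}
```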

Again, we should really be able to do this as part of the program startup, but for now the following should add an entry to the database, replacing guessthewordgarforth in the URL with the host name you chose for your application:

You should see a success message. Enter the following URL, again replacing guessthewordgarforth with your application host name.

The entry you just added should appear encoded in JSON e.g.


So, the code and the database are working correctly. Now it just remains to play the game. Go to

(replacing guessthewordgarforth with your hostname)

This time it will include Bob in the high score table


and click on “Play!”


Cloud computing trends in the UK: IaaS, PaaS & SaaS

This post was originally published on ThoughtsOnCloud on June 17th, 2014.

I’ve been a cloud architect for the last three years or so and have seen dramatic changes in the IT industry and its view of cloud. I’ve also observed different attitudes to cloud in different industries and countries.

I took on the cloud architect role because I saw that customers were asking about cloud, but they all had different ideas of what this meant. Depending on whom they spoke to first they could think it was hosted managed software as a service, or they could think it was on-premise dynamic infrastructure—or many other permutations between. My job was created to talk to them at the early stage, explain the full scope of what it means, to look at their business requirements and workloads and align them to the most appropriate solution.

Three years later you would hope that it’s all a lot clearer and in many ways it is, but there are still preconceptions that need to be explained, and also the cloud technologies themselves are moving so rapidly that it’s hard enough for people like me to stay abreast of it, let alone the customers.

To begin, I noticed some fairly obvious differences, many of which still hold. The large financial institutions wanted to keep their data on premise, and they had large enough IT departments that it made sense for them to buy the hardware and software to effectively act as cloud service providers to their lines of business. Some investment banks saw their technology as a key differentiator and asked that I treat them as a technology company rather than a bank, so they didn’t want to give away the ownership of IT; the attributes of cloud that they were looking for were standardisation, optimisation and virtualisation.

On the other hand, I was working with retail companies and startups who saw IT as an unnecessary cost, a barrier to their innovation. They saw cloud as a form of outsourcing, where a service provider could take on the responsibility of looking after commodity IT and let them focus on their core business.

A third industry is government and public sector. This is very different in the UK to other countries. In the United States, the government is investing in large on-premise cloud solutions, and this avoids many of the security and scalability issues. In the UK, with a new government following the global financial crisis, there is an austerity programme, which led to the Government ICT Strategy and Government Digital Strategy and the announcement of the Cloud First Policy. This requires that government bodies use hosted, managed cloud offerings, as well as encouraging the use of open source and small British providers.

The British Parliament and Big Ben (Photo credit: ** Maurice **)

Our health sector is also very different to the U.S., with our public sector National Health Service being one of the largest employers in the world, whereas in the U.S. health has much more of an insurance focus.

Over the years in all industries there has been a lot of fear, uncertainty and doubt about the location of data and whether or not there are regulations that make this an issue. I’m glad to say that we’ve now worked through a lot of this and it’s a lot clearer to both the providers and the consumers.

In practice most of the cloud investment that happened was infrastructure as a service (IaaS). Much of this was private cloud, with some usage of public cloud IaaS.

We used to have a lot of interest from customers, whether meteorological or academic research, looking for high performance computing clouds. This made a lot of sense, as the hardware required for this is very expensive and some customers only need it for short periods of time, so having it available on a pay-as-you-go basis was very attractive. Last year, IBM acquired SoftLayer, which includes bare metal IaaS as well as virtualised. This makes HPC cloud more attainable, and with it has come a shift in the perception of cloud away from virtualisation and closer to the hosted, utility-based pricing view.

The big change this year is the move from IaaS to platform as a service (PaaS). With the nexus of forces of mobile devices (phones, tablets, wearable devices, internet of things), social media generating large amounts of unstructured data, and high performance broadband, there is a new demand and ability to deliver cloud based mobile apps connecting and exploiting data from multiple sources. This reflects a shift in the IT industry from the systems of record, which store the traditional, fairly static, structured data, to the new systems of engagement, which are much more about the dynamic customer interface and access to the fast changing data.

Developers are becoming key decision makers. They often work in the line of business and want to create business solutions quickly, without the blocker of the traditional IT department. Using IaaS and DevOps to optimise the speed to market of business solutions has been the first step in this. Now customers are looking to PaaS to give them immediate access to the whole software development environment: the infrastructure as well as the middleware necessary for developing, testing and delivering solutions quickly and reliably with minimal investment. This also includes the new open source middleware and polyglot languages.

Finally, SaaS. We are talking to many companies, public sector bodies, and education establishments, who want to become entirely IT free. They don’t want a data centre and they don’t want developers. This requirement is now becoming achievable as IBM and others are committed to making a significant proportion of their offerings available as SaaS solutions. Of course, this brings new challenges around hybrid cloud integration and federated security.

Do my views of the trends in UK cloud align to yours? I’d love to see your comments on this.

It’s all about the speed: DevOps and the cloud

This post was originally published on ThoughtsOnCloud on April 29th, 2014.

As I explained in my earlier blog post, “Cloud accelerates the evolution of mankind,” I believe that cloud has the power to change the world. It achieves this by giving us speed—and this has an exponential effect.

When I first became a professional software developer in the late 1980s, we spent two years designing and writing a software product that our team thought was what the market wanted. We went through a very thorough waterfall process (design, code, unit test, functional verification test, system test, quality assurance) and eventually released the product through sales and marketing. This cost us millions and eventually sold 14 copies.

More recently we’ve begun to adopt lean principles in software innovation and delivery to create a continuous feedback loop with customers. The thought is to get ideas into production fast, get people to use the product, get feedback, make changes based on the feedback and deliver the changes to the user. We need to eliminate any activity that is not necessary for learning what the customers want.

Speed is key. The first step was to move from waterfall to a more iterative and incremental agile software development framework. After that, the biggest delay was the provisioning of the development and test environments. Citigroup found that it took an average of 45 days to obtain space, power and cooling in the data center, have the hardware delivered and installed, have the operating system and middleware installed and begin development.  Today, we replace that step with infrastructure as a service (IaaS).

The next biggest delays in the software development lifecycle are the handovers. Typically a developer will work in the line of business. He will write, build and package his code and unit test it. He then needs to hand it over to the IT operations department to provide a production-like environment for integration, load and acceptance testing. Once this is complete the developer hands it over completely to the operations department to deploy and manage it in the production environment. These handovers inevitably introduce delay and also introduce the chance for errors as the operations team cannot have as complete an understanding of the solution as the developer. Also there will be differences between each environment and so problems can still arise with the solution in production.

By introducing a DevOps process, we begin to merge the development and operations teams and give the power to the developer to build solutions that are stable and easy for IT operations to deliver and maintain. Delivery tasks are tracked in one place, continuous integration and official builds are unified and the same deployment tool is used for all development and test environments so that any errors are detected and fixed early. With good management of the deployable releases, development can be performed on the cloud for provisioning to an on-premises production environment or the reverse; solutions can quickly be up and running in the cloud and as their usage takes off it may prove economical to move them on premises.

Of course there is risk in giving the power to the developer. The handover delays happen for a reason—to ensure that the solution is of sufficient quality to not break the existing environment. This is why the choice of tooling is so crucial. The IBM UrbanCode solution not only automates the process but provides the necessary governance.


As I discussed in my blog post “Cloud’s impact on the IT team job descriptions,” introducing IaaS to this means that the role of the IT operations department is reduced. They may still be needed to support the environment, especially if it is a private cloud, but the cloud gives self service to the developer and tester to create and access production-like environments directly. It brings patterns to automatically create and recreate reproducible infrastructure and middleware environments.

In my next post I will discuss the next step that can be taken to increase the speed of the development cycle: platform as a service (PaaS). I’d love to hear what you think the benefits of DevOps with the cloud are, and other ways to accelerate delivery. Please leave a comment below.


Cloud With DevOps Enabling Rapid Business Development

My point of view on accelerating business development with improved time to market by using lean principles enabled by devops and cloud.

The Roadmap to Cloud by Sam Garforth

An abridged version of this was first published on Cloud Services World as “Transition vs Transformation: 8 factors to consider when choosing the best route to cloud” in July 2013

Many companies have arrived at cloud naturally over the years through data centre optimisation. Traditional IT had separate servers for each project and application. We then had consolidation of the physical infrastructure and virtualisation to allow multiple operating systems and application stacks to run on the same server, plus shared application servers and middleware. Standardisation and SOA allowed the monolithic applications to be broken up into functions that could be reused, shared and maintained independently. By using patterns and putting the application stacks into templates, automation enabled the self-service, elasticity and the dynamic workload provisioning benefits of private cloud.


Now that the functions have been sufficiently encapsulated and commoditised we are seeing more and more customers moving them onto a public cloud and perhaps linking them back to their core existing business as a hybrid cloud. Some companies are teaming together and sharing services in a club cloud.

However you don’t have to go through all these steps in order. It is possible to migrate workloads straight to the cloud, but it’s important to do this using a carefully considered methodology.

According to KPMG’s Global Cloud Provider’s Survey of 650 senior executives, the number one challenge of their adoption of cloud is the high cost of implementation/transition/integration. Steve Salmon of KPMG said “Implementation and integration challenges are critically important to overcome and can threaten both the ROI and business benefits of cloud.”

To get the most benefit from moving to the cloud it is critical that you understand your current portfolio of applications and services and align them with a cloud deployment strategy. Begin by looking at the strategic direction of your company. Next, analyse the business applications and infrastructure components and create a prioritised list of suitable workloads for migration to the cloud, along with an analysis of the potential costs and migration impacts. Then look at your existing environment and determine an appropriate cloud computing delivery model, e.g. private, public, hybrid or community. Define the architectural model and perform a gap analysis, build the business case and then implement based on a roadmap.

Application Migration Assessment

An assessment needs to be carried out against each application, or group of applications, weighing the benefit and effort of moving to the cloud. This will form a roadmap/prioritisation of which applications to move in which order. A typical application migration roadmap would be based on the following chart, which shows risk/ease of migration against gain. In terms of timing, it is recommended to start migrating the apps in the top right corner of the diagram and end with the ones in the bottom right.
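To make the prioritisation idea concrete, here is a toy sketch. The gain/ease scores (0-10) and the quadrant ordering are invented for illustration, not taken from the chart; a real assessment would weigh many more factors.

```javascript
// Classify an application into a migration quadrant by two invented
// scores: gain (business benefit) and ease (inverse of migration risk).
function migrationPriority(app) {
  var highGain = app.gain >= 5;
  var easy = app.ease >= 5;
  if (highGain && easy) { return 1; }  // quick wins: migrate first
  if (easy) { return 2; }              // easy but modest benefit
  if (highGain) { return 3; }          // worthwhile but needs planning
  return 4;                            // hard and low value: migrate last
}

// Order a portfolio quadrant by quadrant to form a rough roadmap.
function roadmap(apps) {
  return apps.slice().sort(function (a, b) {
    return migrationPriority(a) - migrationPriority(b);
  });
}
```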


Transition vs. Transformation

When considering whether to move an application to the cloud, it is important to consider both the business purpose of the application and the role that application plays in supporting business and IT strategies. This context is important for considering whether to transition an application to a cloud environment, or whether to rearchitect or “transform” the application – and if so, how to do it.

Transition, commonly referred to as the “lift and shift” model, is applied to situations where the application runs as-is, or with minimal changes to the architecture, design or delivery model, to accommodate cloud delivery. For example, an application with no duplication of functionality and that supports current performance and security requirements would be a good candidate to transition to a cloud. The transition of such an application typically includes:

  • Selecting a private or public cloud environment to host the application.
  • Provisioning and configuring the Infrastructure-as-a-Service and the Platform-as-a-Service needed to deploy the application.
  • Provisioning the application to deliver built-in cloud characteristics such as monitoring, metering, scaling, etc.

When identifying enterprise applications for transition, there are a number of factors to consider:

  • Business model – Business services and capabilities should be separated from the underlying IT infrastructure on which they run.
  • Organisation – Enterprise IT governance should be well established. Service usage is tracked, and the service owner and provider are able to reap paybacks for the services developed and exposed by them.
  • Methods and architecture – The application architecture should support service-oriented principles and dynamically configurable services for consumption within the enterprise, by its partners and throughout the services ecosystem.
  • Applications – The application portfolio should be structured so that key activities or steps of business processes are represented as services across the enterprise.
  • Information – The application should use a common business data vocabulary. This enables integration with external partners, as well as efficient business process reconfiguration.
  • IT infrastructure – Services can be virtualised such that any given instance may run on a different and/or shared set of resources. Services can be discovered and reused in new, dynamic ways without a negative impact on the infrastructure.
  • Operational management – Service management incorporated into the application design addresses demand, performance, recoverability and availability. It also tracks and predicts changes to reduce costs and mitigate the impact on quality of service.
  • Security – Good application and network security design supports both current and future enterprise security requirements.

In some cases, business and IT objectives and conditions warrant larger, more comprehensive changes to an application that is moving to the cloud than are possible under the transition approach. Transforming existing applications involves rearchitecting and redesigning the application to be deployed in either a private or public cloud environment. This path involves the application being redesigned to fit a more open computing model, for example to accommodate service-oriented architecture (SOA), exposed APIs or multi-tenancy. An SOA application model is valuable in that it allows for integration across custom and packaged applications as well as data stores, while being able to easily incorporate business support services that are needed for cloud deployment.

Typically, transforming applications for a cloud environment involves the same set of criteria as transitioning them, but under different conditions. Often, applications targeted for transformation are tightly coupled with enterprise legacy systems and do not meet current security, availability and scalability requirements. The situational factors that support the transformation decision include:

  • Business model – In an application that is a candidate for transformation, business services tend to be isolated, with each line of business maintaining its own siloed applications and minimal automated data interaction or process integration between the silos. By transforming the application for cloud delivery, the organisation can extend the business value of the service or application to other lines of business (LOBs) and partners.
  • Organisation – When each business unit owns its own siloed applications, it defines its own approach, standards and guidelines for implementing, consuming and maintaining application-delivered services. These may not align well with the needs of the organisation as a whole.
  • Methods and architecture – In siloed applications there is no consistent approach for developing components or services. LOBs tend to throw requirements “over the fence” to the IT organisation, which then develops solutions without feedback from the business. The application architecture is typically monolithic and tightly coupled, with minimal separation between the presentation, business logic and database tiers. Often integration is mostly – or in some cases only – point-to-point.
  • Applications – Usually, portfolios of discrete applications take minimal advantage of service-oriented architecture concepts, and business processes are locked in application silos.
  • Information – Information sharing tends to be limited across separated applications. Data formats are often application-specific, and the relatively inefficient extract-transform-load (ETL) process is the primary means of information sharing between applications.
  • IT infrastructure – Platform-specific infrastructures are maintained for each application, and infrastructure changes have had to be put in place to accommodate service orientation, where it exists.
  • Operational management – Service management functionality, such as monitoring and metering to manage enterprise business applications and/or services, is either not supported at all or supported only to a limited extent.
  • Security – Enterprise application and network security enhancements are required in transformation candidates to meet current and future security requirements.

After selecting the appropriate cloud delivery model (private or public), deciding whether to transition or transform an existing enterprise application is important to help ensure a successful move to cloud. When resources are limited, an enterprise can choose to run transformation in parallel with transition, meeting short-term needs while planning for longer-term, best-in-class performance through application transformation.
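
The transition-versus-transformation decision described above can be sketched as a small rule of thumb. The attribute names are illustrative assumptions, not part of any formal method; they capture the signals the preceding sections highlight (legacy coupling, security and scalability gaps):

```python
# Hypothetical sketch of the transition-vs-transformation decision.
# Attribute names are illustrative only.

from dataclasses import dataclass

@dataclass
class AppProfile:
    tightly_coupled_to_legacy: bool  # coupled to enterprise legacy systems?
    meets_security_reqs: bool        # current security requirements met?
    meets_scalability_reqs: bool     # availability/scalability requirements met?

def migration_path(app: AppProfile) -> str:
    """Recommend 'transition' (lift and shift) or 'transform' (rearchitect)."""
    if app.tightly_coupled_to_legacy or not (
        app.meets_security_reqs and app.meets_scalability_reqs
    ):
        return "transform"
    return "transition"

print(migration_path(AppProfile(False, True, True)))  # → transition
print(migration_path(AppProfile(True, True, True)))   # → transform
```

A real assessment would of course weigh many more factors, but the shape of the decision is the same: the further an application sits from cloud-ready characteristics, the stronger the case for transformation over transition.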

Consider Engaging a Third Party

Enterprise-wide cloud implementation can be a challenging process. Operating under ever-tightening budgets, IT staff typically spend most of their resources simply maintaining existing server environments. Even organisations capable of building their own clouds often find, as they emerge from the testing stage, that they would benefit from outside management support. Because of these challenges, organisations must think carefully about how best to source their cloud technologies. In making a sourcing decision, they should keep in mind business design, necessary service levels and deployment models. The question of who will migrate, integrate and manage applications must also be addressed. After considering these issues, many organisations turn to third-party technology partners for help with enterprise cloud endeavours. Look for a provider that offers its clients a choice in scope, service levels and deployment, with deep expertise in strategic visioning and cloud migration.

To protect the client’s cloud infrastructure, technology partners should provide multiple security and isolation choices, robust security practices and hardened security policies proven at the enterprise level. Security procedures should be transparent, providing visibility into cloud activities and threats. Data monitoring and reporting must be offered. Since business needs are ever-evolving, the best cloud partners also offer a full portfolio of associated services in the fields of security, business resiliency and business continuity. Finally, the client organisation should be able to maintain control over its cloud environment. Technologies that provide this type of control include online dashboards, which can allow the client organisation to manage the cloud environment from virtually anywhere in the world. 

Example Roadmap

An enterprise can choose to run the two approaches in parallel, transitioning some applications to meet short-term needs while planning for longer-term application transformation.

Middlesex University has embarked on the journey to cloud. The university needed to reduce its number of machines from around 250 to 25, cut electricity usage by 40%, and shrink its physical space requirements from approximately 1,000 square feet to 400. Steve Knight, Deputy Vice-Chancellor, explained, “We need a system that allows flexibility according to our changing requirements. We were looking for a platform solution to complement our longer term plans to achieve a dynamic infrastructure.”

A popular roadmap, similar to the journey Middlesex is on, is:

  • Begin to consolidate, rationalise and virtualise the existing on-premises infrastructure into a private cloud. Perhaps install a modern expert integrated system that allows you to start consolidating key applications onto a smaller and more economical estate. This should be done gradually so as not to affect current service delivery. Applications and images on the private cloud should be portable to a public cloud at a later date.
  • Start experimenting with a hosted unmanaged cloud, running new development and test workloads.
  • Look to a managed service provider (MSP) to manage your on-premises infrastructure and/or private cloud. Start using the cloud provider for disaster recovery and backup.
  • Eventually look to move everything to a flexible hosted managed cloud.

In summary, a blended approach is needed: choose whether to cloud-enable existing applications or to write new cloud-native applications, depending on the characteristics of the applications and their integration requirements.

