What is OpenTelemetry?

Originally published here on August 18, 2022.

OpenTelemetry is a collection of tools, APIs, and SDKs for collecting, processing, and exporting telemetry data from software. It is used to instrument applications for performance monitoring, logging, tracing, and other observability purposes.

What is telemetry? The word is derived from the Greek “tele,” meaning “remote,” and “metron,” meaning “measure.” So, telemetry is the collection of metrics and their automatic transmission to a receiver for monitoring.

So, what is OpenTelemetry? OpenTelemetry is an open-source project and unified standard for service instrumentation and measurement, formed from the merger of the OpenTracing and OpenCensus projects, which it replaces.

It can be hard to track what is going on in a distributed environment, and OpenTelemetry simplifies this by allowing for much greater observability across the entire breadth of the system, whether a mainframe or distributed, data center or cloud.

What Problem Does OpenTelemetry Seek to Solve?

Observability is a technology buzzword that is often bandied around without much context or thought as to its meaning, but in control theory, “observability” measures how well we can understand the internal workings of a system using only what it outputs externally.

Anyone who has ever operated or deployed a modern, microservice-based software application will have struggled to understand at least a few aspects of its performance and behavior. This is because the “outputs” from the system are generally substandard and don’t go into the required level of detail.

It’s impossible to make head or tail of a complicated system if it’s a sealed black box. The only method we have that can shine a light on those black boxes is high-quality telemetry.

OpenTelemetry is a cross-platform and open-source solution designed to be used with any language or runtime. OpenTelemetry has official support for many popular programming languages and platforms and is backed by some of the biggest names in the software industry who have cooperated toward this becoming a standardized method of data reporting.

OpenTelemetry consists of two main components: the Software Development Kit (SDK) and the Collector. The SDK is used to instrument your application’s code so that it generates the requisite telemetry data. The Collector then processes that data and exports it in a format that backend services can more easily consume.
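
To make this concrete, here is a minimal sketch of SDK instrumentation using the OpenTelemetry Python SDK. The span name, attribute, and console exporter are illustrative choices, not prescriptions; in practice you would swap in the exporter for your backend of choice.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up the SDK: a tracer provider plus a processor that batches
# finished spans and hands them to an exporter (stdout, for demonstration).
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)

tracer = trace.get_tracer("example.instrumentation")

# Wrap a unit of work in a span; "process-order" and the attribute
# are hypothetical names used purely for illustration.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")
    # ... business logic goes here ...
```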

You can use an OpenTelemetry Collector to send your data to an extremely wide range of backend services for further processing, analysis, or just collection and storage. Some of these backend services include Prometheus, Jaeger, Honeycomb, Datadog, New Relic, Zipkin, and of course, the Nastel i2M Platform components of XRay and AutoPilot.
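
As a hedged sketch of how that fan-out is configured, a Collector pipeline that receives OTLP data from instrumented applications, batches it, and exports it to two of those backends might look like the following; the endpoint addresses are hypothetical placeholders, and exact exporter names vary by Collector version and distribution.

```yaml
receivers:
  otlp:
    protocols:
      grpc:          # applications send OTLP over gRPC to the Collector

processors:
  batch:             # batch telemetry before export to cut network overhead

exporters:
  otlp/jaeger:
    endpoint: jaeger.example.internal:4317   # hypothetical Jaeger address
  prometheus:
    endpoint: 0.0.0.0:8889                   # scrape endpoint for Prometheus

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```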

Why is OpenTelemetry Important for Middleware?

Until the advent of OpenTelemetry, every technology provider had their own custom way of trying to achieve the same thing. Sometimes in an industry, there is an opportunity for convergence of ideas and technologies to make life easier for everyone involved. This is what OpenTelemetry represented, and that is why it has been an important evolution in middleware management, interoperability, and data collection.

OpenTelemetry is important for middleware because it provides a unified way to collect data from distributed and disparate systems. The data that is collected by OpenTelemetry can be used to troubleshoot issues, understand any unusual performance bottlenecks, and monitor the health of your system more generally.

OpenTelemetry is designed to work alongside various middleware solutions, such as message queues, databases, and web servers (e.g., IBM MQ, Apache Kafka, WebSphere, CP4I). This makes it easier to instrument your applications and get much greater visibility into how they are performing across the board.
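
As one hedged illustration of how little code that instrumentation can take, OpenTelemetry publishes auto-instrumentation packages for common middleware clients. The sketch below assumes the opentelemetry-instrumentation-kafka-python package and the kafka-python client; the broker address and topic name are hypothetical.

```python
from kafka import KafkaProducer
from opentelemetry.instrumentation.kafka import KafkaInstrumentor

# Patch kafka-python so produced (and consumed) messages automatically
# emit spans and carry trace context to downstream consumers.
# Assumes a tracer provider has been configured, as in the earlier example.
KafkaInstrumentor().instrument()

# Hypothetical broker and topic, for illustration only.
producer = KafkaProducer(bootstrap_servers="kafka.example.internal:9092")
producer.send("orders", b"order-12345 created")
producer.flush()
```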

In a distributed environment, it can be complicated to get a handle on what is going on system-wide. This is where OpenTelemetry can help you get your hands on the data that you need to observe and understand what is going on.

This data is valuable but time-sensitive: it is only relevant for a short period unless it is being aggregated into a bigger picture to investigate and understand a larger pattern of events.

How Can Nastel Help?

Nastel Technologies is a leader in the field of transaction tracking and observability and offers a range of products designed to enhance the ability to observe your transactions in real time. The Nastel XRay software can receive OpenTelemetry data as well as other telemetry data and combine it to give full end-to-end observability. It can also filter and combine data to pass more sophisticated information to other OpenTelemetry Collectors for processing.

Nastel’s XRay software is available on a wide cross-platform basis and offers many detailed dashboards and displays that make it clear what is happening in real time. There is also a single pane of glass view that offers the ultimate in observability, giving users the chance to see everything that is going on throughout their entire application stack.

Transaction Tracking vs. Transaction Tracing

OpenTelemetry provides the basics of transaction tracing, recording the steps taken by a piece of data throughout its journey until it reaches its destination. Nastel enhances this into transaction tracking. Although the terms are very similar, they should not be confused, as one is more complex and reliable than the other.

Transaction tracing, as exemplified by OpenTelemetry, is a good start, but transaction tracking builds upon that start, adding big data analysis to “stitch together” the path of the data and track it every single step of the way.

Nastel’s use of big data analytics to track the transaction throughout the system means that the entire pathway can be understood and checked against the historical record. Big data allows us to predict with a high degree of certainty which routes a particular transaction is likely to take and to be aware of any changes in system processes and logs while this happens.

XRay can overlay system performance data on top of the transaction data so we can see in real time if the transaction tripped an error state in any particular part of the journey or caused performance issues.

Understanding all of the factors that can affect a transaction is an important facet of gaining a full understanding of the entire journey through the system. This underlines the advantage offered by Nastel Technologies and the XRay system, in particular its superior tracking technology.

What’s New In IBM MQ 9.3

Originally published here on June 20, 2022.

The latest long-term support (LTS) and continuous delivery (CD) release of IBM MQ becomes available for the distributed platforms on June 23, 2022.

IBM MQ 9.3 again focuses on securely powering cloud-native applications across the hybrid multi-cloud, as well as making it easier to get started.

The MQ 9.3 release includes:

  • An improved getting-started experience.
  • Simplified cloud-native deployment and improved high availability, scalability, and resilience.
  • The ability to tap into existing data with streaming queues – see our blog post on this here, and the MQSC sketch after this list.
  • Enhanced administration with the MQ Console, whilst the MQ Explorer has been removed from the IBM MQ install package.
  • Security enhancements that build on the existing robust security mechanisms in MQ.
  • Enhancements to IBM MQ Advanced Managed File Transfer capabilities – see our page on MFT support here.
  • Updates to the supported platforms, i.e., Adoptium and IBM Semeru Java 11, Oracle and Adoptium Java 17, Windows 11, JMS 3.0, and .NET 6 support.
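
As a hedged MQSC sketch of the streaming queues feature mentioned above, the commands below define a local queue whose messages are also delivered to a second queue; the queue names are hypothetical, and STRMQOS(BESTEF) requests best-effort duplication (MUSTDUP being the stricter alternative).

```
* Define the stream queue that will receive the duplicated messages.
DEFINE QLOCAL(ORDERS.COPY)

* Define the original queue; every message put here is also
* streamed to ORDERS.COPY on a best-effort basis.
DEFINE QLOCAL(ORDERS) STREAMQ('ORDERS.COPY') STRMQOS(BESTEF)
```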

Nastel has been working with the development team on this technology and analyzing the impact and benefits to our customers as part of our own cloud and container initiatives.

Nastel products support this new version of MQ and its new features. For further information on the Nastel solution, please read here.

For the official IBM press release, see here.

Nastel Announces Secure Self-Service for Solace PubSub+ As Part of New Integration Infrastructure Management Release

Originally published here on March 24, 2022.

Enterprises Relying on IBM MQ, Kafka, TIBCO EMS, Solace PubSub+, and Other Integration Infrastructure (i2) Are Adopting Nastel Navigator for Integration Infrastructure Management (i2M)

Nastel Technologies today announced the immediate availability of Nastel Navigator 10.4, the world’s leading integration infrastructure administration and secure self-service configuration management solution. Nastel Navigator is a key component of the Nastel i2M Platform, Nastel Navigator X, and is used by middleware, operations, DevOps, and application teams.

This latest release features administration, automation, and self-service management support for Solace PubSub+, an integration infrastructure (i2) event streaming platform for event-driven architectures across hybrid cloud, multi-cloud, and IoT environments, available in appliance, software, or cloud form. “When customers began investing in Kafka to augment their IBM MQ estates, Nastel responded with the industry’s best i2M solution for Kafka. Now we’re doing it again for enterprises with Solace PubSub+,” said Richard Nikula, VP of R&D at Nastel.

“Most of the world’s largest banks rely on Nastel’s i2M platform to manage, monitor, and gain intelligence from their complex integration layer,” said Steven Menges, Nastel’s VP of Product Management and Advisory Board Director. “Adding enhanced i2 management and development self-service for Solace PubSub+ to existing capabilities for Kafka, TIBCO EMS, IBM’s MQ, IIB, and ACE brings huge productivity gains to enterprises with Solace as part of their integration stack.”

A key component of this release is a modernized Workgroup Security Manager (WSM), which includes security enhancements and more efficient import and management of groups and users to meet the needs of the very largest global enterprises. “Highly granular security made even simpler to use” was requested recently by Nastel’s Customer Advisory Board and Navigator 10.4 has quickly delivered it.

CTO Albert Mavashev added, “Nastel is focused on helping customers achieve their speed-to-market requirements for new applications and updates for all middleware.”

The new release also includes the following features and functionalities to make Middleware, i2 Operations, and Service Delivery teams even more efficient:

• Enhanced Dashboards and Filtering Efficiencies

• Web GUI: Enhanced Support for IBM MQ Streaming Queues
• Support for the Confluent Kafka add-ons for Schema Registry, Replicator, and ksqlDB to facilitate CI/CD pipelines
• Expanded TIBCO EMS support, including TIBCO JNDI Connection Factories.
• Performance Enhancements for the IBM MQ Agent & Connection Manager
• Nastel Platform RAS (Reliability, Availability, Serviceability) Enhancements

Related Resources:

• i2M Solutions for Solace PubSub+
• i2M Solutions for Kafka
• Navigator Administration & Self-Service

Read more in the complete press release here.

What is IBM Cloud Pak for Integration?

Originally published here on March 10, 2022.

IBM Cloud Pak for Integration (CP4I) is a platform that helps you quickly and easily integrate your hybrid cloud applications with the systems and applications that are important for running your business. It can help the different application teams and business units in your organization collaborate and ensure that they are working together at maximum efficiency. IBM Cloud Pak for Integration is designed around simplifying the way you connect to the IBM Cloud and to other external clouds, and their applications and systems.

It includes a variety of integration tools and services that can help you connect your cloud applications with on-premises systems, software-as-a-service (SaaS) applications, and other cloud services. The integration tools include MQ, API Connect, App Connect Enterprise, DataPower, Event Streams (Kafka), and Aspera. See here for more on this. IBM Cloud Pak for Integration is an important part of the IBM cloud strategy.

Migrating to IBM Cloud Pak for Integration has never been easier with Nastel Navigator X which offers advanced “single pane of glass” oversight, providing monitoring of all transactions and middleware in CP4I and beyond.

CP4I is a container-based IBM middleware solution that enables you to integrate your applications quickly and easily with other IBM and non-IBM applications, regardless of where in the world they are hosted and whether on-premise or on any cloud. Nastel Navigator X for CP4I reduces business risks, clarifies visibility, and enhances alerting systems to speed up business developments while at the same time saving money and increasing accountability, governance, and security.

The use of cloud-based computing power and applications means that even thin clients can utilize advanced software solutions that they may not previously have had the computing power to run on their own. This allows less expensive equipment to be distributed to run the necessary applications, while the more expensive hardware is kept either securely on-site or entirely cloud-based rather than in a physical location owned by your company. Cloud resources are then used as and when needed rather than being purchased outright, which can be an excellent business cost saving, particularly across larger and more complex companies.

Below are just some of the advantages of IBM Cloud Pak for Integration:

  • Rapid integration: With IBM Cloud Pak for Integration, you can quickly and easily integrate your applications with other IBM and non-IBM applications, regardless of where they are hosted. IBM Cloud Pak for Integration includes a wide range of pre-built connectors to popular applications and services, so you can get up and running quickly.
    Seamless integration makes for a much more joined-up experience for all concerned. The chance to use pre-built connectors also ensures that time isn’t wasted on reinventing the wheel and that solutions can be deployed far more quickly than they otherwise would have been.
  • Greater flexibility: IBM CP4I gives you greater flexibility to integrate your applications with the systems and data that are important to you. IBM CP4I supports a wide range of integration scenarios, including both synchronous and asynchronous integrations. And, because CP4I can be cloud-based, you can access it from anywhere, anytime.
    The ability to access cloud-native applications is a massive boon offered by the Cloud Pak, and this gives technical teams more room to maneuver so they can balance the needs of onsite and offsite teams more seamlessly than ever before. This will become ever more important due to the increased prevalence of hybrid or remote teams in the wake of the Covid pandemic.
  • Lower costs: IBM Cloud Pak for Integration can help you lower your costs by enabling you to reuse existing applications and services, rather than building them from scratch. IBM Cloud Pak for Integration also helps you to optimize your application infrastructure, so you can run your applications more efficiently and require less processing power to do so, making cost savings in the process.
  • Data Security: One of the major advantages of using IBM Cloud Pak is that all of the data is transferred using enterprise-grade encryption, meaning that wherever your team members are based, they will all be able to access the right data streams and tools, securely and safely from their device.

Why choose Nastel Technologies?

At Nastel Technologies we aim to make the whole process as smooth and seamless for our clients as possible. We specialize in the development and rollout of middleware for applications to connect to APIs and software platforms and utilize their capabilities to solve the problems that you are experiencing. As certified business partners of IBM, Red Hat, AWS and Microsoft, we have the experience and expertise your organization needs to optimize and streamline all your middleware requirements for smoother operation and security in the hybrid cloud and legacy environments.

We offer IBM Cloud Pak tools and IBM Cloud Pak for Integration tools with enhanced drill-down options and greater granularity than ever before, over and above the native CP4I tools, and expand the visibility beyond the IBM technologies. You will have the power to offer application teams the ability to visualize and modify their environment with no risk to other ongoing work in different teams across the organization. Monitoring of system and business performance, queries, and transaction rates across the board is simplified, making it easier to avert problems before they begin.

See here for more detail on the Nastel solution for Cloud Pak for Integration.

Hybrid Multi-Cloud Demands Holistic Observability

Originally published here on October 21, 2021.

As I said before, Speed is King. Business requirements for applications and architecture change all the time, driven by changes in customer needs, competition, and innovation and this only seems to be accelerating. Application developers must not be the blocker to business. We need business changes at the speed of life, not at the speed of software development.

Thirteen years ago much of my time was spent helping customers rearchitect their monolithic applications into service-oriented architectures (SOA). We broke the applications into self-contained, reusable, stateless business functions which could be managed, modified, tested, reused, or replaced individually without impacting the rest of the application. This enabled business agility, reducing cost and improving time to market.

Over the last ten years or so, companies have been exploring the benefits of cloud. Initially, this was also quite monolithic, i.e. lifting and shifting large applications onto outsourced virtual servers or outsourcing email or CRM to web-hosted providers. However, more recently we’ve started to see newer applications being developed with finer-grained components stored in the most appropriate place. Confidential data may be stored on-premise, open APIs can enable public lookup services, and the CRM can be integrated with the marketing functions and the internal cloud-hosted product warehouse. A single business application is typically spread across a hybrid cloud of different cloud providers and on-premise services.

These applications are at risk of sprawling so much that no one knows how they work or how to maintain them, and they will become a blocker to innovation again. The drivers behind SOA are here again. Applications need to be designed as combinations of smaller self-contained, loosely coupled, stateless business functions or services.

Management of these services is even more powerful than it was with SOA. They can be quickly pushed out by DevOps, moved between clouds, and they can be ephemeral, serverless, and containerised.

So, a single business transaction can flow across many different services, hosted on different servers in different countries, sometimes managed by different companies. The services are being constantly updated, augmented, and migrated in real time. The topology of the application can change significantly on a daily basis.

It’s amazing that we’ve had so many software development innovations to enable this rapid innovation for our lives. But what happens if something goes wrong? What happens when a transaction doesn’t reach its target? Who knows what the topology is supposed to look like? How can it be visualised when it’s constantly morphing? Who knows where to go to find the transaction? How can you search for it and fix it when it could be anywhere on any technology?

Nastel XRay addresses this. XRay dynamically builds a topology view based on the real data that flows through the distributed application. It gives end to end observability of the true behaviour of your application in a constantly changing agile world. It highlights problem transactions based on anomalies, and enables you to drill down to those transactions, carry out root cause analytics and perform remediation.

Many companies recognise all the benefits that hybrid multi-cloud can give but recognise that it’s too risky to move their business forward if their monitoring and management software and processes can’t keep up. Nastel is here to help.

Microservices Without Observability Is Madness

Featured

Originally published here on August 26, 2021.

As I said before, Speed is King. Business requirements for applications and architecture change all the time, driven by changes in customer needs, competition, and innovation and this only seems to be accelerating. Application developers must not be the blocker to business. We need business changes at the speed of life, not at the speed of software development.

Thirteen years ago much of my time was spent helping customers rearchitect their monolithic applications into service-oriented architectures (SOA). We broke the applications into self-contained, reusable, stateless business functions which could be managed, modified, tested, reused, or replaced individually without impacting the rest of the application. This enabled business agility, reducing cost and improving time to market.

Over the last ten years or so, companies have been exploring the benefits of the cloud. Initially, this was also quite monolithic i.e. lifting and shifting large applications onto outsourced virtual servers or outsourcing email or CRM to web-hosted providers. However, more recently we’ve started to see newer applications being developed with finer-grained components stored in the most appropriate place. Confidential data may be stored on-premise, open APIs can enable public lookup services, the CRM can be integrated with the marketing functions and the internal cloud-hosted product warehouse. A single business application is typically spread across a hybrid cloud of different cloud providers, and on-premise services.

These applications are at risk of sprawling so much that no one knows how they work or how to maintain them, and they will become a blocker to innovation again. The drivers behind SOA are here again. Applications need to be designed as combinations of smaller self-contained, loosely coupled, stateless business functions or microservices.

Management of these microservices is even more powerful than it was with SOA. They can be quickly pushed out by DevOps, moved between clouds, and they can be ephemeral, serverless, and containerized.

So, a single business transaction can flow across many different services, hosted on different servers in different countries, sometimes managed by different companies. The microservices are being constantly updated, augmented, and migrated in real time. The topology of the application can change significantly on a daily basis.

It’s amazing that we’ve had so many software development innovations to enable this rapid innovation for our lives. But what happens if something goes wrong? What happens when a transaction doesn’t reach its target? Who knows what the topology is supposed to look like? How can it be visualized when it’s constantly morphing? Who knows where to go to find the transaction? How can you search for it and fix it when it could be anywhere on any technology?

Nastel XRay addresses this. XRay dynamically builds a topology view based on the real data that flows through the distributed application. It gives end-to-end observability of the true behavior of your application in a constantly changing agile world. It highlights problem transactions based on anomalies and enables you to drill down to those transactions, carry out root cause analytics, and perform remediation. It would be madness to have access to all the benefits that microservices can give you and not use them to the fullest extent just because your monitoring and management software and processes can’t keep up.

Was Cloud A Mistake?

Originally published here on August 12, 2021.

I just heard that businesses are repatriating cloud workloads.

I was told, “The history of IT is paved with dreams of lower costs, better performance, and higher reliability through migrations to new platforms and languages and new architectures and infrastructures.”

“Some of the biggest recorded IT waste has been when companies focus on the ‘next big thing’ and have issues achieving successful implementations. Any architectural change comes with risks, and if a company isn’t clear on the risks in terms of feature creep and associated cost and time overruns, they can sometimes decide to ‘de-migrate’”.

I’ve been thinking about this, and of course, as a cloud evangelist, I have some cognitive dissonance. But if you want the TL;DR answer: I don’t think that moving to the cloud was a mistake.

As we’ve said before, the cloud is not new. Many of us who wrote IBM MQ at IBM began by writing IBM SAA Application Connection Services, or AConnS. This was middleware designed to facilitate the move to the new client-server model. The idea was to run the right workload in the right place and have the workloads cooperate in an optimum, holistic business application. It utilized the mainframe for its heavy data processing and the PC for its powerful graphical abilities. An application on the PC would call AM (Application Manager) to run mainframe processes, whereas an application on the mainframe would call PM (Presentation Manager) to display the data. Because of this meeting of AM and PM, the product was known internally as HighNoon.

Before that, we had the computer bureau, which offered pay-as-you-go utility computing access to the mainframe; this phased out when companies could afford their own mainframes or the PC became sufficient.

The point is that you should run your application workload in the most appropriate place at that particular time. Without wanting to get into Schrödinger’s Cat, you can’t just say that a technology platform is right or wrong; you have to look at it in the context of the time. Technology is constantly evolving with invention and innovation, as are other factors such as cost, politics, and security.

At an enterprise level, the concern with cloud over the last 10 years has been around security. Companies have had a castle-and-moat security approach, meaning that everything is stored inside the company premises, with a barrier around the outside to stop people from getting in. But with the arrival of COVID-19 and home working, they’ve had to quickly revisit this. We now have the zero trust model and message-level security. Homomorphic encryption and confidential computing are also coming much faster than was first thought possible. And quantum computing is coming along, which will deliver world-changing discoveries and innovation but, as it needs to run at close to minus 273 degrees Celsius, will need to be accessed as a cloud service.

So, those are changes that are making the cloud more and more appropriate. But on the other hand, the amount of data in the world is rapidly increasing: 79 zettabytes will be created in 2021, up from 64 zettabytes in 2020 and 2 zettabytes in 2010. iPhone photos and videos are getting ever larger and are stored in the cloud, and more 4K entertainment, education, and business content is provided from the cloud by Netflix, YouTube, Zoom, etc. Shipping all this across the cloud is becoming increasingly hard. With the limit of the speed of light, we need invention in telecoms or to right-size and put the workloads in the appropriate place. With the arrival of 5G, up to 100 times faster than 4G, we were preparing for a dramatic change in our lives. But then governments canceled contracts with the 5G provider Huawei due to security and privacy concerns around involvement with the Chinese government.

And then there’s the moving between cloud providers. If a cloud provider is accused of tax avoidance then a government tax agency needs to consider moving away from them. If a cloud provider is also a retail company then retail companies decide to relocate. If a cloud provider applies for a banking license then finance companies make strategic decisions to migrate from them.

Change is the only constant in IT. Reminiscent of the Sneetches, companies will move workloads to and from the cloud constantly but it’s not a wholesale move. Different parts will move as is appropriate. Parts will always be in transition, so there will always be a hybrid cloud.

Nastel can help customers move their middleware and message data from on-premise to the cloud, between clouds, and back again. And Nastel can monitor the whole end-to-end environment. It can give a single view of the integration infrastructure, alerting on both IT availability and the business transaction. Most importantly, Nastel software is the fastest at adapting its monitoring when the integration infrastructure changes, which it does constantly. The topology views are dynamically created in real time based on the current view of the infrastructure. The alert levels are set based on ongoing statistical analysis (AIOps). There is no need for the monitoring environment to always be out of date, playing catch-up with the changing integration infrastructure. Nastel is here to support you in the ever-changing world of hybrid cloud.

Put security first in cloud migration planning

Originally published here on April 28, 2021.

Companies considering migration to a cloud environment usually focus first on infrastructure, from data centers to system performance to storage.

But even before you select a cloud environment, the fundamental issue to be considered must be security. Protecting that environment against inappropriate access is critical. And a deep understanding of all the vectors associated with the cyber kill chain is essential to ensure that any environment is protected. Clouds are inherently no more or less secure than any other environment, but with carefully crafted monitoring and vigilance, they can provide the highest level of security available.

It is not a surprise when customers that we work with choose Red Hat OpenShift as their cloud platform. It is built with a fundamental security model to ensure a secure environment. That environment includes continuous security across all aspects of the application deployment. Customers tell us that being able to control, defend, and extend their application platform’s security is one of the critical benefits they see with OpenShift.

Tied into this is the granular security provided by Nastel Navigator. Nastel Navigator offers self-service access to the middleware environments running IBM MQ, Tibco EMS, and Apache Kafka (and derivatives). Security is a fundamental aspect of providing self-service: if users are allowed to serve themselves, they must only be able to do what they are authorized to do. Access control is critically important when crossing the various environments.

Even where just a single messaging middleware platform is in use, multiple versions spread across different servers and even other platforms are a typical complexity issue, especially as we move to hybrid cloud. The nature of these messaging middleware systems typically demands that each server be individually connected to for configuration management purposes. And once you log in to a server, you can immediately access all of the individual queues, stream configurations, and data associated with that server. Since multiple business processes and teams may well use the same server, it is not appropriate to give individual developers direct access to the server. Instead, the developer typically requests changes through a messaging middleware administration team (aka a shared services team). This request requires approvals, and the changes must be modeled in a test environment before being delivered into production. This arrangement enables the integrity of the entire environment to be guaranteed, but it adds complexity, time, and cost to the environment’s management. This heavy process impacts the time to market.

When a company spins up an OpenShift environment, Nastel Navigator gives each development team access to its entire messaging middleware environment across the whole application stack while maintaining privacy and security using access controls. Furthermore, it audits all changes and attempted changes. It doesn’t matter what mix of cloud or on-premise architectures are in use or what middleware versions are running; it can all be managed from a single pane of glass (SPOG) in a web browser, with a complete end-to-end business application view. One login gives a user full access to just what they have rights to, and the administration team sets the rules. This setup means that users can manage their configurations and messages, delivering solutions quickly without impinging on anyone else. Simple, fast, and secure.

Try Nastel Navigator for free from the Red Hat Marketplace.

How the cloud is revolutionizing roles in the IT department

This post was originally published on ThoughtsOnCloud on November 10th, 2014.

Many people see cloud as an evolution of outsourcing. By moving their traditional information technology (IT) resources into a public cloud, clients can focus on their core business differentiators. Cloud doesn’t nix the need for the hardware, software and systems management—it merely encapsulates and shields the user from those aspects. It puts IT in the hands of the external specialists working inside the cloud. And by centralizing IT and skills, a business can reduce cost and risk while focusing on its core skills to improve time to market and business agility.

But where does this leave the client’s IT department? Can they all go home, or do some of the roles remain? Are there actually new roles created? Will they have the skills needed for this new environment?

Let’s look in more detail at some of these roles and the impact that the extreme case of moving all IT workloads to external cloud providers would have on them:

IT strategy

Strategic direction is still important in the new environment. Business technology and governance strategy are still required to map the cloud provider’s capabilities to the business requirements. Portfolio management and service management strategies have increased importance to analyze investments, ascertain value and determine how to get strategic advantage from the standardized services offered by cloud. However, the role of enterprise architecture is significantly simplified.

Control is still needed although the scope is significantly reduced. IT management system control retains some budgetary control, but much of its oversight, coordination and reporting responsibilities are better performed elsewhere. Responsibility for portfolio value management and technology innovation is mainly handed to the cloud provider.

At the operational level, project management is still required. Knowledge management has reduced scope, but best practices and experiences will still need to be tracked.

IT administration

The scope of IT business modeling is reduced, as many of the functions in the overall business and operational framework are no longer required.

There are changes in administration control. Sourcing relationships and selection are critical for the initiation and management of relationships with providers. Financial control and accounting will continue to manage all financial aspects of IT operations. Human resources planning and administration are still required, but the number of people supported is reduced. Site and facility administration is no longer needed.

All of the operational roles in IT administration have increased importance. IT procurement and contracts as well as vendor service coordination are needed to manage the complex relationships between the enterprise and cloud provider. Customer contracts and pricing is needed for the allocation of cloud costs to internal budgets as well as providing a single bill for services from multiple cloud providers.

Service delivery and support

The main casualties of the move to cloud are the build and run functions. The service delivery strategy will remain in house, although it becomes largely redundant once the strategic decision has been made to source solely from the cloud. Responsibility for the service support strategy moves to the cloud provider.

Service delivery control and service support planning move to the cloud provider. Infrastructure resource planning functions are likely to be subsumed into the customer contracts and pricing administration role.

Responsibility for service delivery operations and infrastructure resource administration moves to the cloud provider. The help desk and desk-side support services from service support operations remain essential activities for initial level-one support, but beyond this, support will be offered by the cloud provider.

Further observations

Governance is a critical capability, particularly around maintaining control over software as a service adoption. Integration of services will be a challenge, but perhaps this will also be available as a service in the future. Relationships with partners and service providers in all guises will become increasingly important.

There is a potential issue with skills. With many of the traditional junior roles in development and operations moving outside the enterprise, it’s hard to see how candidates for these new strategy and coordination roles will gain the experience they need. Academia has an important part to play in ensuring that graduates are equipped with the right skills.

In summary:

• Most current job roles remain, although many have reduced scope or importance.
• Fewer strategic roles are impacted than control or operational ones.
• Build and Run are the main functions which move to the cloud providers.
• Planning and commercial skills are key, linking the IT department more closely to the business.

Can you think of other roles that will be affected by the coming changes? Is your organization ready? Leave a comment below to join the conversation.

Demonstration: cloud, analytics, mobile and social using IBM Bluemix

This post was originally published on ThoughtsOnCloud on January 3rd, 2015.

In the 2013 IBM Annual Report, IBM Chairman, President and Chief Executive Officer Ginni Rometty talked about the shifts that she sees occurring in industries:

“Competitive advantage will be created through data and analytics, business models will be shaped by cloud, and individual engagement will be powered by mobile and social technologies. Therefore, IBM is making a new future for our clients, our industry and our company…As important as cloud is, its economic significance is often misunderstood. That lies less in the technology, which is relatively straightforward, than in the new business models cloud will enable for enterprises and institutions…The phenomena of data and cloud are changing the arena of global business and society. At the same time, proliferating mobile technology and the spread of social business are empowering people with knowledge, enriching them through networks and changing their expectations.”

Consequently, IBM is focusing on three strategic imperatives: data, cloud and engagement.

Recently I have been demonstrating this in a holistic way by showing the fast deployment of an application running on IBM Bluemix, the company’s platform as a service (PaaS) offering. It’s a social analytics application that provides cross-selling and product placement opportunities, connecting systems of engagement with systems of insight. It analyzes social trends, such as tweets, to help sales, marketing, and operational staff target specific customers with more personalized offers using mobile technologies.

Take a look at this video of my demonstration: