Nastel XRay is a cheaper and better option than ELK, data lakes, Prometheus, and Grafana


Nastel XRay is a hosted, managed alternative to ELK, Prometheus and Grafana

Being able to interrogate data about transactions that take place in your IT applications is critical when you need to ensure performance, availability, security and a desired user experience.

Many companies have considered the two choices that are most often discussed, namely the combination of Prometheus and Grafana, and alternatively, the Elastic Stack. Both of these options offer a pathway to desired outcomes, but with some caveats.

ELK (Elasticsearch, Logstash, and Kibana) is also known as the Elastic Stack. ELK is open source, so the software costs are very low, but the complexity of implementation can be immense. A great mind once said “you end up paying twice as much for what you get for free”, and the experience of many indicates this is true with ELK. The combination of Prometheus and Grafana is a similar story.

These options provide powerful tool sets for log management and data analysis, but powerful tools demand experts to operate them, and this is both a cost and a process concern. Experts are expensive, and translating a data request into a query can be very time consuming. If the data being analysed only has value in the moment, that effort can be wasted.

There are alternatives!


One such choice is Nastel XRay which delivers a very cost-effective method of capturing, storing and analysing data in real time.

Key differentiators are:

  • True multi-tenancy
  • An English-like query language that can be used by non-developers (see the example after this list)
  • Native transaction tracing with automatic computation of latency
  • Understanding of business objectives
  • Much better scaling
  • Can also consume data from Logstash and other ELK stack instrumentation
  • Can be natively deployed as SaaS or on-prem
  • Integrated with the R language for machine learning
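
To illustrate the English-like query language, here is the sort of one-liner a non-developer could run. This is a sketch in the style of the get event example shown later in these posts; the exact grammar and field names depend on the data you stream:

get number of events where Severity = 'ERROR' for latest day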

Unusual patterns or previously identified situations can be detected as signals in the data in real time, allowing immediate alerts and actions to be automated. This model allows you to reduce your costs while delivering real-time analysis.

Lower costs and faster results are things we (and you) know you need to consider.


As my colleague said yesterday, “we can say all we want on our website, in blog posts and so on, but what REALLY makes a difference in terms of £££ is DOING it in the field, with real customers; that’s where everything gets real. My customer was going full tilt to build a project for testing Prometheus, Grafana, ELK and Splunk too for leveraging data intelligence until we stopped them. We told them unabashedly: “Gents, we’re sorry but that’s just a dumb strategy, you should be using XRay for that” … sometimes you just have to go balls out with a customer … well, they listened and we delivered.”

In a future post I will discuss how XRay is used for real-time analytics of message payloads to facilitate business decisions and compliance.

I would love to hear your thoughts and experiences in this area. Please comment below or email me directly.

Getting Started Using Nastel XRay for MQ On-Prem Version


In my previous blog post, Getting Started Using Nastel XRay for MQ, I gave an overview of Nastel’s free MQ tracing/monitoring/diagnostics solution, available here, and I described how to install and set up the cloud version. This has proved to be very popular, but some people would prefer not to send data to the cloud, even if it’s just metadata, so in this post I go through the steps to set up the on-premise version.


Purpose

I won’t repeat the whole overview as you can read it in the previous post, but as a reminder, the purpose of Nastel XRay for IBM MQ is to analyse applications making MQ calls and determine their behaviour. Examples of the value of tracing include:

  • Tracking individual application calls:
    • Identify applications doing “unnecessary” calls
      • Inefficient logic
    • Identify applications not conforming to “standards”
      • Not providing expiry
      • Not resetting message ID
    • Observe timings of calls based on different scenarios
      • Persistence
      • Different environments
    • Verify correct processing of error conditions
      • Queue Full, Queue not found, etc.
  • Summarise application calls:
    • Identify Patterns
  • Problem determination:
    • What is the application actually doing?

Many other use cases are possible, for example these MQ events can be used as part of business transaction tracking scenarios.

Installation

There are two components to XRay.

  • The XRay server which hosts all the collected data for search, analytics, visualisation, automation etc.
  • The probe collects the information from MQ. It runs in your MQ environment (or connects remotely to it).

Installing Docker

For on-premise we use the Docker version of the XRay server, which is only available on Linux, although the monitored MQ can be on any platform.

The documentation says that in order to run as a Docker container, the following minimum requirements must be met:

  • 16 GB of RAM (I tried running with 3GB and found that it hung when setting up Solr).
  • 4 virtual processors

I didn’t already have Docker installed, so I typed ‘docker’ on the command line and it said:

The program 'docker' is currently not installed. You can install it by typing:
sudo apt install docker.io

so that’s what I did, and it installed easily.

It turns out that it’s also really important to run the following command (and log out and in again) to put the user into the docker group, otherwise you can hit problems with the Nastel Xray install.

sudo usermod -a -G docker $USER
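
After logging back in, you can confirm that your user can talk to the Docker daemon without sudo by using Docker’s standard smoke test (not XRay-specific):

docker run hello-world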

Installing the XRay Server

On the first page, click on the Register for Nastel XRay Docker Edition button. Once you’re through the registration process you will be given a link to download the software and another to view the documentation.

Once downloaded, uncompress the software using unzip or your preferred extraction software, and switch to the docker directory

cd XRayStarterPack/docker

There are various scripts in that folder to help you manage the installation. There is one program which provides a menu to call the others:

./xray_menu.sh

Select option 1

1. Deploy XRay Starter Pack containers

This downloads the Docker images, creates the containers and executes them. This is typically only run once. It will ask you for your XRay advertised host name – this just means the host that you’re installing on, which the probes will be publishing to. You can normally press enter and accept the default, although in my case it was a virtual host with complicated networking so I entered the actual IP address at this point.

It may also ask you for the location of two license files. Just press enter to accept the defaults.

The download time will depend on your bandwidth, and then there’s a five-minute wait while it configures everything.

Then it will say:

You can reach XRay by this url: http://apsvr:8080/xray

You can stream data to this url: http://apsvr:6580

or similar (my hostname is apsvr).

At this point you can go to a web browser and go to the XRay url and login with user Admin and password admin. Of course, you will change these once you’re up and running.


Once you’re logged in, click on Go to Dashboard

This will take you to the sample repository and the sample dashboards:


You should take a look around these, guided by the installation document.

Then you can switch to your own repository where your own data and dashboards will go.


On the cloud version you would have to import the dashboards but on the Docker version this has already been done.

Installing the Probes

Now you need to install the probes onto your MQ machine. In my case this was onto a separate Windows machine. So, I downloaded the same zip file that I’d put onto Linux and unzipped it on Windows.

Run \XRayStarterPack\run\XRay_setup.cmd

It will ask you to enter:

  • the url of the XRay server, so in my case this was http://apsvr:6580
  • your Access Token – for the docker version this is DefaultToken

Next you need to edit

\XRayStarterPack\run\ibm-mq-trace-events\XRayMQ_parser.xml

If the probe is running locally then you just need to provide the queue manager name. Alternatively, you can use a client server connection in which case you’d have to provide the connection information too.

To monitor a second local queue manager, just copy the stanza and add the next queue manager name, but also change the stream name so that each one is unique. You could also monitor a second queue manager on a second machine by installing the probe on that machine too.

Configure MQ

Switch on activity trace by setting the queue manager property ACTVTRC(ON).
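
For reference, you can set this property with standard MQ administration commands, for example by piping the command into runmqsc (QMA is a placeholder queue manager name):

echo ALTER QMGR ACTVTRC(ON) | runmqsc QMA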

This means that everything will be traced, which is great for getting started. However, if you run tools such as MQ Explorer you will find that it generates far too much data, so once you’re up and running you’ll want to go back and edit mqat.ini to include or exclude particular applications and functions from the trace.

Run the probe

Now you can start collecting data and seeing it in the dashboards by running the probe on the managed machine with

\XRayStarterPack\run\ibm-mq-trace-events\run.bat

You could actually have done this as soon as you’d configured the probe.

Dashboards

Very soon you should start to see the data appearing in the dashboards


You could also look at the raw events. Click on the plus sign at the bottom to get to the jKQL (jKool Query Language) command line. If you type in get event it will list all the trace events that it’s received.


Try clicking around on some of the hyperlinks within this. You will notice that all the dashboard viewlets were created with jKQL commands, which you can edit too. If you’re feeling adventurous you could even read the jKQL Users Guide here.
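
To give you a flavour, here are a few queries you could try at that command line. These are illustrative sketches extrapolated from the get event example above; check the Users Guide for the exact grammar and the field names present in your repository:

get events
get events where EventName contains 'MQPUT'
get number of events for latest day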

Conclusion

The free version is the full function product with the limit that you can stream 500MB of event data per day and store them for 5 days. If you’re careful about what data you capture this could be sufficient for many use cases. Hopefully this blog post has given you an idea of how to use Nastel XRay for MQ on premise and will get you started using it.

Feel free to mail me at sgarforth@nastel.com if you need help or leave comments below.

Getting Started Using Nastel XRay for MQ


Nastel has a free version of its MQ tracing solution available here. In this post I will give an overview of its purpose, its use, and how to install it.

Purpose

The purpose of Nastel XRay for IBM MQ is to analyse applications making MQ calls and determine their behaviour. Examples of the value of tracing include:

  • Tracking individual application calls:
    • Identify applications doing “unnecessary” calls
      • Inefficient logic
    • Identify applications not conforming to “standards”
      • Not providing expiry
      • Not resetting message ID
    • Observe timings of calls based on different scenarios
      • Persistence
      • Different environments
    • Verify correct processing of error conditions
      • Queue Full, Queue not found, etc.
  • Summarise application calls:
    • Identify Patterns
  • Problem determination:
    • What is the application actually doing?

Many other use cases are possible, for example these MQ events can be used as part of business transaction tracking scenarios.

How It’s Used

The dashboards that come with the free installation include:

  • MQ Error Analysis
  • MQ Call Analysis
  • Search for Message Data

You can also create your own dashboards by using the jKQL query language.

The MQ Call Analysis dashboard

There are various viewlets.

Summary

In this case, this shows a summary of all of the MQ calls captured by the queue manager.

MQ Calls by Type

This viewlet breaks down the MQ calls collected by their type.

In this case, we can see a lot of MQOPEN, MQCLOSE and MQCONNX calls, which may be interesting sometimes, but in many cases you will want to focus on the puts and gets. That could be done by modifying the query used to display the data, or by changing what data is collected.

MQ Object Breakdown

This displays what queues were used, but it could contain topics and other objects for some call types. This viewlet shows the total MQ calls against each MQ queue.

Persistence Percentages

This displays the breakdown by persistence. Many organizations require persistent messages, so this viewlet would be useful to identify which applications are sending non-persistent messages.

You can easily drill into the data to see which queues were in this group. In all viewlets, you can drill in by clicking on the pie chart or the bar chart. Clicking the non-persistent portion of the chart opens a new window at the bottom of the display showing the non-persistent calls.

MQ Calls over Time

This shows the MQ calls being done over the latest day.

MQ Calls by Application

This viewlet will be useful in determining which applications are being traced.

The MQ Error Analysis Dashboard

This dashboard has details specific to the MQ errors that are occurring. The following are the viewlets:

Summary

The first summary shows the total number of MQ errors by queue manager and the second shows the total number by MQ error code.

MQ Calls by Completion Code

This shows the errors versus the total number of calls. In this case, 60 requests resulted in an error, just over 8 percent.

MQ Exception List

This shows the breakdown of the errors by type. In this case, the majority were inhibited put events. You can link to see which requests these were.

The Search for Message Data Dashboard

When you click on this tab, a dialog will be presented.

Enter a string from the message payload, for example Payment. The viewlet will refresh and display the matching requests.

You can select the envelope icon next to the message preview to view the entire message.

Alternatively, the general search field can be used to search across all data elements: not only the payload, but also the queue name, queue manager name, user, and so on.

Installation

There are two versions available – one hosted in the cloud and one packaged in a Docker container that can be installed on-premise. In this blog post I talk through my experience of using the cloud-based one to monitor a Windows environment.

There are two components to XRay.

  • The probe collects the information from MQ. It runs in your MQ environment (or connects remotely to it)
  • The XRay server which hosts all the collected data for search, analytics, visualisation, automation etc.

You download the probe software from here.

You register to use the server here.

Configure the probe

Uncompress the probe software and run \XRayStarterPack\run\XRay_setup.cmd

It will ask you to enter:

  • the url of the XRay server
  • your Access Token (provided when you register)

Next you need to edit \XRayStarterPack\run\ibm-mq-trace-events\XRayMQ_parser.xml

If the probe is running locally then you just need to provide the queue manager name. Alternatively, you can use a client server connection, in which case you’d have to provide the connection information too.

To monitor a second local queue manager, just copy the stanza and add the next queue manager name, but also change the stream name so that each one is unique. You could also monitor a second queue manager on a second machine by installing the probe on that machine too.

Configure MQ

Switch on activity trace by setting the queue manager property ACTVTRC(ON).

This means that everything will be traced, which is great for getting started. However, if you run tools such as MQ Explorer you will find that it generates far too much data, so once you’re up and running you’ll want to go back and edit mqat.ini to include or exclude particular applications and functions from the trace, as sketched below.
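
As a rough sketch, an mqat.ini stanza to exclude a chatty application from the trace follows IBM’s documented ApplicationTrace format. The application name pattern below is a hypothetical example; check the application names that actually appear in your trace data:

ApplicationTrace:
   ApplName=MQ Explorer*
   Trace=OFF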

Configure the XRay Dashboards

Log in to the XRay server and click on Go to Dashboard

Go to the repository list, where all the samples are listed as well as your own repository at the top.

Select your repository

The next dialog box will ask you to create a dashboard. Just click Cancel on that, as we want to import the ones that came in the download of the probe software. In the top right corner of the screen, click on the Main Menu icon and select Import/Export > Dashboards.

Import the file \XRayStarterPack\XRayStarterPack\dashboard\NastelXRayMQStarterPackDashboard.json

After a short pause, the list of dashboards will be presented. Select All and click Open.

Run the probe

Now you can start collecting data and seeing it in the dashboards by running the probe on the managed machine with

\XRayStarterPack\run\ibm-mq-trace-events\run.bat

You could actually have done this as soon as you’d configured the probe.

Conclusion

Hopefully this blog post has given you an idea of how to use the free version of Nastel XRay for MQ and will get you started using it. Feel free to mail me at sgarforth@nastel.com if you need help or leave comments below.

Using Nastel’s MQSpeedtest to Test MQ Performance in a Multi-hop Architecture


In my previous blog post here I described how to get started using Nastel’s free MQSpeedtest application (available here) to check the performance of your MQ environment.

Today I decided to try it out in an architecture with multiple queue managers. I have an application architecture that looks like the following:

So an application starts the transaction by putting a message to Queue Manager A. This is sent on to Queue Manager B. An application receives that message, does some processing and then sends a related message on to Queue Manager C, where the final part of the transaction processing occurs.

I wanted to set up some monitoring in parallel to this so that I could see that the MQ flow itself from A to B to C was performing adequately. So I set up the following ping architecture:
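
The queue definitions behind this aren’t shown, but one plausible way to build such a ping path (my assumption, not the original configuration) is a remote queue on QMA that resolves via QMB’s transmission queue to the command queue on QMC:

DEFINE QREMOTE('RTOQMCPCF') RNAME('SYSTEM.ADMIN.COMMAND.QUEUE') RQMNAME('QMC') XMITQ('QMB') REPLACE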

I ran the following command:

MQSONAR RTOQMCPCF QMA -fmqsonar.csv -a -d5

Converting the output into a chart looked like this:

Next I ran the MQSpeedtest from A to B (plotted as QMB) and the MQSpeedtest from A to B to C (plotted as QMC) at the same time. This produced the following graph.

So here you can see that:

  • I started recording before starting the channels, so initially the routes to both QMB and QMC were blocked.
  • I started both channels, which meant that the routes to both QMB and QMC were fine.
  • I stopped channel QMB.TO.QMC, which meant that the route to QMC was blocked but the route to QMB was fine.
  • I started channel QMB.TO.QMC, which meant that the routes to both QMB and QMC were fine.
  • I stopped channel QMA.TO.QMB, which meant that the routes to both QMB and QMC were blocked.
  • I stopped channel QMB.TO.QMC, which had no immediate effect but affected the following step.
  • I started channel QMA.TO.QMB, which meant that the route to QMB was fine but the one to QMC was still blocked.
  • I started channel QMB.TO.QMC, which meant that both routes were fine again.

Following this I can start forwarding the data to Nastel’s AutoPilot and XRay products for real-time availability monitoring and historical trend analysis.

Getting started using Nastel’s MQSpeedtest for IBM MQ


Nastel has a free download available here called AutoPilot MQSpeedtest for IBM MQ. The purpose of the MQSpeedtest utility is to measure various times related to queue manager message flow. Much like traditional sonar technology, this is completed by sending a ping which is picked up by the target and echoed back. By measuring the time spent it can determine various characteristics of the message path. In IBM MQ terms, it sends messages to a listening application which replies with a response to those messages. MQSpeedtest captures various times along the message path as well as the round-trip times.

By default, the echo application can be the IBM MQ Command Server but, for users who do not have administrative permission to use this, Nastel also provides a Speedtest echo application.

The following are some basic configurations that can be implemented using MQSpeedtest:

Single queue manager:

  • determine if the queue manager is up and responding
  • determine standard throughput for a (set of) queues
  • compare the behaviour of different queue configurations such as using persistent versus non-persistent messages.

Multiple queue managers:

  • Identify slowdowns in inter-queue manager communication
  • Identify queue managers that contribute to delays
  • Identify differences in behaviours of different queue managers
  • Verify that a path from one sending application to the receiving application is properly configured.

These diagrams show the configurations that I have tested so far. There are also multi-hop options that I will try in the future.

To test the single queue manager scenario, I ran the following from a Windows command console as an administrator:

mqsonar SYSTEM.ADMIN.COMMAND.QUEUE QMA

It gave the following output:

Pinging QMA(SYSTEM.ADMIN.COMMAND.QUEUE) using 36 byte 10(msgs) batch....
Statistics for queue QMA(SYSTEM.ADMIN.COMMAND.QUEUE) :
Summary Performance Indicators :
        MINIMUM_ROUND_TRIP       (0.0250 sec/msg)
        MAXIMUM_ROUND_TRIP       (0.0280 sec/msg)
        AVERAGE_ROUND_TRIP       (0.0266 sec/msg)
        AVERAGE_PROPAGATION_TIME (0.0240 sec/msg)
        AVERAGE_REFLECTION_TIME  (0.0026 sec/msg)
        MESSAGES_SENT            (10)
        CONFIRMED_EXPIRIES       (0)
        CONFIRMED_DELIVERIES     (0)
        CONFIRMED_ARRIVALS       (0)
        CONFIRMED_EXCEPTIONS     (0)
        REPORTS_RECEIVED         (0)
        RESPONSES_RECEIVED       (10)
        MESSAGES_RECEIVED        (10)
        BYTES_SENT               (360)
        BYTES_RECEIVED           (360)
        RESPONSE_REQUEST_RATIO   (100.0000%)

General Performance Indicators :
        TOTAL_PUT_TIME           (0.0000 sec)
        TOTAL_GET_TIME           (0.0280 sec)
        AVERAGE_PUT_RATE         (10000.0000 msg/sec [360000.00 bytes/sec])
        AVERAGE_GET_RATE         (357.1429 msg/sec [12857.14 bytes/sec])
        PUT_GET_RATE_RATIO       (2800.0000% [28.00])

Message Performance Indicators :
        GROSS_ROUND_TRIP_RATE    (714.2857 msg/sec [25714.29 bytes/sec])
        EFFECTIVE_ROUND_TRIP_RATE(714.2857 msg/sec)
        CONFIRMATION_OVERHEAD    (0.0000% [0.00])
        AVERAGE_ARRIVAL_RATE     (0.0000 msg/sec])
        AVERAGE_DELIVERY_RATE    (0.0000 msg/sec])
        AVERAGE_MSG_LATENCY      (0.0000 sec]) WITH QDEPTH(0)
        MAXIMUM_MSG_LATENCY      (0.0000 sec]) WITH QDEPTH(10)

        TOTAL_BATCH_TIME         (0.1630 sec)
        TEST_COMPLETION_CODE     (0)

Pinging QMA(SYSTEM.ADMIN.COMMAND.QUEUE) completed with RC(0)

To test the two queue manager scenario, I ran the following from a Windows command console as an administrator:

mqsonar RTOQMBPCF QMA

which gave the following output:

Pinging QMA(RTOQMBPCF) using 36 byte 10(msgs) batch....

Statistics for queue QMA(RTOQMBPCF) :

Summary Performance Indicators :
        MINIMUM_ROUND_TRIP       (0.0000 sec/msg)
        MAXIMUM_ROUND_TRIP       (0.0030 sec/msg)
        AVERAGE_ROUND_TRIP       (0.0015 sec/msg)
        AVERAGE_PROPAGATION_TIME (0.0000 sec/msg)
        AVERAGE_REFLECTION_TIME  (0.0025 sec/msg)
        MESSAGES_SENT            (10)
        CONFIRMED_EXPIRIES       (0)
        CONFIRMED_DELIVERIES     (0)
        CONFIRMED_ARRIVALS       (0)
        CONFIRMED_EXCEPTIONS     (0)
        REPORTS_RECEIVED         (0)
        RESPONSES_RECEIVED       (10)
        MESSAGES_RECEIVED        (10)
        BYTES_SENT               (360)
        BYTES_RECEIVED           (360)
        RESPONSE_REQUEST_RATIO   (100.0000%)

General Performance Indicators :
        TOTAL_PUT_TIME           (0.0010 sec)
        TOTAL_GET_TIME           (0.0040 sec)
        AVERAGE_PUT_RATE         (10000.0000 msg/sec [360000.00 bytes/sec])
        AVERAGE_GET_RATE         (2500.0000 msg/sec [90000.00 bytes/sec])
        PUT_GET_RATE_RATIO       (400.0000% [4.00])

Message Performance Indicators :
        GROSS_ROUND_TRIP_RATE    (4000.0000 msg/sec [144000.00 bytes/sec])
        EFFECTIVE_ROUND_TRIP_RATE(4000.0000 msg/sec)
        CONFIRMATION_OVERHEAD    (0.0000% [0.00])
        AVERAGE_ARRIVAL_RATE     (0.0000 msg/sec])
        AVERAGE_DELIVERY_RATE    (0.0000 msg/sec])
        AVERAGE_MSG_LATENCY      (0.0000 sec]) WITH QDEPTH(0)
        MAXIMUM_MSG_LATENCY      (0.0000 sec]) WITH QDEPTH(10)

        TOTAL_BATCH_TIME         (0.0380 sec)
        TEST_COMPLETION_CODE     (0)

Pinging QMA(RTOQMBPCF) completed with RC(0)

Many other options are also available. It can run as a client too.
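
I haven’t shown client mode here, but if mqsonar behaves like a standard MQ client application then it can pick up its connection details from the usual MQSERVER environment variable. This is an assumption on my part; the channel name and host below are placeholders, and the tool’s own documentation describes its client options:

set MQSERVER=SYSTEM.DEF.SVRCONN/TCP/myhost(1414)
mqsonar SYSTEM.ADMIN.COMMAND.QUEUE QMA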

MQSpeedtest statistics can be captured in a CSV file and used to produce a spreadsheet of historical data. For example, you can capture data on a daily basis, and append the results to an existing file each day.

Example parameters:

1. -fmqsonar.csv

Writes a header and a single result line to mqsonar.csv.

2. -fmqsonar.csv -a

Appends only a result line to mqsonar.csv.

3. -fmqsonar.csv -a -d60

Appends a result line every 60 seconds to mqsonar.csv.

You can import this file into Excel or another spreadsheet to produce charts such as the figure below.
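
For the daily-capture pattern described above, one option is to let the standard Windows task scheduler run the append command once a day. This is a sketch; the install path, task name and time are placeholders:

schtasks /create /sc daily /st 06:00 /tn "MQSpeedtestDaily" /tr "C:\nastel\mqsonar.exe RTOQMBPCF QMA -fmqsonar.csv -a"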

The tool can also be integrated with Nastel’s AutoPilot and XRay products for real-time availability monitoring and historical trend analysis.

Managing and Enforcing Automated DevOps of MQ Environments

Nastel is well known for producing some powerful GUIs (graphical user interfaces) for managing and monitoring IBM MQ, but is there still a place for these as the world moves towards DevOps, with automated continuous integration and continuous delivery?

Of course, the answer is yes! All the technology that Nastel has spent 25 years developing as the back end behind the GUIs can provide a secure and robust interface between the new open source tooling and the MQ environments, and the GUIs themselves can still be used as much more intuitive components in the automated software development lifecycle, reducing both the chances of human error and the cost of ownership.


The aim of an automated software development lifecycle is that the definitive source of truth for the MQ configuration is held in a version control system, that all changes to it are recorded and audited, and that the environment can be quickly and automatically built from this as infrastructure as code. In this way a configuration can be built in one or more development environments, integrated into the version control system with continuous integration, and then deployed into the test environments, then finally through pre-prod and into production. You can be confident that no manual changes to one environment have been missed in delivery to the next environment. As we move into a cloud-based utility computing world it becomes very useful and financially beneficial to have environments only for as long as they’re needed. We can spin up a test environment, deploy the configuration to it, run the tests and then shut the whole thing down, paying for the infrastructure only during the time when it’s needed.

So, what technologies are people using for this? Well, I’m finding that people are using Git-based services such as GitHub or Bitbucket for the version control system / source code control repository. For continuous integration, Jenkins is often used, and Ansible (the free version or Ansible Tower) can be used to manage the deployment. Chef, Puppet and Terraform are other technologies that many choose as alternatives.

So, isn’t it enough to write lots of scripts using these open source technologies? Why would you need Nastel?

From the beginning (MQ V2), MQ has had a script command processor called MQSC which can define and alter all the objects within a queue manager. These scripts would form the basis of an automated deployment. However, we still need to design the environment which gets scripted, as well as defining the queue managers and associated objects (listener, command server, channel initiator and so on).
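
For illustration, the kind of MQSC fragment that might live in version control could look like the following (the object names are hypothetical; REPLACE makes the definitions repeatable so the same script can be replayed against each environment):

DEFINE QLOCAL('APP.ORDERS.REQUEST') DEFPSIST(YES) MAXDEPTH(50000) REPLACE
DEFINE QLOCAL('APP.ORDERS.REPLY') DEFPSIST(YES) REPLACE
DEFINE QLOCAL('APP.DEAD.LETTER.QUEUE') REPLACE
ALTER QMGR DEADQ('APP.DEAD.LETTER.QUEUE')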

Nastel Navigator provides a GUI in which a developer can design and unit test an MQ configuration. From here you can save the environment to a script which can be stored in a version control system and act as input to MQSC. With a Nastel agent on the target machine, Navigator can create the queue manager and associated objects as well as configure them.


So, you can put the configuration definitions into the source code repository from Navigator, or write them directly as text. Jenkins can detect the updates, pull them from the repository and then pass them to Ansible to initiate the deployment to the next environment.
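
Stripped of the tooling, the deployment step itself boils down to replaying the scripts against the target queue manager – conceptually something like this (a bare-bones sketch with a hypothetical script name; in the setup described here, Ansible would instead drive the Nastel Navigator agent, which adds the security and auditing discussed below):

git pull
runmqsc QMA < app-queues.mqsc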


At this point Ansible can use the Nastel Navigator agent to create or update the new environment using the scripts. There is a lot of security functionality in Navigator which ensures that developers can only update the objects they are authorised to, and so cannot affect the work or environments of other users or teams. This security and governance is critical to the DevOps dream becoming reality.


So, we’ve migrated a configuration from development to test, and similarly it can move from test to production and so on in a robust, automated way. So, is that enough? What happens if someone makes changes in the test environment? What if they bypass the tooling and log straight in to fix something in production? How would we detect that? How can we get that change reflected back into the source control repository, to ensure that the Git version remains the golden copy, the single source of the truth, and that if production fails we can always reproduce it?

To address this, Navigator can monitor the environment. Firstly, it can receive MQ configuration events, which are generated when certain MQ actions are taken, such as alter, create and delete. Secondly, Navigator generates its own Alter events: it constantly compares the MQ environment to the version that it has stored in its database, and when it spots a difference it records it as an event.
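
For reference, MQ’s configuration events are a standard queue manager feature, switched on with the CONFIGEV attribute (QMA is a placeholder queue manager name):

echo ALTER QMGR CONFIGEV(ENABLED) | runmqsc QMA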


You can then choose whether to roll back this change or to save it as a script for the version control system.


Finally, how do we know that it’s a valid configuration? How do we know that the design, whether created as a script or through the GUI, conforms to the company’s design standards? Will it actually perform in the expected way? Do all the queue managers have dead letter queues? Have all the listeners and channel initiators been defined? Do all the transmission queues exist that are referred to by the channels and remote queue definitions? Do the objects conform to your naming standards?

This is where Nastel AutoPilot comes in. AutoPilot is a real-time performance monitor and typically comes with Navigator. AutoPilot can monitor any of the environments, whether development, test or production, and highlight issues such as these.


Conclusion

Whilst it is possible to build an automated software development lifecycle environment using open source tools, a lot of the effort can be reduced by adding Nastel’s Navigator and AutoPilot software to the mix, bringing twenty-five years of MQ and monitoring expertise to bear on security, governance, and a lower cost of ownership.

I would be very keen to hear your comments, questions and experiences of DevOps with MQ. Please email me at sgarforth@nastel.com to discuss it further or leave comments below.

How the cloud is revolutionizing roles in the IT department

This post was originally published on ThoughtsOnCloud on November 10th, 2014.

Many people see cloud as an evolution of outsourcing. By moving their traditional information technology (IT) resources into a public cloud, clients can focus on their core business differentiators. Cloud doesn’t nix the need for the hardware, software and systems management—it merely encapsulates and shields the user from those aspects. It puts IT in the hands of the external specialists working inside the cloud. And by centralizing IT and skills, a business can reduce cost and risk while focusing on its core skills to improve time to market and business agility.

But where does this leave the client’s IT department? Can they all go home, or do some of the roles remain? Are there actually new roles created? Will they have the skills needed for this new environment?

Let’s look in more detail at some of these roles and the impact that the extreme case of moving all IT workloads to external cloud providers would have on them:

IT strategy

Strategic direction is still important in the new environment. Business technology and governance strategy are still required to map the cloud provider’s capabilities to the business requirements. Portfolio management and service management strategies have increased importance to analyze investments, ascertain value and determine how to get strategic advantage from the standardized services offered by cloud. However, the role of enterprise architecture is significantly simplified.

Control is still needed although the scope is significantly reduced. IT management system control retains some budgetary control, but much of its oversight, coordination and reporting responsibilities are better performed elsewhere. Responsibility for portfolio value management and technology innovation is mainly handed to the cloud provider.

At the operational level, project management is still required. Knowledge management has reduced scope, but best practices and experiences will still need to be tracked.

IT administration

The scope of IT business modeling is reduced, as many of the functions in the overall business and operational framework are no longer required.

There are changes in administration control. Sourcing relationships and selection are critical for the initiation and management of relationships with providers. Financial control and accounting will continue to manage all financial aspects of IT operations. Human resources planning and administration are still required, but the number of people supported is reduced. Site and facility administration is no longer needed.

All of the operational roles in IT administration have increased importance. IT procurement and contracts as well as vendor service coordination are needed to manage the complex relationships between the enterprise and cloud provider. Customer contracts and pricing is needed for the allocation of cloud costs to internal budgets as well as providing a single bill for services from multiple cloud providers.

Service delivery and support

The main casualties of the move to cloud are the build and run functions. The service delivery strategy will remain in house, although it becomes largely redundant once the strategic decision has been made to source solely from the cloud. Responsibility for the service support strategy moves to the cloud provider.

Service delivery control and service support planning move to the cloud provider. Infrastructure resource planning functions are likely to be subsumed into the customer contracts and pricing administration role.

Responsibility for service delivery operations and infrastructure resource administration moves to the cloud provider. The help desk and desk-side support services from service support operations remain essential activities for initial level-one support, but beyond this, support will be offered by the cloud provider.

Further observations

Governance is a critical capability, particularly around maintaining control over software as a service adoption. Integration of services will be a challenge, but perhaps this will also be available as a service in the future. Relationships with partners and service providers in all guises will become increasingly important.

There is a potential issue with skills. With many of the traditional junior roles in development and operations moving outside the enterprise, it’s hard to see how candidates for these new strategy and coordination roles will gain the experience they need. Academia has an important part to play in ensuring that graduates are equipped with the right skills.

In summary:

  • Most current job roles remain, although many have reduced scope or importance.
  • Fewer strategic roles are impacted than control or operational ones.
  • Build and run are the main functions which move to the cloud providers.
  • Planning and commercial skills are key, linking the IT department more closely to the business.

Can you think of other roles that will be affected by the coming changes? Is your organization ready? Leave a comment below to join the conversation.