IBM MQ Streaming Queues Adds Business Value to Middleware

Originally published here on July 18, 2022.

Last year we published this blog post about the benefits of IBM MQ streaming queues. On July 15, 2022, this functionality was also made available for MQ on the mainframe (z/OS), and it has also been announced for the MQ Appliance for August 2, 2022.

Nastel has been supporting and embracing this functionality for some time in its integration infrastructure management (i2M) platform.

As a quick summary, a streaming queue allows you to define an alternate queue to which a copy of every message is sent, so the original message stream is duplicated. This is important for data-driven companies (and we’ll get back to why in a minute).

The image below is a simple example of the basic concept. There are two queues, MyQueue and CopyQueue. As messages are put to MyQueue, IBM MQ automatically copies them to CopyQueue.

So in the example, messages A and B are put to MyQueue by the application. As each message is put, the queue manager automatically copies it to CopyQueue. As the messages are read by the receiving application, the copied messages remain on CopyQueue. You can select whether the copy is required or optional (for example, to tolerate CopyQueue becoming full). The current limit is one stream queue per original queue, although the streaming queue could be a topic.
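In MQ terms, the setup is a single attribute on the original queue. A minimal MQSC sketch, assuming the queue names from the example above (the quality-of-service keyword controls whether the copy is optional or required):

```
* Define the stream (copy) queue, then point the original queue at it.
DEFINE QLOCAL('CopyQueue')
ALTER QLOCAL('MyQueue') STREAMQ('CopyQueue') STRMQOS(BESTEF)
```

STRMQOS(BESTEF) makes the copy best-effort (a put to MyQueue still succeeds if the copy cannot be made), while STRMQOS(MUSTDUP) makes the duplicate mandatory.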

There are many different use cases for IBM MQ streaming queues.

Here are three:

  1. Create an audit trail for all activities through the queue
  2. Create an alternate workflow for the messages
  3. Access the valuable data flowing through the IBM MQ message queues

The use case that is very attractive to our many banking and financial services customers is the ability to immediately access the data flowing through the queues.

This might bring to mind Application Activity Events, which are commonly used with Nastel XRay to track messages. Activity events are well suited to tracking activity, can be used to get the payload, and are a good option in many cases. They are best for tracking application calls and recording when messages were put and received.

Now, with streaming queues, you get the raw data directly from the source, not a reformatted message to process. Since the messages contain the business information, you can now answer questions like “How many orders today (or in the last 5 minutes) were greater than 1 million dollars?”, provided you have a way to collect this data.
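As a sketch of that kind of query, suppose the copied messages carry JSON order payloads with an `amount` field (the payload shape and field names are assumptions for illustration; in practice the copies would be consumed from the stream queue with an MQ client library):

```python
import json

def count_large_orders(messages, threshold=1_000_000):
    """Count order messages whose amount exceeds the threshold."""
    count = 0
    for payload in messages:
        order = json.loads(payload)
        if order.get("amount", 0) > threshold:
            count += 1
    return count

# Copies streamed from the queue, shown here as in-memory strings.
copies = [
    '{"order_id": "A1", "amount": 2500000}',
    '{"order_id": "A2", "amount": 400}',
    '{"order_id": "A3", "amount": 1500000}',
]
print(count_large_orders(copies))  # → 2
```

The same filter could run continuously against CopyQueue, answering the “last 5 minutes” variant with a sliding time window.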

Nastel XRay is specifically suited to this too. By listening to your data, you can derive insights without having to make any application changes. Time and data are money today, and transaction tracking, AIOps, and business analytics just became a whole lot simpler and faster for IBM MQ shops, many of which have been searching for a better return on investment (ROI) from that part of the IT stack.

This ability has existed for a long time when topics are used in IBM MQ and Kafka, but for traditional MQ applications that use queues, this creates a new avenue for insight. If you are an IBM MQ shop, this is a big productivity and data analytics win. Your investments in IBM MQ and your entire middleware and integration infrastructure (i2) layer can deliver a lot more ROI.

Check out XRay, all of Nastel’s IBM MQ-targeted solutions, or set up a demo to see how this all works for organizations like yours today.

What’s In the New IBM MQ Appliance

Originally published here on July 14, 2022.

When IBM MQ 9.3 was released last month, as discussed here, it was notable (and noted) that this was just for distributed platforms and that news of the mainframe and the appliance would follow.

Now, IBM has announced the new MQ Appliance, called M2003, with general availability on August 2, 2022.

In terms of what’s new, IBM says that it “brings together next-generation hardware and IBM MQ firmware, packed with the latest updates and designed to provide a more powerful appliance than its predecessor,” so presumably that means it has the MQ 9.3 functionality.

Like the previous releases, the IBM MQ Appliance M2003 is designed for the following:

  • Simple setup
  • Maximized performance
  • Business resiliency
  • Cost efficiency
  • Data insights
  • Threat protection

We will be joining their Webcast on August 11, 2022, to learn more: Data Availability and Integrity with the new IBM MQ Appliance, M2003.

Nastel products support this new version of the MQ Appliance. For further information on Nastel’s dedicated support for the MQ Appliance, click here.

UPDATE: Aug 17th

IBM has now released more information in this blog post.

They have confirmed that the appliance runs MQ 9.3 and so includes the following enhancements:

  • Streaming queues for insight into the data flowing through MQ
  • Encrypted file systems to protect IBM MQ data at rest
  • Synchronous replication for disaster recovery (DR) between two appliances
  • Simplified administration for high availability (HA) resource failures
  • Retrieve HA and DR configuration settings using REST
  • Process dead-letter queues by using a remote client DLQ handler

They’ve also provided more detail on the improved hardware:

  • 2 x 16-core 2.90GHz processors (8 cores for B model)
  • 256 GB RAM, 3200 MHz DDR4 DIMMs
  • 2 x 1Gb management Ethernet ports
  • 8 x 1Gb Ethernet ports
  • 4 x 10Gb Ethernet ports
  • 4 x 40Gb Ethernet ports
  • 2 x 100Gb Ethernet ports
  • 4 x 3.2 TB NVMe SSDs, RAID 10 (6 TB effective, 3 TB effective for B model)

As previously discussed here, Nastel especially welcomes the streaming queues functionality which can be utilized by Nastel XRay for transaction tracking and data analytics.

Make Money Reviewing IBM MQ, DataPower & Nastel

Originally published here on June 24, 2022.

IBM is offering $25 each for reviews of IBM MQ and IBM DataPower from actual customers.

In this post they say:

“IBM relies on users like you to share your experience with IBM DataPower. People trust you – Peer review sites are getting more popular”. (G2 says “over 86% of software buyers use review sites when buying software.”)

“To thank you for your time, IBM has set up multiple campaigns that give you something back. If you review on Gartner you’ll receive a $25 gift card or donation to the charity of your choice.”

In this post they say:

“Write a review of your experience with IBM MQ, and you will receive a $25 gift card via email as our thank you when your Trust Radius review is published. Help other professionals like you to pick the right solution based on real user experiences.”

Nastel Technologies is also offering a reward for reviews of its i2M Platform for management and monitoring of IBM MQ, DataPower, etc. They are paying $50!

  • Review IBM MQ here.
  • Review DataPower on Gartner here.
  • Review DataPower as part of Cloud Pak for Integration (CP4I) on Gartner here and on G2Crowd here.
  • Review Nastel on G2 here.

You can submit more than one review, earning the $25/$50 for each one (valid in any country). Although there are lots of questions, you only need to answer the compulsory ones to qualify for the reward. And although IBM says the reviews are a good chance for personal visibility, you can choose to remain anonymous.

(Offers valid at the time of writing.)

What’s New In IBM MQ 9.3

Originally published here on June 20, 2022.

The latest long-term support (LTS) and continuous delivery (CD) release of IBM MQ arrives for the distributed platforms on June 23, 2022.

IBM MQ 9.3 again focuses on securely powering cloud-native applications across the hybrid multicloud, as well as making it easier to get started.

The MQ 9.3 release includes:

  • An improved getting-started experience.
  • Simplified cloud-native deployment and improved high availability, scalability, and resilience.
  • The ability to tap into existing data with streaming queues – see our blog post on this here.
  • Enhanced administration with the MQ Console, whilst the MQ Explorer has been removed from the IBM MQ install package.
  • Security enhancements that build on the existing robust security mechanisms in MQ.
  • Enhancements to IBM MQ Advanced Managed File Transfer capabilities – see our page on MFT support here.
  • Updates to the supported platforms, e.g. Adoptium and IBM Semeru Java 11, Oracle and Adoptium Java 17, Windows 11, JMS 3.0, and .NET 6 support.

Nastel has been working with the development team on this technology and analyzing the impact and benefits to our customers as part of our own cloud and container initiatives.

Nastel products support this new version of MQ and the new features.  For further information on the Nastel solution please read here.

To see the official IBM press release see here.

Can’t Wait For Messaging Updates At IBM TechCon 2022!

Originally published here on April 2, 2022.

IBM TechCon 2022, April 5th–7th, 2022. Register here.

I’m looking forward to seeing some very distinguished speakers at the virtual conference IBM TechCon 2022, not least of whom is the co-founder of Apple, Steve Wozniak!

Here are some of the highlights of the messaging track:

  • “Innovation and evolution of IBM MQ” – David Ware, Chief Architect of MQ
  • “Event Streaming: what does it mean and where does it fit” – how Kafka complements MQ and how CP4I brings them together – Jerome Boyer, IBM Kafka CTO
  • “IBM MQ Cloud Native – embracing container technology” – why and how people are building MQ deployments using Kubernetes and Red Hat OpenShift – Arthur Barr, IBM MQ container architect
  • “Running IBM MQ in AWS” – MQ on virtual machines, containers or SaaS – Callum Jackson, IBM Solution Architect
  • “IBM MQ and Apache Kafka, two designs, two behaviors” – how to choose – David Ware, Chief Architect
  • “Deep dive on what’s new in MQ on z/OS” – Mat Leming, MQ for z/OS Architect & Matthew Sunley, Product Management
  • “IBM MQ Cloud Native DevOps” – how you can integrate IBM MQ into a DevOps pipeline – Anthony O’Dowd, IBM Distinguished Engineer
  • “IBM Event Streams Deep Dive” – IBM’s implementation of Apache Kafka and more – Graeme Robert, Event Streams Technical Lead & Neeraj Laad, Event Streams Architect
  • “Controlling access to your IBM MQ system” – the different ways to authenticate and authorize users and applications, how different environments promote different approaches, from the traditional OS users, through centralized LDAP to mutual TLS – Rob Parker, IBM MQ Security Architect
  • “Deep dive into Native HA” – Jon Rumsey, MQ Senior Software Engineer
  • “Making the move to MQ Native HA using GitOps, containers, and AWS-ROSA, for a mission critical capability of a FinTech platform” – A CUSTOMER SESSION – Mark Lucente, Technical Director, DAI Source & Kent Brown, CTO, ModusBox

IBM TechCon 2022 is a virtual conference running from April 5th – 7th, 2022 at 10:00 AM – 3:00 PM EDT. You can register for it here.

What is Kafka Monitoring?

Originally published here on March 17, 2022.

Apache Kafka is a distributed messaging system that can be used to build applications with high throughput and resilience. It is often used in conjunction with other big data technologies, such as Hadoop and Spark. Kafka-based applications are typically used for real-time data processing, including streaming analytics, fraud detection, and customer sentiment analysis. There are many derivatives such as Confluent Kafka, Cloudera Kafka, and IBM Event Streams. These are all essentially the same but with additional functions and support included.

Kafka monitoring is the process of tracking and analyzing Kafka performance in order to identify and correct any issues, and it is essential for anyone running Apache Kafka in production. It can help you avoid data loss, service interruptions, and other problems that occur when Kafka is not performing properly. If you are not already monitoring Kafka, now is the time to start: doing so will keep your Kafka deployment and Kafka-based applications running smoothly and prevent small issues from becoming serious problems.

There are a number of Kafka monitoring tools and techniques that you can use to optimize your Kafka-based applications. Here are a few of the most common:

  1. Monitoring Kafka performance metrics

Each organization using Apache Kafka will have its own set of key performance metrics (KPMs) or key performance indicators (KPIs) against which it measures the performance of the system. For example, the energy used to run the clusters may be one metric, since it can be converted into a cost per kilowatt-hour for the business. Obviously, the lower the energy consumption, the less money the business spends. Keeping a tight rein on costs is vital for accountability and the bottom line, so being able to easily monitor this type of information is invaluable.
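For instance, the energy KPM above can be turned into a monthly dollar figure with simple arithmetic (the 12 kW draw and $0.15/kWh tariff are invented figures for illustration):

```python
def monthly_energy_cost(avg_power_kw, tariff_per_kwh, hours=730):
    """Cost of a cluster drawing avg_power_kw continuously for one month (~730 h)."""
    return avg_power_kw * hours * tariff_per_kwh

# A cluster averaging 12 kW at $0.15/kWh (illustrative figures):
print(round(monthly_energy_cost(12, 0.15), 2))  # → 1314.0
```

Tracking this number over time makes cost regressions visible alongside the purely technical metrics.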

  2. Tracking Kafka logs

Tracking Kafka logs allows the team to stay apprised of who is accessing which part of the system and to ensure that the proper rules for accountability and security are being adhered to across the organization. Kafka logs can also surface errors with specific codes, which development teams can trace to work out how to fix any problems that are flagged, in real time and with minimal interruption for end-users.
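As a sketch of that kind of log tracking, error codes can be pulled out of log lines with a simple pattern match (the log format and error-code scheme below are invented for illustration, not Kafka's actual layout):

```python
import re

# Match lines like: "... ERROR [Component] XXX-1234 message ..."
ERROR_PATTERN = re.compile(r"ERROR\s+\[(?P<component>[^\]]+)\]\s+(?P<code>[A-Z]+-\d+)")

def extract_error_codes(log_lines):
    """Collect (component, error code) pairs from log lines."""
    hits = []
    for line in log_lines:
        m = ERROR_PATTERN.search(line)
        if m:
            hits.append((m.group("component"), m.group("code")))
    return hits

logs = [
    "2022-03-17 10:01:02 INFO  [ReplicaManager] ISR updated",
    "2022-03-17 10:01:05 ERROR [ReplicaManager] KFK-1042 replica fetch failed",
    "2022-03-17 10:02:11 ERROR [Controller] KFK-2001 leader election timed out",
]
print(extract_error_codes(logs))
```

The extracted codes can then be counted, alerted on, or routed to the owning team.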

  3. Investigating Kafka latency issues

Apache Kafka is designed to have very low end-to-end latency, which means that information transmitted from one end should arrive at the other almost instantly. If there is a problem with latency, this smooth transactional system will not work as well as expected and may cause problems for the end-user. Possible solutions for latency or lag include adding more Kafka servers to the cluster to boost the available computing power; this can often be done on the fly and in a matter of minutes, depending on the setup being used. There are also numerous ways to optimize and reconfigure client apps so that latency is reduced and performance is improved.
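One practical way to quantify that latency is to compute end-to-end percentiles from produce/consume timestamps. A minimal sketch (the timestamps are synthetic; real ones would come from message headers or broker metrics):

```python
import math

def latency_percentile(latencies_ms, pct):
    """Nearest-rank percentile of a list of end-to-end latencies (ms)."""
    ranked = sorted(latencies_ms)
    idx = max(0, min(len(ranked) - 1, math.ceil(pct / 100 * len(ranked)) - 1))
    return ranked[idx]

# End-to-end latency = consume time - produce time, per message (synthetic data).
produced = [0, 10, 20, 30, 40]
consumed = [5, 18, 29, 41, 55]
latencies = [c - p for p, c in zip(produced, consumed)]  # [5, 8, 9, 11, 15]
print(latency_percentile(latencies, 50))  # → 9 (median)
print(latency_percentile(latencies, 99))  # → 15 (tail)
```

Watching the tail percentile rather than the average is what catches the occasional slow message before end-users notice.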

  4. Troubleshooting Kafka connectivity issues

It is important to know whether problems are being caused by something central to the Apache Kafka installation or something outside of it, for example, internet connectivity problems on the client side.

The tools available allow monitoring and troubleshooting to drill down into some considerable detail in order to generate the most comprehensive picture possible of the connectivity situation across the client network.

Each of these techniques can help you troubleshoot and optimize your Kafka-based applications. By monitoring Kafka performance, you can ensure that your applications are running smoothly and meeting your requirements.

How can Nastel Technologies help?

An industry leader in the messaging middleware sector, Nastel Technologies was founded in 1994 and has over two decades of experience in offering versatile solutions to enable the most robust monitoring of your Apache Kafka installations. Partnering with some of the biggest names in the business has seen Nastel go from strength to strength. We have worked with some of the foremost companies in the world, including BlueCross BlueShield, Citi, and Dell Technologies.

Integration Infrastructure Management (i2M)

We offer a single pane of glass solution which gives an at-a-glance overview of all your Apache Kafka instances as well as connected middleware, applications, and operating systems. We believe that this is the complete management, monitoring, tracking, tracing, reporting, and observability solution for Kafka delivering Integration Infrastructure Management (i2M).

This powerful and versatile monitoring solution offers the ability for Quality Assurance and Development teams to manage and operate their own middleware environments while at the same time providing a suitable level of oversight and accountability. This allows the teams to move faster in terms of getting applications to market, with the added benefit that there is also security and transparency of the process for all involved.

This allows senior management to have the confidence that they have the full picture at all times and no problems are being obfuscated from view, while at the same time it gives the Development and QA teams the environment and space they need to ensure that the applications are the best that they can be. It is the sweet spot between retention of control over a project and giving the teams their freedom to create what is needed for the company to flourish.

Ensuring uninterrupted operation of all mission-critical services is at the heart of everything we do at Nastel. Reliability is one of our watchwords, and we provide out-of-the-box solutions that deliver excellent monitoring and observability across the board, to help you minimize any downtime in apps and services.

We have pre-built dashboards to help detect and deal with some of the most common problems, including ISR (in-sync replica set) shrink/expansion rate, offline partitions, under-replicated partitions, incoming/outgoing byte rate, produce/fetch rates, request queue size, average request latency, failed requests, idle request handlers, ZooKeeper disconnects, unclean leader elections, and active connections.
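As a sketch of the kind of rule behind such dashboards, a few of those broker metrics might be checked against thresholds like this (the metric names and thresholds here are illustrative, not Nastel's actual rule set):

```python
def evaluate_kafka_health(metrics):
    """Return a list of alerts for broker metrics that breach simple thresholds."""
    alerts = []
    if metrics.get("under_replicated_partitions", 0) > 0:
        alerts.append("under-replicated partitions detected")
    if metrics.get("offline_partitions", 0) > 0:
        alerts.append("offline partitions detected")
    if metrics.get("isr_shrink_rate", 0.0) > metrics.get("isr_expand_rate", 0.0):
        alerts.append("ISR shrinking faster than expanding")
    if metrics.get("failed_request_rate", 0.0) > 0.01:
        alerts.append("elevated failed request rate")
    return alerts

sample = {
    "under_replicated_partitions": 3,
    "offline_partitions": 0,
    "isr_shrink_rate": 0.5,
    "isr_expand_rate": 0.1,
    "failed_request_rate": 0.002,
}
for alert in evaluate_kafka_health(sample):
    print(alert)
```

Real deployments would layer statistical baselining on top of fixed thresholds, but the structure of the check is the same.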

Our Integration Infrastructure Management (i2M) solution will handle all of your Kafka monitoring needs.  See here for more information on Nastel’s i2M for Kafka.

What is IBM Cloud Pak for Integration?

Originally published here on March 10, 2022.

IBM Cloud Pak for Integration (CP4I) is a platform that helps you quickly and easily integrate your hybrid cloud applications with the systems and applications that are important for running your business. It can help the different application teams and business units in your organization collaborate and ensure that they are working together at maximum efficiency. IBM Cloud Pak for Integration is designed around simplifying the way you connect to the IBM Cloud and to other external clouds, and to their applications and systems.

It includes a variety of integration tools and services that can help you connect your cloud applications with on-premises systems, software-as-a-service (SaaS) applications, and other cloud services. The integration tools include MQ, API Connect, App Connect Enterprise, DataPower, Event Streams (Kafka), and Aspera. See here for more on this. IBM Cloud Pak for Integration is an important part of the IBM cloud strategy.

Migrating to IBM Cloud Pak for Integration has never been easier with Nastel Navigator X which offers advanced “single pane of glass” oversight, providing monitoring of all transactions and middleware in CP4I and beyond.

CP4I is a container-based IBM middleware solution that enables you to integrate your applications quickly and easily with other IBM and non-IBM applications, regardless of where in the world they are hosted and whether on-premise or on any cloud. Nastel Navigator X for CP4I reduces business risk, improves visibility, and enhances alerting to speed up business development while at the same time saving money and increasing accountability, governance, and security.

The use of cloud-based computing power and applications means that even thin clients can utilize advanced software solutions that they previously may not have had the computing power to run. This allows less expensive equipment to be distributed to run the necessary applications, while the more expensive hardware is kept either securely on-site or entirely in the cloud rather than in a physical location owned by your company. Cloud resources are then used as and when needed rather than purchased outright, which can be an excellent cost saving, particularly across larger and more complex companies.

Below are just some of the advantages of IBM Cloud Pak for Integration:

  • Rapid integration: With IBM Cloud Pak for Integration, you can quickly and easily integrate your applications with other IBM and non-IBM applications, regardless of where they are hosted. IBM Cloud Pak for Integration includes a wide range of pre-built connectors to popular applications and services, so you can get up and running quickly.
    Seamless integration makes for a much more joined-up experience for all concerned. The chance to use pre-built connectors also ensures that time isn’t wasted on re-inventing the wheel and that solutions are available to deploy so much more quickly than they would otherwise ever have been.
  • Greater flexibility: IBM CP4I gives you greater flexibility to integrate your applications with the systems and data that are important to you. IBM CP4I supports a wide range of integration scenarios, including both synchronous and asynchronous integrations. And, because CP4I can be cloud-based, you can access it from anywhere, anytime.
    The ability to access cloud-native applications is a massive boon offered by the Cloud Pak, and it gives technical teams more room to maneuver so they can balance the needs of onsite and offsite teams far more seamlessly than before. This will only become more important given the increased prevalence of hybrid and remote teams in the wake of the Covid pandemic.
  • Lower costs: IBM Cloud Pak for Integration can help you lower your costs by enabling you to reuse existing applications and services, rather than building them from scratch. IBM Cloud Pak for Integration also helps you to optimize your application infrastructure, so you can run your applications more efficiently and require less processing power to do so, making cost savings in the process.
  • Data Security: One of the major advantages of using IBM Cloud Pak is that all of the data is transferred using enterprise-grade encryption, meaning that wherever your team members are based, they will all be able to access the right data streams and tools, securely and safely from their device.

Why choose Nastel Technologies?

At Nastel Technologies we aim to make the whole process as smooth and seamless for our clients as possible. We specialize in the development and rollout of middleware for applications to connect to APIs and software platforms and utilize their capabilities to solve the problems that you are experiencing. As certified business partners of IBM, Red Hat, AWS and Microsoft, we have the experience and expertise your organization needs to optimize and streamline all your middleware requirements for smoother operation and security in the hybrid cloud and legacy environments.

We offer IBM Cloud Pak tools and IBM Cloud Pak for Integration tools with enhanced drill-down options and greater granularity than ever before, over and above the native CP4I tools, and expand the visibility beyond the IBM technologies. You will have the power to offer application teams the ability to visualize and modify their environment with no risk to other ongoing work in different teams across the organization. Monitoring of system and business performance, queries, and transaction rates across the board is simplified, making it easier to avert problems before they begin.

See here for more detail on the Nastel solution for Cloud Pak for Integration.

Hybrid Multi-Cloud Demands Holistic Observability

Originally published here on October 21, 2021.

As I said before, Speed is King. Business requirements for applications and architecture change all the time, driven by changes in customer needs, competition, and innovation and this only seems to be accelerating. Application developers must not be the blocker to business. We need business changes at the speed of life, not at the speed of software development.

Thirteen years ago, much of my time was spent helping customers rearchitect their monolithic applications into service-oriented architectures (SOA). We broke the applications into self-contained, reusable, stateless business functions that could be managed, modified, tested, reused, or replaced individually without impacting the rest of the application. This enabled business agility, reduced cost, and improved time to market.

Over the last ten years or so, companies have been exploring the benefits of cloud. Initially this was also quite monolithic, e.g. lifting and shifting large applications onto outsourced virtual servers, or outsourcing email or CRM to web-hosted providers. However, more recently we’ve started to see newer applications being developed with finer-grained components stored in the most appropriate place. Confidential data may be stored on premise, open APIs can enable public lookup services, and the CRM can be integrated with the marketing functions and the internal, cloud-hosted product warehouse. A single business application is typically spread across a hybrid cloud of different cloud providers and on-premise services.

These applications are at risk of sprawling so much that no one knows how they work or how to maintain them, and they become a blocker to innovation again. The drivers behind SOA are here again: applications need to be designed as combinations of smaller self-contained, loosely coupled, stateless business functions or services.

Management of these services is even more powerful than it was with SOA. They can be quickly pushed out by DevOps, moved between clouds, and they can be ephemeral, serverless, and containerised.

So, a single business transaction can flow across many different services, hosted on different servers in different countries, sometimes managed by different companies. The services are constantly being updated, augmented, and migrated in real time. The topology of the application can change significantly on a daily basis.

It’s amazing that we’ve had so many software development innovations to enable this rapid change in our lives. But what happens if something goes wrong? What happens when a transaction doesn’t reach its target? Who knows what the topology is supposed to look like? How can it be visualised when it’s constantly morphing? Who knows where to go to find the transaction? How can you search for it and fix it when it could be anywhere on any technology?

Nastel XRay addresses this. XRay dynamically builds a topology view based on the real data that flows through the distributed application. It gives end to end observability of the true behaviour of your application in a constantly changing agile world. It highlights problem transactions based on anomalies, and enables you to drill down to those transactions, carry out root cause analytics and perform remediation.

Many companies recognise all the benefits that hybrid multi-cloud can give, but know that it’s too risky to move their business forward if their monitoring and management software and processes can’t keep up. Nastel is here to help.

Was Cloud A Mistake?

Originally published here on August 12, 2021.

I just heard that businesses are repatriating cloud workloads.

I was told, “The history of IT is paved with dreams of lower costs, better performance, and higher reliability through migrations to new platforms and languages and new architectures and infrastructures.”

“Some of the biggest recorded IT waste has been when companies focus on the ‘next big thing’ and have issues achieving successful implementations. Any architectural change comes with risks, and if a company isn’t clear on the risks in terms of feature creep and associated cost and time overruns, they can sometimes decide to ‘de-migrate’”.

I’ve been thinking about this, and of course, as a cloud evangelist I have some cognitive dissonance. But if you want the TL;DR answer: I don’t think that moving to the cloud was a mistake.

As we’ve said before, the cloud is not new. Many of us who wrote IBM MQ at IBM began by writing IBM SAA APPLICATION CONNECTION SERVICES, or AConnS. This was middleware designed to facilitate the move to the then-new client-server model. The idea was to run the right workload in the right place and have the pieces work together in an optimum, holistic business application. It utilized the mainframe for its heavy data processing and the PC for its powerful graphical abilities. An application on the PC would call AM (Application Manager) to run mainframe processes, whereas an application on the mainframe would call PM (Presentation Manager) to display the data. Because of this meeting of AM and PM, the product was known internally as HighNoon.

Before that, we had the computer bureau, which offered pay-as-you-go utility computing access to the mainframe; it phased out when companies could afford their own mainframes or when the PC became sufficient.

The point is that you should run your application workload in the most appropriate place at that particular time. Without wanting to get into Schrödinger’s Cat, you can’t just say that a technology platform is right or wrong, you have to look at it in the context of the time. Technology is constantly evolving with invention and innovation, as are other factors such as cost, politics, and security.

At an enterprise level, the concern with cloud over the last 10 years has been around security. Companies have had a castle-and-moat security approach, meaning that everything is stored inside the company premises with a barrier around the outside to stop people from getting in. But with the arrival of COVID-19 and home working, they’ve had to quickly revisit this. We now have the zero-trust model and message-level security. Homomorphic encryption is also arriving much faster than was first thought possible, as is confidential computing. And quantum computing is coming along, which will deliver world-changing discoveries and innovation but, as it needs to run near minus 273 degrees Celsius, will need to be accessed as a cloud service.

So, those are changes that are making the cloud more and more appropriate. But on the other hand, the amount of data in the world is rapidly increasing: 79 zettabytes will be created in 2021, up from 64 zettabytes in 2020 and 2 zettabytes in 2010. iPhone photos and videos are getting ever larger and are stored in the cloud, and more 4K entertainment, education, and business content is served from the cloud by Netflix, YouTube, Zoom, etc. Shipping all this across the cloud is becoming increasingly hard. With the limit of the speed of light, we need invention in telecoms, or to right-size and put workloads in the appropriate place. With 5G, up to 100 times faster than 4G, we were preparing for a dramatic change in our lives. But then governments canceled contracts with the 5G provider Huawei due to security and privacy concerns around involvement with the Chinese government.

And then there’s the moving between cloud providers. If a cloud provider is accused of tax avoidance then a government tax agency needs to consider moving away from them. If a cloud provider is also a retail company then retail companies decide to relocate. If a cloud provider applies for a banking license then finance companies make strategic decisions to migrate from them.

Change is the only constant in IT. Reminiscent of the Sneetches, companies will move workloads to and from the cloud constantly but it’s not a wholesale move. Different parts will move as is appropriate. Parts will always be in transition, so there will always be a hybrid cloud.

Nastel can help customers move their middleware and message data from on-premise to the cloud, between clouds, and back again. And Nastel can monitor the whole end-to-end environment, giving a single view of the integration infrastructure with alerting on both IT availability and the business transaction. Most importantly, Nastel software is the fastest at adapting its monitoring when the integration infrastructure changes, which it does constantly. The topology views are created dynamically in real time based on the current view of the infrastructure, and alert levels are set based on ongoing statistical analysis (AIOps). There is no need for the monitoring environment to always be out of date, playing catch-up with the changing integration infrastructure. Nastel is here to support you in the ever-changing world of hybrid cloud.

Put security first in cloud migration planning

Originally published here on April 28, 2021.

Companies considering migration to a cloud environment usually focus first on infrastructure, from data centers to system performance to storage.

But even before you select a cloud environment, the fundamental issue to be considered must be security. Protecting that environment against inappropriate access is critical. And a deep understanding of all the vectors associated with the cyber kill chain is essential to ensure that any environment is protected. Clouds are inherently no more or less secure than any other environment, but with carefully crafted monitoring and vigilance, they can provide the highest level of security available.

It is not a surprise when customers that we work with choose Red Hat OpenShift as their cloud platform. It is built with a fundamental security model to ensure a secure environment. That environment includes continuous security across all aspects of the application deployment. Customers tell us that being able to control, defend, and extend their application platform’s security is one of the critical benefits they see with OpenShift.

Tied into this is the granular security provided by Nastel Navigator. Nastel Navigator offers self-service access to middleware environments running IBM MQ, Tibco EMS, and Apache Kafka (and derivatives). Security is a fundamental aspect of providing self-service: if users are allowed to serve themselves, they must only be able to do what they are authorized to do. Access control is critically important when crossing the various environments.

Even where just a single messaging middleware platform is in use, multiple versions spread across different servers, and even across other platforms, is a typical complexity issue, especially as we move to hybrid cloud. The nature of these messaging middleware systems typically demands that each server be connected to individually for configuration management purposes. And once you log in to a server, you can immediately access all of the individual queues, stream configurations, and data associated with that server. Since multiple business processes and teams may well use the same server, it is not appropriate to give individual developers direct access to the server. Instead, the developer typically requests changes through a messaging middleware administration team (aka a shared-services team). Each request requires approvals, and the changes must be modeled in a test environment before being delivered into production. This arrangement guarantees the integrity of the entire environment, but it adds complexity, time, and cost to the environment’s management. This heavy process impacts time to market.

When a company spins up an OpenShift environment, Nastel Navigator gives each development team access to its entire messaging middleware environment across the whole application stack, while maintaining privacy and security using access controls. Furthermore, it audits all changes and attempted changes. It doesn’t matter what mix of cloud or on-premise architectures is in use or what middleware versions are running; it can all be managed from a single pane of glass (SPOG) in a web browser, with a complete end-to-end business application view. One login gives users full access to exactly what they have rights to, with the administration team setting the rules. This setup means that users can manage their own configurations and messages, delivering solutions quickly without impinging on anyone else. Simple, fast, and secure.

Try Nastel Navigator for free from the Red Hat Marketplace.