Posts Tagged ‘containers’

Is being consistent the same as being equal? Changing how your business works with IBM Cloud Pak for Integration

September 8, 2020

“A foolish consistency is the hobgoblin of little minds” is an interesting turn of phrase by Ralph Waldo Emerson. What does it mean, and how can we learn from it today?

Imagine trying to do calculations without an equals sign. What is on one side of the equation is the same as what is on the other side. But does being equal mean the same as both sides being consistent?

Things might seem the same but be different. Are they consistent? Are they equal? It depends how and why they are being used.

In the world of integration, solutions have been crafted, sometimes over years, to be works of art. Beautiful but complex. Valuable but fragile. However, this makes them slow and expensive to create, so you can only afford a small, limited number of these integration solutions. In today’s fast-moving world that’s not enough. Your business needs more. And now. It is no good working for months to connect two systems together, as the business need will change and evolve over that time. The business needs to respond with a solution as quickly as possible. Does it need to be a work of art? It needs to be consistent, producing reliable results, but it doesn’t need to be equal to a work of art.

Let’s look at Andy Warhol’s picture of a can of soup. The original canvas paintings are hugely valuable, but one reason the work is famous as art is that prints were made to replicate it. Are they the equal of the original artwork? No. But they are consistent, both with each other and with the original. This is much the same decision that many businesses face around deploying integration as containers. A business might want some art on the walls, but it would seem hugely foolish to spend millions of dollars on the original when a print can do the job, and be replicated around the corner and in the next building, spreading new joy, happiness and meaning to each person that sees it.

With a modern agile deployment of integration in containers, as enabled through Cloud Pak for Integration, a business can rapidly deploy containers on Red Hat OpenShift Container Platform to run integration consistently anywhere it is needed, and to address any need. The integration available to your business will transform from being that unique and expensive work of art, accessible only to a few parts of the business, to being a widely available, consistently valuable asset, used and cherished by all parts of the business. Additional integration can be added or changed quickly and cheaply as needed. Need different integration? Deploy it. Need a new picture? Hang it.

A modern IT infrastructure can be re-thought to allow integration to be provided by a consistent set of highly available integration capabilities that are quick to scale up and down, and easy to extend consistently to meet new needs. Equality is making something available to all, not making everything the same. This change in integration will drive a change in mindset, allowing new problems to be overcome and new opportunities to be addressed like never before.

Not a hobgoblin, but a new path through the wilderness, where none existed before.

Feast on more than a potato with IBM Cloud Pak for Integration

August 27, 2020


The humble potato might not look like much, but it is certainly a reliable food item for many around the world. There are even a number of instances of people eating nothing but potatoes as a diet. I have been reading a fascinating book (Rory Sutherland, Alchemy), which as part of an example looked at eating potatoes. Imagine being able to eat only one food item for the rest of your life. If you can only eat one thing, the potato can do the job, although you would likely get pretty fed up. However, if you could choose, say, six or ten items of food, then you would get a far more varied and balanced diet. You could have combinations of foods and eat an enjoyable feast every day. You might well still include the potato, or you might not. You would have a choice.


Now let’s consider integration software. There can sometimes seem to be as many choices of integration software and solutions as there are food choices. Certainly as many as you might find in a restaurant menu (Cheesecake Factory, I am looking at you).

And in the same way that you might have different menu sections with Salad, or Pasta, or Pizzas or Burgers, there are many different styles of integration, depending on what and how you are trying to integrate.

Sometimes the choice can be confusing. But it is good to have a choice. Some integration vendors have a single style of integration, and they will insist it is always the best way to solve any integration problem, including whatever integration need you might be trying to solve at that time.


The old joke goes, if all you have is a hammer, then every problem looks like a nail.

With integration, you need more than just a hammer. Sometimes you need to stream events at scale; Kafka would be great at that. Sometimes reliable once-and-only-once delivery is key, along with a high volume of messages; that might be a good time to use a leading enterprise messaging offering like IBM MQ. Perhaps, to drive new business opportunities, you need to publish and manage API calls and protect your business as requests are made through the firewall; then a solution such as IBM API Connect and IBM DataPower would be ideal. You need to connect different applications together, enriching and adding value to the data that flows between them? IBM App Connect would be a great choice. Or perhaps you are looking to ensure that different sites or different clouds can work together more efficiently, with more data exchanged faster, using an offering like IBM Aspera.


Hence the attraction of IBM Cloud Pak for Integration. Different integration needs call for different approaches. Of course, you can drive a screw with a hammer. But it is better to use a screwdriver. So why limit your choice of integration by listening to that vendor who tells you their single approach is all you need? You always want the right tool.


The right tool for integration gives you everything. Except a potato. You will need to provide that yourself. In the meantime, check out Cloud Pak for Integration. Here is a previous blog about the most recent release.


Icing on the cake – making Cloud Pak for Integration 2020.2.1 even better

June 26, 2020


In the UK version of ‘Who Wants to Be a Millionaire?’ there was a section where the host offered the contestant a cheque for the prize money they had won so far, and then pulled it away saying “We don’t want to give you that” to encourage the contestant to play on for more prizes. This week we are doing something similar with Cloud Pak for Integration 2020.2.1, the latest release, which reached general availability on June 26th 2020: we are making an additional announcement to highlight further features available in this release.

We have already seen a blog about Cloud Pak for Integration 2020.2.1. This was announced back in April, but with a GA date that was a couple of months away at the time.

Modern software offerings are often tremendously powerful. They typically have many features that never get used, not because the features aren’t useful, but because it would take too long to understand the feature, and users don’t have the time to spend to learn how to use them, when the improvements might be minimal or not directly needed to get value from the offering.

The challenge with an offering like Cloud Pak for Integration is therefore not simply to add new features, but to add smarter, better features. To focus on features that deliver such an improvement that they become a core part of using the product, with the benefits flowing clearly and easily.

A reminder that in this release we see the availability of Kubernetes Operators, which are designed to provide a cloud-native style of operation. By taking advantage of these, customers can start to exploit the power of Kubernetes for automated deployment and operation activities, making their CI/CD pipeline goals a reality. IBM Event Streams was able to take advantage of the Strimzi operator from the Cloud Native Computing Foundation.


In addition to the Operators, the new announcement, available to read here, calls out some additional features now available in the 2020.2.1 release that will provide real benefits. An innovative new feature is the introduction of ‘Mapping Assist’ for App Connect Designer. App Connect is our leading integration tool for enabling different applications, systems and data to connect and exchange value. However, to do this, the data must be mapped between the sources and targets. This can be not just time consuming, but also complex.


Years of experience and sample data have allowed IBM to add AI assistance to this task, providing suggested mappings. This rapidly accelerates a complex task, helping achieve value both faster and with potentially better outcomes.


Additionally, Cloud Pak for Integration can now take advantage of IBM Transformation Advisor which was previously only part of Cloud Pak for Applications. Many customers today may be getting entitlement to the Cloud Pak for Integration, but their applications are not designed for container deployment.


Transformation Advisor can help provide an assessment of the level of complexity of migrating these applications and also some guidance for how to modernize these applications.


By adding these new features and capabilities into Cloud Pak for Integration, there is increased value, and it will be easier to get value not just from buying Cloud Pak for Integration but from deploying and using it as well. Not just the icing on the cake, but the fruit on the icing as well.

Note a version of this blog also appears on the IBM Community site here.

Containers and modernization. Not a one-horse town for businesses using IBM MQ.

January 12, 2020


This blog was going to be on IBM Cloud Pak for Integration, following one I did last year. But I realized I needed to focus first more on modernization. This aligns with much of what I was going to talk about in my Cloud Pak blog, which I will revisit in another entry. For now, it is probably best to look at modernization specifically for MQ customers.

Modernization is not simply about moving to containers. A couple of years ago we would have had to caution that modernization wasn’t about moving to cloud. Stop jumping on the latest technology and seeing it as a solution to business problems. The solution to business problems is not simply changing our technology for today’s buzzword. That’s not to say containers can’t be part of a good solution, but simply touting containers as a solution rather than a technology that can be used when appropriate is not optimal.


Let’s review messaging modernization, and how it applies to MQ users today – of which there are many thousands. Some of these will have used it as a critical part of their infrastructure for more than two decades. It would be amazing if, across all these infrastructures, there weren’t improvements that could be made to these MQ deployments without throwing the baby out with the bathwater.


Some of the issue is that even though customers might be using a recent release of MQ, they are still using MQ in exactly the same way that they were many years ago. It’s like moving from house to house over the years, starting small and nasty, and moving through nicer and nicer homes but still with the same furniture. You end up in a beautiful and spacious house, but still with the same tatty armchair, single bed and peeling bookcase from the initial flat share. Using MQ for many years without modernizing your deployment is just like that.


MQ’s flexibility means it can be deployed in many ways. MQ has been deployed alongside each application instance, which is great for resilience and reliability but can lead to overhead both in costs, and in deployment times when scaling.

Another deployment approach is to have single MQ instances manage workload from multiple different applications. Then yet another style of deployment is having multiple instances of the same application using a single MQ instance. There are of course more combinations but those are certainly prevalent.


There is not necessarily anything wrong with those deployment styles, on whatever platform they are deployed on, but many customers don’t see them as being as responsive or as efficient as they would like for today’s infrastructure.


Modernization of MQ is likely to involve reviewing the way in which you deploy MQ in support of the applications that use it. In many cases, recommended practice would be to define and deploy more, smaller instances of MQ, with each instance or Queue Manager supporting an individual application instance. With virtualization and container technology, you are no longer limited to a single instance of MQ on a server; you can have multiple instances, each perhaps using only a fraction of a processor core. This provides decoupled deployment, with the ability to scale on a more granular level and to spread the workload both horizontally and vertically.
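To make the ‘many small Queue Managers’ idea concrete, here is a minimal sketch (expressed in Python purely to keep it illustrative and testable) of per-application container resource requests. Every name and value here is hypothetical – it mirrors the shape of a Kubernetes container spec, and is not a sizing recommendation:

```python
# Illustrative sketch: one small queue manager per application instance,
# each requesting only a fraction of a CPU core. The structure mirrors a
# Kubernetes container spec; all names and values are hypothetical.

def queue_manager_container(app_name: str, cpu_millicores: int = 250) -> dict:
    """Build a container spec fragment for a queue manager dedicated to one app."""
    return {
        "name": f"qm-{app_name}",
        "image": "ibm-mq:latest",  # placeholder image reference
        "resources": {
            "requests": {"cpu": f"{cpu_millicores}m", "memory": "512Mi"},
            "limits": {"cpu": f"{cpu_millicores * 2}m", "memory": "1Gi"},
        },
    }

# Four application instances, four small decoupled queue managers:
containers = [
    queue_manager_container(app)
    for app in ("orders", "payments", "stock", "audit")
]
print(len(containers), containers[0]["resources"]["requests"]["cpu"])
```

The point of the sketch is the granularity: each application gets its own lightweight queue manager, and scaling one application scales only its own messaging resources.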


Combine this modern, agile deployment style with greater insight into what is happening in the MQ system, through streaming of system events and logs to a choice of external tools, and with modern REST API based tooling to control and manage the environment, and MQ is completely transformed for today’s critical business problems.


Your business problems might seem to be different, but the challenge of moving data once and once only, with security and reliability remains a constant. There are some things you can rely on even when everything else is changing. IBM MQ remains the messaging solution you can rely on. Even when you have modernized your deployment. Both inside and outside containers.


Occam’s razor, or Keep It Simple, Stupid? The IBM MQ Appliance is the right choice.

July 29, 2019


Diagnosing technology problems can be hard. I have an old car that I drive in the summer, and to get music I plug my mobile phone into a cassette adapter to play through the car stereo. I had problems with it a year ago and replaced the cassette adapter and everything worked again, but a few weeks ago I had problems once more. The music started playing fine, but then it would stutter and the sound would break up, making it pointless to try to listen to it.


How should I go about trying to fix it when there are so many variables? Buy another cassette adapter? Or was it the old radio cassette player in the car? Or maybe it was the connection between the radio and the speakers? Or maybe there was a problem with the phone connection? Or the phone itself? Or the app on the phone I was using? This was not a problem I wanted to spend a lot of time on, but equally I didn’t want to just throw money at it if replacing parts wouldn’t fix the problem.


William of Occam told us, when choosing between hypotheses, to select the one with the fewest assumptions, so what I needed to do was reduce the variables to narrow down the cause of the problem. This idea of ‘reducing variability’ is one of the reasons for choosing the IBM MQ Appliance, and you might want to consider it alongside the other benefits the appliance can bring to your business.


Consider the deployment environment where you may install, configure and manage MQ on servers today. Many of our customers have MQ deployed on multiple machines, and although there is a move towards VMs and containers, there is a lot of variability in the servers themselves, the OS levels, and the MQ install and configuration; even VMs and containers can suffer from these problems. While not necessarily a conscious choice, the outcome is a lot of potential for complexity and confusion. This variability can become a serious issue when problems arise, and the issues multiply further once you scale up your deployment. MQ deployments tend not to condense down to just a single Queue Manager, for many reasons, and this means that Queue Managers proliferate. In the move to containers this ‘doubles down’, with each Queue Manager typically in its own container.



The MQ Appliance can be seen as another form of container, but for the MQ administrator it can be much simpler than multiple Docker containers, especially if you haven’t managed to consolidate to a single container image repeated in every deployment, because the MQ Appliance is a dedicated environment optimized for running MQ Advanced. And a single appliance (or a pair of appliances for HA) can run multiple MQ Queue Managers, with their resources defined and protected from each other.


Going back to the point of reducing complexity and variability, the MQ Appliances are built to be physically identical, with no additional code installable on them, and MQ as well as the operating system is held in firmware, updated as a complete ‘flash’ in a single operation. This can be done with confidence because the MQ Appliances we build and test the updates on are identical to the MQ Appliances in customer data centers, making it typically much easier to reproduce and fix problems, as long as they are within the MQ Appliance itself. This can also help identify problems that are outside the Appliance – such as with the networking, or even the cables.

You run MQ where it is most important to your business. It handles your most critical data. You expect it to run all day and every day, without interruption. It exchanges millions or billions of messages and it doesn’t compromise as it is designed to never lose a single message. When you hit problems, you want them to be as easy to diagnose as possible. You don’t want to have your deployment choices causing confusion, whether that is the weird mix of servers, or the range of operating systems you are running. And you don’t want your devops team blasting away and renewing your MQ container to ‘tidy up’ because the container had been running without stopping for 3 months.


IBM MQ is the most resilient, robust and secure enterprise messaging platform available. It is relied on by virtually every bank, every credit card company, insurance businesses, retailers, manufacturers, health care providers and the travel and transportation industry. The MQ Appliance is the deployment option that has been built specifically for it, and MQ has been optimized to run there. Once you have a High Availability pair of MQ Appliances up and running, handling maybe 200,000 persistent messages per second, you know it will be easy to reproduce that environment exactly by buying and deploying more just like them.


The problem I described at the start with the music in my car was caused by the phone. The app settings were set to optimize battery life, so it would start playing, and then the screen would switch off and it would try to use less battery which would mean it would stop and start the music. It took time to identify the problem, but at least I didn’t have end users or my CEO asking me what the problem was within seconds of it occurring. Maybe your business needs a reliable, repeatable high-performance messaging solution like the IBM MQ Appliance?

The best things in life are ‘3’. Now announcing IBM MQ V9.1.3

July 9, 2019


There are many times in old Monty Python sketches where the number 3 seems to come up. There is a three-headed giant in Monty Python and the Holy Grail. There is also a scene where King Arthur is trying to count to 3 to throw the ‘Holy Hand Grenade of Antioch’ but goes “1, 2, 5”.


As we move through the second set of Continuous Delivery releases, if we follow the same pattern for the 9.1.x releases as we did for the 9.0.x releases, then we would anticipate that the final release will be 9.1.5, but we aren’t there yet. We have just announced MQ V9.1.3 on all platforms, including the MQ Appliance. You can read the announcement letter for the distributed and Appliance offerings here. And you can read the letters for MQ z/OS here, and MQ z/OS VUE here. An important additional announcement to be aware of is that IBM is announcing the withdrawal of the separate MQ Advanced Message Security and MQ Managed File Transfer offerings on z/OS, with MQ Advanced for z/OS VUE being the recommended way to get these extended capabilities going forward.

For this blog I will call out a number of the key new features in MQ V9.1.3, and why I think they are important.

Let’s start with a feature that was first delivered in MQ V9.1.2, is of strategic importance and has been extended in MQ 9.1.3. This is a feature called Uniform Clusters and it allows MQ itself to balance application connections across multiple different queue managers. The initial release only supported this balancing for C applications and in this release the capability is extended to JMS applications. Why is this a useful feature? If you have multiple application connections to a set of queue managers, there is no easy way to ensure a fair distribution of workload across the queue managers. And then imagine what might happen if you remove or add queue managers for maintenance or to adjust available capacity. How can you rebalance workload, especially when new queue managers are being added? This feature allows MQ itself to be aware of the group of queue managers to spread the work across, and will take care of the balancing and rebalancing needed. As workload and queue managers become more dynamic with hybrid cloud deployments and containers, then this will become increasingly essential.
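As a rough sketch of how client applications can be pointed at a group of queue managers like this, the following Python builds a hypothetical JSON client channel definition table (CCDT) listing a connection to each queue manager in the group. The channel, queue manager and host names are all invented for illustration, and the exact file format should be checked against the MQ documentation for your version:

```python
import json

# Hypothetical JSON CCDT giving a client application a connection channel
# to each queue manager in a uniform cluster; MQ can then balance (and
# rebalance) the application's connections across them. All names below
# are illustrative.
ccdt = {
    "channel": [
        {
            "name": "UNI.SVRCONN",  # illustrative channel name
            "type": "clientConnection",
            "clientConnection": {
                "connection": [{"host": host, "port": 1414}],
                "queueManager": qmgr,
            },
        }
        for qmgr, host in [
            ("QM1", "mq1.example.com"),
            ("QM2", "mq2.example.com"),
        ]
    ]
}

# A client would be pointed at this file (e.g. via an environment
# variable or connection factory setting) rather than a single host.
print(json.dumps(ccdt, indent=2))
```

The design point is that the application names the group rather than one queue manager, which is what lets MQ move connections when queue managers are added or removed.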


Recent releases over the last year or so have seen new features enabling use of a REST API for administration, as well as for messaging. MQ V9.1.3 sees enhancements in both of these areas. The REST API for admin now allows ‘runmqsc’ commands to be sent as JSON input through the REST interface and to return JSON output. The JSON input and output make it much easier to send commands and to understand and act on their results. This will help more customers and vendors build new tooling, or update their existing tooling to be more powerful and dynamic and to use modern tooling frameworks.
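As a hedged sketch of what this looks like in practice, the following Python builds (but does not send) an admin REST request carrying an MQSC command as JSON. The endpoint path and payload shape follow my reading of the MQ administrative REST API, and the host name is illustrative, so check both against the documentation for your MQ version:

```python
import json
from urllib import request

# Sketch of an admin REST call sending an MQSC command as JSON; the
# response would also be JSON. Host name is illustrative, and the path
# and payload shape may differ between MQ versions.
qmgr = "QM1"
url = f"https://mq.example.com:9443/ibmmq/rest/v1/admin/action/qmgr/{qmgr}/mqsc"

payload = {
    "type": "runCommand",
    "parameters": {"command": "DISPLAY QLOCAL(*)"},
}

req = request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # The admin REST API expects a CSRF token header on POSTs.
        "ibm-mq-rest-csrf-token": "any",
    },
    method="POST",
)

# request.urlopen(req) would return a JSON document describing the
# command result; it is not executed here as it needs a live MQ server.
print(req.get_method(), req.full_url)
```

Because both directions are JSON, a script can feed the response straight into further automation instead of screen-scraping runmqsc text output.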

REST messaging offers the ability to send and receive MQ data without using MQ clients. Previously there was only the ability to use PUT and GET commands, but MQ V9.1.3 adds support for browsing messages.
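A minimal sketch of how the three messaging operations might map onto the REST interface follows; the URL layout is based on my understanding of the MQ messaging REST API, the HTTP verb mapping for browse is my assumption from the description above, and the host and queue names are invented:

```python
# Sketch: the same queue resource URL used with different HTTP verbs.
# Host, port, queue manager and queue names are illustrative; verify the
# verb-to-operation mapping against the MQ messaging REST API docs.
base = "https://mq.example.com:9443/ibmmq/rest/v1/messaging"
queue_url = f"{base}/qmgr/QM1/queue/DEV.QUEUE.1/message"

operations = {
    "put":    ("POST",   queue_url),  # send a message body to the queue
    "get":    ("DELETE", queue_url),  # destructively get the next message
    "browse": ("GET",    queue_url),  # read a message without removing it
}

for name, (verb, url) in operations.items():
    print(f"{name:6s} -> {verb:6s} {url}")
```

The appeal is that any HTTP-capable environment can now peek at a queue without installing an MQ client library.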


New enhancements in MQ Advanced include a number of updates for the MQ MFT feature. One of these enhancements extends the FTP protocol bridge to now support FTP servers that run on the IBM i platform. With MQ Advanced V9.1.3, customers who use FTP to move files into and out of the IBM i platform can have them intercepted by the FTP Protocol Bridge and moved into the MQ network.


And for those increasing numbers of customers using MQ Advanced container images, in Kubernetes environments there is now the option to configure multi-instance queue managers with active and standby pods, or to use a single resilient queue manager with Kubernetes and system monitoring for high availability.


For the MQ Appliance there is a useful enhancement to the HA and DR functions that builds on the capabilities previously available. Our MQ Appliance customers really appreciate the High Availability and Disaster Recovery configurations for the appliances. Now with MQ V9.1.3, notifications about HA and DR status, or changes in status, are written to the MQ Appliance system log. In MQ V9.1.2, the MQ Appliance system log could be configured to stream off the appliance, as with other MQ log targets. This combination of features allows third-party monitoring tools to detect HA and DR status changes and rapidly alert MQ Appliance customers to failover activity. An additional feature of the latest MQ Appliance update is to report the date and time when data synchronization was stopped or lost between Appliances in an HA pair, or in a DR configuration. This will be useful both for offline analysis and for DR restart consistency.

Read the official IBM blog by Ian Harwood about the new release here.


This latest MQ V9.1.3 will be available to download on July 11th 2019. Are you ready for ‘3-dom’?


Packed and ready to go. IBM Cloud Pak for Integration includes IBM MQ Advanced.

June 28, 2019


With holiday season coming up one of the challenges is always what to pack for your holiday. Are you sure you are bringing the right clothes? What if you go out for a nice meal and you haven’t brought the right outfit? You want to pack the right selection of clothes that you can mix and match to use in different combinations to meet any need. You wouldn’t sit on a beach in a business suit or wear your swimsuit to a fancy restaurant. But equally you don’t want a single item to wear that claims to be appropriate for everywhere, but actually is not a good choice for anywhere.


It is the same with many things. It’s said that if all you have is a hammer then every problem looks like a nail. But also, you don’t necessarily want to bring your entire toolbox when all you need is a screwdriver.


In the same vein, when I am talking with customers, I will often say that nobody uses IBM MQ on its own. After all, the applications send the messages, and MQ is simply providing them with a service. But it is an essential service, as IBM MQ provides reliability, security, high availability and more, and not simply the movement of messages. MQ is just one part of the set of tools needed to build and maintain a connected and integrated business, especially at such a rapidly changing and demanding time. You need the right set of tools for the job. Not too big a set, not too small. And tools that work together. That’s why IBM has been spending time and effort to pull together a number of related products into platforms that are designed to be more than the sum of their parts. These platforms are now called IBM Cloud Paks, and you will find IBM MQ Advanced as a part of the IBM Cloud Pak for Integration.


What else is part of the Cloud Pak for Integration, as well as MQ Advanced?

  • API Connect
  • App Connect Enterprise
  • Aspera High-Speed Transfer Server
  • DataPower Virtual Edition
  • Event Streams


It’s also important to understand that to bring these capabilities together within an integrated framework, additional work has been done to provide a single sign-on feature between the different offerings, as well as shared monitoring and logging.


Just as you don’t wear every item of clothing together, and you don’t need to use every tool in the toolbox for every job, you might find that you keep using the individual products in the same way as you did before. If you want to securely and reliably move transactions between applications as part of a payments solution, then you are likely to use MQ Advanced rather than Aspera. But equally, if you want to stream video files between London and Singapore, then you would choose Aspera.


But if you want to build a solution that reliably pulls together multiple pieces of data from different backend systems and presents them as a single integrated result to a user based on an API driven query, then you might use nearly every component to build the right solution with the right tools.


As well as the different capabilities and the integration between them already mentioned, there is also licensing flexibility. The offering itself is container-based, to allow for the rapid and repeatable devops style of deployment that many businesses are moving towards, and therefore the licensing provides you the flexibility to use different components and deploy them in containers at different times, swapping out as you replace one component for another.


But if you aren’t quite there yet with your container deployment strategy, you can still buy IBM Cloud Pak for Integration entitlement and choose to convert the entitlements to standalone deployments of the same core capabilities but deployed outside the container-based deployment environment. So if you are happy to deploy API Connect and DataPower Virtual Edition as container images, but don’t currently plan to deploy MQ Advanced in a container environment, then under IBM Cloud Pak for Integration entitlement you can deploy MQ Advanced on bare-metal, or in a VM outside of the container environment, but as part of the overall entitlement. Both comprehensive, and flexible. Just what you need in this fast-changing world.


With IBM Cloud Pak for Integration, including MQ Advanced, you are now ready for the days ahead. Secure, agile, reliable, robust, and highly available. It’s just up to you to know what you want to do.



Custom-build or container image? The choice is always yours with IBM MQ

May 10, 2019


Once upon a time (as all good stories begin) I was doing my final year project for my Computer Science degree. The project was based on the custom chip design software and systems we had access to. During the previous summer I had interned at LSI Logic (at the time a large custom chip designer and fabricator) and had written some code for them to lay out a custom resistor on the chip.


The goal I had been set was to lay out the resistor taking the smallest amount of silicon, within the parameters of the space on the chip with the resistive layer. After all, space on a chip was expensive, and it was critical to be able to do what was needed without taking up space that could be used for other tasks.

When discussing my final year project with my tutor, he suggested I redo that program for the chip design system at University, but also create a new program for a programmable logic array generator. For this, the goal would be for the user/customer to enter all the logic gate sequences they wanted, and the entire chip would be designed and laid out to meet the requirements.

This was a very different type of requirement. Lots of different individual components would be plugged together, but the outcome would be effectively an entire chip designed and ready for fabrication in seconds. Every component needed a separate design file, and they all needed to be created such that they would work together successfully and a new integrated design file would be built. Once the initial hard work of the component design was done, such that all the components would fit together, then it became easy, after a bit of coding, to build the output file based on the multiple required inputs. And entire custom chips would be ready to build in seconds.


What’s the relevance of this ancient history? When reviewing discussions with customers about deploying in containers, I was reminded of some of the design choices I made back in those days of project work. I have written previously here and here about containers. The programmable logic array generator is conceptually pretty similar to container deployment. The design generated will not be the most efficient, either in terms of layout or size. But it will be ready to go in seconds. And if you want to make changes, you do so and run it again, generating another design file, ready to go. Undoubtedly this is great, as long as you are happy not to go for maximum efficiency. And these days, when the goal is to minimize operational cost rather than to minimize hardware usage or maximize performance, this is a good trade-off.

The other part of my project – the resistor design and layout program – gives the other side of the decisions being made today. It was built to be as efficient as possible. It would have been possible to use more standard forms of resistor, but given the constraints and business goals, that would have used more silicon. And there are still lots of systems, or parts of systems, where performance, throughput and efficiency are worth the extra effort. So not everything is best served by a ‘one-size-fits-all’ approach. Sometimes you need to have just the right solution in place.


Looking around the connectivity segment, I see fewer and fewer solutions which give customers a choice. Everyone wants simplicity, but in order to build and deploy the right solution, you need more than just having a hammer and treating every problem as a nail.


IBM MQ is offered in multiple forms – as base or Advanced software to be configured and managed by the customer, as container images (IBM Cloud Paks) for deployment in environments like IBM Cloud Private and Red Hat OpenShift, as a physical appliance, as a native z/OS offering, or as a public cloud hosted and managed solution. Even as a part of the IBM Cloud Integration Platform. The combination of these deployment options, as well as the proven technical advantages IBM MQ has over other messaging offerings is designed to provide customers with the best solution for all possible use cases.


With IBM MQ you get to have your cake, and eat it.



(Sadly I have misplaced my project write-up or I would have included the original design images from the PLA and resistor programs)

Not just the great State of Texas but the Integrate State of Texas. Learn more about IBM Cloud Integration Platform and IBM MQ at the 2019 Integration TechCon

March 15, 2019


I have written about MQ and containers before here and here and let’s face it, I will be writing about them again in the future. Just about every customer is trying to build a modernization strategy which today means a container strategy.

Containers are a great fit for stateless workloads: micro-services, but also other applications. Integration capabilities like API Connect are likewise stateless, and so are easily provisioned and cleaned up through a container/DevOps approach. And Kubernetes is widely used as a deployment and orchestration environment for containers. However, you might have questions about how a stateful product like IBM MQ, which holds critical persistent data, fits into a container deployment strategy.


To help with this, IBM is investing to provide modern container-based offerings, such as IBM Cloud Integration Platform, which, like a number of our other offerings and platforms, is based on IBM Cloud Private.

Offerings like the Cloud Integration Platform are designed not just to offer containerized versions of the individual products, but also to provide additional integrated services that enable shared single sign-on, logging and monitoring across the integration capabilities, with more to come.


One of the capabilities within the Cloud Integration Platform is MQ Advanced. This is delivered as an IBM Cloud Pak, providing the production ready containerized image, along with a Helm chart and full IBM support for the product and the environment.

However, let’s review why you might be moving to container deployments of various offerings, as it could be for many reasons:

  • Faster deployment
  • Simpler provisioning
  • Faster, easier maintenance
  • Deployment in any environment
  • Lightweight images
  • Rapid version migration
  • Reduced operational costs
  • etc. etc.


Layered above these reasons will be some of the benefits provided by the individual integration offerings that might be deployed in containers. And then there are the further benefits that could be available if taking advantage of integrated offerings.


That sounds like a lot to consider. Wouldn’t it be great if there was some easy way to get insights not just into the individual products but into the IBM Cloud Integration Platform? And it would be best if there was lots of technical information, not just high-level content. So welcome to the 2019 IBM Integration TechCon, held in Grapevine, Texas, April 30th to May 2nd this year. Hear from technical experts in all the IBM integration products, including multiple deep topics on MQ, MQ Appliance, MQ on Cloud and MQ Advanced, and also sessions on IBM Cloud Integration Platform.

Register today

No need to watch the clock with new hourly container-based pricing for IBM MQ

September 18, 2018


Your business is 24×7 now. No downtime. No waiting for additional machines to be brought online. Simply deploy more containers to handle more workload. On-premises or on cloud, either public, or private, or both.

After all, workload varies through the day, across the week, throughout the month, and over the year. If that’s the case, then you are either already adjusting, or planning to adjust, the deployed resources to match the workload.

Deploying into containers makes this easier than ever. Co-ordinating these deployments into a managed environment using Kubernetes is the basis of private cloud infrastructures such as IBM Cloud Private or Red Hat OpenShift.

But building resources based on container deployments provides flexibility to make use of these assets either in these private clouds, or in public clouds, or anywhere really.


If that’s all understood, then let’s imagine a scenario of a workload that only runs at certain times. Perhaps it is payroll, or ordering new supplies. The applications, and their supporting infrastructure, such as the IBM MQ messaging layer, only need to run for a relatively well-defined period of time. And with the applications, as well as the MQ queue managers, ready for container deployment, you can quickly and easily get everything up and running, starting the containers with just the right number of cores to match the expected workload. But how will this be licensed? For transient workloads, buying perpetual entitlement might not be the most cost-effective answer – especially if the applications only run for a short time, or if there are brief spikes in the workload which require a large amount of resources but only briefly.


The good news is that today IBM announced, for IBM MQ V9.1 as well as IBM WebSphere Application Server V8.5.5 and V9, a new licensing approach that reflects this type of deployment, making it easier for businesses to gain not only operational benefits from new container deployments but cost benefits as well.


There are 2 main aspects to the announcement. The first is that the licensing required for container deployments is based on the number of cores allocated to the container, rather than the size of the physical or virtual machine where the container is running, as reported by the deployed product to an instance of IBM Cloud Private that monitors and reports on this usage. The second is that this monitoring and reporting is fine-grained enough that the reports include the number of cores used for the number of hours, and license entitlement is available based on Core-Hours.


Let’s go back to the example. We are deploying a WebSphere Application Server (WAS) application into 8 cores of containers, supported by 4 cores of IBM MQ in containers. This needs to run on a Sunday, and runs for 20 hours solid before finishing all tasks, cleaning up all data and shutting down the containers. If this runs 4 times per month, then each month you would need license entitlement for 4x20x8 = 640 core-hours of WAS, and 4x20x4 = 320 core-hours of MQ. The new licensing allows customers to buy blocks of 1000 Core Hours of WAS or MQ. And the included free entitlement to an instance of IBM Cloud Private, just for reporting on this usage, allows customers to track and report on their usage against their purchased entitlement.
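The monthly arithmetic is simple enough to sketch in a few lines. This is purely illustrative – the function and variable names are mine, not any official IBM calculator:

```python
# Core-hour arithmetic for the example workload (illustrative only).

def core_hours(runs_per_month, hours_per_run, cores):
    """Total core-hours a workload consumes in a month."""
    return runs_per_month * hours_per_run * cores

# 4 Sunday runs of 20 hours, on 8 WAS cores and 4 MQ cores.
was = core_hours(runs_per_month=4, hours_per_run=20, cores=8)
mq = core_hours(runs_per_month=4, hours_per_run=20, cores=4)
print(was, mq)  # 640 320
```

Against a purchased block of 1000 Core Hours, a month like this consumes 640 of the WAS entitlement and 320 of the MQ entitlement.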

But suppose at the end of the year things are busier and the workload needs to run for longer – maybe 60 hours per week instead of 20, although still on the same number of cores. Then each month would be 4x60x8 core-hours for WAS and 4x60x4 for MQ – that is, 240 hours per core in these busier months. The good news is that the licensing in this announcement includes a cap on monthly usage of 160 hours per core. So although each core used for this workload runs for 240 hours in such a month, the reporting mechanism will make it clear that only 160 hours of usage per core is chargeable.
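The cap can be folded into the same kind of back-of-envelope calculation. Again, this is a hypothetical sketch, not IBM’s actual metering code:

```python
# Chargeable core-hours with the 160-hour-per-core monthly cap (illustrative only).

MONTHLY_CAP_HOURS_PER_CORE = 160

def chargeable_core_hours(runs_per_month, hours_per_run, cores):
    """Core-hours billed for the month, after applying the per-core cap."""
    hours_per_core = runs_per_month * hours_per_run      # e.g. 4 x 60 = 240
    capped = min(hours_per_core, MONTHLY_CAP_HOURS_PER_CORE)
    return capped * cores

# Busy month: 4 runs of 60 hours.
print(chargeable_core_hours(4, 60, 8))  # 1280 for WAS, not the uncapped 1920
print(chargeable_core_hours(4, 60, 4))  # 640 for MQ
# Quiet month: 4 runs of 20 hours stays under the cap, so nothing changes.
print(chargeable_core_hours(4, 20, 8))  # 640
```

The cap applies per core, so the saving scales with the number of cores in the deployment.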


This new licensing includes support for both the new Virtual Processor Core Hours part, as well as the existing VPC month part for MQ. If you are deploying MQ or WAS inside IBM Cloud Private, rather than simply using it for reporting on usage, then PVUs can also be used for container deployments.


You can now get on with deploying and using IBM MQ and WebSphere Application Server to run your business, however and wherever you need. And the licensing will help you better reflect your container usage. Try to contain your excitement!