Posts Tagged ‘Mainframe’

From the mainframe to the mobile or to infinity and beyond. IBM MQ is everywhere!

June 3, 2019

MQ on PiZero

Probably more than a decade ago, when I had a role in IBM marketing, I was looking for a phrase to describe how IBM MQ – and more widely the whole span of IBM integration solutions – could be used across the business infrastructure, and I came up with the phrase “from the mainframe to the mobile”. IBM MQ has been at the heart of the enterprise, running on mainframes, right from the early days. And MQTT support has been around for many years as well, placing MQ endpoints in sensors and mobile phones, and giving great breadth of coverage to businesses that want to exchange data reliably and securely across both servers and physical devices.

But recently, as part of a refreshed move to listen to and work with developers, the IBM MQ team in Hursley have gone a step further. With the examples above, we would typically expect the MQ Queue Managers to be running on the mainframe, on physical appliances, or maybe on Linux servers in a datacenter or in the cloud. The applications would be running on their own servers, with MQ Clients or MQTT clients at the endpoints connecting to the MQ Queue Managers over the network. But as part of this new initiative, we are demonstrating MQ Queue Managers running on the smallest servers yet – Raspberry Pi Zeros.

Our developer initiative is showing that MQ is simple to develop for and simple to deploy, and we can use both of these aspects to demonstrate the value of IBM MQ to anyone who is not yet familiar with messaging as a programming technique. One example is programming MQ using Scratch. Being able to show MQ running in a portable, non-threatening environment is also a great way to demonstrate its usefulness, especially when all the same MQ capabilities are in action. Our demonstration installs separate MQ Queue Managers on two separate Raspberry Pi Zero boards. And not just running, but running as a High Availability configuration, so that messages are preserved and the second Queue Manager takes over when the first one fails.
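The failover behaviour in that demonstration can be sketched in a few lines. This is not the MQ client API – just a minimal model of the idea, with made-up endpoint names – but it mirrors what an MQ client's automatic reconnect does when the active Queue Manager of an HA pair goes down:

```python
# Minimal model of client failover across an HA pair of Queue Managers.
# Endpoint names are made up; a real MQ client achieves this with a
# connection name list and automatic reconnect, not hand-written loops.
def connect_with_failover(endpoints, connect):
    """Try each endpoint in turn and return the first live connection."""
    last_err = None
    for host in endpoints:
        try:
            return connect(host)
        except Exception as err:
            last_err = err        # this instance is down; try the standby
    raise last_err                # every instance failed

# Simulated connect: the first Pi Zero has failed, the standby answers.
def fake_connect(host):
    if host == "pi-zero-1":
        raise RuntimeError("connection refused")
    return f"connected to {host}"

print(connect_with_failover(["pi-zero-1", "pi-zero-2"], fake_connect))
# → connected to pi-zero-2
```

The application sees one logical destination; which physical board is serving it at any moment is the messaging layer's problem, not the application's.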

The demonstration may be simple, but it is very effective, and what I find most compelling is seeing the MQ Queue Managers running on such a small piece of kit. IBM MQ is one of the most important software offerings in the world. Much of the world’s infrastructure depends on it: banking, insurance, travel and transportation, retail. You name it, most of the leading businesses in the world rely on MQ running and processing trillions of messages per day. But the perception is that MQ runs in the datacenter, or maybe these days in both the datacenter and the cloud.

What I would like us to think about is the idea of MQ running embedded in devices as small as the Pi Zero. If you can run a fully featured MQ Queue Manager there, then where else could you run it? What difference might it make to today’s infrastructure, or tomorrow’s, if IBM MQ were running in the smallest computing devices? Are there any use cases?

It’s important to point out that MQ running on the Pi Zero is not an officially supported implementation, and IBM has no plans to support it in the future. But sometimes it is great for us to think outside the box. So let’s have your ideas as to whether MQ would benefit from running in these smaller configurations. Either share here or reach out to me directly and I will try to comment with another blog entry later.

As well as the demonstrations mentioned above, IBM has been working hard on improving IBM MQ for developers. There was a recent release of the MQ Client for Mac OS. We added REST Messaging as an option for MQ. There is a tutorial called Learn MQ, and a badge for MQ Developer Essentials. And MQ Advanced for Developers has been available for free download since 2013.
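The REST Messaging option, for example, lets an application put a message with nothing more than an HTTP POST. The sketch below builds such a request; the URL shape follows the MQ messaging REST API, but the host, queue manager name, queue name and credentials are placeholders, so treat it as an illustration rather than a ready-made client:

```python
# Sketch of an MQ REST Messaging call. The URL shape follows the MQ REST API
# (/ibmmq/rest/v1/messaging/...); host, queue manager, queue and credentials
# below are placeholders, not a real deployment.
def build_put_request(host, qmgr, queue, body, port=9443):
    url = (f"https://{host}:{port}/ibmmq/rest/v1/messaging"
           f"/qmgr/{qmgr}/queue/{queue}/message")
    headers = {
        "ibm-mq-rest-csrf-token": "",   # header must be present; value may be blank
        "Content-Type": "text/plain;charset=utf-8",
    }
    return url, headers, body

url, headers, body = build_put_request("mqhost.example.com", "QM1",
                                       "DEV.QUEUE.1", "Hello from REST")

# Sending it is then a single POST with any HTTP client, e.g.:
# requests.post(url, headers=headers, data=body, auth=("app", "mypassword"))
```

No MQ client libraries are needed on the sending side, which is exactly the point for lightweight or polyglot applications.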

In Toy Story, Buzz Lightyear could fall with style but still set his sights on the stars. Let’s move from “the mainframe to the mobile” to “infinity and beyond” with IBM MQ.


(Buzz Lightyear is obviously Disney IP – image just used here for effect, and does not signify any ownership or endorsement.)

Don’t get caught out by clouds of hot air. IBM MQ builds reliable bridges in a multi-cloud world.

October 5, 2018


More and more businesses are realizing the value of moving to the cloud. There are as many reasons to move to the cloud, if not more, as there are different clouds. Any single business is likely to have already deployed to multiple clouds, both public and private. And different departments will have different priorities and success goals covering agility, availability, location, cost, and more. Certainly some businesses will be looking for the expected benefits of cloud but still want to run in their own data center using a private cloud architecture.


Central to these decisions are the business applications, which are already changing rapidly to benefit from this new deployment environment. Cloud-deployed applications typically scale more readily and may be built out of many cloud-specific common services, designed to maximize the positive aspects of deploying and running in the cloud – keeping you running into the future, not off a cliff.


There are, however, other important design points. If an application is built solely to use the tools and environment specific to a single cloud, then flexibility and freedom to change will be limited. Around 80% of businesses already admit to using more than one cloud provider, which creates a need for applications running on different clouds to connect together, as well as to any applications still running on-premises. Additionally, applications may need to use functions that are available across different cloud environments, in case they need to be redeployed on other clouds. And that is certainly important when it comes to the connectivity mechanism for data exchange between applications.


IBM MQ was originally built to connect applications running in different environments, allowing them to exchange data reliably and securely through a common, cross-platform interface. And this is exactly the challenge now being faced: applications built for different environments must connect and exchange data across different clouds, and into and out of the on-premises data center.


A strong benefit of IBM MQ is that all applications can drive their connectivity through a single, consistent interface. This not only simplifies application development, but also means an application can remain unaware both of where it is running itself and of where the applications it connects to are running.


As an asynchronous messaging layer, IBM MQ can buffer the connectivity between applications that run at different speeds. And with MQ running in every location, connectivity breaks between locations, and latency issues, can be handled by IBM MQ rather than by complex logic within the applications.
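That buffering idea is easy to model. The toy sketch below stands a plain in-process queue in for MQ, just to show how a fast producer and a slow consumer stay decoupled; real MQ adds persistence, security and cross-network delivery on top:

```python
import queue
import threading
import time

# Toy model of asynchronous decoupling: a fast producer and a slow consumer
# joined by a buffer, standing in for an MQ queue between two applications.
buf = queue.Queue()

def producer(n):
    for i in range(n):
        buf.put(f"msg-{i}")      # returns immediately; never waits on the consumer

def consumer(n, out):
    for _ in range(n):
        out.append(buf.get())    # drains the buffer at its own, slower pace
        time.sleep(0.001)

received = []
t = threading.Thread(target=consumer, args=(5, received))
t.start()
producer(5)                      # finishes long before the consumer does
t.join()
print(received)                  # all five messages arrive, in order
```

Neither side knows how fast the other runs, or even whether it is running right now; the queue absorbs the difference.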


IBM MQ can be deployed as an IBM-managed and hosted messaging server on IBM Cloud and AWS, or deployed and managed by customers on any public cloud. On-premises, IBM MQ can be deployed on mainframes, as a physical appliance, or on Linux, Windows and other servers, in containers or in VMs. This flexibility, combined with the persistence, security, reliability, scalability and high availability that many of the world’s leading businesses depend on, means that you can move to the cloud with confidence.


There is no better way to bridge between your applications and across the clouds than with IBM MQ.


When dinosaurs ruled the earth? They still do.

October 16, 2013


One of the odd things about having worked in IBM for 24 years now is that there are people I work with at IBM who hadn’t been born when I started working in IBM Hursley. And when I started, I was given a desk with a 3270 mainframe terminal on it, which weirded me out somewhat. At university, studying Computer Science, I had been used to Unix machines with large graphical displays. The closest I had come to a mainframe was the department VAX, and the various other minicomputers connected to the UK academic JANET network around the country, which we were happy to hack into in order to play MUD and MIST in Essex and Aberystwyth. I had assumed that mainframes were dead. And pretty much so did everyone else out in the world.

Funny thing was they didn’t die. They evolved. Just like dinosaurs did. Mainframes back in the 80s and early 90s were different beasts to those we see today – completely different technology – but still the same goal. Very high performance. Very high throughput. Very high reliability. Which, by an odd coincidence, is the same set of characteristics that businesses need for their core business systems. These aren’t systems that the regular public have much to do with, even though they interact with them every day. When checking their bank account, withdrawing money, booking a holiday, interacting with a large business in any way, you are driving work on a mainframe. You never see it, because it just works. Any failure you see would typically be on the front-end. If there was a failure on your ATM, it is likely a Windows (or similar) error screen you see, not a mainframe error message. These machines are invisible, ever present, running and running like the Duracell bunny. Running the world? I think they might just be. And it appears I am not alone in thinking that.

If you have applications on a mainframe, running your business world, then these applications won’t run in splendid isolation. They need to connect to the rest of your business – sharing data, completing orders, adding new customers. Ideal for these workloads, and for any new workloads, is WebSphere MQ. We have a specific offering for IBM System z mainframes – WebSphere MQ for z/OS – which is built to exploit many of the key features of our leading mainframes. It handles a million messages per second. It uses the Coupling Facility and Shared Queues to help you avoid ever losing messages. And of course it has tremendous robustness and security, ideal for ensuring your business can keep doing what it needs to do. Day-in, day-out.

And on October 15th 2013 we announced a new way to buy this offering – WebSphere MQ for z/OS Value Unit Edition. This offers the exact same product as the existing WebSphere MQ for z/OS, but is available as a ‘One Time Charge’ transaction, rather than being charged per monthly usage as the existing WebSphere MQ for z/OS product is. So now there is a choice of how to buy this leading messaging solution for z/OS – a monthly license charge, or an upfront purchase for new workloads deployed on new logical partitions that qualify for zNALC pricing. The dinosaurs just got a little more agile, a little faster. I guess they are evolving into birds.


More on Mainframe Modernization

March 23, 2009

Following on from the modernization topic, let’s answer a few questions about that area:

Q.> What are the architectural aspects of SOA that scare mainframe professionals, and what are the best ways to overcome those reservations?


A.> An important part of that question is whether SOA – or some aspects of it – does scare mainframe professionals. Mainframe professionals tend to believe in, and work passionately for, high-quality deliverables and levels of service, with strong controls to ensure this. I think the issues for most would be around the degree of change involved and how that might impact what the mainframe delivers to the business. SOA, if supported with the right tools, can actually deliver strong levels of control and increase business awareness of ongoing work and transactions. This can help to reassure mainframe professionals, as can the fact that including the mainframe in SOA helps to further demonstrate its continuing importance to the business.

Q.> How do WebSphere offerings fit in with offerings from IBM Rational? And will these work with other toolsets?


A.> The business/IT environment can be highly complex and unique to each organization. Depending on differing priorities, and on where each asset is in its own lifecycle, different decisions may be made as to what the key area of focus is and where to progress the business. In some cases that will require analysis and redevelopment of the application for modernization, based around the Rational tooling; in others you will see a more direct approach using WebSphere integration capabilities; but in many cases – probably most – there will be benefits from using a range of these capabilities over the longer term. This is one way in which IBM Services experts can assist: by reviewing your existing infrastructure and suggesting a roadmap as part of their SOA Healthchecks, helping you to identify when to use Rational and when to use WebSphere.

Q.> The key themes of IBM’s SOA initiatives have long been ‘Connectivity’ and ‘Reuse’. These terms would seem to take on new dimensions when talking about mainframe SOA projects. Briefly summarize how IBM views ‘Connectivity’ and ‘Reuse’ when including mainframe data or logic.


A.> Certainly from an IT perspective, reuse and connectivity have been and continue to be important issues. The assets one is likely to find on a mainframe tend to be extremely valuable, and therefore key to any wider reuse across the enterprise – and these assets may have been more difficult to reuse historically. Reuse, of course, is a two-way practice – information and data become involved more widely across the business, which demands a better connectivity implementation – they really are two sides of the same coin. And with the high-quality enterprise data and applications that exist on the mainframe, the connectivity infrastructure needs to be capable of delivering very strong and reliable connections. This is what the WebSphere solutions are built on.

Q.> Many mainframe SOA projects don’t get done because of two main concerns: (1) maintaining integrity and SLAs, and (2) visibility into the transaction flow from the mainframe to the outside. Summarize IBM offerings for addressing these concerns.


A.> IBM well understands these concerns. Both WebSphere Application Server and WebSphere MQ, the foundations for all IBM’s runtime offerings, are built on an absolutely rock-solid transactional implementation, enabling mainframe assets to be used and reused without the quality of service or the quality of data being impacted negatively. Instead, these traditional mainframe attributes are extended wider into the business – a positive, really, rather than a threat.

Q.> In today’s economy some long-term projects are being put on hold, or replaced with projects that offer quicker ROI. Does IBM offer templates for a 60 or 90 day mainframe SOA project?

A.> It is more important than ever not to get bogged down in long-term IT projects that do not offer clear benefits to the business, especially benefits that can be seen in the near term. Projects with overall benefits to the business are likely to need to be justified by a single project use – and once successful may then be rolled out more widely. For modernization of assets this reduces the scope of tasks and the level of effort, driving the focus onto fast implementations that show clear benefits. While it is still worthwhile to review which projects can reap the largest benefits, using efforts such as the SOA Healthchecks mentioned above, it is likely that initial projects will see results by selecting WebSphere MQ to simplify connectivity and to service-enable application interfaces. Not only is a WebSphere MQ project likely to be easier to cost-justify than a more substantial and involved implementation, but it will show rapid benefits in increased reliability, enhanced manageability and simpler application interfaces. Other ‘quick hits’ might come from WebSphere DataPower SOA appliances, as they are extremely rapid to configure and deploy, needing little in the way of further work; or another choice could be to put a foundation of governance in place with WebSphere Service Registry and Repository. By quickly finding and logging existing service assets, both developers and administrators can reap benefits in finding and tracking the use of existing assets, reducing exceptions and increasing the potential for reuse.