Posts Tagged 'IBM'

How governments can tap into cloud and Internet of Things

This post was originally published on ThoughtsOnCloud on October 2nd, 2014.

In my previous post, I discussed some of the innovations that can be achieved by governments using cloud. I presented on this topic recently at the Westminster eForum Keynote Seminar: Next steps for cloud computing. At the session I went on to explore mobile, the Internet of Things and some changes in the skills needed for cloud.

The session abstract asked the following question about mobile: As device processing power increases, yet cloud solutions rely less and less on that power, is there a disconnect between hardware manufacturers and app and software developers? I think this misses the point. Cloud isn’t about shifting the processing power from one place to another; it’s about doing the right processing in the right place.

U.S. Marine using GPS capabilities on a handheld device to check his location.

At IBM, we talk about the nexus of forces of cloud, analytics, mobile and social (CAMS) and we split information technology (IT) into Systems of Record and Systems of Engagement.

The Systems of Record are the traditional IT—the databases that we’re talking about moving from the existing data centers to the cloud. And, as I’ve discussed in a previous post, moving into the cloud means that we can perform a lot of new analytics.

With mobile and social we now have Systems of Engagement. We have devices that interact with people and the world. Because of their fantastic processing power, these devices can gather data that we’ve never had access to before. For example, these devices make it really easy to take a photo of graffiti or a hole in the road and send it to the local council through FixMyStreet in order to have it fixed. It’s not just the additional processing power; it’s the new instrumentation that this brings. We now have a GPS location so the council knows exactly where the hole is.

In a case I discussed in my previous post, this would have made it a lot easier to send photos and even videos of Madeleine McCann to a photo analytics site, to assist in the investigation of her disappearance in 2007.

We’re also working with Westminster council to optimize their parking. The instrumentation and communication from phones helps us do things we’ve never done before, moving us onto the Internet of Things and making it possible to put connected sensors in parking spaces.

With connected cars, we have even more instrumentation and possibilities. We have millions of cars with thermometers, rain detection, GPS and connectivity that can tell the National Weather Service exactly what the weather is with incredible granularity, as well as the more obvious solutions like traffic optimization.

Let’s move on to talk about skills. IBM has an Academic Initiative in which we supply software to universities at no cost, and IBMers work with university administrators and professors on the curriculum and even act as guest lecturers. For Imperial College, we’re providing cloud based marketing analytics software as well as data sets and skills, so that they can focus on teaching the subject rather than worrying about the IT. Since computer science curriculum in schools is changing to focus more on programming skills, we can offer cloud based development environments like IBM Bluemix. We’re working with the Oxford and Cambridge examination board on their modules for cloud, big data and security.

Students at the lecture

To be honest, it’s still hard. Universities are a competitive environment and they have to offer courses that students are interested in rather than ones that industry and the country need. IT is changing so fast that we can’t keep up.

Lecturers will teach subjects that they’re comfortable with, and students will apply for courses that they understand or that their parents are familiar with. A university recently offered a course on social media analytics, which you’d think would be quite trendy and attractive, but it had only two attendees. It used to be that universities would teach theory and the ability to learn, and industry would then hire the graduates and give them the skills. Now things are moving so fast that industry doesn’t have the skills and is looking to graduates to bring them.

Looking at the strategy of moving to the cloud and the changing role of the IT department, we’re finding that outsourcing the day-to-day running of the technology brings about a change in skills needed. It’s less about hands-on IT and more about architecture, governance, and managing relationships with third party providers. A lot of this is typically offered by the business faculty of a university, rather than the computing faculty. We need these groups to work closer together.

To a certain extent, we’re addressing this with apprenticeships. IBM has been running an apprenticeship scheme for the last four years. This on-the-job training means that industry can provide hands-on training with the best blend of up-to-the-minute technical, business and personal skills. It has been very effective, with IBM winning Best Apprenticeship Scheme awards from the Target National Recruitment Awards, the National Apprenticeship Service and Everywoman in Technology.

In summary, we need to be looking at the new things that can be achieved by moving to cloud and shared services; exploiting mobile and the Internet of Things; and training for the most appropriate skills in the most appropriate way.

How do you think governments should utilize cloud and the Internet of Things? And what changes do you think are needed to equip students for a cloud based future? Please leave a comment to join the conversation.


IBM MQSeries Professional Certification

The value of professional qualifications in the computing industry is a contentious issue, but there is no doubt that they are here to stay. Five years ago IBM introduced the Certified MQSeries Engineer qualification. This has since mutated into three other qualifications, and more have arrived too.
IBM’s motivation for introducing the MQSeries certification is debated. Back in 1995 it was difficult for them to achieve sales, as customers were rightly concerned about the availability of skilled support staff. Also, there was concern that Microsoft’s imminent MSMQ could become serious competition. Microsoft already had a certification programme which enabled them to boast about the number of people with skills in their products. IBM’s introduction of the MQSeries certification helped alleviate this difficulty. However, Peter Goss of MQSeries Product Marketing responds to this suggestion saying, “Your statement is not correct. The tests in the MQSeries Family Certification programme – all 8 of them – were introduced to allow individuals to validate the success of the education they have taken and experience they have gained working with the products. And it allows Business Partners and customers to validate that their employees have gained the education, knowledge and experience necessary to do their jobs.”
There is also debate over who should pay for the sitting of the tests. It is in IBM’s interests to have lots of qualified people in the market. If employers demand that job candidates are certified then people have no choice but to take the tests and often to attend supporting IBM education. In order for a software vendor or consulting company to receive support from IBM it must become an IBM Business Partner. One of the requirements for a company to be a member of IBM’s PartnerWorld programme is that they have a certain number of employees who are certified.
Peter goes on to say, “Tests therefore have real value to an individual or to a company, and so it’s no surprise that it costs money to take the test.”
Another area of concern is that employers could use the fact that an employee has failed the test as justification for dismissal. For this reason some employees refuse to take the test until they are absolutely sure that they are ready, but this can cause conflict.
Having said all this, the Certification Programme is here to stay and if you achieve the qualification you are undoubtedly in a better situation than if you do not.
The IBM Professional Certification Programme is the roadmap IBM provides for achieving internationally recognised certification by attending education classes and taking examinations that demonstrate your abilities and experience. IBM says that its Professional Certification Programme “offers a business solution for skilled technical professionals seeking to demonstrate their expertise to the world.” The programme is designed to objectively validate skills and demonstrate proficiency in the latest IBM technology and solutions. By giving you and your employer confidence that your skills have been tested, they say, the certification can help you excel at your job, deliver higher levels of service and technical expertise, and move to a faster career track.
For optimum benefit, the certification tests must reflect the critical tasks required for a job, the skill levels of each task, and the frequency by which a task needs to be performed. IBM has comprehensive, documented processes, which ensure that the certification tests remain relevant to the work environment of potential certification candidates.

A typical price for a test is £105 plus VAT, although it is sometimes possible to get a “You Pass We Pay” deal depending upon your position (for example, as a business partner). Sometimes, at IBM conferences, it is possible to sit a test for free. The prerequisite IBM MQSeries courses typically cost around £300 plus VAT per day and last between one and five days.
IBM justifies the cost of taking the test and associated education with a return on investment study. They say that the study indicated positive business results for the companies taking the tests, such as improved revenue (profitability), efficiency (productivity) and customer satisfaction (credibility).


At Level 1 you can take tests to qualify as an IBM Certified Specialist in MQSeries, MQSeries Integrator or MQSeries Integrator V2. This title certifies you to perform, with limited assistance, basic operational services such as planning, configuration, installation, support, management and maintenance, or administration of the product.
At Level 2 you can qualify as an IBM Certified Solutions Expert in the same products and also in MQSeries Workflow. This title demonstrates breadth of basic operational services skills in more than one environment or demonstrates depth of advanced operational services skills such as customising, integrating, migrating and tuning, in one environment.
Also at level 2 is the IBM Certified Developer qualification for MQSeries. It demonstrates the capability to plan and design an application requirement and build a prototype.
As an example, the people who should consider applying to be IBM Certified Solutions Experts in MQSeries are people responsible for planning and architecting software solutions and designing applications based on MQSeries. They should have the knowledge available from attending the MQSeries Technical Introduction course along with the Application Programming course, the Domino Connections course, and the Advanced System Design and Connections course. They should also have practical experience of implementing MQ with transaction management and database products, systems management, prototyping IT solutions, basic programming concepts, IT security concepts, plus the ability to gather business requirements and to translate those requirements into IT solutions, and skills in implementing systems on multiple computing platforms.

Each of the qualifications requires the taking of at least one test, with the recommended prerequisite study and experience.
The mapping between the qualifications and test names is not immediately obvious, but it does help you to understand what the qualifications signify:

Qualification | Test | Prerequisite
MQ Specialist | Installation and Configuration | (none)
MQ Solutions Expert | MQ System Planning and Design | (none)
MQ Developer | MQ Application Design | (none)
MQSI Specialist | MQSI Installation and Configuration | MQ Specialist
MQSI Solutions Expert | MQSI System Planning and Design | MQ Solutions Expert
MQSI V2 Specialist | MQSI V2 Implementation | MQ Specialist
MQSI V2 Solutions Expert | MQSI V2 System Architecture and Design | Any MQ certification
MQSeries Workflow Solutions Expert | MQSeries Workflow System Planning and Modelling | (none)

Details of the individual test objectives, describing the topics potentially covered on the tests, can be found on the certification web site.

Each test specifies as a prerequisite course the MQSeries Technical Introduction, which is available from IBM Learning Services as a class lesson or as a computer-based training CD-ROM. They each require at least familiarity with the MQ manuals and publications and knowledge of basic MQSeries functions and facilities. Each certification also has other specific courses associated with it, the details of which are available on the tests web page.

This of course is the contentious part. The qualification is designed to show that you have adequate experience to cope with real life situations. Taking courses designed to get you through the test and missing out the real life experience is defeating the object.

When you think you may be ready to take the test it is worth trying a sample one. This way you will know whether it’s worth spending the time, money and stress on the real thing. Also you’ll get a feel for the style of questions and may even learn more through the experience. Sample tests are available from the certification web site.

Example Question
Here is an example question. It is a very simple question but the answer is not so easy. See the web site for the answer.

 An MQSeries application has created a queue with the following conditions specified on the DEFINE QLOCAL command:

When will a trigger message be generated?

 A. No trigger messages will be generated
 B. when the queue contains 5 messages
 C. when the queue contains 10 messages
 D. when the queue contains 5 priority messages
 E. when the queue contains 10 priority 5 messages
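Questions of this type turn on the triggering attributes of DEFINE QLOCAL. Purely as a hypothetical illustration (this is not the definition from the question above, and the queue and process names are invented), a triggered local queue might be defined with:

```
DEFINE QLOCAL(EXAMPLE.QUEUE) +
       TRIGGER +
       TRIGTYPE(DEPTH) +
       TRIGDPTH(10) +
       TRIGMPRI(5) +
       INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) +
       PROCESS(EXAMPLE.PROCESS)
```

With TRIGTYPE(DEPTH), a trigger message is generated when the number of messages of priority TRIGMPRI or higher reaches TRIGDPTH – exactly the kind of interaction between parameters that makes the answer options above less obvious than they look.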

It is common to claim that IBM has simply got the answer wrong, or that there is more than one correct answer. This is a frequent criticism of the tests in general, but it usually shows that the person has not fully understood the question or the reason for the answer. Such questions are a good way to identify who deserves the certification, but unfortunately they give the tests a bad reputation.

Tests can be scheduled with either an IBM Learning Centre or Prometric, Inc. (formerly Sylvan Prometric). IBM Learning Services can be contacted on 0845 758 1329. It may be possible to register as late as the day before the test. You will be required to pay at the time of scheduling and if time permits, you will be sent a letter of confirmation.

You will need to take two forms of identification. At the conclusion of the test you will receive a full score report with section analyses. If you fail, you can re-register and take the test again; the retake should have different questions.

Each of the MQSeries tests has questions that relate to MQSeries Version 5.1 for distributed platforms and MQSeries for OS/390 Version 2.1. Each test has between 60 and 80 questions and is of multiple-choice, closed book format. Most last 90 minutes but some are 75 minutes.
Some people say it is possible to pass and still know very little, or to study, cram and pass the test and then forget it all. Some object to the tests covering areas that are not relevant to their work; however, you do not need to score 100 percent to pass, so you are not expected to know every area of the product – the test shows that you have a reasonable breadth of knowledge.

A common criticism is that the tests measure retained knowledge: people say that in real life they would not need to know it, because they would look up the information in a manual or call IBM. But the certification specifications explicitly say that they certify a person as self-sufficient, performing tasks with limited assistance from peers, product documentation and vendor support services. There is only so much that can be achieved by someone who needs to rely on documentation and support. In the crisis of a live production problem there may not be time to look things up or to wait for a call to be returned, and an employer will be looking for someone who can cope under that sort of pressure. It is also necessary to have a complete understanding and retention of the fundamentals in order to follow the more complex MQ and application scenarios – it would not be possible to master MQSI if you had to keep referring back to MQSeries itself. That said, the tests also cater for the other point of view by asking where you might expect to look up certain information.

Three to six weeks after you pass, IBM will send you the Certification Agreement and a welcome package, which includes your certificate. After accepting the Certification Agreement you may open the package and use the IBM Professional Certification title and trademark. The package contains a certificate, a wallet-sized certificate, a lapel pin and details of the certification logo and how you can use it. The logo looks like this:


You are asked to keep the testing centre informed of any changes in your personal details. They in turn keep IBM informed. Apparently this means that they can keep you informed of all programme information. However, I have never received anything other than the original pack.

MQSeries skills are still scarce at the moment and so people may not feel the need for certification, but as more people learn MQ skills, certification may become more useful. It is in the interests of both employers and good employees to find a way to filter out the people who claim abilities that they do not have. The certification system is by no means perfect but it is still a good step towards achieving this.
Sam Garforth
IBM Certified Specialist – MQSeries
IBM Certified Developer – MQSeries
IBM Certified Solutions Expert – MQSeries
© S Garforth 2000

Beyond WebSphere Family Monitoring

The integrated application processes that drive real-time business must flow smoothly, efficiently and without interruption. These processes comprise a series of interrelated, complex events which must be tightly managed to ensure the health of your business processes. To achieve this, deep visibility into the core metrics of business processes is crucial. But this level of insight is impossible when you’re limited to static events. This paper explores the next level in dealing with the increasing complexity of inter-application communication in the real-time enterprise: the ability to dynamically create events based on actual conditions and related data elements across the enterprise.

Stanford University Emeritus Professor David Luckham has spent decades studying event processing. In his book The Power of Events he says that we need to “view enterprise systems based on how we use them – not in terms of how we build them.” This is an important paradigm shift which can move us towards the goal of business activity monitoring.

Currently we have enterprise systems management (ESM) built on a foundation of event based monitoring (EBM). It provides availability with automation, using event based instrumentation, threshold based performance, and alerts and notifications. This is good, but not good enough for the evolving needs of real-time business. Gartner says that event driven applications are the next big thing.

Typically in event based monitoring, events come from many sources – web servers, databases, applications, network devices, mobile devices – through the IT infrastructure of middleware, application servers, applications and the network. The events can be seen by the data centre. The idea is that the ESM system will monitor, detect, notify and take corrective action, either automatically or with manual intervention. But there are different business users, each with their own perspectives.

Event based ESMs can’t really take good corrective action as they can’t correlate the event with the effect.

Here’s a typical example of a business activity. The head office requests a price change in all the stores. It updates the price in the master database, checks the inventory levels and then transmits the change to all the stores. But what happens if something goes wrong?

You, or your customer, will be asking lots of questions. 1. From an IT perspective, where is the problem or slowdown – that is, which queue or channel has the problem that caused the change not to happen? 2. At the business level, why didn’t the change take place at all the stores? 3. From a business perspective, what is the impact? 4. You may know that a channel has gone down, or that a price change hasn’t happened, but do you know what else has been affected?

These are problems that you don’t really have time to address. EBMs don’t know what to do about this out of the box because they are designed at a technology level, and configuring them to understand the business is too hard. With constantly changing business and application needs, you can’t adapt your monitoring and automation fast enough.

Here is another real life example: a stock trade or share purchase. The customer says they want to buy something. You check their account, then the stock availability and price; the customer agrees to buy at that price; then you process and complete the trade and update the stock price.

This is straight-through processing. The transaction has to be done atomically, as one unit of work, within a certain time; these are serially dependent transactions. But what happens if they don’t complete in time? Again you will ask the same questions. From an IT perspective you will want to know the cause of the problem. The affected business units will also want to know – and only the correct business people should be notified. And you will want to know the business impact, to see the problem from a business perspective.

So what is the business impact? At a high level, it’s loss of money. You’ll know that the transaction didn’t take place, but you won’t know the real root cause, so everyone will be blaming everyone else, wasting time and damaging morale and relationships. During this time you are not delivering the business service. You have damaged your relationship with the customer, as you haven’t delivered what they needed and you can’t even explain why. So you’re going to lose your competitive advantage.

To get this root cause analysis and business impact analysis, customers normally have to put a lot of resource into customising an event based solution, or into developing their own monitoring solution, but this is not flexible enough and is not feasible in an increasingly complex technology environment. So we have to ask how we can have a monitoring solution that is flexible enough to keep our business systems productive, rapidly and constantly adapting to a changing IT and business environment.

So in summary, the big questions are: Why is it so hard to detect and prevent these situations? How can we make the transition to real-time, on-demand monitoring? How can we align our IT environment with business priorities to achieve the business goals?

These problems arise because we’re using event based monitoring. Monitoring at an IT or technology level is preventing us from achieving business activity monitoring.

David Luckham refines this further when he talks about business impact analysis – overcoming IT blindness. We should be looking at complex events, correlating or aggregating the various simple events and metrics to see the business impact. He talks about the event hierarchy and the processing of complex events. MQ has about 40 static events, such as queue full and channel stopped. But there are also events from WAS, DB2 and others; there are metrics like channel throughput, CPU usage and, crucially, time; there are home grown applications which need monitoring; and there are business events and metrics. All of these need to be taken into account to produce a higher level complex event. For example, if a queue is filling up at a certain rate, you can calculate that in a certain amount of time you will receive the static, simple queue high event – but by then it will be too late. Instead you need to aggregate the metrics queue fill rate, queue depth, maximum depth and time to generate a complex event.
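The aggregation described above can be sketched in a few lines. This is a minimal illustration, not product code: the function names, event name and threshold are all hypothetical.

```python
# Sketch: turning raw queue metrics into a pre-emptive complex event,
# instead of waiting for the static "queue full" event to fire.

def time_until_full(depth, max_depth, fill_rate_per_sec):
    """Seconds until the queue hits maximum depth at the current fill rate.
    Returns None if the queue is static or draining."""
    if fill_rate_per_sec <= 0:
        return None
    return (max_depth - depth) / fill_rate_per_sec

def complex_event(depth, max_depth, fill_rate_per_sec, warn_window_sec=300):
    """Raise a warning event only if the queue will fill within the window."""
    eta = time_until_full(depth, max_depth, fill_rate_per_sec)
    if eta is not None and eta <= warn_window_sec:
        return {"event": "QUEUE_FULL_PREDICTED", "eta_seconds": eta}
    return None

# A queue at depth 4,000 of 5,000, filling at 10 messages per second,
# will be full in 100 seconds - well inside a five-minute warning window.
print(complex_event(4000, 5000, 10.0))
```

The point of the sketch is that the warning fires while there is still time to act, using only metrics (depth, maximum depth, fill rate) that the static event model already exposes.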

So the problems with the current state of event based monitoring are:

Event causality – there’s not enough information to identify the root cause of the problem. The price update didn’t happen, but why? MQ failed, but why? Maybe it was a disk space problem. Maybe it was caused by something in a different part of the environment – a different computer or application.

Interpretation – looking at it at a simple technology level we don’t have enough information to see the effect of this simple problem on the different parts of the enterprise – to see the effect from the perspectives of the different users, and to notify them and resolve the problems it causes for them.

Agility – Out of the box ESM or EBM solutions cannot possibly know the business requirements. They require a lot of customisation when you initially set them up to be able to understand the effects of different problems on the different users and then constant customisation as the technological and business environment constantly changes. They are constantly playing a game of catch up that they can never win.

Awareness – Because they are only looking at individual points of technology they have a blindness to the end to end transaction. They cannot know how a simple technology problem affects the rest of the technologies or businesses.

Another shortcoming of the current generation of systems management is false positives. This is a big problem with simple event based monitoring: you get a storm of alerts. The data centre sees an event saying a queue is full. They call the MQ team, who say not to worry about it – it’s just a test queue, or a queue for an application that hasn’t gone live yet. After the first 24 times this happens, the data centre stops paying attention to queue full events. Then the 25th one arrives, for an important transaction that needs to be dealt with immediately, and they ignore it. The company loses business, and it’s as if there were no monitoring solution at all. What we need is finer granularity in queue monitoring, based not just on whether a queue is full but on which queue it is, who the application owner is, what time of day it is, which applications read from the queue, and so on.
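One way to picture that granularity argument is a small routing rule that consults queue context before raising an alert. This is a hypothetical sketch – the queue names, context table and routing function are invented for illustration:

```python
# Sketch: enrich a raw "queue full" event with context before deciding
# who (if anyone) to alert, instead of paging the data centre every time.

QUEUE_CONTEXT = {
    "TEST.ORDERS": {"environment": "test", "owner": None},
    "PROD.PAYMENTS": {"environment": "production", "owner": "payments-team"},
}

def route_alert(event):
    """Suppress events on non-production queues; page the owner otherwise."""
    ctx = QUEUE_CONTEXT.get(event["queue"],
                            {"environment": "unknown", "owner": None})
    if ctx["environment"] != "production":
        return None  # test or unknown queue: no alert, no alert fatigue
    return {"page": ctx["owner"], "reason": f"{event['queue']} is full"}

print(route_alert({"queue": "TEST.ORDERS", "type": "QUEUE_FULL"}))
print(route_alert({"queue": "PROD.PAYMENTS", "type": "QUEUE_FULL"}))
```

The context table is exactly the business knowledge that, the article argues, an out-of-the-box EBM cannot have; a real solution would keep it current as applications come and go.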

It’s not enough to provide monitoring data; it has to be information, interpreted in a way that is useful. This is the difference between events and metrics, or data and information. What we need is dynamic, metric based monitoring: metrics used to create complex events – in-context, user specific events that are pre-emptive, raised before a real business problem happens, and actionable. The problem isn’t getting events; it’s correlating events with rules. And you need to watch more than the vendor’s built-in events, because on their own they can’t be enough.

There is something called the ‘aha phenomenon’. When a problem occurs you spend ages trying to identify the cause, looking at all the queues and middlewares and applications. All the time you’re looking the technology’s not running and the business is losing money. Eventually you find it and say ‘aha!’ Then what happens? Can you easily adapt your monitoring environment to make sure it doesn’t happen again or that you at least don’t have to search again when it does happen. In other words you need dynamic monitoring – where the monitoring environment of event correlation, metric selection and rules application can be constantly updated.

So let’s expand the vision of what we need. We need a unified approach like the service oriented architectures that are so popular for applications – a reusable monitoring architecture, not a siloed, isolated tool, which is the antithesis of SOA. It needs to be a business oriented, on demand solution. It needs to be modular, extensible, adaptable, scalable and reusable. We need instrumentation for all the different applications and middlewares. And the environment status needs to be shown to all the stakeholders from their own perspective, for their own roles and responsibilities.

By applying service oriented architecture principles we can achieve the business activity monitoring and business agility that we really need: a business centric solution aligning IT to the business processes, so the business can actually benefit from the technology rather than being constrained by it. Using this you can see the impact of a problem from all perspectives, and you can rapidly adapt to the changing business and technological environment, learning from mistakes. Currently 80 percent of IT resource is consumed by maintaining the technology; this architecture frees those resources for other projects, developing the business and making more money.

In summary, this unified model gives business and technology continuity and automatic recovery. It gives very granular complex events allowing root cause analysis and business impact analysis by being aware of the business processes affected by the technology and displaying the information in a business context giving an improved quality of service.

Of course, there are pros and cons to being standards based. Some service oriented architectures, such as .NET and Web Services, are still in flux. We need unified SOA security across all platforms. And being proactive in the way described requires polling, which must be configured carefully to avoid performance problems.

But what I’ve proposed here is a unified model, a base for business activity monitoring. As David Luckham says, “The challenge is not to restrict communication flexibility, but to develop new technologies to understand it.” So I propose that the key to dealing with complexity and delivering true business activity monitoring is a unified model based on a service oriented architecture. This doesn’t happen out of the box – no vendor or developer can know all your requirements – but it is a framework that is modular, extensible, adaptable, scalable and reusable enough to facilitate what we need.

© Sam Garforth   2005

Closing the holes in MQ security

In choosing the default settings for MQSeries, IBM has had to strike a balance between making the product easy to use as quickly as possible and making it secure straight out of the box. In more recent releases, they have put more emphasis on ease of use and so relaxed the default security settings. This is one of the reasons why administrators must now reconfigure their systems if they require them to be secure. This article examines some of the potential security holes of which administrators should be aware, and also describes ways in which administrators can close these holes.

Default channel definitions

There are a number of objects, such as SYSTEM.DEF.SVRCONN and SYSTEM.DEFAULT.LOCAL.QUEUE, that are created by default when you install and configure a queue manager. These are really intended only as definitions to be cloned for their default attributes in the creation of new objects. However, a potential infiltrator can exploit the fact that they are also well-defined objects that probably exist on your system.

Originally, on distributed platforms, the definition of channel SYSTEM.DEF.SVRCONN had its MCAUSER parameter set to ‘nobody’. IBM had so many complaints from users who couldn’t get clients connected that it has now changed this parameter to blank (‘ ’).

The MCAUSER parameter specifies the userid that is checked when an inbound message is put on a queue. Setting this field to blank means that the authority of the userid running the channel (usually ‘mqm’) is checked. In other words, messages are always authorized to be put on all queues.

The thinking behind putting ‘nobody’ in this field is that no one should be allowed to put messages on queues unless the administrator actually changes settings to allow them to do so. Unfortunately this default setting was not documented and so users could not work out how they were required to change things.

There are many users who don’t need client channels and so haven’t even read this section of the manual. They’re unaware that nowadays, with default settings in place, anyone who can connect to their machine (for instance, someone on the same LAN) can start a client channel to them called SYSTEM.DEF.SVRCONN and have access to put messages on any of their queues and – often more importantly – to get messages from any of their queues.

This is not an entirely new problem – even the original systems suffered from it, as there are other channels, such as SYSTEM.DEF.RECEIVER and SYSTEM.DEF.REQUESTER, that have always had a blank MCAUSER. With a little effort, users have always been able to connect to these and put messages on queues using full authority. If the queue manager is the default one, the infiltrator needs no prior knowledge of the system.

As previously mentioned, these definitions are used to provide defaults for the creation of new channels. This means that, in many systems, newly created channels also have MCAUSER set to blank.

It is recommended that the following commands be executed using RUNMQSC to close this loophole:

alter chl(SYSTEM.DEF.SVRCONN) chltype(SVRCONN) trptype(LU62) +
      mcauser('nobody')

alter chl(SYSTEM.DEF.RECEIVER) chltype(RCVR) trptype(LU62) +
      mcauser('nobody')

alter chl(SYSTEM.DEF.REQUESTER) chltype(RQSTR) trptype(LU62) +
      mcauser('nobody')

Do not start MQ using root

It’s worth noting that much of this section is described in Unix terms, though it’s applicable to most platforms, once Unix terms are substituted with their equivalents.

All MQSeries components should be started using the MQSeries administration userid (mqm). Many system administrators like to make the system administration userid (root) a member of the mqm group. This is understandable, as they can then run all of their administration commands, not all of which are for MQ, as root. However, this is a very dangerous thing for them to do as they are effectively giving root authority to all of the members of the mqm group.

For example, if the trigger monitor of the default queue manager is started by root using default parameters, a member of the mqm group whose workstation has IP address ‘myhost’ can enter the following commands using RUNMQSC:



DEFINE PROCESS(MYPROCESS) APPLICID('xterm -display myhost:0 &')
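For the exploit to fire, MYQUEUE itself must also be set up for triggering. A sketch, assuming the root-started trigger monitor is watching the default initiation queue:

```
DEFINE QLOCAL(MYQUEUE) TRIGGER TRIGTYPE(FIRST) +
       PROCESS(MYPROCESS) INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE)
```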

and then enter the command:

echo hello | amqsput MYQUEUE

This causes a terminal to appear on their screen giving them a command line with root authority from which they have full control of the system.

Similarly, if a channel is started by root, or the channel initiator starts a channel and the channel initiator is started by root, then any exits called by the channel will run as root. So the mqm member could write and install an exit that again spawns a root-authorized xterm.

The receiver channel could have the same problems, for example, if started as root by the listener, inetd, or Communications Manager.

A good start to overcoming this problem is to remove root from the mqm group. However, on some systems root will still have access to the strmqm command and, while it may look as though it has started the queue manager, there may be unexpected errors later when it performs commands for which the OAM checks authority.

The system administrator may find it useful to create commands that only root is authorized to run, which switch to the mqm userid before performing the instruction. For example, the following shell script could be called strmqm and put higher in root’s PATH than the real strmqm:

#!/bin/sh
su - mqm -c "/usr/lpp/mqm/bin/strmqm $1"

Only use groups on UNIX OAM

The setmqaut command is used to set access to MQSeries objects. Among its parameters you may specify ‘-p PrincipalName’ or ‘-g GroupName’ to indicate to which users you intend this command to apply.

For example, the following command specifies that all members of the group tango are to be allowed to put messages on queue orange.queue on queue manager saturn.queue.manager (note the use of the continuation character, ‘‰’, in the code below to show that one line of code maps to more than one line of print):

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -g tango +put

Similarly, the command:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p theuser +put

specifies that the userid theuser should be allowed to put messages on queue orange.queue on queue manager saturn.queue.manager. On most platforms this works fine. However, the implementation on Unix systems is that:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p theuser +put

specifies that all of the members of theuser’s primary group are allowed to put messages on queue orange.queue on queue manager saturn.queue.manager.

This can be very dangerous, as a system administrator can give access to a particular user unaware that in doing so he has accidentally also given access to many other users. User theuser may also be unhappy to be blamed by administrators for actions that they believe only he was authorized to carry out.

The way around this problem is never to use the ‘-p’ parameter on Unix. The same effect can be obtained by specifying ‘-g PrimaryGroup’, which is a lot clearer.

Only create objects as mqm on Unix

As described above, MQSeries on Unix does all of its security checking using the primary group of a userid rather than, as you might expect, the userid itself. This has other knock-on effects.

When a queue is created, access to it is automatically granted to the mqm group and to the primary group of the userid that created it. It’s quite reasonable for someone designing the security of an MQSeries infrastructure to assume that access to all queues has been forbidden to all users except members of the mqm group. From here, the administrator would specify additional security settings that need to be made.

This works fine when queues are created either by the mqm user or by someone whose primary group is mqm. The problem arises when another user whose primary group is, for instance, staff, but who is also a member of mqm, defines the queue. In this case authority is also granted automatically and unintentionally to all members of the staff group.

This also applies to the creation of queue managers. If a queue manager is created by a userid whose primary group is staff, then all members of staff by default have access to the queue manager.

The simplest solution to this problem is to enforce a policy whereby no userid other than mqm may create MQSeries objects or queue managers. An alternative policy is never to make a userid a member of the mqm group unless this is its primary group.

OAM uses union

The Object Authority Manager uses the union of the authority settings that it finds. So, to take the example above a step further, suppose a queue, orange.queue, is created by a userid whose primary group is staff. At some point later it is found that another userid, worker, who shouldn’t have access to the queue, is nevertheless able to access it. worker is a member of staff but has team as his primary group. To resolve this problem an administrator might try running:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p worker -all

However, this will not solve the problem. While it will remove team from the authorization list, members of staff, including worker, still have access to the queue.

This also applies to other platforms, such as NT, that implement the ‘-p’ parameter. Although the problem of primary groups is not present, it should be realized that, while:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p worker +all

gives full access to worker,

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p worker -all

only forbids all access if worker is not a member of any authorized groups.


On some platforms, such as Unix, group membership is cached by MQSeries. This means that, if a new user joins a group and needs access to MQSeries objects, the queue manager needs to be restarted. Similarly (and probably more importantly), if a user leaves the team or company, it is not sufficient just to remove them from the group. The user retains access to objects until such a time as the queue manager is restarted.

Only enable things if you need them

This is no more than common sense, and the defaults are such that this won’t cause problems, but for the sake of completeness the following points are worth mentioning:

  • Automatic channel definition

Enabling the automatic definition of channels increases the ability of machines to connect to your queue manager with little prior knowledge of your system, so this should be enabled only if definitely required.

  • Command server

The command server is very powerful and can render weak security even weaker. For instance, on a system running MQSeries version 2 in which users do not have the authority to use the client channel, they could still connect a sender channel to the receiver called SYSTEM.DEF.RECEIVER and put messages on the command server’s input queue requesting it to create a channel and transmission queue back out. This could then be used for further breaches of security. If you’re not confident of your system’s security, it’s advisable to start the command server only when it is needed and to grant users only the minimum required levels of authority to it.


Sam Garforth

SJG Consulting Ltd (UK)                                                     © S Garforth 1999

Using RACF and the OAM for end-to-end security

It’s a commonly held view that MQSeries is not a secure product and that to install it in your network infrastructure is to give hackers free rein. In this article I’ll demonstrate that this isn’t necessarily so.
Security is a general term that covers such tasks as sender and receiver authentication, encryption and privacy, non-repudiation, and message integrity and data authentication.
When communication between companies occurs, MACs, digital signatures, and public key encryption may be employed to enforce security, perhaps by means of third-party products, such as Baltimore Secure MQ from Baltimore. However, many companies consider that, when it comes to communication within an enterprise, such measures are not required, as all machines in their infrastructure are managed by administrators whom they trust. All that’s required is to provide administrators with the means to prevent unauthorised users from accessing the network and creating messages, while still providing access to authorised users.
This can be done using third-party products, many of which have the potential to secure communications completely. Nevertheless, there is a lot that can be achieved using just the security mechanisms provided by MQSeries itself – that is, using RACF on MVS and the Object Authority Manager (OAM) on distributed platforms.
This article covers the policies that a company would need to put in place and the configuration that administrators would need to implement in order to establish an acceptable level of security in an environment where administrators are trusted but users are not.
Consider the following example:
For a number of years a large company has successfully used techniques such as file transfer (FTP) to carry out point-to-point communication. However, in order to improve speed of development they decide to move to an MQSeries-based infrastructure, also deciding to use a central hub managed by a trusted group, as this yields benefits in manageability and allows new connections to be added quickly.
Most of the machines are based in a secure machine room (while it’s possible to log on to the machines from outside the room, a discussion of how to secure this type of access is beyond the scope of this article). Each business unit owns one machine. Business units don’t trust users (who could be disgruntled employees), and they don’t trust administrators of machines belonging to other business units, though they do trust their own administrators. Most security problems, such as ‘sniffers’ on the communication lines, were addressed when FTP was set up (possible solutions include using encryption at the communications layer, splitting SNA packets into so many parts that they are virtually impossible to read, and using security calls within applications themselves).
Consider a situation in which A and B need to communicate with each other using MQSeries, as do C and D (see Figure 1). Most of the security issues that exist in this environment also apply to FTP, though a major new one is introduced.


With the environment shown above, if business A decides that it needs to talk to business D, the infrastructure is already in place and only the application development needs to be done. This is a very strong reason for using MQSeries and a hub environment. However, it also introduces the problem that an unauthorised person on A could send a message to D.
In order to secure the end-to-end connection, including preventing the generation of unauthorised messages, it is necessary to carry out the measures detailed in this article.
Firstly, security needs to be set up at the level of the communication layer. SNA bind or session-level security can be used to ensure that, when an SNA bind request comes from A, the hub knows that the request really does originate at A. This is a default with most communications packages (but not ones from Tandem) and involves providing the same password at each end. Obviously the password must be kept secret and should be accessible only by the machines’ administrators. Something similar can be done for TCP/IP using secure domain units or some form of Virtual Private Network.
We’ve now ensured that no boxes are connected to the hub that shouldn’t be connected to it. However, it’s still possible for a user on A to define a queue manager called C and a channel on A called C.TO.HUB (for example, by knowing the naming convention or by querying the hub’s command server), and then connect to the hub by impersonating C and having messages routed to D.
If the channel is a sender/receiver channel, the only way around this is to use a security exit provided by a third-party product (such as Baltimore, mentioned earlier). However, if A is a secure machine, users won’t have the authority necessary to add these definitions to the system. An alternative is to use requester/sender channels. This is similar to a call-back system: the hub acts as a requester and thus needs to initiate the conversation. It calls out to the known LU/IP address stating that it wants to start a channel. A, acting as the sender, would then initiate the channel back to the hub. If A were to try to start a channel to the wrong requester, the request would not be accepted. Similarly D, acting as a requester, could initiate a conversation asking the hub, as a sender, to call it back. As the hub’s sender channel contains D’s CONNAME and calls it back the most that A could do in this set-up is to get the hub to call D.
So now we have a system where we can be confident that all messages coming to the hub on any channel are from the machine that they should be from.
The hub is merely a queue manager, looking after transmission queues and running the associated channels. Each transmission queue is named after the queue manager that it points to. The next problem, as mentioned above, is that a user on A could, by default, do an MQPUT specifying as its target the queue manager of D. The message would be put on A’s default transmission queue (to the hub); when it reaches the hub, it would automatically be put on transmission queue D, and thus get to a destination that it shouldn’t be able to reach.
The way around this is to specify the MCAUSER parameter on the receiver/requester channel definitions. By default the inbound channel at the hub puts messages on its target transmission queue using the userid running the channel. This userid has full access to put messages on all queues. However, if you change the channel’s MCAUSER parameter, the message will be put on the queue using the userid specified by the parameter.
So, define one userid for each inbound channel on the hub. For example, define a userid called A for the channel from A, a userid called C for the channel from C, etc. Alter the inbound channels to put messages on queues using their corresponding userid – for example:
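As a sketch, assuming the inbound channel from A is called A.TO.HUB, the RUNMQSC alteration on the hub might look like this:

```
alter chl(A.TO.HUB) chltype(RCVR) mcauser('A')
```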
Next set the permissions on the hub’s transmission queues to accept only messages from authorised channels. How you do this depends on your set-up – on distributed platforms, use the following commands:
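For example, on distributed platforms, something along the following lines would restrict each transmission queue to its authorised inbound channel (the hub queue manager name HUB is an assumption; queue and userid names follow the example above):

```
# Allow only the channel running as userid A to put on transmission queue B
setmqaut -m HUB -t queue -n B -p A +put
# Allow only the channel running as userid C to put on transmission queue D
setmqaut -m HUB -t queue -n D -p C +put
```

On Unix, remember the earlier caveat about ‘-p’: a dedicated group per channel userid, specified with ‘-g’, is clearer.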
If you use RACF, then the following commands are needed:
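A RACF sketch, assuming an MVS hub queue manager called HUB and queue profiles in the MQQUEUE class:

```
RDEFINE MQQUEUE HUB.B UACC(NONE)
PERMIT HUB.B CLASS(MQQUEUE) ID(A) ACCESS(UPDATE)
RDEFINE MQQUEUE HUB.D UACC(NONE)
PERMIT HUB.D CLASS(MQQUEUE) ID(C) ACCESS(UPDATE)
SETROPTS RACLIST(MQQUEUE) REFRESH
```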
So now B and D can be confident that all the messages they receive are from authorised queue managers. The next problem is to make sure that the right messages go to the right queues. For example, user UserX on A might be allowed to send messages to queue QueueQ on B, and user UserY on A might be allowed to send messages to queue QueueR on B (see Figure 2). However, we need to ensure that UserX cannot send messages to QueueR. To do so without either using security exits or changing applications, business B needs to trust A’s administrator (but not C’s, etc). Also a system-wide naming convention of userids needs to be enforced.


By default, when a user on A sends a message, the user’s userid is put in the USERID field of the message descriptor. The user is not allowed to change this. Also, by default the inbound channel at the receiver (for instance, B) puts messages on its target queue using the userid that’s used to run the channel. This userid has sufficient access rights to put messages on all queues. If you change the PUTAUT parameter of the channel from PUTAUT(DEF) to PUTAUT(CTX), messages are placed on the queue using the authority of the userid specified in the message descriptor.
So queues can now be secured by defining userids on the receiving machines that have the same names as the userids on the sending machines. The receiving userids do not need authority to log on. So, in this example:
Define two users, UserX and UserY, on B.
On distributed platforms, issue the following command:
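A sketch of the commands on queue manager B, using the queue and userid names from Figure 2:

```
# With PUTAUT(CTX), the userid in the message descriptor is checked,
# so authorise each sending user only for its own target queue
setmqaut -m B -t queue -n QueueQ -p UserX +put
setmqaut -m B -t queue -n QueueR -p UserY +put
```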
If you use RACF, the following commands are needed (assuming MQM runs the channel):
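A RACF sketch for queue manager B (the channel userid MQM also needs access to the queues it serves):

```
RDEFINE MQQUEUE B.QueueQ UACC(NONE)
PERMIT B.QueueQ CLASS(MQQUEUE) ID(UserX) ACCESS(UPDATE)
PERMIT B.QueueQ CLASS(MQQUEUE) ID(MQM) ACCESS(UPDATE)
RDEFINE MQQUEUE B.QueueR UACC(NONE)
PERMIT B.QueueR CLASS(MQQUEUE) ID(UserY) ACCESS(UPDATE)
PERMIT B.QueueR CLASS(MQQUEUE) ID(MQM) ACCESS(UPDATE)
```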

As mentioned previously, security is a bit more complex on Tandem systems. On Tandem, the ‘userid’ in the message descriptor is actually a groupid. For the above to work when a Tandem system is the receiver, it is necessary to use a group that’s defined and authorised with the same name as the sending userid. When a Tandem system is the sender, the receiver needs a userid defined and authorised with the same name as the sending group.
If users on the sender machine do not trust one another, some additional work is necessary to set up security.
If QREMOTE queues are not used, and users specify the target queue manager in the MQPUT call, then messages from UserX and UserY on A are put directly on the transmission queue and there is no way for MQSeries to stop them specifying one another’s target queues. It is also possible, when the channel is not running, for them to remove one another’s messages before the messages are sent.
The best way to solve this problem is to restrict access to transmission queues (this is the default) and to allow users to put messages only on QREMOTE queues that point to the target queues. Using this approach, a secure structure can be set up such that UserX and UserY cannot put messages on one another’s queues.
For instance, using RUNMQSC, enter the following definitions:
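A sketch of the definitions on A (the QREMOTE names are illustrative, and HUB is assumed to be the name of A’s transmission queue to the hub):

```
define qremote(QUEUEQ.ON.B) rname(QueueQ) rqmname(B) xmitq(HUB)
define qremote(QUEUER.ON.B) rname(QueueR) rqmname(B) xmitq(HUB)
```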
The commands below are the ones to use on distributed platforms.
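A sketch, assuming QREMOTE queues QUEUEQ.ON.B and QUEUER.ON.B have been defined on A pointing at the target queues:

```
# Each user may put only via their own remote-queue definition
setmqaut -m A -t queue -n QUEUEQ.ON.B -p UserX +put
setmqaut -m A -t queue -n QUEUER.ON.B -p UserY +put
```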
While the ones below are for use with RACF.
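If A is an MVS queue manager, the RACF equivalent might look like this (same assumed QREMOTE names):

```
RDEFINE MQQUEUE A.QUEUEQ.ON.B UACC(NONE)
PERMIT A.QUEUEQ.ON.B CLASS(MQQUEUE) ID(UserX) ACCESS(UPDATE)
RDEFINE MQQUEUE A.QUEUER.ON.B UACC(NONE)
PERMIT A.QUEUER.ON.B CLASS(MQQUEUE) ID(UserY) ACCESS(UPDATE)
```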
However, if you are happy to allow applications to write to the transmission queue, you could use either the following commands on distributed platforms:
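A sketch, again assuming A’s transmission queue to the hub is called HUB:

```
setmqaut -m A -t queue -n HUB -p UserX +put
setmqaut -m A -t queue -n HUB -p UserY +put
```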
or this one with RACF:
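Something like the following, assuming the A.HUB profile already exists:

```
PERMIT A.HUB CLASS(MQQUEUE) ID(UserX UserY) ACCESS(UPDATE)
```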
If users on the receiving machine do not trust one another, then it’s necessary to set up some additional security.
Say UserQ is able to read messages on queue QueueQ and UserR is able to read messages on queue QueueR. If the users are not considered trustworthy, then one needs to guard against the possibility that UserR may put a message on queue QueueQ which UserQ then receives believing it to have come from A. Similarly, UserR could get messages from queue QueueQ before UserQ gets them. To prevent this, it is necessary to run the following OAM commands:
setmqaut -m B -t queue -n QueueQ -p UserQ +get
setmqaut -m B -t queue -n QueueR -p UserR +get
With RACF, the following commands would be needed:
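A sketch of the nearest RACF equivalents on queue manager B (note that UPDATE access allows both get and put, which is the limitation discussed below):

```
PERMIT B.QueueQ CLASS(MQQUEUE) ID(UserQ) ACCESS(UPDATE)
PERMIT B.QueueR CLASS(MQQUEUE) ID(UserR) ACCESS(UPDATE)
```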
Note that on MVS a problem still remains. UserQ (or perhaps a member of the same group) can run an application that puts messages on queue QueueQ that the main UserQ application then reads off in the belief that they came from A. On distributed platforms, the OAM command SETMQAUT can be used to ensure that UserQ can get messages from a queue but not put them on it. RACF does not have this facility. A user is either able to both get and put messages on a queue or neither. One solution to this is to use ‘alias’ queues.
For example:
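In RUNMQSC, the alias definition would look something like this:

```
define qalias(ACCESS.BY.USERQ) targq(QueueQ) put(disabled) get(enabled)
```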
RACF could then be used to prevent UserQ from directly accessing queue QueueQ while giving the user full access to the ACCESS.BY.USERQ alias queue. The PUT(DISABLED) attribute ensures that the user can’t put messages on the queue. Note that the PUT(DISABLED) attribute could not have been used directly on queue QueueQ, as this would have stopped the channel from being able to write messages.
While this method works, it’s a bit of overkill. As it’s common in MVS for a user to have read/write access to a dataset, allowing them also to have read/write access to a queue is usually seen as a natural extension. Another consideration is that, in MVS, it’s less likely that the same userid is used to run different applications.
Note that, on all platforms, such measures are unnecessary if the administrator has secured the machine so that users cannot add or run their own applications.

Sam Garforth
SJG Consulting Ltd (UK) © S Garforth 1999


The following comments have been made by a reader more recently and need to be incorporated into the article:

The statement is made that “If A were to try to start a channel to the wrong requester, the request would not be accepted.” This is not true. In the diagram, A, B, C or D could all start the same requester on the hub (although not more than one at a time).

Also, if you give the MCAUSER +setall authority and set PUTAUT(CTX), what is to prevent a malicious user on A from sending messages as mqm to the command server (or any arbitrary queue) at the hub or any of the other machines? Each company has to trust the MQ administrators at all the other companies using that hub. That’s a lot to ask.

You might want to add a note in the document for TCP/IP users. It’s good practice to have the users coming in on different ports from each other and from the one you use internally. If, for example, you use 1414 internally and A, B, C and D use 1415, 1416, 1417 and 1418 respectively you gain a lot of control. You can stop all external traffic while allowing internal traffic to continue by shutting down listeners on all ports other than 1414. Or you can stop traffic from one business partner without affecting the others.

There are a bunch of other measures that can and probably should be taken to secure a hub QMgr that talks to several different external partners. These include adjusting the channel retry settings to enforce some flood control, disabling QMgr resolution by not naming XMit queues after the QMgr they serve, and addressing SVR channel behaviour, which can be exploited in a hub environment.

Sam Garforth
SJG Consulting Ltd (UK) © S Garforth 2003

Sam’s Views on Cloud for Government Policy Makers

I was honoured to be asked to present yesterday on “Cloud Skills, Flexibility and Strategy” at the Westminster eForum Keynote Seminar: Next steps for cloud computing.

English: The Palace of Westminster from Whitehall.

English: The Palace of Westminster from Whitehall. (Photo credit: Wikipedia)

As explained on its website, Westminster Forum Projects enjoys substantial support and involvement from key policymakers within UK and devolved legislatures, governments and regulatory bodies, and from stakeholders in professional bodies, businesses and their advisors, consumer organisations, local government representatives, and other interested groups. The forum is structured to facilitate the formulation of ‘best’ public policy by providing policymakers and implementers with a sense of the way different stakeholder perspectives interrelate, with the aim of giving policymakers context for arriving at whatever decisions they see fit.

The abstract to the session asked about the extent to which Government departments are embracing the cloud, what progress is being made in achieving the UK’s Data Capability Strategy on skills and infrastructure development, and whether organisations are doing enough to address the emerging shortfall in skills. It also asked about the apparent contradiction between mobile device power and cloud.

I was part of a panel and the following was my five minute introduction.

In my five minutes I’d like to talk about the power of cloud and within that to address three areas raised in the abstract to this session – shared services and shared data; mobile; and skills.

We see cloud as being used in three different ways – optimisation, innovation and disruption. Most of what I’ve seen so far in cloud adoption is about optimisation or cost saving. How to use standardisation, automation, virtualisation and self service to do the same things cheaper and faster.

What’s more interesting is the new things that can be achieved with the innovation and disruption that this can provide.

I’ve been working with various groups – local authorities, police forces, and universities, discussing consolidating their data centres. Instead of each one managing their own IT environment, they can share it in a cloud. They justify this with the cost saving argument but the important thing is, firstly, that they can stop worrying about IT and focus on what their real role is, and secondly that by putting their data together in a shared environment they can achieve things that they’ve never done before.

English: The road to Welton, East Riding of Yo...

English: The road to Welton, East Riding of Yorkshire, just south of Riplingham. Taken on the Riplingham to Welton road at MR: SE96293086 looking due south. This is typical south Yorkshire Wolds country. (Photo credit: Wikipedia)

For example, Ian Huntley would never have been hired as a caretaker, and so the Soham murders would have been less likely to happen, if the police force had had access to the data showing that he was known to a different force.

And we wouldn’t have issues with burglars crossing the border between West and North Yorkshire to avoid detection if data were shared.

In Sunderland we predict £1.4m per year in cost savings by optimising their IT environment but what’s more important is that this has helped to create a shared environment for start up companies to get up and running quickly so it’s stimulating economic growth in the area.

Another example is Madeleine McCann. After her disappearance it was important to collect holiday photos from members of the public as quickly as possible. Creating a website for this before cloud would have taken far too long. Nowadays it can be spun up very quickly. This isn’t about cost saving and optimisation, it’s about achieving things that could never have been done before.

This brings me to the question in the abstract about mobile: “As device processing power increases, yet cloud solutions rely less and less on that power, is there a disconnect between hardware manufacturers and app and software developers”. I think this is missing the point. Cloud isn’t about shifting the processing power from one place to another, it’s about doing the right processing in the right place.

English: GPS navigation solution running on a ...

English: GPS navigation solution running on a smartphone (iphone) mounted to a road bike. GPS is gaining wide usage with the integration of GPS sensors in many mobile phones. (Photo credit: Wikipedia)

In IBM we talk about CAMS – the nexus of forces of Cloud, Analytics, Mobile and Social – and we split IT into Systems of Record and Systems of Engagement. The Systems of Record are the traditional IT – the databases that we’re talking about moving from the legacy data centres to the cloud. And, as we’ve discussed, putting it into the cloud means that a lot of new analytics can happen here.

With mobile and social we now have Systems of Engagement: the devices that interact with people and the world, and that, because of their fantastic processing power, can gather data that we’ve never had access to before. These devices mean that it’s really easy to take a photo of graffiti or a hole in the road and send it to the local council through FixMyStreet and have it fixed. It’s not just the processing power, it’s the instrumentation that this brings. We now have a GPS location so the council knows exactly where the hole is. And of course this makes it a lot easier to send photos and even videos of Madeleine McCann to a photo analytics site.

We’re also working with Westminster council to optimise their parking. The instrumentation and communication from phones helps us do things we’ve never done before, but then we move onto the Internet of Things and putting connected sensors in parking spaces.

With connected cars we have even more instrumentation and possibilities. We have millions of cars with thermometers, rain detection, GPS and connectivity that can tell the Met Office exactly what the weather is with incredible granularity, as well as the more obvious solutions like traffic optimisation.

Moving on to skills. IBM has an Academic Initiative where we give free software to universities, work with them on the curriculum, and even act as guest lecturers. With Imperial College we’re providing cloud-based marketing analytics software as well as data sets and skills, so that they can focus on teaching the subject rather than worrying about the IT. With computer science in school curriculums changing to be more about programming skills, we can offer cloud-based development environments like IBM Bluemix. We’re also working with the Oxford and Cambridge examination board on their modules for cloud, big data and security.

Classroom 010

Classroom 010 (Photo credit: Wikipedia)

To be honest, it’s still hard. Universities are in a competitive environment, and they have to offer courses that students are interested in rather than ones that industry and the country need. IT is changing so fast that we can’t keep up. Lecturers will teach subjects that they’re comfortable with, and students will apply for courses that they understand or that their parents are familiar with. A university recently offered a course on social media analytics, which you’d think would be quite trendy and attractive, but it had only two attendees. It used to be that universities would teach theory and the ability to learn, and then industry would hire the graduates and give them the skills; but now things are moving so fast that industry doesn’t have the skills and is looking to the graduates to bring them.

Looking at the strategy of moving to the cloud, and the changing role of the IT department, we’re finding that by outsourcing the day to day running of the technology there is a change in skills needed. It’s less about hands on IT and more about architecture, governance, and managing relationships with third party providers. A lot of this is typically offered by the business faculty of a university, rather than the computing part. We need these groups to work closer together.

To a certain extent we’re addressing this with apprenticeships. IBM has been running an apprenticeship scheme for the last four years. This on-the-job training means that industry can give hands-on training with the best blend of up-to-the-minute technical, business and personal skills. It has been very effective, with IBM winning Best Apprenticeship Scheme awards from the Target National Recruitment Awards, the National Apprenticeship Service and Everywoman in Technology.

In summary, we need to be looking at the new things that can be achieved by moving to cloud and shared services; exploiting mobile and the internet of things; and training for the most appropriate skills in the most appropriate way.

Using a Cloudant database with a BlueMix application

I wanted to learn how to use the Cloudant database with a BlueMix application. I found this great blog post Build a simple word game app using Cloudant on Bluemix by Mattias Mohlin. I’ve been working through it.


I’ve learned a lot from it – as the writer says “I’ll cover aspects that are important when developing larger applications, such as setting up a good development environment to enable local debugging. My goal is to walk you through the development of a small Bluemix application using an approach that is also applicable to development of large Bluemix applications.” So it includes developing on a PC and also setting up Cloudant outside of BlueMix.

So here’s my simplified version focusing purely on getting an application up and running using a Cloudant BlueMix service and staying in DevOps Services as much as possible.

The first step is to take a copy of Mattias’s code so go to the GuessTheWord DevOps Services project.

Click on “Edit Code” and then “Fork”.


I chose to use the same project name GuessTheWord – in DevOps Services it will be unique as it’s in my project space.


This takes me into my own copy of the project so I can start editing it.

I need to update the host in the manifest file, otherwise the deployment will conflict with Mattias’s. In my case I change it to GuessTheWordGarforth, but you’ll need to change it to something else, otherwise yours will clash with mine. Don’t forget to save the file with Ctrl-S or File/Save, or at least by switching to another file.
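For reference, the host setting lives in the project’s manifest.yml. Mine ended up looking something like this (a sketch; the other fields come from Mattias’s project and may differ):

```yaml
applications:
- name: GuessTheWord
  memory: 128M
  host: guessthewordgarforth   # change this so it doesn't clash with mine
```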


Now I need to set up the app and bind the database on BlueMix, so I click on “Deploy”. I know it won’t run, but it will start to set things up.

At this point I logged onto BlueMix itself for the first time and located the new GuessTheWord in the dashboard.


I clicked on it and selected “add a service” and then scrolled down to the Cloudant NoSQL DB


and clicked on it. I clicked on “Create” and then allowed it to restart the application. Unsurprisingly it still did not start, as there is more coding to do. However, the Cloudant service is there, so I clicked on “Show Credentials” and saw that the database has a username, password, url and so on. Registering on the Cloudant site is therefore not necessary, as this is all handled by BlueMix.

Clicking on Runtime on the left and then scrolling down to Environment Variables, I can see that these Cloudant credentials have been set up in the VCAP_SERVICES environment variable for my app. So I just need to change the code to use these.

I switch back to DevOps Services and go to the server.js file to modify the code for accessing this database.

I change line 27 from

Cloudant = env['user-provided'][0].credentials;

to

Cloudant = env['CloudantNoSQLDB'][0].credentials;

So we’re indexing by the high-level service type in the environment variable, not by the name or the label.
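A minimal, self-contained illustration of that lookup (the credential values here are made up; on Bluemix the real values arrive in the VCAP_SERVICES environment variable):

```javascript
// Simulate the environment variable that Bluemix sets for the app.
process.env.VCAP_SERVICES = JSON.stringify({
  CloudantNoSQLDB: [{
    name: 'GuessTheWord-cloudantNoSQLDB',
    label: 'cloudantNoSQLDB',
    credentials: {
      username: 'user',
      password: 'pass',
      url: 'https://user:pass@host.cloudant.com'
    }
  }]
});

var env = JSON.parse(process.env.VCAP_SERVICES);
// Index by the high-level service type, not the name or the label.
var Cloudant = env['CloudantNoSQLDB'][0].credentials;
console.log(Cloudant.url);
```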

Unfortunately, there is also an error in Mattias’s code. I don’t know whether the BlueMix Cloudant service has changed since he wrote it, but he builds the url for the database by adding the userid and password to it, when actually these are already included in the url in my environment variable.

so I change line 30 from

var nano = require('nano')('https://' + Cloudant.username + ':' + Cloudant.password + '@' + Cloudant.url.substring(8));

to simply

var nano = require('nano')(Cloudant.url);
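To see why the original line goes wrong when the url already embeds the credentials, here’s a small stand-alone demonstration (the credential values are made up):

```javascript
// The url Bluemix provides already embeds the credentials.
var Cloudant = {
  username: 'user',
  password: 'pass',
  url: 'https://user:pass@host.cloudant.com'
};

// Original line 30: strips 'https://' (8 characters) and then
// re-adds the credentials, so they end up in the url twice.
var doubled = 'https://' + Cloudant.username + ':' + Cloudant.password +
    '@' + Cloudant.url.substring(8);
console.log(doubled); // https://user:pass@user:pass@host.cloudant.com

// Fixed line 30: the url is already complete.
console.log(Cloudant.url); // https://user:pass@host.cloudant.com
```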

Now save the file and click “Deploy”. When it’s finished, a message pops up telling you to see the manual deployment information on the root folder page.


So I click on that and hopefully see a green traffic light in the middle.


Clicking on the GuessTheWord hyperlink should take you to the working game, which in my case is running at the host I set in the manifest.


However, there are still no scores displayed, as there is no database or any entries in it yet.

I spent a long time trying to do this next part in the code but eventually ran out of time and had to go through the Cloudant website. If anyone can show me how to do this part in code I’d really appreciate it.
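For anyone attempting the code route: I haven’t tested this against the live service, but based on nano’s documented API, a sketch of creating the database and index at application startup might look like the following. The helper name ensureHighScoresDb is hypothetical.

```javascript
// Untested sketch: create the database and the top_scores design
// document from code, using nano's db.create / db.use / insert calls.
function ensureHighScoresDb(nano, callback) {
  nano.db.create('guess_the_word_hiscores', function () {
    // Any 'already exists' error is ignored; just use the database.
    var db = nano.db.use('guess_the_word_hiscores');
    db.insert({
      _id: '_design/top_scores',
      views: {
        top_scores_index: {
          map: 'function(doc) { emit(doc.score, ' +
               '{score: doc.score, name: doc.name, date: doc.date}); }'
        }
      }
    }, callback);
  });
}
```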

So for now, go to the GuessTheWord app on BlueMix and click on the running Cloudant service


From here you get to a Launch button


Pressing this logs you on to the Cloudant site using single sign on


Create a new database named guess_the_word_hiscores. Then click the button to create a new secondary index. Store it in a document named top_scores and name the index top_scores_index. As Mattias says, the map function defines which objects in the database are categorised by the index and what information we want to retrieve for those objects. We use the score as the index key (the first argument to emit), then emit an object containing the score, the name of the player, and the date the score was achieved. Following is the JavaScript implementation of the map function, which we need to add before saving and building the index.

function(doc) {
    emit(doc.score, {score : doc.score, name : doc.name, date : doc.date});
}

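To check what the index will hold, here’s a quick stand-alone simulation of Cloudant applying that map function to a couple of made-up score documents (Cloudant actually runs this server-side; the sample documents are my own):

```javascript
// Collect what emit() would write into the index.
var rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

// The map function from the design document above.
function map(doc) {
  emit(doc.score, { score: doc.score, name: doc.name, date: doc.date });
}

// Two sample score documents.
var docs = [
  { score: 4, name: 'Bob', date: '2014-10-02' },
  { score: 7, name: 'Alice', date: '2014-10-03' }
];
docs.forEach(map);

// Like the real index, rows come back sorted by key (the score).
rows.sort(function (a, b) { return a.key - b.key; });
var names = rows.map(function (r) { return r.value.name; }).join(',');
console.log(names); // Bob,Alice
```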

Again, we should really be able to do this as part of the program startup, but for now the following URL should add an entry to the database (replace guessthewordgarforth in the URL with the host name you chose for your application):

You should see a success message. Enter the following URL, again replacing guessthewordgarforth with your application host name.

The entry you just added should appear, encoded as JSON, e.g.


So, the code and the database are working correctly. Now it just remains to play the game. Go to

(replacing guessthewordgarforth with your hostname)

This time it will include Bob in the high score table


and click on “Play!”

