Posts Tagged 'software'

Upgrading a Salesforce App from Classic to Lightning Experience by Sam Garforth

Introduction

A while ago, I wrote a Salesforce app to be used as an example of the kinds of apps that can be built on the Salesforce platform. It is called Shared Rides and is a carpooling app for offering and accepting shared rides via Chatter. You earn green mile credits which can be tracked on a dashboard. It is available as a Salesforce App Cloud Quick Start App and can be downloaded as a managed package from here.

Salesforce has recently introduced a new look and feel called Lightning Experience (LEX), along with associated tooling to help you develop apps that use it. It is a component based framework (Model View View Controller), as opposed to the Model View Controller framework that Classic uses. I decided to check whether my app worked in the new interface and make any changes necessary to take advantage of all the benefits of being a Lightning app.

There are some great training modules in Trailhead. The trail Migrate to Lightning Experience tells you a lot of what you need to know to understand LEX and, in particular, the unit Upgrade a Classic App to a Lightning App in the Lightning Apps module gives the recommended steps for upgrading.

The Standard Steps

So here are the steps I took:

  1. Create a fresh org in which to develop and test the app.
  2. Install the app package that I wanted to upgrade into the new org.
  3. Enable Lightning Experience for the org and switch into it.
  4. From Setup, enter App in the Quick Find box, then select App Manager.
  5. Find your app in the list. In my case, it’s called Shared Rides.
  6. Click the pulldown from the app row, and select Upgrade.
  7. Leave the suggested name as-is, e.g. “Shared Rides Lightning”, and click Upgrade.
  8. The Classic app is copied and upgraded for Lightning Experience. There are now two versions of the app: a Classic version and a Lightning version. After the upgrade, the Classic app is no longer accessible in Lightning Experience via the App Launcher. You’ll still see the Classic app in the apps list, but with the Visible in Lightning column deselected.
  9. After you upgrade a Classic app, the two versions must be managed separately. Future changes you make to the Classic app won’t be reflected in the Lightning version of the app, and vice versa.
  10. So now the Lightning app is available in the App Launcher and it even has the data from the old version.

    Adding the Enhancements

    However, it’s not taking advantage of the great enhancements that Lightning apps can offer. So we go through the following steps.

    1. Back in the Lightning Experience App Manager, find the Lightning app in the list.
    2. Click the pulldown from the Lightning app row, and select Edit.
    3. Update the description if needed. The app description displays alongside the icon in the App Launcher, so it’s a great way to tell your users what the app is for, but keep it brief.
    4. In the App Branding section, change the Primary Color to something appropriate and also upload an image.
    5. After saving, click the Select Items tab, and remove the Home tab from the Selected Items list by selecting it and then clicking the ‘<’ button between the two columns. Then ensure you’re happy with the other items and click ‘Save’ and then ‘Done’.

      Now, when you go into the App Launcher the app should be there with the correct branding. The changes may not appear immediately as the page is cached. You may need to force a reload of the browser.

      That brings us to the end of the official instructions. Now comes the testing.

      Further Tidying


      The new version of the app has quick action buttons for ‘New Contact’ and ‘New Opportunity’. These aren’t relevant to a car sharing app so I need to remove them.

      Also, there is a pane with functionality for activity, new task, log a call etc which is not relevant.

      So click on ‘edit page’


      This takes you into the Lightning App Builder. Highlight the Activity pane and click the ‘x’ in the top right corner of the pane.

      Ideally you would want to change the layout of the page (template), but in my case this option is not available. Perhaps it’s inherited.

      Now we need to remove the ‘New Contact’ and ‘New Opportunity’ buttons so click on the pane that contains these i.e. the Highlights Panel.


      On the right hand side it tells you which page layout you are using for the actions. Click through to it; in my case I click on “Location Layout”.

      Scroll down the page layout setup page to the Lightning Experience section and click on the ‘override the predefined actions’ hyperlink.


      Remove the actions you don’t want by dragging them up to the Lightning Actions palette, then click ‘Save’.

      Go back to the Lightning App Builder page and click ‘save’ to save the other changes and then select ‘Activate’.

      Select what level you want to assign the page as the default for. I selected ‘Org Default’.


      Click ‘Back’ to leave the App Builder and return to the page. I can see that my changes have taken effect.


      These changes only affected the Location page. I still needed to go through the same steps to adjust the layout and actions on the other pages.

      JavaScript Button

      In the Classic version of my app I had an “Accept Ride” button which someone could click to say that they would like to share a journey that had been offered.

      This was a JavaScript button.


      JavaScript buttons are not supported in LEX so instead I needed to create a Lightning Component Quick Action button. So, these are the steps I took:

      Create the Lightning Component

      1. Open the Dev Console.
      2. Create a new Lightning Component.
      3. Give the component the name AcceptRide.
      4. I selected Lightning Record Page and Lightning Quick Action.
      5. Then I replaced the contents of the component with the following:
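
      For reference, here is a minimal sketch of what the AcceptRide.cmp markup could end up looking like, assuming the implements attributes and the Apex controller wiring described later in these steps – the exact markup generated for you by the Dev Console will differ slightly:

      <aura:component implements="flexipage:availableForRecordHome,force:hasRecordId,force:lightningQuickActionWithoutHeader"
                      controller="AcceptRideApexController">
          <!-- force:hasRecordId supplies v.recordId; the client-side controller below calls the Apex method -->
          <lightning:button variant="brand" label="Accept Ride" onclick="{!c.handleAcceptRide}"/>
      </aura:component>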

      Add a Component Controller

      1. Click the Controller button in the Dev Console.
      2. Replace the contents of the controller with the following and then save it
      ({
          handleAcceptRide: function(component, event, helper) {
              var sharedRide_id = component.get("v.recordId");
              var action = component.get("c.getSharedRide");
              action.setParams({
                  "sharedRideID": sharedRide_id
              });
              action.setCallback(this, function(response) {
                  var res = response.getReturnValue();
                  var resultsToast = $A.get("e.force:showToast");
                  if (res) {
                      resultsToast.setParams({
                          "title": "Ride Accepted",
                          "message": "You have been added to the Shared Ride riders list."
                      });
                  } else {
                      resultsToast.setParams({
                          "title": "Error",
                          "message": "Sorry, you could not be added to the Shared Ride riders list."
                      });
                  }
                  resultsToast.fire();
                  $A.get("e.force:refreshView").fire();
                  var dismissActionPanel = $A.get("e.force:closeQuickAction");
                  dismissActionPanel.fire();
              });
              $A.enqueueAction(action);
          }
      })

      Create the Apex Controller

      1. In the Dev Console, create a new Apex class: File > New > Apex Class.
      2. Give it a name of AcceptRideApexController.
      3. I replaced the contents with the following and saved it.
      public class AcceptRideApexController {
          @AuraEnabled
          public static Boolean getSharedRide(Id sharedRideID) {
              PSE_Shared_Rides__c rideToUpdate = [SELECT Id, Number_of_spaces__c FROM PSE_Shared_Rides__c WHERE Id = :sharedRideID];
              Decimal spacesleft = rideToUpdate.Number_of_spaces__c;
              // Only decrement while there are spaces left, as in the original JavaScript button
              if (spacesleft > 0) rideToUpdate.Number_of_spaces__c = spacesleft - 1;
              update rideToUpdate;
              return true;
          }
      }
      

      The original JavaScript looked like this:

      
      /* */
      {!REQUIRESCRIPT("/soap/ajax/32.0/connection.js")}
      try{
      var rideToUpdate = new sforce.SObject("PSE_Shared_Rides__c");
      var spacesleft = "{!PSE_Shared_Rides__c.Number_of_spaces__c}";
      rideToUpdate.Id = "{!PSE_Shared_Rides__c.Id}";
      if (spacesleft == 4) rideToUpdate.Ride_Sharer_4__c = "{!$User.Id}";
      if (spacesleft == 3) rideToUpdate.Ride_Sharer_3__c = "{!$User.Id}";
      if (spacesleft == 2) rideToUpdate.Ride_Sharer_2__c = "{!$User.Id}";
      if (spacesleft == 1) rideToUpdate.Ride_Sharer_1__c = "{!$User.Id}";
      if (spacesleft > 0 ) rideToUpdate.Number_of_spaces__c = spacesleft - 1;
      var result = sforce.connection.update([rideToUpdate]);
      if(result[0].success === "true"){
      location.reload();
      }else{
      alert(
      "An Error has Occurred. Error: " +
      result[0].errors.message
      );
      }
      }catch(e){
      alert(
      "An Unexpected Error has Occurred: Error: " + e
      );
      }
      1. Save the file.
      2. Ensure that the Apex controller is referenced in the aura:component tag in the AcceptRide.cmp file, i.e. controller="AcceptRideApexController".

      Make the component a Quick Action

      1. Ensure that force:lightningQuickActionWithoutHeader is an implements attribute of the component.
      2. Save all the files.
      3. Navigate back to the Object Manager and choose the Shared Ride object.
      4. Scroll down to the Buttons, Links and Actions section.
      5. Click the New Action button.
      6. Select Lightning Component as the Action Type.
      7. Select c:AcceptRide as the Lightning Component and set the height to 400px.
      8. Type Accept Ride in the Label field and Accept_Ride becomes the name.
      9. Click Save.
      10. Navigate back to Shared Ride object page and click the Shared Ride Layout in the Page Layouts section.
      11. Click the override the predefined actions link in the Salesforce1 and Lightning Experience Actions section.
      12. Click the Salesforce1 & Lightning Actions link in the Property Layout box.
      13. Drag the Accept Ride tile to the Salesforce1 and Lightning Experience Actions section and place it as the first item. Remove the buttons that you don’t need.
      14. Click the Save button in the Shared Ride Layout box.
      15. Navigate back to a Shared Ride record page. If the changes don’t appear then force refresh the page.
      16. Click the Accept Ride Quick Action button to accept the offer.

      In my case I have different page layouts for different record types, and I change record type each time someone accepts a ride, so I needed to make the same change to each of these layouts.

      Create the Test Class

      So now, although the app is working, I want to be able to package it and share it with other users and orgs. For this I need a test class for the new Apex code. Here is my new test Apex class:

      @isTest(seeAllData=true)
      private class PSE_TestAcceptRide {
          static testMethod void myUnitTest() {
              Test.startTest();
              PSE_Location__c l1 = new PSE_Location__c();
              l1.Location__c = 'SO53 1JA';
              insert l1;
              PSE_Location__c l2 = new PSE_Location__c();
              l2.Location__c = 'EC2N 4AY';
              insert l2;
              PSE_Route__c r = new PSE_Route__c();
              r.Start_Location__c = l1.Id;
              r.Destination__c = l2.Id;
              r.Name = 'Chandlers Ford to SFT';
              insert r;
              update r;
              PSE_Shared_Rides__c s = new PSE_Shared_Rides__c();
              s.Name = 'Test Ride';
              s.Route__c = r.Id;
              s.Time__c = Datetime.newInstance(2015, 10, 17);
              s.Number_of_spaces__c = 3;
              try
              {
                  insert s;
                  update s;
                  AcceptRideApexController.getSharedRide(s.Id);
                  AcceptRideApexController.getSharedRide(s.Id);
                  AcceptRideApexController.getSharedRide(s.Id);
                  AcceptRideApexController.getSharedRide(s.Id);
                  AcceptRideApexController.getSharedRide(s.Id);
              }
              catch(System.DMLException e)
              {
                  e.getMessage();
              }
              Test.stopTest();
          }
      }

      Package The New App

      Once the app is working it needs to be packaged if you intend to share it as a package. So go to the Package Manager and select ‘New’. Give it a name and save it and select ‘Add’ to add the components.

      Start with component type App and select both versions of your app. At first I just selected the Lightning one, but then the package would only work in LEX. You need the Classic version too for people who don’t have LEX.

      I also had to manually add the Apex test class and the AcceptRideController.js file.

      Select Upload and fill in the fields.

      Conclusion

      This was my personal experience of upgrading a packaged app with a JavaScript button to Lightning Experience. I hope you found it useful. I would welcome any comments on improvements I could make to my process and also other learnings you have had upgrading other apps.

Beyond WebSphere Family Monitoring

The integrated application processes that drive real-time business must flow smoothly, efficiently and without interruption. These processes are composed of a series of interrelated complex events which must be tightly managed to ensure the health of your business processes. To achieve this, deep visibility into the core metrics of business processes is crucial. But this level of insight is impossible when you’re limited to static events. This paper explores the next level in dealing with the increasing complexity of inter-application communication in the real-time enterprise: the ability to dynamically create events based on actual conditions and related data elements across the enterprise.

Stanford University Emeritus professor David Luckham has spent decades studying event processing. In his book, The Power of Events, he says that we need to “view enterprise systems based on how we use them – not in terms of how we build them.” This is an important paradigm shift which can move us towards the goal of business activity monitoring.

Currently we have enterprise systems management (ESM) with a foundation of event based monitoring (EBM). It provides availability with automation using event based instrumentation, with threshold based performance and alerts and notifications. This is good, but not good enough for the evolving needs of real-time business. Gartner says that event driven applications are the next big thing.

Typically in event based monitoring, events come from different middlewares – web servers, databases, applications, network devices, mobile devices – through the IT infrastructure of middleware, application servers, applications and the network. The events can be seen by the data centre. The idea is that the ESM system will monitor, detect, notify and take corrective action, either automatically or with manual intervention. But there are different business users, each with their own perspectives.

Event based ESMs can’t really take good corrective action as they can’t correlate the event with the effect.

Here’s a typical example of a business activity. The head office requests a price change in all the stores. It updates the price in the master database, checks the inventory levels and then transmits the change to all the stores. But what happens if something goes wrong?

You (or the customer) will be asking yourself lots of questions:

1. From an IT perspective you’ll want to know where the problem or slowdown is, i.e. which queue or channel has the problem that caused the change not to happen.
2. There are business level questions too: you want to know why the change didn’t take place at all the stores.
3. You need to ask yourself, from a business perspective, what the impact of this is.
4. You may know that a channel’s gone down, or you may know that a price change hasn’t happened, but do you know what else has been affected?

These are problems that you don’t really have time to address. EBMs don’t know what to do about this out of the box as they are designed at a technology level, and configuring them to understand the business is too hard. With constantly changing business and application needs you can’t adapt your monitoring and automation fast enough.

Here is another real-life example: a stock trade or share purchase. The customer says they want to buy something. You check their account, then the stock availability and price, and then they agree to buy at that price. Then you process and complete the trade and update the stock price.

This is straight-through processing. The transaction has to be done atomically, as one unit of work, within a certain time. These are serially dependent transactions. But what happens if they don’t complete in time? Again you will ask yourself the questions. From an IT perspective you will want to know what the cause of the problem was. But the business units that were affected will also want to know about it. You will want to know which business units and transactions were affected, and the right business people – and only the right ones – should be notified. You will want to know the business impact of this – to see the problem and its impact from a business perspective.

So what is the business impact? At a high level it’s loss of money. You’ll know that the transaction didn’t take place but you won’t know the real root cause, so everyone will be blaming everyone else, wasting time and damaging morale and relationships. During this time you are not delivering the business service. You have damaged your relationship with the customer as you haven’t delivered what they needed and you can’t even explain why. So you’re going to lose your competitive advantage.

To give this root cause analysis and business impact analysis, customers normally have to put a lot of resource into customising an event based solution, or into developing their own monitoring solution, but this is not flexible enough. It is not feasible in an increasingly complex technology environment. So we have to ask ourselves how we can have a monitoring solution that is flexible enough to keep our business systems productive, rapidly and constantly adapting to incorporate a changing IT and business environment.

So in summary, the big questions are: Why is it so hard to detect and prevent these situations? How can we make the transition to real-time, on-demand monitoring? How can we align our IT environment with the business priorities to achieve the business goals?

These problems arise because we’re using event based monitoring. Monitoring at an IT or technology level is preventing us from achieving business activity monitoring.

David Luckham refines this further to talk about business impact analysis – overcoming IT blindness. We should be looking at the complex events, correlating or aggregating the various events and metrics to see the business impact. He talks about the event hierarchy and the processing of complex events. MQ has about 40 static events, like queue full, channel stopped, etc. But there are also events from WAS, DB2 and so on, and there are metrics like channel throughput, CPU usage and, crucially, time. There are also home grown apps which need monitoring, and there are business events and metrics. All of these need to be taken into account to give a higher level complex event. For example, if a queue is filling up at a certain rate you can calculate that in a certain amount of time you will receive a static simple queue high event. But by that time it will be too late. You need to aggregate the metrics queue fill rate, queue depth, maximum depth and time to generate a complex event.
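
As a purely illustrative sketch (rough JavaScript-flavoured pseudo-code of my own, not any particular product’s API), the kind of aggregation meant here might look something like this:

// Illustrative only – the metric names and the 15-minute threshold are invented for this example.
function checkQueuePressure(sample) {
    // sample: { queueName, depth, maxDepth, fillRatePerMinute } collected from the queue manager
    var spaceLeft = sample.maxDepth - sample.depth;
    if (sample.fillRatePerMinute <= 0) {
        return null; // queue is draining or idle, so no complex event is needed
    }
    var minutesUntilFull = spaceLeft / sample.fillRatePerMinute;
    if (minutesUntilFull < 15) {
        // raise a pre-emptive complex event well before the static "queue high" event would fire
        return { type: "QUEUE_WILL_FILL_SOON", queue: sample.queueName, minutesUntilFull: minutesUntilFull };
    }
    return null;
}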

So the problems with the current state of event based monitoring are:

Event causality – there’s not enough information to identify the root cause of the problem. The price update didn’t happen, but why? MQ failed, but why? Maybe it was a disk space problem. Maybe it was caused by something in a different part of the environment – a different computer or application.

Interpretation – looking at it at a simple technology level we don’t have enough information to see the effect of this simple problem on the different parts of the enterprise – to see the effect from the perspectives of the different users, and to notify them and resolve the problems it causes for them.

Agility – Out of the box ESM or EBM solutions cannot possibly know the business requirements. They require a lot of customisation when you initially set them up to be able to understand the effects of different problems on the different users and then constant customisation as the technological and business environment constantly changes. They are constantly playing a game of catch up that they can never win.

Awareness – Because they are only looking at individual points of technology they have a blindness to the end to end transaction. They cannot know how a simple technology problem affects the rest of the technologies or businesses.

Another shortcoming of the current generation of system management is false positives. This is a big problem with simple event based monitoring. You have a storm of alerts. The data centre sees an event saying the queue is full. They call the MQ team who say not to worry about it; it’s just a test queue, or a queue for an application that hasn’t gone live yet. After the first 24 times that this happens the data centre stops paying attention to queue full events. Then the 25th one happens which is for an important transaction which needs to be dealt with immediately and they just ignore it. The company loses business etc and it’s as if they didn’t have a monitoring solution at all. So what we need is a high level of granularity on the queue monitoring based not just on whether a queue is full but what queue it is, who the application owner is, what time of day it is, what applications are reading from the queue etc.

It’s not enough to provide monitoring data; it has to be information. It has to be interpreted in a way that is useful. What we need is dynamic, metric based monitoring – the difference between events and metrics, or between data and information. You need metric based monitoring to create complex events: in-context, user specific events that are pre-emptive, raised before a real business problem happens, and that are actionable. The problem isn’t getting events, it’s correlating the events with rules. And you need to watch more than the vendor gives you out of the box – that on its own can never be enough.

There is something called the ‘aha phenomenon’. When a problem occurs you spend ages trying to identify the cause, looking at all the queues and middlewares and applications. All the time you’re looking the technology’s not running and the business is losing money. Eventually you find it and say ‘aha!’ Then what happens? Can you easily adapt your monitoring environment to make sure it doesn’t happen again or that you at least don’t have to search again when it does happen. In other words you need dynamic monitoring – where the monitoring environment of event correlation, metric selection and rules application can be constantly updated.

So let’s expand the vision of what we need. We need a unified approach like the service oriented architectures that are so popular for applications i.e. a reusable monitoring architecture. We don’t need a silo or isolated tool – the antithesis of SOAs. It needs to be a business oriented on demand solution. It needs to be modular, extensible, adaptable, scalable and reusable. We need instrumentation for all the different applications and middlewares. And the environment status needs to be shown to all the different stakeholders from their own perspective for their own roles and responsibilities.

By applying the service oriented architecture principles we can achieve the Business Activity Monitoring and business agility that we really need. A business centric solution aligning IT to the business processes so the business can actually benefit from the technology rather than being constrained by it. Using this you can see the impact of a problem from all perspectives and you can rapidly adapt to the changing business and technological environment learning from mistakes. Currently 80% of IT resource is consumed by maintaining the technology. Using this architecture we can free the resources to other products, develop the business and make more money.

In summary, this unified model gives business and technology continuity and automatic recovery. It gives very granular complex events allowing root cause analysis and business impact analysis by being aware of the business processes affected by the technology and displaying the information in a business context giving an improved quality of service.

Of course there are pros and cons to being standards based. Some service oriented architecture technologies, such as .NET and Web Services, are still in flux. We need unified SOA security across all platforms. To be proactive in the way that is needed will require polling, which needs to be configured to avoid performance problems.

But anyway, what I’ve proposed here is a unified model, a base for business activity monitoring. As David Luckham says, “The challenge is not to restrict communication flexibility, but to develop new technologies to understand it”. So I propose that the key to dealing with complexity and delivering true business activity monitoring solutions is a unified model based on a service oriented architecture. This doesn’t happen out of the box, as no vendor or developer can know all your requirements, but it is a framework which is modular, extensible, adaptable, scalable and reusable enough to facilitate what we need.

© Sam Garforth   2005

Closing the holes in MQ security

In choosing the default settings for MQSeries, IBM has had to strike a balance between making the product easy to use as quickly as possible and making it secure straight out of the box. In more recent releases, they have put more emphasis on ease of use and so relaxed the default security settings. This is one of the reasons why administrators must now reconfigure their systems if they require them to be secure. This article examines some of the potential security holes of which administrators should be aware, and also describes ways in which administrators can close these holes.

Default channel definitions

There are a number of objects, such as SYSTEM.DEF.SVRCONN and SYSTEM.DEFAULT.LOCAL.QUEUE, that are created by default when you install and configure a queue manager. These are really intended only as definitions to be cloned for their default attributes in the creation of new objects. However, a potential infiltrator can exploit the fact that they are also well-defined objects that probably exist on your system.

Originally, on distributed platforms, the definition of channel SYSTEM.DEF.SVRCONN had its MCAUSER parameter set to ‘nobody’. IBM had so many complaints from users who couldn’t get clients connected that it has now changed this parameter to blank (‘ ’).

The MCAUSER parameter specifies the userid that is checked when an inbound message is put on a queue. Setting this field to blank means that the authority of the userid running the channel (usually ‘mqm’) is checked. In other words, messages are always authorized to be put on all queues.

The thinking behind putting ‘nobody’ in this field is that no one should be allowed to put messages on queues unless the administrator actually changes settings to allow them to do so. Unfortunately this default setting was not documented and so users could not work out how they were required to change things.

There are many users who don’t need client channels and so haven’t even read this section of the manual. They’re unaware that nowadays, with default settings in place, anyone who can connect to their machine (for instance, someone on the same LAN) can start a client channel to them called SYSTEM.DEF.SVRCONN and have access to put messages on any of their queues and – often more importantly – to get messages from any of their queues.

This is not an entirely new problem – even the original systems suffered from it, as there are other channels, such as SYSTEM.DEF.RECEIVER and SYSTEM.DEF.REQUESTER, that have always had a blank MCAUSER. With a little effort, users have always been able to connect to these and put messages on queues using full authority. If the queue manager is the default one, the infiltrator needs no prior knowledge of the system.

As previously mentioned, these definitions are used to provide defaults for the creation of new channels. This means that, in many systems, newly created channels also have MCAUSER set to blank.

It is recommended that the following commands be executed using RUNMQSC to close this loophole:

alter chl(SYSTEM.DEF.SVRCONN) chltype(SVRCONN) trptype(LU62) +
      mcauser(NOBODY)

alter chl(SYSTEM.DEF.RECEIVER) chltype(RCVR) trptype(LU62) +
      mcauser(NOBODY)

alter chl(SYSTEM.DEF.REQUESTER) chltype(RQSTR) trptype(LU62) +
      mcauser(NOBODY)

Do not start MQ using root

It’s worth noting that much of this section is described in Unix terms, though it’s applicable to most platforms, once Unix terms are substituted with their equivalents.

All MQSeries components should be started using the MQSeries administration userid (mqm). Many system administrators like to make the system administration userid (root) a member of the mqm group. This is understandable, as they can then run all of their administration commands, not all of which are for MQ, as root. However, this is a very dangerous thing for them to do as they are effectively giving root authority to all of the members of the mqm group.

For example, if the trigger monitor of the default queue manager is started by root using default parameters, a member of the mqm group whose workstation has IP address ‘myhost’ can enter the following commands using RUNMQSC:

DEFINE QL(MYQUEUE) TRIGGER PROCESS(MYPROCESS) +
       INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE)

DEFINE PROCESS(MYPROCESS) APPLICID('xterm -display myhost:0 &')

and then enter the command:

echo hello | amqsput MYQUEUE

This causes a terminal to appear on their screen giving them a command line with root authority from which they have full control of the system.

Similarly, if a channel is started by root, or the channel initiator starts a channel and the channel initiator is started by root, then any exits called by the channel will run as root. So the mqm member could write and install an exit that again spawns a root-authorized xterm.

The receiver channel could have the same problems, for example, if started as root by the listener, inetd, or Communications Manager.

A good start to overcoming this problem is to remove root from the mqm group. However, on some systems root will still have access to the strmqm command and, while it may look as though it has started the queue manager, there may be unexpected errors later when it performs commands for which the OAM checks authority.

The system administrator may find it useful to create commands that only root is authorized to run which switch to the mqm userid before performing the instruction. For example the following shell script could be called strmqm and put higher in root’s path than the real strmqm.

#!/bin/ksh

su - mqm -c /usr/lpp/mqm/bin/strmqm $1

Only use groups on UNIX OAM

The setmqaut command is used to set access to MQSeries objects. Among its parameters you may specify ‘-p PrincipalName’ or ‘-g GroupName’ to indicate to which users you intend this command to apply.

For example, the following command specifies that all members of the group tango are to be allowed to put messages on queue orange.queue on queue manager saturn.queue.manager (note the use of the continuation character, ‘‰’, in the code below to show that one line of code maps to more than one line of print):

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -g tango +put

Similarly, the command:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p theuser +put

specifies that the userid theuser should be allowed to put messages on queue orange.queue on queue manager saturn.queue.manager. On most platforms this works fine. However, the implementation on Unix systems is that:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p theuser +put

specifies that all of the members of theuser’s primary group are allowed to put messages on queue orange.queue on queue manager saturn.queue.manager.

This can be very dangerous, as a system administrator can give access to a particular user unaware that in doing so he has accidentally also given access to many other users. User theuser may also be unhappy to be blamed by administrators for actions that they believe only he is authorized to have carried out.

The way around this problem is never to use the ‘-p’ parameter on Unix. The same effect can be obtained by specifying ‘-g PrimaryGroup’, which is a lot clearer.

Only create objects as mqm on Unix

As described above, MQSeries on Unix does all of its security using the primary group of a userid rather than the userid itself, as you would expect. This has other knock-on effects.

When a queue is created, access to it is automatically granted to the mqm group and to the primary group of the userid that created it. It’s quite reasonable for someone designing the security of an MQSeries infrastructure to assume that access to all queues has been forbidden to all users except members of the mqm group. From here, the administrator would specify additional security settings that need to be made.

This works fine when queues are created either by the mqm user or by someone whose primary group is mqm. The problem arises when another user whose primary group is, for instance, staff, but who is also a member of mqm, defines the queue. In this case authority is also granted automatically and unintentionally to all members of the staff group.

This also applies to the creation of queue managers. If a queue manager is created by a userid whose primary group is staff, then all members of staff by default have access to the queue manager.

The simplest solution to this problem is to enforce a policy whereby no userid other than mqm may create MQSeries objects or queue managers. An alternative policy is never to make a userid a member of the mqm group unless this is its primary group.

OAM uses union

The Object Authority Manager uses the union of the authority settings that it finds. So, to take the example above a step further, suppose a queue, orange.queue, is created by a userid whose primary group is staff. At some point later it is found that another userid, worker, who shouldn’t have access to the queue, is nevertheless able to access it. worker is a member of staff but has team as his primary group. To resolve this problem an administrator might try running:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p worker -all

However, this will not solve the problem. While it will remove team from the authorization list, members of staff, including worker, still have access to the queue.

This also applies to other platforms, such as NT, that implement the ‘-p’ parameter. Although the problem of primary groups is not present, it should be realized that, while:

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p worker +all

gives full access to worker,

setmqaut -m saturn.queue.manager -n orange.queue -t queue

‰  -p worker -all

only forbids all access if worker is not a member of any authorized groups.

Caching

On some platforms, such as Unix, group membership is cached by MQSeries. This means that, if a new user joins a group and needs access to MQSeries objects, the queue manager needs to be restarted. Similarly (and probably more importantly), if a user leaves the team or company, it is not sufficient just to remove them from the group. The user retains access to objects until such a time as the queue manager is restarted.
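
For example, using the queue manager from the earlier examples, the restart would be something like:

endmqm saturn.queue.manager
strmqm saturn.queue.manager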

Only enable things if you need them

This is no more than common sense, and the defaults are such that this won’t cause problems, but for the sake of completion the following points are worth mentioning:

  • Automatic channel definition

Enabling the automatic definition of channels increases the ability of machines to connect to your queue manager with little prior knowledge of your system, so this should be enabled only if definitely required.

  • Command server

The command server is very powerful and can render weak security even weaker. For instance, on a system running MQSeries version 2 in which users do not have the authority to use the client channel, they could still connect using a sender channel called SYSTEM.DEF.RECEIVER. This could put messages on the command server’s input queue requesting it to create a channel and transmission queue back out. This could then be used for further breaches of security. If you’re not confident of your system’s security, it’s advisable to start the command server only when it is needed and to grant users only the minimum required levels of authority to it.

 

Sam Garforth

SJG Consulting Ltd (UK)                                                     © S Garforth 1999

Using a Cloudant database with a BlueMix application

I wanted to learn how to use the Cloudant database with a BlueMix application. I found this great blog post Build a simple word game app using Cloudant on Bluemix by Mattias Mohlin. I’ve been working through it.


I’ve learned a lot from it – as the writer says “I’ll cover aspects that are important when developing larger applications, such as setting up a good development environment to enable local debugging. My goal is to walk you through the development of a small Bluemix application using an approach that is also applicable to development of large Bluemix applications.” So it includes developing on a PC and also setting up Cloudant outside of BlueMix.

So here’s my simplified version focusing purely on getting an application up and running using a Cloudant BlueMix service and staying in DevOps Services as much as possible.

The first step is to take a copy of Mattias’s code so go to the GuessTheWord DevOps Services project.

Click on “Edit Code” and then “Fork”.


I chose to use the same project name GuessTheWord – in DevOps Services it will be unique as it’s in my project space.


This takes me into my own copy of the project so I can start editing it.

I need to update the host in the manifest file, otherwise the deployment will conflict with Mattias’s. In my case I change it to GuessTheWordGarforth, but you’ll need to change it to something else, otherwise yours will clash with mine. Don’t forget to save the file with Ctrl-S or File > Save before moving on.


Now I need to set up the app and bind the database on BlueMix so I click on “deploy”. I know it won’t run but it will start to set things up.

At this point I logged onto BlueMix itself for the first time and located the new GuessTheWord in the dashboard.


I clicked on it and selected “add a service” and then scrolled down to the Cloudant NoSQL DB


and clicked on it. I clicked on “Create” and then allowed it to restart the application. Unsurprisingly it still did not start, as there is more coding to do. However, the Cloudant service is there, so I clicked on “Show Credentials” and saw that the database has a username, password, URL, etc., so the registration on the Cloudant site covered in Mattias’s post is not necessary as this is all handled by BlueMix.

Clicking on Runtime on the left and then scrolling down to Environment variables, I can see that these Cloudant credentials have been set up as VCAP_SERVICES environment variables for my app. So I just need to change the code to use these.

I switch back to DevOps Services and go to the server.js file to modify the code for accessing this database.

I change line 27 from
Cloudant = env['user-provided'][0].credentials;
to
Cloudant = env['CloudantNoSQLDB'][0].credentials;

So we’re looking the service up by its top-level key in the parsed VCAP_SERVICES structure, rather than by the service instance’s name or label.
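
If you haven’t come across VCAP_SERVICES before, the lookup is roughly this (the JSON shape shown in the comment is illustrative rather than copied from my app, and the top-level key must match the label BlueMix uses for the Cloudant service):

// Rough sketch only – not copied verbatim from server.js
var env = JSON.parse(process.env.VCAP_SERVICES || "{}");
// env looks roughly like:
// { "CloudantNoSQLDB": [ { "name": "...", "label": "...", "credentials": { "username": "...", "password": "...", "url": "..." } } ] }
var Cloudant = env['CloudantNoSQLDB'][0].credentials;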

Unfortunately there is also an error in Mattias’s code. I don’t know whether the BlueMix Cloudant service has changed since he wrote it, but he builds the URL for the database by adding the userid and password to it, when actually these are already included in the url value in my environment variable,

so I change line 30 from

var nano = require('nano')('https://' + Cloudant.username + ':' + Cloudant.password + '@' + Cloudant.url.substring(8));
to simply
var nano = require('nano')(Cloudant.url);

Now save the file and click Deploy. When it’s finished, a message pops up telling you to see the manual deployment information in the root folder page.


So I click on that and hopefully see a green traffic light in the middle.


Click on the GuessTheWord hyperlink and it should take you to the working game, which in my case is running at

http://guessthewordgarforth.mybluemix.net/


However, there are still no scores displayed, as the database and its entries don’t exist yet.

I spent a long time trying to do this next part in the code but eventually ran out of time and had to go through the Cloudant website. If anyone can show me how to do this part in code I’d really appreciate it.

So for now, go to the GuessTheWord app on BlueMix and click on the running Cloudant service


From here you get to a Launch button


Pressing this logs you on to the Cloudant site using single sign on


Create a new database named guess_the_word_hiscores. Then click the button to create a new secondary index. Store it in a document named top_scores and name the index top_scores_index. As Mattias says, the map function defines which objects in the database are categorised by the index and what information we want to retrieve for those objects. We use the score as the index key (the first argument to emit), then emit an object containing the score, the name of the player, and the date the score was achieved. Following is the JavaScript implementation of the map function, which we need to add before saving and building the index.

function(doc) {
    emit(doc.score, {score : doc.score, name : doc.name, date : doc.date});
}
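
In theory this setup could also be done from the application itself at startup rather than through the dashboard. A rough, untested sketch using the same nano library the app already uses (database and index names taken from the steps above) might look like this:

// Untested sketch: create the database and the top_scores design document at startup if they don't already exist.
var nano = require('nano')(Cloudant.url);

nano.db.create('guess_the_word_hiscores', function (createErr) {
    // an error here usually just means the database already exists
    var db = nano.db.use('guess_the_word_hiscores');
    var designDoc = {
        _id: '_design/top_scores',
        views: {
            top_scores_index: {
                map: "function(doc) { emit(doc.score, {score: doc.score, name: doc.name, date: doc.date}); }"
            }
        }
    };
    db.insert(designDoc, function (insertErr) {
        // likewise, a conflict here means the design document is already in place
    });
});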


Again, we should really be able to do this as part of the program startup, but for now the following URL should add an entry to the database (replace guessthewordgarforth in the URL with the host name you chose for your application):

http://guessthewordgarforth.mybluemix.net/save_score?name=Bob&score=4

You should see a success message. Enter the following URL, again replacing guessthewordgarforth with your application host name.

http://guessthewordgarforth.mybluemix.net/hiscores

The entry you just added should appear encoded in JSON e.g.

[{"score":4,"name":"Bob","date":"2014-08-07T14:27:34.553Z"}]

So, the code and the database are working correctly. Now it just remains to play the game. Go to

http://guessthewordgarforth.mybluemix.net

(replacing guessthewordgarforth with your hostname)

This time it will include Bob in the high score table


and click on “Play!”


Cloud With DevOps Enabling Rapid Business Development

My point of view on accelerating business development with improved time to market by using lean principles enabled by devops and cloud.

The Roadmap to Cloud by Sam Garforth

An abridged version of this was first published on Cloud Services World as “Transition vs Transformation: 8 factors to consider when choosing the best route to cloud” in July 2013

Many companies have arrived at cloud naturally over the years through data centre optimisation. Traditional IT had separate servers for each project and application. We then had consolidation of the physical infrastructure and virtualisation to allow multiple operating systems and application stacks to run on the same server, plus shared application servers and middleware. Standardisation and SOA allowed the monolithic applications to be broken up into functions that could be reused, shared and maintained independently. By using patterns and putting the application stacks into templates, automation enabled the self-service, elasticity and the dynamic workload provisioning benefits of private cloud.

(Figure: data centre optimisation)

Now that the functions have been sufficiently encapsulated and commoditised we are seeing more and more customers moving them onto a public cloud and perhaps linking them back to their core existing business as a hybrid cloud. Some companies are teaming together and sharing services in a club cloud.

However you don’t have to go through all these steps in order. It is possible to migrate workloads straight to the cloud, but it’s important to do this using a carefully considered methodology.

According to KPMG’s Global Cloud Provider’s Survey of 650 senior executives, the number one challenge of their adoption of cloud is the high cost of implementation/transition/integration. Steve Salmon of KPMG said “Implementation and integration challenges are critically important to overcome and can threaten both the ROI and business benefits of cloud.”

To get the most benefit from moving to the cloud it is critical that you understand your current portfolio of applications and services and align them with a cloud deployment strategy. Begin by looking at the strategic direction of your company. Next, analyse the business applications and infrastructure components and create a prioritised list of suitable workloads for migration to the cloud, as well as an analysis of the potential costs and migration impacts. Then look at your existing environment and determine an appropriate cloud computing delivery model, e.g. private, public, hybrid, community etc. Define the architectural model and perform a gap analysis, build the business case and then implement based on a roadmap.

Application Migration Assessment

An assessment needs to be carried out against each application, or group of applications assessing the benefit and effort of moving to the cloud. This will form a roadmap/prioritisation of which applications to move in which order. A typical application migration roadmap would be based on the following chart. This shows risk/ease of migration against gain. In terms of time it is recommended to start migrating the apps in the top right corner of the diagram and end with the ones in the bottom right.

(Figure: application migration assessment)

Transition vs. Transformation

When considering whether to move an application to the cloud, it is important to consider both the business purpose of the application and the role that application plays in supporting business and IT strategies. This context is important for considering whether to transition an application to a cloud environment, or whether to rearchitect or “transform” the application – and if so, how to do it.

 Transition, commonly referred to as the “lift and shift model,” is applied to situations when the application runs as-is or with minimal changes to the architecture, design or delivery model necessary to accommodate cloud delivery. For example, an application with no duplication of functionality and that supports current performance and security requirements would be a good candidate to transition to a cloud. The transition of such an application typically includes:

  • Selecting a private or public cloud environment to host the application.
  • Provisioning and configuring the Infrastructure-as-a- Service and the Platform-as-a-Service needed to deploy the application.
  • Provisioning the application to deliver built-in cloud characteristics such as monitoring, metering, scaling, etc.

When identifying enterprise applications for transition, there are a number of factors to consider:

  • Business model – Business services and capabilities should be separated from the underlying IT infrastructure on which they run.
  • Organisation – Enterprise IT governance should be well established. Service usage is tracked and the service owner and provider are able to reap paybacks for the services developed and exposed by them.
  • Methods and architecture – The application architecture should support service-oriented principles and dynamically configurable services for consumption within the enterprise, its partners and throughout the services ecosystem.
  • Applications – The application portfolio should be structured so that key activities or steps of business processes are represented as services across the enterprise.
  • Information – The application should have a well-defined business data vocabulary. This enables integration with external partners, as well as efficient business process reconfiguration.
  • IT infrastructure – Services can be virtualised such that any given instance may run on a different and/or shared set of resources. Services can be discovered and reused in new, dynamic ways without a negative impact on the infrastructure.
  • Operational management – Service management incorporated into the application design addresses demand, performance, recoverability and availability. It also tracks and predicts changes to reduce costs and mitigate the impact on quality of service.
  • Security – Good application and network security design supports both current and future enterprise security requirements.

In some cases, business and IT objectives and conditions warrant larger, more comprehensive changes to an application that is moving to the cloud than are possible under the transition approach. Transforming existing applications involves rearchitecting and redesigning the application to be deployed in either a private or public cloud environment. This path involves the application being redesigned to fit a more open computing model, for example to accommodate service-oriented architecture (SOA), exposed APIs or multi-tenancy. An SOA application model is valuable in that it allows for integration across custom and packaged applications as well as data stores, while being able to easily incorporate business support services that are needed for cloud deployment.

Typically, transforming applications for a cloud environment includes the same set of criteria as transitioning applications to a cloud environment, but with different conditions. Often, applications targeted for transformation are tightly coupled with enterprise legacy systems and do not meet current security, availability and scalability requirements. The situational factors that support the transformation decision include:

  • Business model – In an application that is a candidate for transformation, business services tend to be isolated, with each line of business maintaining its own siloed applications. Also, there is minimal automated data interaction or process integration between the silos. By transforming the application for cloud delivery, the organisation can extend the business value of the service or application to other lines of business (LOBs) and partners.
  • Organisation – When each business unit owns its own siloed applications, it defines its own approach, standards and guidelines for implementing, consuming and maintaining application-delivered services. These may not align well with the needs of the organisation as a whole.
  • Methods and architecture – In siloed applications there is no consistent approach for developing components or services. LOBs tend to throw requirements “over the fence” to the IT organisation, which then develops solutions without feedback from the business. The application architecture is typically monolithic and tightly coupled, with minimal separation between presentation, business logic and the database tiers. Often there is mostly – or in some cases only – point-to-point integration.
  • Applications – Usually, portfolios of discrete applications take minimal advantage of service-oriented architecture concepts, and business processes are locked in application silos.
  • Information – Information sharing tends to be limited across separated applications. Data formats are often application-specific and the relatively inefficient extract-transform-load process is the primary means for information sharing between applications.
  • IT infrastructure – Platform-specific infrastructures are maintained for each application and infrastructure changes have had to be put in place to accommodate service orientation, where it exists.
  • Operational management – Service management functionalities such as monitoring and metering to manage enterprise business applications and/or services are either not supported at all, or only to a limited extent.
  • Security – Enterprise application and network security enhancements are required in transformation candidate applications to meet current and future security requirements.

After selecting the appropriate cloud delivery model (private or public), the decision to transition or transform an existing enterprise application is important in order to help ensure a successful move to cloud. When resources are limited, it is possible for an enterprise to choose to run the transformation in parallel with transition to meet short-term needs, while planning for longer-term, best-in-class performance through application transformation.

Consider Engaging a Third Party

Enterprise-wide cloud implementation can be a challenging process. Operating under ever-tightening budgets, IT staffs typically spend most of their resources to simply maintain existing server environments. Even those organisations capable of building their own clouds often find, emerging from the testing stage, that they would benefit from outside management support. Because of these challenges, organisations must carefully think about how to best source their cloud technologies. In making a sourcing decision, they should keep in mind business design, necessary service levels and deployment models. The question of who will migrate, integrate and manage applications must also be addressed. After considering these issues, many organisations choose to turn to third-party technology partners for help with enterprise cloud endeavours. Find a third-party provider that offers its clients a choice in the areas of scope, service levels and deployment. The partner should offer deep expertise in strategic visioning and in cloud migration.

To protect the client’s cloud infrastructure, technology partners should provide multiple security and isolation choices, robust security practices and hardened security policies proven at the enterprise level. Security procedures should be transparent, providing visibility into cloud activities and threats. Data monitoring and reporting must be offered. Since business needs are ever-evolving, the best cloud partners also offer a full portfolio of associated services in the fields of security, business resiliency and business continuity. Finally, the client organisation should be able to maintain control over its cloud environment. Technologies that provide this type of control include online dashboards, which can allow the client organisation to manage the cloud environment from virtually anywhere in the world. 

Example Roadmap

An enterprise can choose to run the two approaches in parallel, transitioning some applications to meet short-term needs while planning for longer-term application transformation.

Middlesex University has embarked on the journey to cloud. They needed to reduce the number of machines from around 250 to 25, electricity usage by 40%, and physical space requirements from approximately 1,000 square feet to 400 square feet. Steve Knight, Deputy Vice-Chancellor, explained, “We need a system that allows flexibility according to our changing requirements. We were looking for a platform solution to complement our longer term plans to achieve a dynamic infrastructure.”

A popular roadmap, similar to the journey Middlesex is on, is:

  • Begin to consolidate, rationalise and virtualise the existing on premises infrastructure into a private cloud. Perhaps install a modern expert integrated system which allows you to start consolidating key applications onto a smaller and more economical estate. This should be done gradually so as not to affect current service delivery. Applications and images on the private cloud should be portable to a public cloud at a later date.
  • Start “experimenting” with hosted unmanaged cloud running new development and test workloads.
  • Look to an MSP to manage your on premises infrastructure and/or private cloud. Start using the cloud provider for disaster recovery and backup.
  • Eventually look to move everything to a flexible hosted managed cloud.

So in summary, a blended approach is needed, choosing whether to cloud-enable existing applications or to write new cloud native applications, depending on the characteristics of the applications and their integration requirements.

Cloud’s impact on the IT team job descriptions

This was first posted on businesscloud9 in July 2012

For many people Cloud is seen as an evolution of outsourcing. By moving the traditional IT resources into a public Cloud customers can focus on their core business differentiators. Cloud doesn’t take away the need for the hardware, software, and systems management, it just encapsulates and shields the user from them.

It puts IT in the hands of the external specialists working inside the Cloud. And by centralising IT and the skills, costs can be reduced, risk can be reduced, and businesses can focus on their core skills, giving improved time-to-market and business agility.

But where does this leave the customer’s IT department? Can they all go home, or do some of the roles remain, or are there actually new roles created? Will we have the skills needed for this new environment?

Let’s look in more detail at some of these roles and the impact that the extreme case of moving all IT workloads to external Cloud providers would have on them:

IT Strategy

Strategic direction is still important in the new environment. Business Technology and Governance Strategy is still required to map the Cloud provider’s capabilities to the business requirements. Portfolio Management & Service Management Strategies have increased importance to analyse investments, ascertain value, and determine how to get strategic advantage from the standardised services offered by Cloud. However, the role of Enterprise Architecture is significantly simplified.

Control is still needed although the scope is significantly reduced. IT Management System Control retains some budgetary control, but much of its oversight, coordination and reporting responsibilities are better done elsewhere. Responsibility for Portfolio Value Management & Technology Innovation is mainly handed to the Cloud provider.

At the operational level, Project Management is still required, while Knowledge Management has reduced scope but best practices and experiences will still need to be tracked.

IT Administration

The scope of IT Business Modelling is reduced as many of the functions in the overall business and operational framework are no longer required.

There are changes in administration control. Sourcing Relationships and Selection is critical for the initiation and management of relationships with providers. Financial Control and Accounting will continue to manage all financial aspects of IT operations. HR Planning and Administration is still required, but the number of people supported is reduced. Site and Facility Administration is no longer needed.

All of the operational roles in IT administration have increased importance. IT Procurement and Contracts as well as Vendor Service Coordination are needed to manage the complex relationships between the enterprise and the Cloud provider. Customer Contracts and Pricing is needed for the allocation of Cloud costs to internal budgets as well as providing a single bill for services from multiple Cloud providers.

The main casualties of the move to Cloud are the build and run functions. The Service Delivery Strategy will remain in-house, although once the strategic decision has been made to source solely from the Cloud this becomes largely redundant. Responsibility for the Service Support Strategy moves to the Cloud provider.

Service Delivery Control and Service Support Planning also move to the Cloud provider, while the Infrastructure Resource Planning functions are likely to be subsumed into the Customer Contracts and Pricing administration role.

Responsibility for Service Delivery Operations and Infrastructure Resource Administration moves to the Cloud provider. However, the help desk and desk-side support services from Service Support Operations remain essential activities for initial level 1 support; beyond this, support will be offered by the Cloud provider.

Further observations

Governance is a critical capability, particularly around maintaining control over SaaS adoption. Integration of services will be a challenge, but perhaps this will also be available as a service in the future. Relationships with partners and service providers in all guises will become increasingly important.

There is a potential issue with skills. With many of the traditional junior roles in development and operations moving outside the enterprise, it’s hard to see how candidates for these new strategy and coordination roles will gain the experience they need. Academia therefore has an important part to play in ensuring that graduates are equipped with the right skills.

In summary:

1. Most current job roles remain, although many have reduced scope or importance.
2. Fewer strategic roles are impacted than control or operational ones.
3. Build and Run are the main functions which move to the Cloud providers.
4. Planning and commercial skills are key, linking the IT department more closely to the business.

In my next blog post here I discuss Delivering the new skill sets needed for cloud.
