Posts Tagged 'Information technology'

Beyond WebSphere Family Monitoring

The integrated application processes that drive real-time business must flow smoothly, efficiently and without interruption. These processes consist of a series of interrelated, complex events which must be tightly managed to keep your business processes healthy. Achieving this demands deep visibility into the core metrics of business processes. But that level of insight is impossible when you’re limited to static events. This paper explores the next level in dealing with the increasing complexity of inter-application communication in the real-time enterprise: the ability to dynamically create events based on actual conditions and related data elements across the enterprise.

Stanford University emeritus professor David Luckham has spent decades studying event processing. In his book The Power of Events he says that we need to “view enterprise systems based on how we use them – not in terms of how we build them.” This is an important paradigm shift which can move us towards the goal of business activity monitoring.

Currently we have enterprise systems management (ESM) built on a foundation of event based monitoring (EBM). It provides availability and automation using event based instrumentation, threshold based performance monitoring, and alerts and notifications. This is good, but not good enough for the evolving needs of real-time business. Gartner says that event driven applications are the next big thing.

Typically in event based monitoring, events come from different sources – web servers, databases, applications, network devices, mobile devices – flowing through the IT infrastructure of middleware, application servers, applications and the network. The events are visible to the data centre. The idea is that the ESM system will monitor, detect, notify and take corrective action, either automatically or with manual intervention. But there are different business users, each with their own perspective.

Event based ESMs can’t really take good corrective action because they can’t correlate an event with its effect.

Here’s a typical example of a business activity. The head office requests a price change in all the stores. It updates the price in the master database, checks the inventory levels and then transmits the change to all the stores. But what happens if something goes wrong?

You, or the customer, will be asking lots of questions. 1. From an IT perspective you’ll want to know where the problem or slowdown is, i.e. which queue or channel has the problem that caused the change not to happen. But there are business level questions too. 2. You want to know why the change didn’t take place at all the stores. 3. You need to ask what the business impact of this is. 4. You may know that a channel has gone down, or that a price change hasn’t happened, but do you know what else has been affected?

These are problems that you don’t really have time to address. EBMs can’t handle this out of the box: they are designed at a technology level, and configuring them to understand the business is too hard. With constantly changing business and application needs you can’t adapt your monitoring and automation fast enough.

Here is another real life example: a stock trade or share purchase. The customer says they want to buy something. You check their account, then the stock availability and price, and they agree to buy at that price. Then you process and complete the trade and update the stock price.

This is straight-through processing. The transaction has to be done atomically, as one unit of work, within a certain time. These are serially dependent transactions. But what happens if they don’t complete in time? Again you will ask the same questions. From an IT perspective you will want to know the cause of the problem. But the business units who were affected will also want to know about it. You will want to know which business units and transactions were affected, and the right business stakeholders – and only the right ones – will need to be told. You will want to see the problem and its impact from a business perspective.

So what is the business impact? At a high level it’s loss of money. You’ll know that the transaction didn’t take place, but you won’t know the real root cause, so everyone will be blaming everyone else, wasting time and damaging morale and relationships. During this time you are not delivering the business service. You have damaged your relationship with the customer because you haven’t delivered what they needed and you can’t even explain why. So you’re going to lose your competitive advantage.

To get this root cause analysis and business impact analysis, customers normally have to put a lot of resource into customising an event based solution, or into developing their own monitoring solution, but this is not flexible enough, and it is not feasible in an increasingly complex technology environment. So we have to ask how we can have a monitoring solution that is flexible enough to keep our business systems productive, rapidly and constantly adapting to a changing IT and business environment.

So in summary, the big questions are: Why is it so hard to detect and prevent these situations? How can we make the transition to real-time, on-demand monitoring? How can we align our IT environment with the business priorities to achieve the business goals?

These problems arise because we’re using event based monitoring. Monitoring at an IT or technology level is preventing us from achieving business activity monitoring.

David Luckham refines this further in talking about business impact analysis – overcoming IT blindness. We should be looking at complex events, correlating or aggregating the various events and metrics to see the business impact. He talks about the event hierarchy and the processing of complex events. MQ has about 40 static events such as queue full, channel stopped and so on. But there are also events from WAS, DB2 and others, and there are metrics like channel throughput, CPU usage and time. There are also home grown applications which need monitoring, and there are business events and metrics. All of these need to be taken into account to produce a higher level complex event. For example, if a queue is filling up at a certain rate, you can calculate that in a certain amount of time you will receive the static, simple queue-high event. But by then it will be too late. You need to aggregate the metrics – queue fill rate, queue depth, maximum depth and time – to generate a complex event.
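To make this concrete, here is a minimal JavaScript sketch of the idea (all names and thresholds are hypothetical, not taken from any particular product): combine queue depth, maximum depth and fill rate to predict the time until the queue is full, and raise a complex event while there is still time to act.

function evaluateQueue(m) {
  // m: { queueName, depth, maxDepth, fillRatePerSec } sampled from the queue manager
  var remaining = m.maxDepth - m.depth;
  var secondsToFull = m.fillRatePerSec > 0 ? remaining / m.fillRatePerSec : Infinity;

  // Raise a pre-emptive complex event if the queue will fill within five minutes,
  // instead of waiting for the static "queue high" event to fire
  if (secondsToFull < 300) {
    return {
      type: 'QUEUE_FILLING',
      queue: m.queueName,
      secondsToFull: Math.round(secondsToFull),
      severity: secondsToFull < 60 ? 'critical' : 'warning'
    };
  }
  return null; // depth is stable or draining: no event
}

// 8,000 of 10,000 messages, filling at 20 msg/sec: full in 100 seconds,
// so a warning fires now, well before the simple event would
console.log(evaluateQueue({ queueName: 'PRICE.UPDATE', depth: 8000, maxDepth: 10000, fillRatePerSec: 20 }));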

So the problems with the current state of event based monitoring are:

Event causality – there’s not enough information to identify the root cause of the problem. The price update didn’t happen, but why? MQ failed, but why? Maybe it was a disk space problem. Maybe it was caused by something in a different part of the environment – a different computer or application.

Interpretation – looking at things at a simple technology level, we don’t have enough information to see the effect of a simple problem on the different parts of the enterprise – to see the effect from the perspectives of the different users, to notify them, and to resolve the problems it causes for them.

Agility – Out of the box ESM or EBM solutions cannot possibly know the business requirements. They require a lot of customisation when you initially set them up to be able to understand the effects of different problems on the different users and then constant customisation as the technological and business environment constantly changes. They are constantly playing a game of catch up that they can never win.

Awareness – Because they are only looking at individual points of technology they have a blindness to the end to end transaction. They cannot know how a simple technology problem affects the rest of the technologies or businesses.

Another shortcoming of the current generation of systems management is false positives. This is a big problem with simple event based monitoring: you get a storm of alerts. The data centre sees an event saying a queue is full. They call the MQ team, who say not to worry about it; it’s just a test queue, or a queue for an application that hasn’t gone live yet. After the first 24 times this happens, the data centre stops paying attention to queue full events. Then the 25th one arrives, for an important transaction that needs to be dealt with immediately, and they simply ignore it. The company loses business and it’s as if they didn’t have a monitoring solution at all. So what we need is a high level of granularity in queue monitoring, based not just on whether a queue is full but on which queue it is, who the application owner is, what time of day it is, which applications are reading from the queue, and so on.
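As a sketch of what that granularity might look like (the registry and its fields are hypothetical, purely to illustrate the principle), the raw event can be enriched with business context before anyone is paged:

var queueRegistry = {
  'TRADE.SETTLEMENT': { owner: 'trading-ops', live: true },
  'TEST.SCRATCH': { owner: 'mq-team', live: false }
};

function classifyQueueFull(event, now) {
  var meta = queueRegistry[event.queueName];
  if (!meta || !meta.live) {
    return { action: 'log' }; // test or unregistered queue: record it, page no one
  }
  var hour = now.getHours();
  return {
    action: 'alert',
    notify: meta.owner,
    severity: (hour >= 8 && hour < 18) ? 'critical' : 'high' // business hours matter more
  };
}

// The 25th, important, event is no longer lost in the noise of the first 24
console.log(classifyQueueFull({ queueName: 'TEST.SCRATCH' }, new Date()));
console.log(classifyQueueFull({ queueName: 'TRADE.SETTLEMENT' }, new Date()));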

It’s not enough to provide monitoring data; it has to be information, interpreted in a way that is useful. What we need is dynamic, metric based monitoring – the difference between events and metrics is the difference between data and information. You need metric based monitoring to create complex events: in-context, user specific events that are pre-emptive, raised before a real business problem happens, and actionable. The problem isn’t getting events, it’s correlating them with rules. You need to watch more than the vendor gives you out of the box; that alone can’t be enough.

There is something called the ‘aha phenomenon’. When a problem occurs you spend ages trying to identify the cause, looking at all the queues, middleware and applications. All the time you’re looking, the technology’s not running and the business is losing money. Eventually you find it and say ‘aha!’ Then what happens? Can you easily adapt your monitoring environment to make sure it doesn’t happen again, or at least so that you don’t have to search again when it does? In other words you need dynamic monitoring – where the monitoring environment of event correlation, metric selection and rule application can be constantly updated.
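One way to picture this – purely illustrative, not any vendor’s format – is to hold the correlation rules as data that can be extended at runtime, so each ‘aha’ is captured without rebuilding the monitoring solution:

var rules = [
  { name: 'price-update-backlog', metric: 'queueDepth', queue: 'PRICE.UPDATE',
    comparator: 'above', threshold: 5000, action: 'notify:trading-ops' }
];

function evaluate(sample) {
  // Return the actions of every rule matching this metric sample
  return rules
    .filter(function (r) {
      return r.metric === sample.metric && (!r.queue || r.queue === sample.queue);
    })
    .filter(function (r) {
      return r.comparator === 'below' ? sample.value < r.threshold
                                      : sample.value > r.threshold;
    })
    .map(function (r) { return r.action; });
}

// After the next 'aha!', capture the lesson as a new rule on the fly
rules.push({ name: 'mq-disk-pressure', metric: 'diskFreeMb',
  comparator: 'below', threshold: 512, action: 'notify:mq-team' });

console.log(evaluate({ metric: 'diskFreeMb', value: 300 })); // ['notify:mq-team']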

So let’s expand the vision of what we need. We need a unified approach like the service oriented architectures that are so popular for applications i.e. a reusable monitoring architecture. We don’t need a silo or isolated tool – the antithesis of SOAs. It needs to be a business oriented on demand solution. It needs to be modular, extensible, adaptable, scalable and reusable. We need instrumentation for all the different applications and middlewares. And the environment status needs to be shown to all the different stakeholders from their own perspective for their own roles and responsibilities.

By applying service oriented architecture principles we can achieve the business activity monitoring and business agility that we really need: a business centric solution aligning IT to the business processes, so the business can actually benefit from the technology rather than being constrained by it. Using this you can see the impact of a problem from all perspectives, and you can rapidly adapt to the changing business and technological environment, learning from mistakes. Currently 80% of IT resource is consumed by maintaining the technology. Using this architecture we can free those resources for other projects, develop the business and make more money.

In summary, this unified model gives business and technology continuity and automatic recovery. It gives very granular complex events, allowing root cause analysis and business impact analysis, because it is aware of the business processes affected by the technology and displays the information in a business context, giving an improved quality of service.

Of course there are pros and cons to being standards based. Some service oriented architecture stacks, such as .NET and Web Services, are still in flux. We need unified SOA security across all platforms. And being proactive in the way that is needed requires polling, which must be configured carefully to avoid performance problems.

But anyway, what I’ve proposed here is a unified model, a base for business activity monitoring. As David Luckham says, “The challenge is not to restrict communication flexibility, but to develop new technologies to understand it”. So I propose that the key to dealing with complexity and delivering true business activity monitoring solutions is a unified model based on a service oriented architecture. This doesn’t come out of the box, as no vendor or developer can know all your requirements, but it is a framework which is modular, extensible, adaptable, scalable and reusable enough to facilitate what we need.

© Sam Garforth   2005


Sam’s Views on Cloud for Government Policy Makers

I was honoured to be asked to present yesterday on “Cloud Skills, Flexibility and Strategy” at the Westminster eForum Keynote Seminar: Next steps for cloud computing.

The Palace of Westminster from Whitehall. (Photo credit: Wikipedia)

As explained on its website, Westminster Forum Projects enjoys substantial support and involvement from key policymakers within the UK and devolved legislatures, governments and regulatory bodies, and from stakeholders in professional bodies, businesses and their advisors, consumer organisations, local government and other interested groups. The forum is structured to facilitate the formulation of ‘best’ public policy by giving policymakers and implementers a sense of how different stakeholder perspectives interrelate, with the aim of providing policymakers with context for arriving at whatever decisions they see fit.

The abstract for the session asked to what extent Government departments are embracing the cloud, what progress is being made in achieving the UK’s Data Capability Strategy on skills and infrastructure development, and whether organisations are doing enough to address the emerging skills shortfall; it also asked about an apparent contradiction between mobile device power and cloud.

I was part of a panel and the following was my five minute introduction.

In my five minutes I’d like to talk about the power of cloud and within that to address three areas raised in the abstract to this session – shared services and shared data; mobile; and skills.

We see cloud as being used in three different ways – optimisation, innovation and disruption. Most of what I’ve seen so far in cloud adoption is about optimisation or cost saving. How to use standardisation, automation, virtualisation and self service to do the same things cheaper and faster.

What’s more interesting is the new things that can be achieved with the innovation and disruption that this can provide.

I’ve been working with various groups – local authorities, police forces and universities – discussing consolidating their data centres. Instead of each one managing its own IT environment, they can share it in a cloud. They justify this with the cost saving argument, but the important things are, firstly, that they can stop worrying about IT and focus on their real role, and secondly, that by putting their data together in a shared environment they can achieve things they’ve never done before.

The road to Welton, East Riding of Yorkshire, just south of Riplingham – typical south Yorkshire Wolds country. (Photo credit: Wikipedia)

For example, Ian Huntley would never have been hired as a school caretaker – and so the Soham murders would have been less likely to happen – if the local police force had had access to the data showing that he was already known to a different force.

And we wouldn’t have issues with burglars crossing the border between West and North Yorkshire to avoid detection if data was shared.

In Sunderland we predict £1.4m per year in cost savings from optimising the council’s IT environment, but what’s more important is that this has helped to create a shared environment where start-up companies can get up and running quickly, stimulating economic growth in the area.

Another example is Madeleine McCann. After her disappearance it was important to collect holiday photos from members of the public as quickly as possible. Creating a website for this before cloud would have taken far too long. Nowadays it can be spun up very quickly. This isn’t about cost saving and optimisation, it’s about achieving things that could never have been done before.

This brings me to the question in the abstract about mobile: “As device processing power increases, yet cloud solutions rely less and less on that power, is there a disconnect between hardware manufacturers and app and software developers”. I think this is missing the point. Cloud isn’t about shifting the processing power from one place to another, it’s about doing the right processing in the right place.

GPS navigation solution running on a smartphone (iPhone) mounted to a road bike. GPS is gaining wide usage with the integration of GPS sensors in many mobile phones. (Photo credit: Wikipedia)

In IBM we talk about CAMS – the nexus of forces of Cloud, Analytics, Mobile and Social, and we split the IT into Systems of Record and Systems of Engagement. The Systems of Record are the traditional IT – the databases that we’re talking about moving from the legacy data centres to the cloud. And, as we’ve discussed, putting it into the cloud means that a lot of new analytics can happen here. With mobile and social we now have Systems of Engagement. The devices that interact with people and the world. The devices that, because of their fantastic processing power, can gather data that we’ve never had access to before. These devices mean that it’s really easy to take a photo of graffiti or a hole in the road and send it to the local council through FixMyStreet and have it fixed. It’s not just the processing power, it’s the instrumentation that this brings. We now have a GPS location so the council know exactly where the hole is. And of course this makes it a lot easier to send photos and even videos of Madeleine McCann to a photo analytics site.

We’re also working with Westminster council to optimise their parking. The instrumentation and communication from phones helps us do things we’ve never done before, and then we move on to the Internet of Things, putting connected sensors in parking spaces.

With connected cars we have even more instrumentation and possibilities. We have millions of cars with thermometers, rain detection, GPS and connectivity that can tell the Met Office exactly what the weather is with incredible granularity, as well as the more obvious solutions like traffic optimisation.

Moving on to skills: IBM has an Academic Initiative where we give free software to universities, work with them on the curriculum and even act as guest lecturers. With Imperial College we’re providing cloud based marketing analytics software, as well as data sets and skills, so that they can focus on teaching the subject rather than worrying about the IT. With computer science in school curriculums changing to be more about programming skills, we can offer cloud based development environments like IBM Bluemix. We’re also working with the Oxford and Cambridge examination board on their modules for cloud, big data and security.

Classroom 010 (Photo credit: Wikipedia)

To be honest, it’s still hard. Universities are a competitive environment and they have to offer courses that students are interested in rather than the ones that industry and the country need. IT is changing so fast that we can’t keep up. Lecturers will teach subjects that they’re comfortable with, and students will apply for courses that they understand or that their parents are familiar with. A university recently offered a course on social media analytics, which you’d think would be quite trendy and attractive, but it had only two attendees. It used to be that universities would teach theory and the ability to learn, and then industry would hire the graduates and give them the skills; but now things are moving so fast that industry doesn’t have the skills and is looking to the graduates to bring them.

Looking at the strategy of moving to the cloud, and the changing role of the IT department, we’re finding that by outsourcing the day to day running of the technology there is a change in skills needed. It’s less about hands on IT and more about architecture, governance, and managing relationships with third party providers. A lot of this is typically offered by the business faculty of a university, rather than the computing part. We need these groups to work closer together.

To a certain extent we’re addressing this with apprenticeships. IBM has been running an apprenticeship scheme for the last four years. This on the job training means that industry can give hands on training with the best blend of up to the minute technical, business and personal skills, and it has been very effective, with IBM winning Best Apprenticeship Scheme awards from the Target National Recruitment Awards, the National Apprenticeship Service and Everywoman in Technology.

In summary, we need to be looking at the new things that can be achieved by moving to cloud and shared services; exploiting mobile and the internet of things; and training for the most appropriate skills in the most appropriate way.

Using a Cloudant database with a BlueMix application

I wanted to learn how to use the Cloudant database with a BlueMix application. I found this great blog post Build a simple word game app using Cloudant on Bluemix by Mattias Mohlin. I’ve been working through it.


I’ve learned a lot from it – as the writer says “I’ll cover aspects that are important when developing larger applications, such as setting up a good development environment to enable local debugging. My goal is to walk you through the development of a small Bluemix application using an approach that is also applicable to development of large Bluemix applications.” So it includes developing on a PC and also setting up Cloudant outside of BlueMix.

So here’s my simplified version focusing purely on getting an application up and running using a Cloudant BlueMix service and staying in DevOps Services as much as possible.

The first step is to take a copy of Mattias’s code so go to the GuessTheWord DevOps Services project.

Click on “Edit Code” and then “Fork”.


I chose to use the same project name GuessTheWord – in DevOps Services it will be unique as it’s in my project space.


This takes me into my own copy of the project so I can start editing it.

I need to update the host in the manifest file, otherwise the deployment will conflict with Mattias’s. In my case I change it to GuessTheWordGarforth, but you’ll need to change it to something else, otherwise yours will clash with mine. Don’t forget to save the file with Ctrl-S or File/Save before moving on.
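For reference, the relevant part of manifest.yml looks something like this (a sketch from memory rather than Mattias’s exact file – the host value is the one to change):

applications:
- name: GuessTheWord
  host: guessthewordgarforth   # must be unique across mybluemix.net
  memory: 128M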


Now I need to set up the app and bind the database on BlueMix so I click on “deploy”. I know it won’t run but it will start to set things up.

At this point I logged onto BlueMix itself for the first time and located the new GuessTheWord in the dashboard.


I clicked on it and selected “add a service”, then scrolled down to the Cloudant NoSQL DB and clicked on it. I clicked on “create” and then allowed it to restart the application. Unsurprisingly it still did not start, as there is more coding to do. However, the Cloudant service is there, so I clicked on “Show Credentials” and saw that the database has a username, password, url and so on. The registration on the Cloudant site is therefore not necessary, as this is all handled by BlueMix.

Clicking on Runtime on the left and then scrolling down to Environment Variables, I can see that these Cloudant credentials have been set up in the VCAP_SERVICES environment variable for my app. So I just need to change the code to use them.

I switch back to DevOps Services and go to the server.js file to modify the code for accessing this database.

I change line 27 from
Cloudant = env['user-provided'][0].credentials;
to
Cloudant = env['CloudantNoSQLDB'][0].credentials;

So we’re referencing the service by its type key in VCAP_SERVICES, not by its name or label.
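For context, here is roughly what that lookup in server.js amounts to (a sketch; the exact service key casing can vary between BlueMix releases, so check the Environment Variables page if it fails):

// VCAP_SERVICES is a JSON string that BlueMix sets when services are bound
var env = JSON.parse(process.env.VCAP_SERVICES || '{}');
if (!env['CloudantNoSQLDB']) {
  throw new Error('Cloudant service not bound to this application');
}
var Cloudant = env['CloudantNoSQLDB'][0].credentials;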

Unfortunately there is also an error in Mattias’s code – I don’t know whether the BlueMix Cloudant service has changed since he wrote it – in that he builds the database url by adding the userid and password to it, but these are already embedded in the url in my environment variable.

so I change line 30 from

var nano = require('nano')('https://' + Cloudant.username + ':' + Cloudant.password + '@' + Cloudant.url.substring(8));
to simply
var nano = require('nano')(Cloudant.url);

Now save the file and click deploy. When it’s finished, a message pops up telling you to see the manual deployment information on the root folder page.


So I click on that and hopefully see a green traffic light in the middle.


Click on the GuessTheWord hyperlink and it should take you to the working game, which in my case is running at

http://guessthewordgarforth.mybluemix.net/


However, there are still no scores displayed, as the database and its documents don’t exist yet.

I spent a long time trying to do this next part in the code but eventually ran out of time and had to go through the Cloudant website. If anyone can show me how to do this part in code I’d really appreciate it.
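For what it’s worth, I suspect something along the following lines would do it at startup using nano (untested, so treat it as a sketch – the database and index names match the manual steps below, and Cloudant.url is the credentials url from earlier):

var nano = require('nano')(Cloudant.url);

var designDoc = {
  views: {
    top_scores_index: {
      map: function (doc) {
        emit(doc.score, { score: doc.score, name: doc.name, date: doc.date });
      }.toString()
    }
  }
};

nano.db.create('guess_the_word_hiscores', function (err) {
  if (err && err.statusCode !== 412) return console.error(err); // 412 means it already exists
  var db = nano.db.use('guess_the_word_hiscores');
  db.insert(designDoc, '_design/top_scores', function (err) {
    if (err && err.statusCode !== 409) return console.error(err); // 409 means it is already there
    console.log('guess_the_word_hiscores is ready');
  });
});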

So for now, go to the GuessTheWord app on BlueMix and click on the running Cloudant service


From here you get to a Launch button


Pressing this logs you on to the Cloudant site using single sign on


Create a new database named guess_the_word_hiscores. Then click the button to create a new secondary index. Store it in a document named top_scores and name the index top_scores_index. As Mattias says, the map function defines which objects in the database are categorised by the index and what information we want to retrieve for those objects. We use the score as the index key (the first argument to emit), then emit an object containing the score, the name of the player, and the date the score was achieved. Following is the JavaScript implementation of the map function, which we need to add before saving and building the index.

function(doc) {
  emit(doc.score, {score : doc.score, name : doc.name, date : doc.date});
}
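For completeness, this is roughly how the /hiscores route could read the index back through nano (again a sketch – Mattias’s actual handler may differ):

var db = nano.db.use('guess_the_word_hiscores');
db.view('top_scores', 'top_scores_index', { descending: true, limit: 10 },
  function (err, body) {
    if (err) return console.error(err);
    var scores = body.rows.map(function (row) { return row.value; });
    console.log(scores); // e.g. [{ score: 4, name: 'Bob', date: '...' }]
  });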


Again, we should really be able to do this as part of the program startup, but for now the following URL should add an entry to the database. Replace guessthewordgarforth in the URL with the host name you chose for your application:

http://guessthewordgarforth.mybluemix.net/save_score?name=Bob&score=4

You should see a success message. Enter the following URL, again replacing guessthewordgarforth with your application host name.

http://guessthewordgarforth.mybluemix.net/hiscores

The entry you just added should appear, encoded as JSON, e.g.

[{"score":4,"name":"Bob","date":"2014-08-07T14:27:34.553Z"}]

So, the code and the database are working correctly. Now it just remains to play the game. Go to

http://guessthewordgarforth.mybluemix.net

(replacing guessthewordgarforth with your hostname)

This time it will include Bob in the high score table.


Now click on “Play!” and enjoy the game.


Cloud computing trends in the UK: IaaS, PaaS & SaaS

This post was originally published on ThoughtsOnCloud on June 17th, 2014.

I’ve been a cloud architect for the last three years or so and have seen dramatic changes in the IT industry and its view of cloud. I’ve also observed different attitudes to cloud in different industries and countries.

I took on the cloud architect role because I saw that customers were asking about cloud, but they all had different ideas of what this meant. Depending on whom they spoke to first they could think it was hosted managed software as a service, or they could think it was on-premise dynamic infrastructure—or many other permutations between. My job was created to talk to them at the early stage, explain the full scope of what it means, to look at their business requirements and workloads and align them to the most appropriate solution.

Three years later you would hope that it’s all a lot clearer and in many ways it is, but there are still preconceptions that need to be explained, and also the cloud technologies themselves are moving so rapidly that it’s hard enough for people like me to stay abreast of it, let alone the customers.

To begin, I noticed some fairly obvious differences, many of which still hold. The large financial institutions wanted to keep their data on premise, and they had large enough IT departments that it made sense for them to buy the hardware and software to effectively act as cloud service providers to their own lines of business. Some investment banks saw their technology as a key differentiator and asked that I treat them as technology companies rather than banks. They didn’t want to give away the ownership of IT; the attributes of cloud they were looking for were standardisation, optimisation and virtualisation.

On the other hand, I was working with retail companies and startups who saw IT as an unnecessary cost, a barrier to their innovation. They saw cloud as a form of outsourcing, where a service provider could take on the responsibility of looking after commodity IT and let them focus on their core business.

A third sector is government and the wider public sector. This is very different in the UK from other countries. In the United States the government is investing in large on-premise cloud solutions, which avoids many of the security and scalability issues. In the UK, with a new government following the global financial crisis, there is an austerity programme, which led to the Government ICT Strategy and Government Digital Strategy and the announcement of the Cloud First policy. This requires that government bodies use hosted, managed cloud offerings, and it encourages the use of open source and small British providers.

The British Parliament and Big Ben. (Photo credit: ** Maurice **)

Our health sector is also very different to the U.S., with our public sector National Health Service being one of the largest employers in the world, whereas in the U.S. health has much more of an insurance focus.

Over the years in all industries there has been a lot of fear, uncertainty and doubt about the location of data and whether or not there are regulations that make this an issue. I’m glad to say that we’ve now worked through a lot of this and it’s a lot clearer to both the providers and the consumers.

In practice most of the cloud investment that happened was infrastructure as a service (IaaS). Much of this was private cloud, with some usage of public cloud IaaS.

We used to get a lot of interest from customers, whether meteorological or academic research, looking for high performance computing clouds. This made a lot of sense, as the hardware required is very expensive and some customers only need it for short periods of time, so having it available on a pay as you go basis was very attractive. Last year IBM acquired SoftLayer, which offers bare metal IaaS as well as virtualised. This makes HPC cloud more attainable, and with it has come a shift in the perception of cloud, away from virtualisation and towards the hosted, utility based pricing view.

The big change this year is the move from IaaS to platform as a service (PaaS). With the nexus of forces of mobile devices (phones, tablets, wearable devices, internet of things), social media generating large amounts of unstructured data, and high performance broadband, there is a new demand and ability to deliver cloud based mobile apps connecting and exploiting data from multiple sources. This reflects a shift in the IT industry from the systems of record, which store the traditional, fairly static, structured data, to the new systems of engagement, which are much more about the dynamic customer interface and access to the fast changing data.

Developers are becoming key decision makers. They often work in the line of business and want to create business solutions quickly, without the blocker of the traditional IT department. Using IaaS and devops to optimise the speed to market of business solutions was the first step. Now customers are looking to PaaS to give them immediate access to the whole software development environment – the infrastructure as well as the middleware needed for developing, testing and delivering solutions quickly and reliably with minimal investment. This also includes the new open source middleware and polyglot languages.

Finally, SaaS. We are talking to many companies, public sector bodies, and education establishments, who want to become entirely IT free. They don’t want a data centre and they don’t want developers. This requirement is now becoming achievable as IBM and others are committed to making a significant proportion of their offerings available as SaaS solutions. Of course, this brings new challenges around hybrid cloud integration and federated security.

Do my views of the trends in UK cloud align to yours? I’d love to see your comments on this.

Delivering the New Skill Sets Needed for Cloud

This was first posted on businesscloud9 in November 2012

Cracow University of Economics - Lecture Room ...

In my previous blog post here I discussed the potential impact of Cloud on the roles performed by a traditional IT department. I discussed that the build, run and operational roles will be significantly reduced as they will move to the cloud to be performed by automated systems or by managed outsourcing companies. However, there will be an increase in the importance of IT strategy, governance, and relationship management. Commercial and financial skills will be key in aligning the relationships with the different cloud providers, and the internal and external customers. The IT department will be more closely linked to the business, and technical skills will still be needed to integrate the services and provide first line help desk support. I suggested that there is a potential challenge with skills. With many of the traditional junior roles in development and operations moving outside the enterprise, what can be done to ensure that candidates for these new strategy and coordination roles will gain the experience they need?

I think that for most roles there won’t be a big change in training. Currently project management, relationship management, and financial roles in the IT industry are not typically staffed by people who studied Computer Sciences.  The skills were taught as part of industry agnostic courses or even after being recruited into work. There is no need for the content of the training itself to change.  Although there have been many reports of a decline in the number of applicants for traditional university science courses this may not be an issue as the balance of skills in the IT department will change to include more of these less technical, more business aligned roles.

For the IT strategy, governance and control skills, some of these will need to be taught as dedicated subjects within Computer Science courses. Some of the lower level technical skills that are being outsourced will need to be taught so that they can be appreciated and understood even though they will not be used directly in industry.

E-skills research states that employment in the IT industry is forecast to grow at 5 times the national average over the next decade.  With the advent of devices such as the Raspberry Pi, I believe that there will be a resurgence of people gaining technical skills in their home life which hasn’t been seen since the home PCs of the 80s were replaced by games consoles.

Where IT roles have been replaced by automated systems, there is a need for training on how these systems work, how to choose them and how to use them. Students will need to understand the concept of the Cloud Service Provider and how to engage with them. It would be very valuable for students and new hires to spend a period of time on secondment to a Cloud Service Provider. This way, they could develop the technical hands on expertise that has been outsourced, forming a basis of the required governance, strategy and control skills.

Universities are changing rapidly. With the introduction of student tuition fees this year students are now customers and universities are more competitive and more focused on attracting students than ever before. They are starting to work closely with businesses.  Companies like IBM are working closely with industry and academia, including schools and universities, to develop smarter skills that will prepare students for the jobs of tomorrow – whatever they might be.

Through the IBM Academic Initiative, IBM is partnering with a number of universities to expand the resources and experiences offered to students, better preparing them for the careers of tomorrow. For example, IBM collaborates with the University of the West of Scotland giving students access to software and technology training to gain skills in business analytics and business modelling.

IBM is also piloting an Academic Skills Cloud to make its software available in a cloud computing environment to more easily allow universities to incorporate technology into their curricula, enabling universities to be more agile and nimble in keeping students up to date with the latest technologies.  London Metropolitan University is the first in the UK to use this. Anthony Thomson, Chairman of Metro Bank, said, “Metro Bank hires for attitude and trains for skill, but can only recruit among the select number of graduates with strong aptitude for IT. Academic initiatives, such as the one set up by IBM and London Metropolitan University, are extremely useful in helping to build a level of graduates who have the suitable skills set that is required by employers.”

In summary, the future looks bright. We have most of the skills that we need and good progress is being made to establish a pipeline of the right skills for the future.

Infrastructure Optimisation Using Cloud for Higher Education

This post was originally published on ThoughtsOnCloud on February 13, 2013

In my previous blog post I discussed the benefits of using cloud in each of the three pillars of a higher education organisation – administration, education and research. In this post I cover the optimisation of the infrastructure that underpins all of these pillars.


A university typically runs an IT environment similar to any small or medium sized enterprise (SME). It might run process management software, a web portal, collaboration software, HR and finance software and student relationship management software, on multiple operating systems, all interlinked using an enterprise service bus (ESB) with service-oriented architecture (SOA), open standards and a common security directory.

This is not really the university’s core business. The university doesn’t want to maintain the skills to run these systems and, more importantly, doesn’t want to worry about the underlying operating systems and databases. Ideally the university would have an empty data centre, with these products managed by a cloud service provider (CSP). The university would retain responsibility for the business function, such as the custom nodes of the ESB and the process management workflows, while the CSP would upgrade the products when necessary. With a well developed component architecture, the university could purchase the various components from separate CSPs and connect them with cloud broker software, itself also available on the cloud.

Universities might want to own their own software licenses for the normal workload, but there will be peak periods when more CPUs are needed than the licenses allow (for example, student registration is used far more in late August and early September, yet currently universities have to pay for this peak capacity all year). With cloud, they can potentially pay for this excess on a pay-as-needed basis.

In this environment, provisioning is more important than ever; universities might benefit from IBM SmartCloud Provisioning with the Hybrid Cloud Integrator plug-in to provision to IBM SmartCloud Enterprise and manage the images, although IBM SmartCloud Enterprise does have a good portal and APIs of its own.

Service wrappers for management of the middleware and database can be added, or universities can continue to do this themselves and adopt the extended services as they become standard options in future releases.

As described in the student administration section of my previous post, multiple institutions can benefit from sharing services and data centres in community clouds.

Staying with private cloud shared between faculties: a dynamic infrastructure that measures, predicts and manages the cloud can offer virtualised resources, delivered with elastic scaling and benefiting from economies of scale. In moving its own development infrastructure to cloud, IBM achieved an 84 percent annual saving of $3.3 million by reducing hardware, labour, power and software license costs.

At North Carolina State University (NCSU) a multi-institute Virtual Computing Laboratory (VCL) serves 30,000+ students and staff and has reduced software license costs by 75 percent. NCSU now makes VCL available to 250,000 users through partners in North Carolina and beyond. The software was donated to the Apache foundation by North Carolina State University and IBM.

Through the IBM Cloud Academy, IBM collaborates with K-12 schools and higher education institutions to integrate cloud technologies into their infrastructures, sharing best practices and working together on the transformation.


