Was it an insult, I thought, being congratulated for not being technical? It felt like one! Of course it wasn’t, but it got me thinking about what it means to lead a technology team in a large organisation, where the best way to get engagement is to strip out all references to technology and ensure that business benefit is at the heart of every conversation about the delivery.
There is a risk, though: every time we reiterate to the team that we are delivering business change projects that use technology to assure the benefits, we potentially drive another nail into the coffin of technology as an area of expertise. And if we forget the technology because we are busy ensuring the benefits are realised, then just maybe the technology won’t work. After all, it will not look after itself! The role of ‘IT professional’ has now been recognised for an unbelievable 80 years.
The role now sits at a pinnacle position in the majority of global organisations, and, as we all know, ‘with great power comes great responsibility’. The question, though, is where that responsibility best sits: with the business change capability or with the delivery of technology?
So, the trick is where to draw the line. The CIO of any organisation needs to be technically capable, aware of technology and of the innovation that can be delivered through its implementation. The need to innovate can only be limited by the organisation’s capacity for change and the business need to do so. The CIO must be able to facilitate change, and in some cases lead it, given the disruption technology can bring. However, being able to tell the difference between a CAT5 and a CAT7 cable is still important! The CIO has to maintain credibility in his or her own peer groups; some of that comes from business drive and success, but some will come from the CIO’s technical battle scars and badges. Without those badges and scars, a good manager with a bright mind could lead any technology team. And actually, maybe they can!
Early in my career I worked in a team of four project managers. We were not like the IT Crowd, but the next sentence may call that into question. We argued, probably daily, over who was the most technically capable of the four of us. The challenges ranged from identifying the CAT cable type to being the office Excel wizard and master of the pivot table. The fact that I, number three on a good day back then, am now CIO of an organisation speaks volumes for the role of the modern CIO compared with perceptions of the role as recently as five years ago. And yet in the last couple of weeks I have felt the need to revitalise the importance of technology knowledge and capability in the CIO role.
To be a good CIO requires the ability to translate technology into business delivery, not to let every project assume technology is business delivery with a wired layer above it. We successfully migrated our email provision in the last couple of weeks, but there were bumps along the way. The vast majority of those bumps were technology based: sizing servers for the migration process and replacement kit issues. Schoolboy errors! How did we let this happen? We focused so much on the business benefit. This was not to be a technical project; the training delivery and business change elements were immaculate, but we missed two technical elements that, on day one, gave us a bump, because we were concentrating on ensuring the business had everything it needed.
The team were great: they spotted the issues and ensured that user impact was minimal. But if we had run the technology part of the project as we would have in the past, we would not have been distracted and missed the elements that caused the bumps. The technology team can and should deliver for the business, should talk the business talk and be useful for more than wires, but it should never forget that it is there to make the technology work for the customer.
Searching for the balance between technical capability and business focus is like looking for a black cat in a dark room! So how do we find the cat? In health, the balance between technology and business capability is a fine line, and not even a straight one, I would suggest. Each delivery project or service needs to be evaluated not just on the final delivery but against the stakeholders within it. With our email project we knew the stakeholders needed to be ‘sold’ on the business benefit of the migration, and that the words we used could not be technology based. Ultimately, migrating was a business imperative, but the business needed to be taken on a journey.
What was once known as ‘next practice’ with technology is now best practice, and the role of the technology professional needs to be more multi-faceted than ever before.
However, I am now counselling the team also to remember that we are here as technology professionals and must not forget the technology; let the ‘geek’ part of us be at peace and shine through. It makes sense that as organisations flourish they begin to look to technology to be innovative in delivering disruptive change, rather than fighting a rearguard action to save money, and we need to be there for that. But when a project is about delivering a technical change, we have learnt that we need to get the technology 100% right and support the business change, in that order!
The 1st of April meant so much to our organisation this year! A complete change in how we manage the delivery of clinical research in the NHS goes ‘live’. No fuss, no trumpets, it simply comes into being: a change from over 100 contracts across the NHS to just 15, a change that sees a network of organisations empowered to deliver and take ownership of clinical research delivery still further.
For the area of the organisation tasked with delivering information systems to support research, what does this mean? Well, firstly, a ‘big bang’ go-live, something you are ‘taught’ to avoid at all costs, needs to happen across multiple integrated systems for the new structures, all at the same time on the same day! Changes to the data models, reference data, workflow, user-based access controls, task labels, reporting infrastructures, website addresses; you name it, there is an IS component in there that needs to change as the clock ticks over to midnight plus one on the 1st of April.
Protecting ‘the business’ capability through this transition was something the team were tasked with managing, and rightly so. In a business where information is the foundation of what we do, this is a clear priority: the delivery of what we do needs to continue, and performance needs to be maintained through any change.
The team has a strategy that by and large sees best-of-class solutions deployed across the infrastructure, so maintaining integration while delivering new systems is no easy ask. The team have applied control through shared resource and a single model of understanding of the changes, not to mention some well-placed business understanding and support. We are very lucky to have a development team with an in-depth understanding of our business, our data structures and our business needs. The developers were able to get close to the business and the change programme to build a series of specifications in conjunction with the Business Analysis team. Not quite Agile, but a hybrid model in which the developer could translate requirements directly with the business.
The 1st of April came and went, not completely smoothly, but the impact of the many changes to the deployed systems was kept to a bare minimum. The project and service wrap around the systems deployment was effective, and by the 3rd of April we could say that all systems were live and functional for the new structures. The lessons learnt were in how the team worked: how it got close to the business and maintained that level of interaction throughout, and the level of interaction during go-live, keeping all the key stakeholders informed and able to support and react if and when issues came up.
All in all, not an April Fools’ Day trick, just a really good result that will continue to be built upon over the next few weeks as any issues are reported, understood and fixed, with cutover satisfaction at the heart of the delivery.
Richard Horton is a Service Delivery Manager at the NIHR CRN. He has had responsibility for the service improvement of one of our major systems over the last 18 months, turning a service described by some as the ‘rotten tomato’ service into one that the service board are now hugely proud of.
When organisations look at managing their IT systems as services, typically they start by focusing on how to get people up and running again when something doesn’t work as it should, then they turn to ensuring that changes to the systems in question happen safely. Once this sort of control is in place they start thinking about how to deal with the underlying problems that keep causing things to go wrong. The basic theory here is that behind a problem there is a root cause, so you dig beyond the effect that the problem has and identify the cause, which then means you can fix it.
That makes it sound simpler than it is. Let me take an extreme example – climate change. We know that sea levels are rising. What is the cause of this? Is it melting ice? How much does that diagnosis help us? What is the cause of melting ice? The world heating up. What is the cause of the world heating up? And here the situation becomes complex because people don’t agree. Is it all a result of human behaviour? Is it just natural climate cycles? Is it a combination and if so how much of this are we able to influence?
I was struck recently by a scientist who challenged a politician along the following lines. The politician was asked if he insures his house against fire and flood, and the answer was yes. He was then asked why, if he insures his house against an event with a less than 1% chance of happening, he doesn’t take action on climate change, where scientists would say there is at least a 90% chance of it being a result of human behaviour.
Here we see some of the complexities. Insuring your house is a no-brainer – something that is culturally so established that we don’t question it. Changing our environmental impact is completely different. What actions should we change and what difference would they make? How can we get a global change in behaviour? How long would an effect take to be felt? How much is the world prepared to invest? How far will unilateral action make a difference?
And all the time we face a challenge. We won’t see the benefit of the huge investment required for a long time, in fact the people who would really benefit (or suffer) from the actions we do or don’t take are our children and their children.
We make decisions all the time based on our perception of risk and opportunity. However, as the scientist’s question highlights, our decisions aren’t necessarily fully thought through or rational and consistent.
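The scientist’s challenge is, at heart, a piece of expected-value arithmetic. The sketch below makes it concrete; all the figures (the rebuild cost, the cost attached to climate inaction) are invented for illustration, not taken from the argument itself.

```python
# Illustrative expected-loss comparison. All monetary figures are
# assumptions chosen purely to make the asymmetry visible.
def expected_loss(probability: float, cost: float) -> float:
    """Expected cost of an event = probability of the event x its impact."""
    return probability * cost

# House fire: a less than 1% annual chance, say a 250,000 rebuild cost.
house = expected_loss(0.01, 250_000)

# Human-driven climate impact: at least a 90% probability, with a large
# (hypothetical) cost attached to inaction.
climate = expected_loss(0.90, 1_000_000)

print(f"House fire expected loss: {house:,.0f}")
print(f"Climate expected loss:    {climate:,.0f}")
```

However the hypothetical costs are chosen, the expected loss of the near-certain risk dwarfs the insured one, yet only the first is routinely covered.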
I work in the health research arena. Here long timescales are part of the territory, with a drug typically taking 17 years to get from the initial idea to a drug that’s available for patients. So, for example, it’s a concern now that pharmaceutical companies aren’t generating the next generation of antibiotics, though the effect of that absence won’t be felt for a few years.
In the past research didn’t necessarily get its appropriate place in its NHS context. Money was provided but there was nothing stopping that money being diverted to satisfy short term operational needs. While this may keep hospitals running in the short term it won’t address those future questions. Government money has been assigned for some years now, and the supporting research infrastructure has been refashioned to give clearer focus here. The results are being felt in more research happening more efficiently and more effectively. That’s only part of the equation – pharmaceutical industry investment is needed too, for example – but it gives an idea of how some structural changes can make a difference in our effectiveness in addressing problems with complex root causes and hard to quantify risks.
On a micro level, I’m involved in the same sort of activity – seeking to tease out those causes and issues so that the IT services we provide are fit for purpose and support the wider health research community. There is a constant tension between the short term demands of delivery today and positioning our service appropriately for the future. And, as with climate change, it’s a lot easier to argue for concentrating on something today that delivers a tangible outcome than to argue for making an investment for the future that gives a more indirect and less easily quantified benefit.
So, we come back to the beginning. Identifying root causes is a big challenge, but so too is defining the problem appropriately. It’s not just about spotting an immediate effect, it includes spotting time bombs that are waiting to catch us out. It’s not just about the obvious outcome that has already been or could be experienced, it includes having clarity over the deeper impact, and exploring what actual level of risk exposure we are carrying. And even then it’s not enough. If we don’t care about the future or are placed under undue pressure or do not have the capability to gain a consensus among the people with the power to change things, then even with all the evidence in front of us we will take the short term gain decision. Considering the longer term is fundamental to success in areas like health research and climate change, but is not just a Big Picture consideration. We can apply the same principles at our more local levels. We can, and should, seek to improve our practice in this tricky area bottom-up as well as top-down.
Of course, it would be nice not to have the problems in the first place. If we can put effort into designing solutions to avoid problems, we make life a lot easier for ourselves: problems are harder and more expensive to identify and resolve after implementation than at the design stage. We can’t side-step cancer like this, but we can address problems faced by researchers by giving the research process they use this sort of treatment. To go back to my first example, if we can design solutions for the future which anticipate the challenges our responses to climate change are likely to face, and if we can make it possible to address those challenges, then there is more chance of success. Sound like a tall order? I didn’t say it was easy.
To connect with Richard and find out more about his experiences in service management within the Clinical Research Network contact him at: Richard Horton
When I was a young chap I wanted to be called Troy! What a strange name to pull out of the hat at the age of nine! However, it was entirely based on one of the lead characters from Stingray, and I was easily influenced! Some names we are given and some names we choose. In recent weeks we have been challenged, quite rightly, on the names of two of our major systems: the Open Data Platform and the AppCentre. Reflecting on those challenges, and on how we came to the names, prompted me to think about naming in the context of all that we do in our business.
The moniker Open Data Platform (ODP) was put in place for a number of reasons. To us, the platform is the first time we have opened up our data from specialty to specialty and from contracted organisation to organisation, so the name made sense. The ODP term is also about a mindset change: moving towards opening up our data to as wide an audience as feasibly possible. Over 80% of our data is truly open data; anyone can interrogate it and understand what academic-led research is going on where in the UK.
Is a name a brand, though, or a signpost for what the solution is? With ODP it has become a brand: the brand of a strategy with a series of applications and tools associated with it. The fair challenge to the ODP name, though, is that it references not only our brand but also some amazing work going on globally to truly open up data all over the world, and as we have yet to open up our own data we are perhaps perverting the name. The intention is good, however; we will get to the true meaning of ODP in the next year, with the support of all our partners and in a way that does not expose information in a risky way.
The AppCentre also came under scrutiny, the main thrust being: what is an App? The common parlance, I guess, is the Apple definition: a piece of software that you download and that becomes part of your tool box.
The issue levelled against the AppCentre name is that some of the ‘Apps’ are simply the application layer for the ODP and are not items you download to your own systems. However, the AppCentre does enable access to all of the Business Intelligence systems we are deploying, and many of the Apps now available elsewhere are often no more than skins and pointers to a website; many of our Apps are exactly this.
Whether we need to change the names of these systems now needs to be considered. Re-branding systems is always difficult: both have a good degree of user adoption, so the effort that would need to go into promoting a new name would not be insignificant.
As part of the restructuring of our organisation, my directorate is changing its name to reference the functions it delivers. Moving from the Informatics directorate to the Knowledge and Information functional area is an exciting change and will make it easier for people outside the team to know what we do. But back to the title of this blog: ‘What is in a name?’ The difference that changing a name makes should really only be about ease of understanding, not a marketing ploy. It always confused me when the Marathon chocolate bar became Snickers; there was no change to ‘functionality’ or delivery, but it did make it easier to understand what it ‘did’ globally, as Marathon was a very UK-centric moniker.
Once you start to look into this idea you quickly realise how many times the industry has renamed itself; you can ‘age’ colleagues by the phrase they use for the team that manages technology: are we IT, ICT, IS, Networks and Systems, Informatics, or now Knowledge and Information? The fact that, for a while at least, the old name sticks shows we need to do a more complete job of spreading the word too.
Paul Maslowski is the Information Manager at a Comprehensive Local Research Network and a member of the virtual Business Intelligence Unit at the NIHR CRN. Paul has a unique view of data in the NHS and in particular in the research environment and has provided a guest blog this week that is well worth a read…
Since early 2008 I have been the Information Manager for Leicestershire, Northamptonshire and Rutland Comprehensive Local Research Network, a part of the National Institute for Health Research. Having moved from data generation through data management in to information management, I have questioned what creates these areas of operation. Sitting here in 2014, the question for me now is what distinguishes information management from business intelligence management? This is because I want our team to provide an intelligence service which fully supports our business. However, it feels like a stepping-up in the way we operate is required in order to provide a consistently high-quality business intelligence service.
So, what differences are there, if any, between Information Management and Business Intelligence Management? Or maybe, more simply, what is the difference between Information and Intelligence?
Having used various models to try to answer this, I was thinking of Professor Stephen Hawking’s fields of probability radiating into nothingness pre-Big Bang when I came up with ‘Fields of Possibility’. See what you think of this as an analogy…
Let us consider a data item. In this case, an ear of wheat. This ear of wheat has various parameters including height at a particular point in time. So we may consider an ear of wheat as a small dataset.
A larger dataset, therefore, may be a sheaf of wheat.
An extended dataset could be a crop in a field.
However, large fields may contain a number of crops. In which case, we could argue that the field contains a large amount of information which we can analyse and manage.
For the sake of the analogy let us consider that extended information is a number of fields on a farm.
The next stage up from this is where I feel intelligence starts to appear: a point where we are able to consider the farm in its entirety. This is powered by the connections and communication paths between the fields of information, which I split into two types: internal paths and external paths.
Internal intelligence looks at the enablers for the information, the roots if you like! This could include soil quality, crop yields and, therefore, the return on investment (ROI) and maybe the ability to rotate crops over time.
External intelligence is where things get really interesting. This is where we start to empower the information to the point where we can make predictions as well as ask ‘what if’ questions. We have always been able to ask these questions, but with no intelligence to back them up it is quite possible that we take the wrong path through ignorance. If the farm allows us to back up our questions with real-life (and ideally real-time) evidence, we can ask intelligent questions. More fundamental, however, is the ability to get intelligent answers back.
What if we bought neighbouring fields? What would happen to our ROI then? What if we put a bridge across the river to the field we have always struggled to utilise properly? If we did, how soon would we recoup the costs? And so on.
So, the fundamental question is: by throwing more data at a problem, do we get an intelligent outcome? I feel that this is the same question about taking an almost infinite hard drive and putting more and more data on to it. In time, will it become conscious? Not in and of itself. However, by connecting larger data sets holding more information in an intelligent manner we may get closer to an intelligent result. This after all is what would suit our business better. So, now my question is, what paths can I create across the farm to gain the greatest intelligence? Using this approach I cannot help but feel that there are no limits to the fields of possibility…
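The analogy above can be rendered as a toy sketch: fields as datasets, an internal path (ROI across the farm) and an external ‘what if’ (buying a neighbouring field). Every yield, price and cost here is invented purely to make the idea concrete.

```python
# A toy rendering of the 'fields of possibility' analogy.
# All crops, yields, prices and costs are hypothetical.
farm = {
    "north field": {"crop": "wheat",  "yield_t": 120, "cost": 40_000},
    "south field": {"crop": "barley", "yield_t": 90,  "cost": 30_000},
}
PRICE_PER_TONNE = 500  # assumed market price

def roi(fields: dict) -> float:
    """Internal intelligence: return on investment across the whole farm."""
    revenue = sum(f["yield_t"] * PRICE_PER_TONNE for f in fields.values())
    cost = sum(f["cost"] for f in fields.values())
    return (revenue - cost) / cost

# External intelligence: a 'what if' - buying a neighbouring field.
what_if = dict(farm)
what_if["neighbouring field"] = {"crop": "wheat", "yield_t": 110, "cost": 45_000}

print(f"ROI today:          {roi(farm):.2f}")
print(f"ROI with new field: {roi(what_if):.2f}")
```

The point is not the arithmetic but the shape of the question: once the fields are connected, the same model that reports on today can be asked about tomorrow.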
Find out more about Paul at – Linkedin Profile
That frustrating moment when the key piece of advice you hear is, ‘Why can’t they just put more resource on it?’. Managing suppliers when a project is in difficulty is the hardest part of any role I have ever had, and managing stakeholders around project delays is always where the most attention needs to be paid. Creating partnerships with suppliers is always the desire, but if you are paying for delivery by a date, and that date matters for other dependencies, then keeping the partnership on the ‘happy path’ needs to come second to managing risks and ensuring communication about the schedule is clear.
So, what to do when the worst has happened? The supplier has called a meeting and admitted that not all is as it seems and that delivery needs to be delayed. The immediate reaction is to be bullish, isn’t it? You stop and think: hang on, we’ve paid for this by this date! Being bullish will make you feel better (in the short term), and certainly the supplier needs to feel some pain, but to achieve something from the delay you still need to work with the supplier.
With this in mind, the first item on the to-do list is to understand what the new dates look like, gain some confidence in them, and then build a recovery plan. Don’t jump ship: keep faith in the team, who will undoubtedly have a vast amount of knowledge about the delivery and are the only ones who can get you moving forward again. Consider a new joint plan with the supplier, and make sure they are aware at all levels that you insist the dates are achievable, not a ‘nice story’ to keep you happy. You have one more chance at this kind of recovery: one change of date will be OK, but several false starts indicate a management problem, and that you need to apply measures to the delivery to ensure it can deliver.
As well as building the recovery plan, it is important to understand how bad the problem is; this will help you build confidence in the new schedule. Jan Filochowski, in the book from which this blog takes its name, ‘Too Good to Fail’, defines two types of failure on the Yosemite Curve:
The type of failure will affect how much effort needs to be applied to recover and create success. When analysing the degree of failure, always consider both the wider picture and the immediate impact. For example, a delivery that has been planned for years and is a few weeks late need not be seen by the business as a ‘Niagara Drop’ if the end product can still be guaranteed to bring about the degree of business change and capability promised for the length of the project.
Organisations and project teams are complex, living entities, and the best results are obtained by tapping into what they and the people in them already know. Their underlying experience, skills and wisdom in doing their individual jobs is a key asset. One tactic a team working on a project in difficulty can adopt is known as ‘increasing the area of the known’. If a supplier has not been entirely honest with a schedule, this can be rectified by getting closer to the dates and taking ownership of the schedule from the supplier, thereby increasing the project team’s ‘area of the known’. This works particularly well where an IT solution has been outsourced and delays are experienced in delivery, as it lends itself to close management of a delivery agent.
Reid Hoffman, the founder of LinkedIn, offers advice on recovering a failing project: “Fail fast; tackle the hardest problems facing your business, because you need to know how you are going to get through it.” Failing fast seems an odd concept. To me it means understanding failures quickly, communicating them appropriately and putting in place robust management controls that build confidence in a collaborative manner, avoiding alienating the key resources. Admitting to a project’s governance structure that it is failing requires integrity and honesty, qualities that must be matched by those being reported to. One of the most common reasons for a project moving from a Panama Canal passage to a Niagara Drop is a disenfranchised project delivery team.
There are two erroneous perceptions relating to a failing project:
Firstly, that if a project is failing it is as if it has leprosy, when in fact the right analogy is with a common, curable cold.
Secondly, that the skills required to ‘turn things around’ are different from those needed to keep a project on the right track. This is wrong: they are the same skills, perhaps more strongly visible and certainly more dramatically applied.
Correcting these two perceptions is important in ensuring success. Firstly, dependencies are there because they are dependent. Just because a project has the ‘leprosy’ of failure does not mean its dependencies can be de-coupled; it is better to accept the ‘common cold’ into dependent projects and cure it than to try to remove them from the delivery schedule. The second perception takes the faith of the organisation: it needs to be assured that the skills of the team are appropriate, and once it has this assurance it needs to enable the dramatic application of those skills to the project.
However, back to the title of the blog: ‘too good to fail’, is it a question or a statement? If the organisation has had faith in the delivery of the project, the correct governance and appropriate oversight, then it is a statement; if the organisation has let the project run without these things, then it should be a question. For informatics projects globally, we ‘should’ by now have learnt the lessons, and it should always be a statement.
It almost sounds like a joke from the 1970s: did you hear the one about the wife who was right first time and always right? But that’s how we want to be in 2014 with our information: right first time, so that we can make true and valid predictions and interpretations from the information we have gathered. After all, we are in the information ‘business’ and therefore need to be able to give assurances that the data we promote as ‘insight-giving information’ is always right.
How are we going to do this? Through the adoption of several principles, one of which is what QlikTech’s John Teichman describes as ‘ishiness’. He describes this as a quality that gives people the ability to maintain an overall sense of a data set or grouping of information, and of where they are within it. By adopting this as a base principle for the delivery of all information, we will enable our organisation to always use the data we collect in the ‘correct’ way.
(According to Teichman, the term ‘ishiness’ is a corruption of the way we use ‘ish’ as a suffix in English to denote that something is broadly right.)
Whatever name we give it, it is fantastically helpful when looking at large or complex datasets, reports and pieces of information. Imagine always being able to set the context of a clinical trial recruitment report against other similar trials, or against a point in time equivalent to the report’s baseline, without hours of additional statistical preparation; that is what the new Open Data Platform (ODP) applications will enable the user to do. Add to this the fact that the new recruitment data is ‘near real-time’ and is built on and referenced from comprehensive metadata. This is the panacea for all of our reporting capability, and one that we have in our sights before April 2014.
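To make the principle concrete, the sketch below places a single recruitment figure in the context of broadly similar trials rather than reporting it in isolation. The trial names and numbers are invented for illustration, and this is only one possible rendering of the idea, not how the ODP applications are implemented.

```python
# A minimal sketch of 'ishiness': one figure reported with its peer context.
# All trial names and recruitment counts below are hypothetical.
similar_trials = {"trial A": 180, "trial B": 240, "trial C": 150, "trial D": 210}

def in_context(recruited: int, peers: dict) -> str:
    """Describe a recruitment figure relative to a set of similar trials."""
    values = sorted(peers.values())
    below = sum(1 for v in values if v < recruited)
    mid = len(values) // 2
    # Median of the peer group, handling even- and odd-sized sets.
    median = values[mid] if len(values) % 2 else (values[mid - 1] + values[mid]) / 2
    return (f"{recruited} recruited - ahead of {below}/{len(values)} "
            f"similar trials (peer median {median:g})")

print(in_context(200, similar_trials))
```

The same number reads very differently once the reader can sense where it sits in the wider set; that overall sense is the ‘ishiness’ being described.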
CIO magazine reported in December that, on a recent American TV programme, Google’s Eric Schmidt commented that users of Google as a search engine were used to not getting the right answer the first time, and that he wanted to change that. “We have more bugs per second than anything else in the world,” Schmidt went on to say. “We should be able to give you the right answer just once. We should know what you meant, we should give it in your language, and we should never be wrong.” Taking this and applying it to our business intelligence systems is the second principle we will ensure is prevalent in our tools. As a goal, providing tools that answer questions with one single, solid and meaningful answer is going to be a stretch, both technically and, perhaps more importantly, culturally. We are an organisation with the ‘data debate’ at its core, as we are a large series of networks rather than management structures to work within. The way business intelligence is created in the organisation as the new tools are deployed over the first quarter of 2014 will be significantly shaped by the change management of the implementation programmes, something we need to invest time and effort in over the coming months.
Changing the way everyone interacts with information is our aspiration, and one we know we have to achieve to continue to be successful. In 2014 the virtual Business Intelligence Unit will be in place across the research networks: a group of like-minded people willing to work together to give BI a new meaning, Business Imagination rather than ‘simply’ intelligence. This is the third principle we are working up into a plan for delivery. We have put in place the tools that enable business intelligence to be gained from the insight information gives us; now we need to enable the whole organisation to use that intelligence to create imagination. This change is when the benefit release profile will really reap the dividends we have always wanted it to.
With these three principles, and the continued hard work and creativity of a great team, 2014 is looking pretty rosy. However, as Richard Corliss put it,
“Nothing ages so quickly as yesterday’s vision of the future.”
Therefore, even with these three principles in place, we need to keep a constant eye on the horizon; we have got ahead of the game and need to stay there to remain efficient and effective.
Reviewing the year is something we should all do, to reflect on the successes and to ensure that lessons are learnt from the elements that could have been done differently.
It has been a really busy year; a year that we had been promising the team would enable us to be more considered and reflective in the future. Although, as we move into the New Year, I know the first quarter will be at least as busy as 2013 has been.
So, to break down the key elements, I have collated my highlights month by month, not because these are the biggest achievements but because they meant the most to me in some way, or because they set us up for the next 'big thing' in 2014.
January – We left December 2013 with the vast majority of the contract negotiations with Tribal Education complete and some clarity on how we were going to go about building the Central Portfolio Management System (CPMS), our new system for managing the portfolio of research across the NHS. January saw us chasing our tails to get the contract signed by the highest authority and the final elements of it agreed, not least of which was the governance subsequently put in place to deliver the system.
In January we also took a road trip to NHS Bristol to see how they have implemented their Local Portfolio Management System (LPMS) to deliver the most clinical benefit. It was on the journey back that the bones of the LPMS systems of choice (SoC) approach were built, to ensure that wherever possible the same benefits could be delivered across the entire research network in a system-agnostic manner.
February – The NIHR CRN delivered demonstrations of its Information Systems strategy to the rest of the NIHR and partner academia. The key goal was to provide a 'show and tell' to enable the rest of the organisation to build on the work done at the NIHR CRN. A lesson learnt, though, was that simply showing new ways of working or new systems does not drive corporate change; we need to keep addressing this if we are to achieve the benefits we believe are possible across the length of the organisation.
We were also able to complete the design phase for CPMS in February, delivered exactly to the planned date agreed at the contract negotiation stage.
March – A big success for us: with the help of Methods Consulting we made our NHS Information Governance Toolkit submission for the first time and gained a 'good' audit result, one that sets the bar for all subsequent years and allows us to lead the way in showing how best-practice Information Governance can bring about solid improvement to the research journey.
April – For the first time the Information Managers from across the whole Clinical Research Network came together. Being able to do this in Birmingham, at the same time as the HC2013 conference, enabled not only a great sharing and learning experience but also an element of team building to begin to be ingrained in the structure. The initial seeds of the virtual Business Intelligence Unit were planted, and the solution stage of the Open Data Platform (ODP) and the relationship with QlikView began.
May – The Senior Management Team of the Informatics Directorate was in need of some time out to build their vision of how we would deliver the strategy that had been jointly developed. A series of sessions to build the team interactions were put in place, not least of which was the opportunity to do ‘Difficult Questions’ Media training with JRR, an experience that taught the team a great deal when it comes to reacting under pressure and working together to build answers.
May was also the month that I managed to get some time away from the office to put the final touches to our wedding plans on the island of Eivissa.
June – The final drafting of the NIHR-wide Information Strategy was completed and approved by the senior team at Department of Health. The governance was altered to reflect this and the whole NIHR Information group could get behind one direction forward that will bring about the most spectacular advances in how clinical research is done in the NHS.
July – Always the month for the music festivals, and for the first time in quite a few years the sun was out, so festivals could be enjoyed lying back and listening to the music rather than finding a new way to stay dry! Despite the social side, July was still a busy month: we appointed maternity cover for our Head of Informatics. The team also went live with the first users of ODP in its very early beta stage, testing the benefit realisation and ease of use of the product in a live environment. The user base would ramp up extraordinarily quickly, even at this beta stage.
I also started writing this blog!
August – The greatest project of my life came to fruition: getting friends and family all to the white island to be there for our wedding! After a year of preparation and planning, all went extremely well, with lots of happy faces, smiles and great times had by all. Whilst all this was going on the world continued to turn, and innovation in disease-specific areas continued to bring about benefits. Stroke research in particular delivered a new solution called Capture Stroke, which brought remarkable benefit to the end site collecting information on patients involved in trials.
September – A focus on the security strategy we need to have in place was an exciting task for the ninth month of the year, inspired in part by the two chapter meetings of the Information Security Forum (ISF) in 2013: the Analogies Project presented at the earlier one, and the September meeting was held at the spiritual home of computing, Bletchley Park. Both gave us great food for thought and enabled us to build on the NHS IG Toolkit work we had completed earlier in the year.
October – The NIHR held its Industry Conference, bringing together heads of research from across many Life Sciences partners. I was lucky to be asked to share the stage with leaders from Industry and our CEO to deliver a presentation on the way in which our Information Strategy was coming together to support each and every partner in the delivery of clinical research in the NHS.
Also scheduled for October was a UKTI-led visit to the States with our CEO. A whistle-stop tour, coast to coast, to show the US-based industry teams what we had done and where we were going. A worthwhile visit that has already seen the development of the Reference Data Service (RDS) move from a supporting solution to that of centre stage as industry partners begin to develop connectors for it.
November – The winter started to arrive, but so much later than normal! An invitation to the ISF Congress to present on the security of Open Data was an exciting opportunity. At the time I hadn't realised I would be following Sir Ranulph Fiennes on stage, nor that I would be quizzed about my thoughts on AOL being exposed through access to open data. Despite this, we still had a great amount of interest in the concept of securely opening up data and how we could do it.
December – The end of the year is normally a time to be a little more considered and to schedule the planning for the next year, but not for us this year. The first three months of 2014 are going to be about readiness for new systems and readiness for organisational change. So for Information Systems, planning for a big-bang change of multiple systems as we move from March to April 2014 was the key task for December. Making sure that everyone knows what they are working on, and what the priorities are for each team in the first three months, has been key to ensuring that everything is ready for the day when we flick the switch and everything starts to work a little differently.
Summary – It has been the best year I have ever had; every month has brought a different challenge, a different opportunity and new experiences. The challenges have been there all year, and we have slowly but surely tackled each one and worked out how to deliver against it, while still enabling some exciting innovations to happen.
And now we simply look forward to 2014 and all that it offers us, a new name for the team and a chance to continue to make a difference.
This is the question posed to our organisation as we go live with our App Centre and the next Open Data Platform (ODP) apps as well as a week of promotion regarding our new Central Portfolio Management System (CPMS).
We have always held to the concept that Business Intelligence is about the use of the data we collect: we turn data into information, information delivers insight, and insight enables Business Intelligence. However, that doesn't happen in a 'just because' way. We don't have Business Intelligence just because we collect data.
At a recent event the speaker posed the following analogy:
We can add to that analogy and pose:
But the question posed was 'what is Business Intelligence in clinical research?'. For us it is the delivery of a number of elements. Our organisation is charged with enabling the NHS to deliver clinical research effectively; therefore, to us, a key output of Business Intelligence is making the most of the swathes of data we have to enable the NHS to create insight about its capacity and capability to deliver clinical research. However, simply having this information does not suddenly enable the NHS to know exactly where to deploy research capacity. The delivery of Business Intelligence requires analytics tools, such as those we have deployed within the newly live App Centre, and then a cultural change of mindset as to how information is used. Some of those mindset changes are:
We have always claimed that we will know we have embedded Business Intelligence into our organisation when questions to the Business Intelligence unit are asked not to deliver answers but to create insight that drives the next question. For some time organisations have tried to create a culture of continuous improvement that is not linked to, or based on, the delivery of Business Intelligence. This is something we are changing, and more and more we see it changing in the NHS. The capabilities to improve any service are now there; what needs to follow (and perhaps should have preceded) is the intelligence to know what to change and what the benefit of making that change will be. As Business Intelligence and analytical capabilities become mainstream, so does the ability to deliver continuous improvement effectively, grounded in benefit delivery rather than the next best, loudest idea.
In our organisation the delivery of Business Intelligence brings about one very key win: the art of prediction. Until now we have used managed information to report on performance from a quarter or more ago. We now have the capability to manage performance using yesterday's outputs, and to predict and mobilise resource on that basis. No longer do we have to manage with well-aged, mature data; we can now utilise fast and fizzy data fed through a well-oiled Business Intelligence 'engine'.
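To make the shift from quarter-old reporting to day-by-day prediction concrete, here is a minimal sketch of one way it could work; the figures, the smoothing factor and the forecasting method are illustrative assumptions, not a description of our actual models.

```python
# Hypothetical sketch: forecasting recruitment from daily figures
# rather than reporting on a quarter-old snapshot. The data and the
# smoothing factor below are invented for illustration.

def ewma_forecast(daily_counts, alpha=0.3):
    """Exponentially weighted moving average over daily recruitment.

    Recent days dominate, so yesterday's output shifts the forecast
    quickly -- the 'fast and fizzy' data feed described above.
    """
    if not daily_counts:
        raise ValueError("need at least one day's figures")
    forecast = float(daily_counts[0])
    for count in daily_counts[1:]:
        forecast = alpha * count + (1 - alpha) * forecast
    return forecast

# Yesterday's spike pulls the forecast up immediately, whereas a
# quarterly average would barely move.
recruitment = [12, 14, 11, 13, 30]
print(round(ewma_forecast(recruitment), 1))
```

The point of the sketch is the responsiveness: a single strong day changes tomorrow's resourcing picture at once, which is what managing on yesterday's outputs, rather than last quarter's, makes possible.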
… and back to the question posed, ‘What is Business Intelligence in clinical research?’ To us it is the single biggest benefit that the delivery of truly effective information systems brings, the ability to unleash the value of big(ish) data to improve the delivery of clinical research in the NHS.
I have been asked to present at a summit of CIOs (http://www.ciouksummit.com/) later this week on the subject above, and therefore thought it would be of use, and hopefully of interest, to get some of the ideas down here.
What our organisation needs is the ability to provide interoperable systems, link legacy systems to shiny new systems, and utilise open data standards and capabilities. We have also tried to use the word 'open' in a different way, by opening up our data to information managers across the research eco-system, allowing them to create open queries that can be shared across the organisation and therefore providing a catalyst for service improvement.
Those who visit here often will know something about the systems we are deploying and the legacy we are trying to improve upon; but for new readers, and to allow us to take stock of where we are, I wanted to gather my thoughts on where we are going and what we are starting to achieve.
First, a note: this is not an advert for our suppliers. We have, however, gathered a selection of suppliers around our delivery who are helping us to make a difference today. Our strategy is not to have an enterprise-wide supplier but to seek out the best system for each need we have. Our critical infrastructure is a mixture of Oracle and Microsoft, with recent additions of Linux, which, whilst giving the infrastructure team a headache, does mean we have the most appropriate solution for each of our systems.
We have effectively created our own private cloud solution, scaled appropriately. It does not have the size and capability of Amazon or Google, but it provides what we need, allows our hosts, the University of Leeds, to provide a high level of support to the business, and gives our system suppliers the ability to deploy systems onto our own infrastructure.
The information systems themselves are a series of integrated modules rather than a one-size-fits-all solution. The entry point into our systems is a solution known as CSP, a bespoke system built on Oracle platforms. It provides workflow and reporting support to the NHS as it works through the process of achieving permission to deliver clinical research at a local level. However, there is no way we could describe CSP as 'cutting edge'. When it was built, the horrible phrase 'bleeding edge' probably applied, as the team tried to shoehorn the most benefit out of new Oracle sub-systems; now, two years later, it delivers what it needs to but doesn't utilise all the possibilities of the infrastructure it sits upon.
The next module along the workflow is our new Central Portfolio Management System (CPMS). CPMS acts as a central spine for all data collected about clinical research; it has workflow elements integrated with CSP and our other sub-systems and will, once live in late January, be the central system for performance management data in clinical research. It is around this system that we are reshaping our Information Systems strategy, changing users into fans and ensuring that the organisation can make the most of its data and its volume of capable users.
Underpinning these two systems is the Reference Data Service (RDS). The RDS is a simple idea realised: the ability to master and expose reference data relating to clinical research in the UK. What has been fascinating about the development of the RDS, though, has been the external interest in having system-to-system access to it. This interest caught us all on the hop a little, but it is one we can satisfy through the industrialisation of the RDS. Having large organisations from the Life Sciences industry building connectors to the RDS, so that they can consume data about the structures of the NHS, researchers, resources and even UK terminology, will make it easier for research to be done in the UK, making this truly 'cutting edge' in our world.
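To give a flavour of what a connector consuming mastered reference data involves, here is a minimal sketch. The payload shape, the field names and the idea of indexing by code are all assumptions for illustration; the real RDS interface may look quite different.

```python
# Hypothetical sketch of an RDS "connector": a partner system pulls a
# JSON payload of reference data and indexes it by code for lookup.
# The payload shape and field names are invented for illustration.
import json

def load_reference_data(payload):
    """Index reference records by their code for fast lookup."""
    records = json.loads(payload)
    return {rec["code"]: rec["name"] for rec in records}

# In practice the payload would come from an authenticated call to the
# reference data service; here we inline a tiny sample of NHS-style
# organisation data to keep the sketch self-contained.
sample = json.dumps([
    {"code": "RR8", "name": "Leeds Teaching Hospitals NHS Trust"},
    {"code": "RTH", "name": "Oxford University Hospitals NHS Trust"},
])
orgs = load_reference_data(sample)
print(orgs["RR8"])
```

The value for an industry partner is exactly this: one mastered source of codes and names for NHS structures, researchers and terminology, consumed by machine rather than re-keyed by hand.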
The system that started us on the path to innovation, and the one we most often pin conversations about 'cutting edge' development and the 'open organisation' to, is the Open Data Platform (ODP). ODP is a series of apps available from the new NIHR CRN App Centre; the apps associated with ODP allow varying levels of access to the information we collect and enable the user to apply business intelligence tools to the data, developing insight from the information we hold.
The infrastructure in place for the ODP enables the organisation to utilise a dispersed capability to develop new apps that can then be used across the UK, delivering specific data based insight into research and enabling the work force to build solutions that meet needs as quickly as the technology can be adopted.
The App Centre itself will, in 2014, become the front door to the tools the organisation has deployed and enable SMEs involved in research in the UK to surface their innovations to clinical researchers, business intelligence leads and perhaps most importantly public and patients interested in clinical research.
A development in pilot today is the ability to surface disease-specific trials directly into clinical systems and into the disease pathways within those systems. Doing this will prompt and enable the clinician to offer the patient access to a clinical trial at the point of care. In theory this will change the landscape of access to clinical trials; the pilot will provide us with the evidence, and therefore the impetus, to do this across a wider care setting.
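The matching step at the heart of that pilot can be sketched very simply: compare the condition codes in a patient's record against the conditions of trials currently open to recruitment. The trial records, identifiers and use of ICD-10-style codes below are illustrative assumptions, not the pilot's actual design.

```python
# Hypothetical sketch of the pilot idea: match the condition codes in
# a patient's record against trials open to recruitment, so the
# clinical system can prompt the clinician at the point of care.
# Trial data and identifiers are invented for illustration; the
# condition codes are ICD-10-style (I63 = cerebral infarction).

OPEN_TRIALS = [
    {"id": "T001", "title": "Acute stroke thrombectomy study",
     "condition_codes": {"I63"}},
    {"id": "T002", "title": "Hypertension management trial",
     "condition_codes": {"I10"}},
]

def eligible_trials(patient_codes, trials=OPEN_TRIALS):
    """Return trials whose condition codes overlap the patient's codes."""
    patient_codes = set(patient_codes)
    return [t for t in trials if t["condition_codes"] & patient_codes]

# A patient coded with a stroke diagnosis surfaces the stroke trial.
for trial in eligible_trials({"I63", "E11"}):
    print(trial["id"], trial["title"])
```

Real eligibility is of course far richer than a code overlap, but even this crude match shows why surfacing trials inside the disease pathway, rather than in a separate system, changes who gets offered research at the point of care.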
We are becoming an open organisation through the systems we have developed and the way they facilitate a change to our culture. Information systems are a facilitator or supporting agent of culture change; if they were the catalyst, I would not be sure they would become embedded in our business. Information systems shouldn't be the reason for cultural change to occur: becoming an open organisation is a need of the organisation, and the innovation of systems merely makes this possible.
The speed at which we build and adopt new systems has improved significantly over the last two years, and that shortens the organisation's reaction time in adopting new technology wherever there is an identified business need that improves the service we offer; in other words, business-led change using technology to adopt change at speed.
Exactly where we want to be!