Why do we keep evaluating “IT Projects”?

The recent article “Is Government IT Getting Worse?” provided just enough of an overview of the state of digital government to get me fired up.

I don’t understand why we keep reviewing these things called “IT Projects”. For as long as I can remember we have been saying “it’s not an IT project, it’s a business project”. I got sick of hearing it, not because it wasn’t true, but because it was trite.

But we are still assessing “IT projects” and trying to improve the IT parts of change efforts in isolation. 

The inevitable result of this is either over-engineered, “requirements”-focused effort or the popular half-solution to half the problem that is “agile”.

While it wasn’t always the case, over the last 10 years or so every IT department I have worked with has had a more onerous and often self-imposed discipline around managing change projects than other “departments”.

You read that right – IT departments are trying desperately to improve their project-based discipline and have pushed ahead of most other departmental or cross-department program management office (PMO) initiatives.

I see it everywhere. Because IT departments take the time to package change as projects, they are driven into a position where they have to justify spend more elaborately. And because they have to justify spend more elaborately, IT change becomes even more project-based. It’s a self-reinforcing loop.

On the demand side, PMO initiatives have been dumbed down so much that without the discipline imposed by IT they would have almost no impact.  

Perhaps the worst combination is “the business” trying to take control of IT with the IT department letting them (i.e. “Agile”).  

Of course, there are improvements and a backlash against this approach. Much of the “digital” discussion is trying to solve this too – in ways I don’t always agree with. And while I often don’t agree with Paul Shetler, when he talks about fixing procurement as an enabler of better digital outcomes he is spot on.

But in terms of having to justify their activities, IT departments always get a raw deal. It’s very difficult to justify IT activities in the same way you justify activities that have more concrete outcomes. “IT Projects”, when defined as such, cannot claim or commit to many sorts of benefits because they are by definition just the “IT” parts of the initiative.

So when you assess “IT Projects” you are assessing an increasingly arbitrary subset of the value chain towards outcomes.

You have to assess value creation and change initiatives as a whole – not just the IT bits.

Whatever effort you assess as “IT projects”, you can guarantee there is double that being spent on “IT” in other budgets. Also, you can guarantee some of the service outcomes being attributed to IT spend are from initiatives that never had any IT spend allocated to them in the first place. Or that had insufficient IT spend allocated to them. 

Rather than yet again assessing the “IT Projects” why not assess change initiatives in general? Why not look at:

  • Change initiatives such as new policy deployment, data capture and form changes, and new business-led deployments of technology – and see how they were managed
  • Where change initiatives didn’t have IT budget, the onus must be on the initiative to justify why. Surely every change initiative has IT impacts – at the very least on the capacity of standardised IT services
  • Where change initiatives did have IT budget how do they compare in performance to “IT Projects”? (Oops, now I’m doing it)
  • What was the contribution of these change initiatives and how did they impact service?

We live in a world that is capability-based. The functional organisation is completely dead. The idea of an “IT Project” is completely dead. 

By continuing to look at inherently disconnected initiatives like this we are causing more problems than we are solving. 

It also means we have to be careful about delivery approaches. When an “IT Project” is spending 60% of its budget on design thinking workshops to try to change the way service is delivered, that is in some ways admirable. But it’s not “IT spend” in the sense that it is guaranteed to improve outcomes that would be attributed to IT. In fact, it might arbitrarily reduce the investment in certain types of quality. Less time coding means less quality – regardless of how much better what is being coded is than what would have been coded.

Yet another function trying to leverage its data and get C-suite attention

There is a massive trend as organisations shift and The Death of the Functional Organisation occurs.  There are three effects of this on every profession / silo / function.  

They all:

1. Normalise to soft platitudes at the top end. Each function says it “… won’t succeed without executive support”, “… is all about collaboration and leadership”, etc.

2. Focus on getting others to recognise the value of their datasets.  “… integrate our data into operations”

3. Try to build a comprehensive theory of the firm where their profession is the key. E.g. “Organisations are really all about change”

The latest example is the “tax” function of all things:

https://www.strategy-business.com/article/The-Marriage-of-Tax-and-Strategy

As usual there are some great points in this article – but they are part of a trend towards capability-based governance rather than about the importance of the tax function specifically.  

Re: A Pattern is no Best Practice Yet!

Great article on patterns by Kris Meukens here

My slightly self-indulgent reply is below. I’ve always been fascinated by how our understanding of IT and organisational design in general seems to follow the same path as Christopher Alexander’s works on the design and architecture of the built environment.

— 

I think it’s interesting to see the parallel and delayed timeline between “patterns” as they evolve in built architecture theory, versus patterns in IT. 

I’m not an expert in either but I see the history of patterns in built architecture through the lens of Christopher Alexander:

  • Notes on the Synthesis of Form (1964, year 1). Starts to describe what later came to be known as “patterns”. 
  • Notes on the Synthesis of Form – Preface to the first paperback edition (1973, year 9). Already starting to rebel against those who focused on “design methods” as a meta-study, asserting “I reject the whole idea of design methods as a subject of study, as I think it is absurd to separate the study of design from the practice of design”.
  • A Pattern Language (1977, year 13). These are fully formed patterns with the notion that they can be combined to create designs. It’s not a simple mix and match – it still leaves room for a design process.
  • The Nature of Order (2003, year 39). Doesn’t exactly reject patterns but focuses on wholeness, a set of qualities, and a set of structure-preserving transformations that help designs unfold.
  • The Battle for the Life and Beauty of the Earth (2012 – however it focuses on 1985, year 21). This one describes two world views that he saw as being in battle – one of which was opposed to his style of building.

Reading “Battle” in particular makes you feel our current understanding of patterns and our obsessions with Agile, Design Thinking, etc. mean we are in the equivalent of year 25.

Reading the above article feels like we’re heading towards the equivalent of year 30. I mean this as a compliment. 

The IT Department of the Future… doesn’t exist 

Good article, including the simple fact:

In the industrial company of the future, there won’t be a separate IT department.

From: http://www.strategy-business.com/article/The-Thought-Leader-Interview-Bill-Ruh?gko=9ae51

Data quality analogy – Prove you own your house

I’m well known for not liking analogies. I find they generally give people comfort that they understand something without actually changing how well it is understood.

So if I’m forced to use an analogy I’ll at least try to use one that hasn’t been used before, and to use it until it breaks by folding backwards on the analogy so it no longer makes sense. My data quality assurance analogy at the moment is:

Imagine you’re asked to prove that you own your house.  

This is analogous to the regulatory trajectory in financial services – where, increasingly, data provided to regulators must be attested to meet certain data quality criteria.

So again, imagine somebody has asked you to prove that you own your house. You can do this by presenting a deed of title. You might also make a humorous distinction between you owning your house versus you owning a mortgage. Because really the bank owns the house, am I right?

But within this distinction you can make a fairly precise statement about how much of your home you own. You might need to rely on estimates regarding what it’s worth, but you can get the percentages of ownership pretty accurate.

But imagine if deeds of title didn’t exist. Imagine mortgages didn’t exist. Imagine plans that show houses appearing on lots with specific boundaries and reference points for context didn’t exist.

Imagine again being asked to prove that you own your house without the benefit of deeds, mortgages, plans, addresses, and other context. It’s still possible to prove ownership. Now you have to lean on concepts like homesteading, and create a narrative chain of ownership based on the initial claiming and working of the land, through successive transfers of ownership to your own claim. You also have to devise your own way of identifying your house – perhaps using a flag with your family crest.

The problem with this approach to proving ownership is that it’s different for each home. Everybody would need to tell the entire story of how this particular home came to be on this particular block of land, and who participated at every step of construction and transfer of ownership.

The depth and level of corroboration for this story of ownership would mean we’d need to bring in many of the people who are characters in the narrative and confirm their roles and recollections. Some of these people would disagree with particular points in the story, enough to open up doubt or at least require further alternative corroboration.

Once some of the people in the narrative die, or even if they just refuse to turn up for each successive re-telling of the ownership narrative, you lose the ability to prove ownership. This type of approach is therefore cumbersome – requiring a complex narrative that is different for each house – and ultimately inconsistent in the level of assurance it can provide.

The level of assurance is itself dependent on the unique and total narrative around ownership. If, for a particular home, part of the ownership story contains the unsolved murder of the owner and subsequent homesteading by a mysterious stranger, then the certainty of ownership is different than for an ownership story that doesn’t contain that feature. So the idea of a proof with 95% certainty cannot be committed to in general.

The alternative – when you don’t need a completely different narrative ownership story per individual home – can’t be designed by any individual home owner. Instead it has to be built up, shared, agreed, and sustained by the community.  

The system for proving home ownership that we have now – one that allows for proof of ownership, and even allows us to manage precise percentages of ownership – is the analogy I use for data quality. Because information passes through the community like the ownership of a house, there needs to be a framework agreed by the community so data quality can be consistently understood.

When somebody visits your house for dinner, it is enough that you answer the door to prove sufficient ownership of this house to not expect dinner to be interrupted. Sufficient ownership for this purpose isn’t even real ownership – it could just be a rental agreement. Whereas other assertions of ownership require further proof.  

If your organisation doesn’t have artefacts that describe the structure and flow of information it’s like not having house plans that show which property we are talking about. Likewise, if the community doesn’t agree to a specific, potentially costly, process of verification of data as it is transported across the organisation, this is like not having title deeds that you can depend on.
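To make that mapping slightly more concrete, here is a minimal sketch of what a community-agreed “title deed” for data might look like: one shared register of quality criteria, plus a lineage record per dataset that is attested against those criteria at every hop. All the names and checks here are hypothetical illustrations, not a specific product or standard.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Criterion:
    """A community-agreed quality criterion: a name plus a check function."""
    name: str
    check: Callable[[list], bool]

@dataclass
class LineageHop:
    """One transfer of a dataset, and the criteria attested at that point."""
    source: str
    target: str
    attested: list = field(default_factory=list)

# The shared "deeds office": the same agreed criteria apply to every dataset.
CRITERIA = [
    Criterion("completeness", lambda rows: all(r.get("id") is not None for r in rows)),
    Criterion("uniqueness", lambda rows: len({r["id"] for r in rows}) == len(rows)),
]

def attest(rows, hop):
    """Run every agreed criterion against the data and record which ones passed."""
    hop.attested = [c.name for c in CRITERIA if c.check(rows)]
    return hop

def verify(lineage):
    """A consumer's proof of 'ownership': every hop attested every criterion."""
    required = {c.name for c in CRITERIA}
    return all(required <= set(hop.attested) for hop in lineage)

rows = [{"id": 1}, {"id": 2}]
lineage = [attest(rows, LineageHop("source-system", "regulatory-report"))]
print(verify(lineage))  # True - and the proof works the same way for any dataset
```

The point of the sketch is the contrast with the bespoke narrative: once the criteria live in one agreed register, the cost of proving quality is the same for every dataset, just as a title registry makes every house’s ownership provable the same way.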

Still with me on this analogy? No, me neither – which is why I don’t like analogies.  

Thoughts on The End of Information Management: Everybody’s Responsibility

When I began my career in the 1990s I quickly got frustrated with the idea that “quality is everybody’s responsibility”. If you remember corporate environments at that time you’ll remember this expression. To me it didn’t mean much. To me this was like saying your happiness is your own responsibility. It was self-evidently true, but I had to wonder why I would waste my time listening to somebody tell me that. I naturally wondered what was in it for them.

I don’t hear that phrase as much anymore. It started to disappear when it was replaced by “the customer is at the centre of everything we do”. Of course, I had issues with this phrase too. Like the quality people, I couldn’t really understand how people got paid to say such trivial platitudes. I didn’t even think it was helpful to think that organisations should consider the customer to be the centre of everything they did. In fact, to me it sounded too internally focused. When I engaged at all it was to declare that “the customer is at the centre of everything they do – that’s what customer centric actually means!”.

What does this have to do with information management? Well, you might have heard recently that data is really important. It’s the latest craze. It didn’t start with big data but that’s certainly when it went mainstream. You know that’s when it went mainstream because the time between when it was cool to talk about big data and when it was cool to diss big data was about 3 months. Every big data article now proudly declares how contrarian it is by saying it’s not really how big your data is, it’s how you use it – or some such chant. For some it’s supposedly really all about “fast data”. Others will say it’s not about data, it’s about information. It’s all semantics – and those in information management should find that ironic.

My personal attempt at the anti-big-data spin was to call it “cheap data”. This is no less obnoxious than the others and I apologise for it. But cheap data at least explains why there is so much of it about. With so much data of course comes so much data management. The real disciplines of data and information management are quite mature. I’ve worked with people who have been information management professionals for 25 years. There are deep knowledge bases around how to manage information, of both the practical and academic variety.

Real information management professionals have a deep and complex relationship with all things information. The field is highly specialised and filled with professionals who have their own specialised language and techniques.

My career is basically 25 years of technology-enabled business transformation – starting with the modest business transformation of how wooden pallets were tracked for a small fruit shop, for which I wrote a tracking program when I was 17. However, from an information management professional perspective I’m not allowed to say I have 25 years experience. Instead, I have about 5 years experience. This is because information management has a long history and the specialisation is deep.

I’d also suggest my 5 years experience is only half “real” information management experience, because the other half was spent stripping out all of the information management jargon and breaking up endless arguments between information management professionals. That is to say, I spent a lot of time trying to stop information management people talking and getting them listening. But this process – while a necessary and important part of the mass consumerisation of information management – is unfair on those “real” information management professionals.

Information management is a mature specialised discipline. But it’s also at the same point that the quality movement was when it endlessly declared “quality is everybody’s responsibility”. When data got cheap, and became the biggest story in town, information management was suddenly everywhere. But that meant it had to appeal to a broad audience. Which meant that deep and specialised language had to go out the window.  

While, deep in the details, all of those information management skills are still useful – just like being able to look up a book in a library catalogue is still useful – ultimately the level of broad communication about information management that most organisations can tolerate before they switch off is basically “data is everybody’s responsibility”.

As I’ve said often before, the discipline of general management is unkind to specialisation – wishing and hoping that complexity and nuance are unceremoniously removed from all things for the convenience of centralised decision-making and at the expense of distributed decision-making power (and ultimately decision effectiveness). But for all the intolerance to specialisation that general management has, it is nothing compared to the compromises that must be made when appealing to the masses. Data management is the new quality management and is changing so it appeals to the masses. This is in many ways the penultimate step in the evolution of any intellectual centre.

Short of building another organisational silo and trying to move all of your data management into it, you’re basically in the territory of culture change if you want to broadly impact how your organisation uses data.  But once you’re appealing to something as broad as cultural change you’re out of the realm of specialisation and closer to the realm of politics – for better and for worse.  

Information management is a rich and specialised set of disciplines that help you manage your information once you’re willing to accept you have a problem managing your information. It also includes a number of governance and discovery disciplines that help you identify that you have a problem, if you’re willing to invest in information the same way you invest in other assets.

This is all well and good, but why do that? I understand that when information is wrong it could lead to misunderstanding of risk. Or that when information is wrong it might impact customer experience. I consult in this area so I’m happy to tell you that you might not be meeting your regulatory obligations unless you are both managing information according to certain standards and able to attest to the accuracy of that information.

But why view this as an information problem? How is this different to “just focusing on the technology”? I’ve seen many initiatives fail with a retrospective 20/20-hindsight assessment that they failed because they only focused on the technology. It’s true that initiatives must focus on more than just the technology. Great – we should do that. We should also avoid using negations to describe what we should do. You can’t focus on “not the technology”; you have to focus on something specific. Equally, you can’t use “the business” as shorthand. There is no such thing.

Just like focusing only on the technology will give poor outcomes, focusing only on the information will give poor outcomes.

This idea that a kind of functional excellence in information management is holding organisations back is a fallacy. The real problems have nothing to do with a lack of functional excellence in a separate information function. The real issues are general management, accountability for details, mis-investment in technology, and apathy at the margin.

Everybody likes to say that “the business owns the information”. This is absolutely true. Why are we even talking about this? We can reinforce our message by saying “the information isn’t owned by IT”. Again, this is absolutely fine. I personally could imagine an organisation where “information is owned by IT”. This seems sacrilegious and against all good information management principles, but I don’t see why an organisation wouldn’t be allowed to operate this way. IT is in fact part of this mythical amalgamation known as “the business”, and IT does stand for “information technology”. You could interpret information technology to mean the techniques and tools to manage information. And you tend to only manage things you own. Perhaps the IT department would make a good custodian for all an organisation’s information assets. In fact, in many organisations this is the case. The only thing missing is acknowledgement that data is one of those pesky little details your IT department has been building capability in for years without due recognition.

When I started work my payslips were hand-written by a clerk in payroll. That process was owned by the head of payroll. The accountability for delivering my payslip was with the head of payroll but the responsibility for creating the payslip was with that clerk. We’ve all heard this language before. But the fact is, today an IT system prints my payslip. If the printer breaks a tech support person fixes it. Often they don’t know how to fix the problem – so they learn. If the system crashes when they try to run payroll you can bet the email will say “payroll has been delayed because of a computer issue”. The person who needs to fix this issue might have only started in the job the day before – so they’ll have to learn many things before they can fix it – so they’ll start learning those things. And yet, the “business owner” for that system is still the head of payroll.  

In the information management world, that ownership of the payroll system is a different beast to the ownership and accuracy of the payroll information itself. This is a powerful concept in information management. It’s particularly powerful for information assets that have a more complex lifecycle than payroll information. But so what? We have all the accountability for getting payroll right that we need. Even for more complex information assets, if accountability is in place for the outcomes that rely on that information, what more accountability do we need? Just add “… including accuracy of data”.

The truth is that when we manage accountability for outcomes, many organisations have operated on an implicit assumption that that accountability excluded the data. Somehow the data was determined to be just another one of those minor details considered beneath the general management discipline where accountability is defined. So the only change needed is to explicitly reverse this implicit assumption that accountability excludes data details.

So I guess what I’m saying is…

“Data is everybody’s responsibility”

Westpac Launches Databank

“Organisations will be able to store their customer identity data in Databank, in the knowledge that it stays with the Bank, even during multi-party data shares. This significantly reduces the risk of identity theft, customer data breaches, or security, privacy and consent issues that can occur with identity data storage and sharing.”

From: Westpac Media Release

Avoiding the B.A.I.T. view of Business Capabilities

Reference material added here:

Breaking free of B.A.I.T.-based Capabilities

Management is a Technology

I’ve been thinking of management as a technology for some time.

Now this.  And the related data set.

The Management Myth

The Management Myth

“One thing that cannot be said of the “new” organization, however, is that it is new.

In 1983, a Harvard Business School professor, Rosabeth Moss Kanter, beat the would-be revolutionaries of the nineties to the punch when she argued that rigid “segmentalist” corporate bureaucracies were in the process of giving way to new “integrative” organizations, which were “informal” and “change-oriented.” But Kanter was just summarizing a view that had currency at least as early as 1961, when Tom Burns and G. M. Stalker published an influential book criticizing the old, “mechanistic” organization and championing the new, “organic” one. In language that eerily anticipated many a dot-com prospectus, they described how innovative firms benefited from “lateral” versus “vertical” information flows, the use of “ad hoc” centers of coordination, and the continuous redefinition of jobs. The “flat” organization was first explicitly celebrated by James C. Worthy, in his study of Sears in the 1940s, and W. B. Given coined the term “bottom-up management” in 1949. And then there was Mary Parker Follett, who in the 1920s attacked “departmentalized” thinking, praised change-oriented and informal structures, and—Rosabeth Moss Kanter fans please take note—advocated the “integrative” organization.”
