Data quality analogy – Prove you own your house

I’m well known for not liking analogies. I find they generally give people comfort that they understand something without actually increasing how much is understood.

So if I’m forced to use an analogy I’ll at least try to use one that hasn’t been used before, and to use it until it breaks – folding back on the analogy until it no longer makes sense. My data quality assurance analogy at the moment is:

Imagine you’re asked to prove that you own your house.  

This is analogous to the regulatory trajectory in financial services – where, increasingly, data provided to regulators must be attested to meet certain data quality criteria.

So again, imagine somebody has asked you to prove that you own your house. You can do this by presenting a deed of title. You might also make a humorous distinction between you owning your house versus you owning a mortgage. Because really the bank owns the house, am I right? 

But within this distinction you can make a fairly precise statement about how much of your home you own. You might need to rely on estimates regarding what it’s worth, but you can get the percentages of ownership pretty accurate.

But imagine if deeds of title didn’t exist. Imagine mortgages didn’t exist. Imagine plans that show houses appearing on lots with specific boundaries and reference points for context didn’t exist.

Imagine again being asked to prove that you own your house without the benefit of deeds, mortgages, plans, addresses, and other context. It’s still possible to prove ownership. Now you have to lean on concepts like homesteading; and create a narrative chain of ownership based on the initial claiming and working of the land, through successive transfers of ownership to your own claim. You also have to devise your own way of identifying your house – perhaps using a flag with your family crest. 

The problem with this approach to proving ownership is that it’s different for each home. Everybody would need to tell the entire story of how this particular home has come to be on this particular block of land, and who participated at every step of construction and transfer of ownership.

The depth and level of corroboration for this story of ownership would mean we’d need to bring in many of the people who are characters in the narrative and confirm their roles and recollections. Some of these people would disagree with particular points in the story, enough to open up doubt or at least require further, alternative corroboration.

Once some of the people in the narrative die, or even if they just refuse to turn up for each successive re-telling of the ownership narrative, you lose the ability to prove ownership. This type of approach is therefore cumbersome – requiring a complex narrative that is different for each house – and ultimately inconsistent in the level of assurance it can provide.

The level of assurance is itself dependent on the unique and total narrative around ownership. If, for a particular home, part of the ownership story contains the unsolved murder of the owner and subsequent homesteading by a mysterious stranger, then the certainty of ownership is different than for an ownership story that doesn’t contain that feature. So the idea of a proof with 95% certainty cannot be committed to in general.

The alternative – when you don’t need a completely different narrative ownership story per individual home – can’t be designed by any individual home owner. Instead it has to be built up, shared, agreed, and sustained by the community.  

The system for proving home ownership that we have now – which allows for proof of ownership, and even allows us to manage precise percentages of ownership – is the analogy I use for data quality. Because information passes through the community like the ownership of a house, there needs to be a framework agreed by the community so data quality can be consistently understood.

When somebody visits your house for dinner, it is enough that you answer the door to prove sufficient ownership of this house to not expect dinner to be interrupted. Sufficient ownership for this purpose isn’t even real ownership – it could just be a rental agreement. Whereas other assertions of ownership require further proof.  

If your organisation doesn’t have artefacts that describe the structure and flow of information it’s like not having house plans that show which property we are talking about. Likewise, if the community doesn’t agree to a specific, potentially costly, process of verification of data as it is transported across the organisation, this is like not having title deeds that you can depend on.

Still with me on this analogy? No, me neither – which is why I don’t like analogies.  

Thoughts on The End of Information Management: Everybody’s Responsibility

When I began my career in the 1990s I quickly got frustrated with the idea that “quality is everybody’s responsibility”. If you remember corporate environments at that time you’ll remember this expression. To me it didn’t mean much. It was like saying your happiness is your own responsibility: self-evidently true, but I had to wonder why I would waste my time listening to somebody tell me that. I naturally wondered what was in it for them.

I don’t hear that phrase as much anymore. It started to disappear when it was replaced by “the customer is at the centre of everything we do”. Of course, I had issues with this phrase too. Like the quality people, I couldn’t really understand how people got paid to say such trivial platitudes. I didn’t even think it was helpful for organisations to consider the customer to be the centre of everything they did. In fact, to me it sounded too internally focused. When I engaged at all it was to declare that “the customer is at the centre of everything they do – that’s what customer centric actually means!”.

What does this have to do with information management? Well, you might have heard recently that data is really important. It’s the latest craze. It didn’t start with big data but that’s certainly when it went mainstream. You know that’s when it went mainstream because the time between when it was cool to talk about big data and when it was cool to diss big data was about 3 months. Every big data article now proudly declares how contrarian it is by saying it’s not really how big your data is, it’s how you use it – or some such chant. For some it’s supposedly really all about “fast data”. Others will say it’s not about data, it’s about information. It’s all semantics – and those in information management should find that ironic.

My personal attempt at the anti-big-data spin was to call it “cheap data”. This is no less obnoxious than the others and I apologise for it. But cheap data at least explains why there is so much of it about. With so much data of course comes so much data management. The real disciplines of data and information management are quite mature. I’ve worked with people who have been information management professionals for 25 years. There are deep knowledge bases, of both the practical and academic variety, around how to manage information.

Real information management professionals have a deep and complex relationship with all things information. The field is highly specialised and filled with professionals who have their own specialised language and techniques.

My career is basically 25 years of technology-enabled business transformation – starting with the modest business transformation of how wooden pallets were tracked for a small fruit shop, for which I wrote a tracking program when I was 17. However, from an information management professional perspective I’m not allowed to say I have 25 years’ experience. Instead, I have about 5 years’ experience. This is because information management has a long history and the specialisation is deep.

I’d also suggest my 5 years’ experience is only half “real” information management experience, because the other half was spent stripping out all of the information management jargon and breaking up endless arguments between information management professionals. That is to say, I spent a lot of time trying to stop information management people talking and getting them listening. But this process – while a necessary and important part of the mass consumerisation of information management – is unfair on those “real” information management professionals.

Information management is a mature specialised discipline. But it’s also at the same point that the quality movement was when it endlessly declared “quality is everybody’s responsibility”. When data got cheap, and became the biggest story in town, information management was suddenly everywhere. But that meant it had to appeal to a broad audience. Which meant that deep and specialised language had to go out the window.  

While many of those deep and specialised information management skills are still useful – just like being able to look up a book in a library catalogue is still useful – ultimately the level of broad communication about information management that most organisations can tolerate before they switch off is basically “data is everybody’s responsibility”.

As I’ve said often before, the discipline of general management is unkind to specialisation – wishing and hoping that complexity and nuance are unceremoniously removed from all things for the convenience of centralised decision-making, at the expense of distributed decision-making power (and ultimately decision effectiveness). But for all general management’s intolerance of specialisation, this is nothing compared to the compromises that must be made when appealing to the masses. Data management is the new quality management and is changing so it appeals to the masses. This is in many ways a penultimate step in the evolution of any intellectual centre.

Short of building another organisational silo and trying to move all of your data management into it, you’re basically in the territory of culture change if you want to broadly impact how your organisation uses data.  But once you’re appealing to something as broad as cultural change you’re out of the realm of specialisation and closer to the realm of politics – for better and for worse.  

Information management is a rich and specialised set of disciplines that help you manage your information once you’re willing to accept you have a problem managing your information. It also includes a number of governance and discovery disciplines that help you identify that you have a problem, if you’re willing to invest in information the same way you invest in other assets.

This is all well and good, but why do that? I understand that when information is wrong it could lead to misunderstanding of risk. Or that when information is wrong it might impact customer experience. I consult in this area, so I’m happy to tell you that you might not be meeting your regulatory obligations unless you are both managing information according to certain standards and able to attest to the accuracy of that information.

But why view this as an information problem? How is this different to “just focusing on the technology”? I’ve seen many initiatives fail with a retrospective 20-20 hindsight assessment that they failed because they only focused on the technology. It’s true that initiatives must focus on more than just the technology. Great – we should do that. We should also avoid using negations to describe what we should do. You can’t focus on “not the technology”; you have to focus on something specific. Equally, you can’t use “the business” as shorthand. There is no such thing.

Just as focusing only on the technology will give poor outcomes, focusing only on the information will give poor outcomes.

This idea that a kind of functional excellence in information management is holding organisations back is a fallacy. The real problems have nothing to do with a lack of functional excellence in a separate information function. The real issues are general management, accountability for details, mis-investment in technology, and apathy at the margin.

Everybody likes to say that “the business owns the information”. This is absolutely true. Why are we even talking about this? We can reinforce our message by saying “the information isn’t owned by IT”. Again, this is absolutely fine. I personally could imagine an organisation where “information is owned by IT”. This seems sacrilegious and against all good information management principles, but I don’t see why an organisation wouldn’t be allowed to operate this way. IT is in fact part of this mythical amalgamation known as “the business”, and IT does stand for “information technology”. You could interpret information technology to mean the techniques and tools to manage information. And you tend to only manage things you own. Perhaps the IT department would make a good custodian for all of an organisation’s information assets. In fact, in many organisations this is the case. The only thing missing is recognition that data is one of those pesky little details your IT department has been building capability in for years without due recognition.

When I started work my payslips were hand-written by a clerk in payroll. That process was owned by the head of payroll. The accountability for delivering my payslip was with the head of payroll but the responsibility for creating the payslip was with that clerk. We’ve all heard this language before. But the fact is, today an IT system prints my payslip. If the printer breaks a tech support person fixes it. Often they don’t know how to fix the problem – so they learn. If the system crashes when they try to run payroll you can bet the email will say “payroll has been delayed because of a computer issue”. The person who needs to fix this issue might have only started in the job the day before – so they’ll have to learn many things before they can fix it – so they’ll start learning those things. And yet, the “business owner” for that system is still the head of payroll.  

In the information management world, that ownership of the payroll system is a different beast to the ownership and accuracy of the payroll information itself. This is a powerful concept in information management. It’s particularly powerful for information assets that have a more complex lifecycle than payroll information. But so what? We have all the accountability for getting payroll right that we need. Even for more complex information assets, if accountability is in place for the outcomes that rely on that information, what more accountability do we need? Just add “… including accuracy of data”.

The truth is that when we manage accountability for outcomes, many organisations have operated on an implicit assumption that that accountability excluded the data. Somehow the data was determined to be just another one of those minor details considered beneath the general management discipline where accountability is defined. So the only change needed is to make explicit that accountability includes the data details.

So I guess what I’m saying is…

“Data is everybody’s responsibility”

Westpac Launches Databank

“Organisations will be able to store their customer identity data in Databank, in the knowledge that it stays with the Bank, even during multi-party data shares. This significantly reduces the risk of identity theft, customer data breaches, or security, privacy and consent issues that can occur with identity data storage and sharing.”

From: Westpac Media Release

Avoiding the B.A.I.T. view of Business Capabilities

Reference material added here:

Breaking free of B.A.I.T. -based Capabilities

Management is a Technology

I’ve been thinking of management as a technology for some time.

Now this.  And the related data set.

The Management Myth

“One thing that cannot be said of the “new” organization, however, is that it is new.

In 1983, a Harvard Business School professor, Rosabeth Moss Kanter, beat the would-be revolutionaries of the nineties to the punch when she argued that rigid “segmentalist” corporate bureaucracies were in the process of giving way to new “integrative” organizations, which were “informal” and “change-oriented.” But Kanter was just summarizing a view that had currency at least as early as 1961, when Tom Burns and G. M. Stalker published an influential book criticizing the old, “mechanistic” organization and championing the new, “organic” one. In language that eerily anticipated many a dot-com prospectus, they described how innovative firms benefited from “lateral” versus “vertical” information flows, the use of “ad hoc” centers of coordination, and the continuous redefinition of jobs. The “flat” organization was first explicitly celebrated by James C. Worthy, in his study of Sears in the 1940s, and W. B. Given coined the term “bottom-up management” in 1949. And then there was Mary Parker Follett, who in the 1920s attacked “departmentalized” thinking, praised change-oriented and informal structures, and—Rosabeth Moss Kanter fans please take note—advocated the “integrative” organization.”

The Management Myth

“That Taylorism and its modern variants are often just a way of putting labor in its place need hardly be stated: from the Hungarians’ point of view, the pig iron experiment was an infuriatingly obtuse way of demanding more work for less pay. That management theory represents a covert assault on capital, however, is equally true. (The Soviet five-year planning process took its inspiration directly from one of Taylor’s more ardent followers, the engineer H. L. Gantt.) Much of management theory today is in fact the consecration of class interest—not of the capitalist class, nor of labor, but of a new social group: the management class.”

Mega Projects Redux: Projects, Programmes, and Work streams (Part 1 & 2)

This is an article I wrote around 2002 that was eventually published on the Satyam (now Tech Mahindra) blog. I can’t find the old Satyam blog now so here it is again.

 

Projects, Programmes, and Work streams (Part 1)

 

MegaProjects are large enough to be called a ‘programme’, but integrated enough to be called a ‘project’.  In the end, all MegaProjects must be run as a programme; but it’s important to understand why.  This is a slightly edited reprint of an article I wrote on delivering technology-enabled business transformation (TEBT) projects which describes the difference between a project and a work stream; and the challenges for project management on MegaProjects. 

 

If you talk to a project manager or a programme manager for long enough you risk getting into a very long conversation about the differences between project management, programme management, and portfolio management.  It’s easiest to define these first in terms of what a project is and secondly in terms of the objectives for grouping projects in certain ways:

 

1) A project utilises resources, has a specified duration, and delivers pre-defined objectives

 

2) A programme is a group of projects which are managed together because they relate to the same objectives and there are interdependencies between the projects’ deliverables

 

3) A portfolio (of projects) is a group of projects which are managed together because they are part of the same organisation, and therefore must have sufficient consistency in the way they are managed in order to manage resource allocation across projects, and risks across that organisation
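To make the three definitions concrete, here is a minimal sketch in Python. The class and field names are my own illustration, not from the original article – a programme adds shared objectives and inter-project dependencies, while a portfolio only shares an organisation:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """A project utilises resources, has a specified duration,
    and delivers pre-defined objectives."""
    name: str
    duration_weeks: int
    objectives: list[str]

@dataclass
class Programme:
    """Projects grouped because they relate to the same objectives
    and their deliverables are interdependent."""
    objective: str
    projects: list[Project] = field(default_factory=list)
    # Interdependencies between projects' deliverables, as (from, to) pairs.
    dependencies: list[tuple[str, str]] = field(default_factory=list)

@dataclass
class Portfolio:
    """Projects grouped because they belong to the same organisation,
    so resource allocation and risk can be managed consistently."""
    organisation: str
    projects: list[Project] = field(default_factory=list)
```

The design choice the sketch highlights is that only the programme carries a `dependencies` field: managing those interdependencies is what distinguishes it from a portfolio, which merely needs consistent management across otherwise unrelated projects.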

 

There are added complexities around the definitions of operational vs. strategic programme and portfolio management.   Management of the dependencies between each project is separate from the management of the mix of projects in the programme or portfolio itself.  In other words, there is a difference between running a group of projects right (operational) and investing in the right group of projects (strategic).  For MegaProjects their duration means a certain amount of the strategic process is required, but we are focused here on the operational side of MegaProject management.

 

So is a MegaProject a project, or a programme?  

 

MegaProjects have two important characteristics that make them a unique challenge to the project management discipline. Firstly, MegaProjects are indeed projects because they share a single set of phases and milestones, and all resources will be affected in some way by the overall project lifecycle. Secondly, MegaProjects must be managed as a programme in order to balance the need to manage accountability for deliverables within each part of the project against the need to manage dependencies across each part of the project.

 

Ultimately, all MegaProjects must be run as a programme. However, although each MegaProject must be run as a programme, this isn’t because MegaProjects are a collection of projects. MegaProjects must be run as a programme because they will require more than one project manager! This is a vitally important point that shouldn’t be underestimated. MegaProjects have objectives that are significant in scope and therefore sufficient to require multiple project managers to deliver them. It is for this reason alone that you must run MegaProjects as a programme.

 

Project managers behave in predictable ways: they want to define the criteria for success at the beginning of the project, they want to manage scope, and they want to freeze requirements at a certain point. They will also expect access to the valuable time of stakeholders and subject matter experts. For regular projects, all of these behaviours bound and align a project in order to improve the odds of the project being successful.

 

These behaviours are the basis of good project management. However, MegaProjects require additional monitoring and control mechanisms to ensure that these behaviours don’t guarantee success of one part of the MegaProject at the expense of failure or bottlenecks in another part of the MegaProject.

 

Projects vs Work Streams

 

One of the keys to a successful MegaProject is correctly identifying which parts of the MegaProject are projects and which parts are work streams.  These projects and work streams can then be aligned through key programme-level processes which cross both types of activity.  

 

Next week’s article focuses on the differences between projects and work streams and how to define these within a MegaProject…

 

 

Projects, Programmes, and Work streams (Part 2)

 

In part 1 we saw that MegaProjects must be run as a programme primarily because multiple project managers will be required in order to execute the project. Project managers behave in predictable ways and will try to manage the scope of their deliverables in order to achieve success for their part of the MegaProject. To ensure success of one part of the MegaProject isn’t at the expense of other parts, this article introduces the difference between projects and work streams.

 

The differences between projects and work streams…

 

Workstreams

  • Scope tends to exist within only one or two areas of the TEBT programme planning worksheet
  • Not directly connected to the objectives of the programme
  • Fed work from other parts of the programme in terms of level-2 requirements, defects, analysis requests, etc.
  • Managed as a ‘cost centre’, in that resources are fixed or leveraged as a pay-per-use managed service with a fixed budget
  • Responsible for defining resource requirements based on the solution

 

Projects 

  • Directly accountable to the objectives of the programme
  • Scope defined across the business transformation objectives and at least two layers down the TEBT programme planning worksheet
  • Utilises work streams for input or delivery of defined work packages
  • Responsible for defining resource requirements based on objectives

 

The TEBT Programme Planning worksheet also defines (on the left) some functional responsibilities (such as testing and implementation) which are by definition work streams.

 

The programme phases, milestones, and the delivery roadmap will be covered in a future post.

 

 

Relationship to SDLC 

 

While it’s true that each piece of code must go through a software development lifecycle, pieces of code will be changed for the duration of the project, so the programme itself is never at a single stage in the software development lifecycle. Also, MegaProjects produce many deliverables which are not code but which have relationships with code and with other deliverables that must be managed.

 

 

TEBT Programme Definition Worksheet

 

 

See  www.managewithoutthem.com/tebt/workstreamsandprojects.html

 

 

TEBT Phases 

 

Part of the delivery roadmap for the overall programme

  • Initiation
    • Define objectives and budget
    • Determine partners and capabilities required
  • Solution Blueprinting and Delivery Roadmap
    • Define a cross-partner solution
    • Define a delivery roadmap
  • Project & Work Stream Definition
    • Develop project and work stream accountabilities
    • Develop charters per project and work stream
  • Analysis and Planning per Project / Work Stream
    • Allow initial analysis to complete for each work stream and project
    • Confirm resource requirements
    • Confirm dependencies across projects and work streams
    • Define shared programme-level processes
  • Development of Deliverables
    • Development of deliverables for each work stream
    • Execution of shared ‘Development’ programme-level processes
  • Integration of Deliverables
  • Test execution
  • Integrated test management
  • Implementation
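As a purely illustrative aside, the phase outline above can be captured as an ordered roadmap structure. The phase and step names are taken from the outline; the representation and the `phase_index` helper are my own assumptions, not part of the original article:

```python
# Illustrative only: the TEBT phases as an ordered delivery roadmap,
# each phase paired with its sub-steps (empty where the outline has none).
TEBT_PHASES = [
    ("Initiation",
     ["Define objectives and budget",
      "Determine partners and capabilities required"]),
    ("Solution Blueprinting and Delivery Roadmap",
     ["Define a cross-partner solution",
      "Define a delivery roadmap"]),
    ("Project & Work Stream Definition",
     ["Develop project and work stream accountabilities",
      "Develop charters per project and work stream"]),
    ("Analysis and Planning per Project / Work Stream",
     ["Allow initial analysis to complete for each work stream and project",
      "Confirm resource requirements",
      "Confirm dependencies across projects and work streams",
      "Define shared programme-level processes"]),
    ("Development of Deliverables",
     ["Development of deliverables for each work stream",
      "Execution of shared 'Development' programme-level processes"]),
    ("Integration of Deliverables", []),
    ("Test execution", []),
    ("Integrated test management", []),
    ("Implementation", []),
]

def phase_index(name: str) -> int:
    """Return the position of a named phase in the delivery roadmap."""
    return next(i for i, (phase, _) in enumerate(TEBT_PHASES) if phase == name)
```

The ordering matters here: because vendors may join or leave at various phases, being able to ask where a phase sits in the roadmap (e.g. `phase_index("Implementation")`) is the kind of question a programme office would put to a structure like this.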

 

Vendors may join or leave at various phases.

The No ICT Strategy Organisation

The idea of business / IT alignment is completely at odds with the challenge of business agility.  You can never align all-of-the-business with all-of-the-IT.  You can only ensure that the business capabilities your organisation’s operating model depends on sufficiently utilise information technology to deliver competitive levels of productivity, optimal customer experience, and coordination with other business capabilities.
 
The idea of business / IT alignment is admirable when it implies that in a model of business operations there is a concept of “business” concerns and a concept of “IT” concerns, and that they are peers.  The process of business / IT alignment would then be a messy and complex process that might eventually work.  However, business / IT alignment never gets implemented as a process that assumes business and IT are peers.  Even if it was, it’s foolish to break your organisation along the lines of business versus IT – there are other ways of cutting up the organisation that eliminate the need for business / IT alignment altogether.
 
This is further exacerbated by the shift of IT budget to business units.  Once budget that had traditionally been thought of as IT budget gets shifted into, say, marketing, it would be ridiculous for the marketing department to then raise a concern about the business / IT alignment challenges they were having when spending their new increased budget.  Once you’re responsible for both, why complain about alignment?  If you own the budget you have nobody to complain about business / IT alignment to.
 
I’ve written before about how much of what people in the so-called “business” think of as “IT issues” are really related to information, complexity, or simply willingness to spend time on the details.  When a business process is automated, does it then become an IT problem?  That such simple questions still have complex answers is a sign that the implementation of information systems – which it is now trendy to call “digitisation” – has outpaced our popular understanding of how organisations are designed and governed.
 
We have a number of real problems governing our organisations.  Business and IT concerns aren’t ever broken down to specifics, the whole concept of splitting business and IT places barriers in the way of true organisational agility, and there still isn’t an understanding that in the modern world the high-level concepts of “the business” and “IT” don’t exist.  This separation is made for the convenience of executive leadership and has limited organisational value.
 
I’ve previously proposed the simplest change might be to stop creating ICT strategies.  I’m not saying an overall strategy for certain aspects of IT isn’t required.  I’m simply saying that an overall strategy for the organisation is more important.  If you have a business strategy and an ICT strategy is it any wonder you have business / IT alignment issues?  Of course you have alignment issues!  You have two separate strategies.
 
This is exacerbated by the fact that your business / corporate strategy probably doesn’t contain the sorts of things the folks developing your ICT strategy expect to see anyway.  They probably have to go to individual business unit strategies to get the information they need, and ultimately the ICT folks are left to try and align the inconsistencies that will inevitably exist between all of these business unit strategies.
 
A strategy development and strategy deployment process grounded in business capabilities is of course the answer.  

The path from academic to mainstream – Cognitive bias

Interesting to see the progression of ideas from academia through to government.  

Take for example “cognitive biases”: 

  • 1972 – Academic work: “The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972” – from here.
  • 2011 – Popular work (~40 years later): “‘Thinking, Fast and Slow’ spans all three of these phases. It is an astonishingly rich book: lucid, profound, full of intellectual surprises and self-help value.” – from here.
  • 2015 – Mainstream in business (?) (4 years later): “research suggests that there are a number of cognitive stumbling blocks that affect your behaviour, and they can prevent you from acting in your own best interests” – from here.
  • 2016 – Government (impressively, only 1 year later): “As human beings, we think we make rational decisions every day, but in fact, we’re all seeing the world under a set of behavioral illusions that can really muck up our decision making. These are called cognitive biases” – from here.

Well done to the DTO folks for making that last leap impressively fast.

