Rediscovering my curiosity at Creative Melbourne


As I write this I am somewhere over the middle of Australia, flying back to Perth after participating in a 3-day event that was fun, challenging and highly insightful. The conference was Creative Melbourne, and I am proud to say I was one of the inaugural speakers. If they want me back again, I will do it in a heartbeat, and I hope a lot of you come along for the ride.


The premise: practical co-creation…

First the background… I have known the conference organiser, Arthur Shelley, for a few years. We first met at a Knowledge Management conference in Canberra and though I have no recollection of how we got talking, I do recall we clicked fairly quickly. At the time I was starting to explore the ideas around ambiguity, which eventually formed my second book. Back then I had a chip on my shoulder about how topics like complexity, Design Thinking and collaboration were being taught to students. I felt that the creative and fun parts glossed over the true stress and cognitive overload of wicked problems. This would produce highly idealistic students who would fall flat on their faces once they hit a situation that was truly wicked. I therefore questioned whether anything was being built into students' mental armoury for the inevitable pain to come.

Now for some people who operate and teach in this space, making such a statement immediately and understandably gets their defenses up. But not Arthur – he listened to everything I had to say, and showed me examples of how he structured his courses and teachings to deal with this challenge. It was impressive stuff: every time his students thought they had a handle on things, Arthur would introduce a curveball or a change they were not anticipating. In other words, while teaching the techniques, he was building their capacity for handling ambiguous situations. Little did I know his conference was about to do the same to me…

One thing about Arthur that blows me away constantly is his incredible network of practitioners in this space. Arthur has long had a vision for bringing a constellation of such practitioners together and he hand-picked a bunch of us from all over the world. The premise was to create an event that had a highly practical focus. He wanted practitioners to help attendees “Discover creative techniques to enhance performance and engage your team back at the office to increase productivity.”

Now where did I leave my curiosity?

While I am a sensemaking practitioner, I’ll admit straight up that I get irritated at the “fluffiness” and rampant idealism in this space. Design Thinking is a good example. While I like it and apply ideas from it in my practice, I dislike it when Design Thinking proponents claim it is suited to wicked problems. The reality is that the examples and case studies often cited are rarely wicked at all (at least in the way the term was originally conceived). When I see this sort of thing happening, it leaves me wondering whether proponents have truly been in a complex, contingent situation and had the chance to stress-test their ideas.

Now I don’t apologise for critically examining the claims made by anyone, but I do apologise for the unfortunate side effect – becoming overly contrarian. In my case, after all these years of research, reading and practice in this field, I am at the point where I see most new ideas as not actually new, but rather rediscoveries of past truths. Accordingly, it has been a long time since I felt that sense of exhilaration from having my mental molecules rearranged by a new idea. It makes sense, right? I mean, the more you learn about something, the more your mental canvas has been painted on. In my case I already have a powerful arsenal of useful tools and approaches that I call upon when needed and, more importantly, I was never on a spiritual quest for the one perfect answer to the mysteries of organisational life anyway.

In short, I have what I need to do what I do. The only problem is somewhere along the line I lost the very sense of curiosity that started me along the path in the first place. It took Arthur, fellow presenters like Stuart French, Jamie Bartie, Jean-Charles Cailliez, Meredith Lewis, Brad Adriaanse, Vadim Shiryaev and a diverse group of participants to help me rediscover it…

Disrupting the disruptor…

Imagine someone like me participating in day 1, where we built structures out of straws, put on silly hats, used the metaphor of zoo animals to understand behaviours, arm-wrestled to make a point about implicit assumptions and looked at how artists activate physical space and what we could learn from it when designing collaborative spaces. There was some hippie stuff going on here and my contrarian brain would sometimes trigger a reflexive reaction. I would suddenly realise I was tense and have to tell myself to relax. Sometimes my mind would instinctively retort with something like “Yeah right… try that in a politicised billion dollar construction project…” More than once I suppressed that instinct, telling myself “shut up brain – you are making assumptions and are biased. Just be quiet, listen, be present and you might learn something.”

That evening I confided to a couple of people that I felt out of place. Perhaps I was better suited to a “Making decisions in situations of high uncertainty and high cognitive overload” conference instead. I was a little fearful that I would kill the positive vibe of day 1 once I got to my session. No-one wants to be the party pooper…

Day 2 rolled around and when it was my turn to present, I held back a little on the “world according to Paul” stuff. I wanted to challenge people but was unsure of their tolerance for it – especially around my claims of rampant idealism that I mentioned earlier. I needn’t have worried though, as the speaker after me, Karuna Ramanathan from Singapore, ended up saying a lot of what I wanted to say and did a much better job. My talk was the appetizer to his “reality check” main course. He brilliantly articulated common organisational archetypes and why some of the day 1 rhetoric often hits a brick wall. It was this talk that validated that I did belong in this community after all. Arthur had indeed done his homework with his choice of speakers.

That same afternoon, we went on a walking tour of Melbourne with Jamie Bartie, who showed us all sorts of cultural gems in Melbourne that were hiding in plain sight. The moral of the story was similar to day 1… that we often look past things and have to challenge ourselves to look deeper. This time around my day 1 concerns had evaporated and I was able to be in the moment and enjoy it for what it was. I spoke to Jamie at length that evening and we bonded over a common childhood love of cult shows like Monkey Magic. I also discovered another kung-fu movie fan in Meredith Lewis, who showed me a whole new way to frame conversations to get people to reveal more about themselves, and develop richer personal relationships along the way.

Pecha Kucha – Getting to a point…

Day 3 was a bit of a watershed moment for me for two reasons. Months prior, I had accepted an invitation from Stuart French to participate in his Pecha Kucha session. At the time I said “yes” without really looking into what it entailed. The gist is you do a presentation of 20 slides, with 20 seconds per slide, all timed so they change whether you are ready or not. This forces you to be incredibly disciplined in delivering your talk, which I found very hard because I was so used to “winging it” in presentations. Despite keynoting conferences with hundreds of people in the room, doing a Pecha Kucha to a smaller, more intimate group was much more nerve-racking. I had to forcibly switch off my tangential brain because as soon as I had a thought bubble, the slides would advance and I would fall behind and lose my momentum. It took a lot of focus to suppress my thought bubbles but it was worth it. In short, a Pecha Kucha is a fantastic tool to test one’s mental muscles and enforce discipline. I highly recommend that everyone give it a go – especially creative types who tend to be a bit “all over the place”. It was a master-stroke from Stuart to introduce the technique to this audience and I think it needs to be expanded next time.

I presented the first Pecha Kucha, followed by Stuart and then Brad Adriaanse, who described the OODA loop philosophy. OODA stands for observe, orient, decide, and act, and it provides a way to break out of one’s existing dogma and reformulate paradigms, allowing you to better adapt to changing circumstances. Dilbert cartoons aptly show us that we all have incomplete (and often inconsistent) world views which should be continually refined and adapted in the face of new observations. Brad put it nicely when he said OODA was about maintaining a fluid cognitive state, and that assumptions can be a straitjacket and dogma can blind us. This really hit home for me, based on how I reacted at times on day 1. Brad also said that the OODA loop can be internalised by adopting a lifelong learning mindset, being curious and becoming more and more comfortable with ambiguity.

It was at this exact moment that I rediscovered my latent curiosity and understood why I felt the way I did on days 1 and 2. It was also at this moment that I realised Arthur Shelley’s genius: why he made this event happen, who he brought together and what he created with it. All attendees need to be disrupted. Some need their idealism challenged, and some, like me, need a reminder of what started us on this path in the first place.

I have returned a better practitioner for it… Thank you, Arthur

 

Paul Culmsee

p.s. Arthur Shelley is still a giant hippie


Demystifying SharePoint Performance Management Part 5 – So what is latency anyway?


Hi all

Welcome to part 5 in my attempt to make SharePoint performance management a little more accessible. Now that we have dealt with the world of requests per second in parts two, three and four, we will focus our attention somewhere different for a post or three.

To set the scene, we are going to take a bit of an end-to-end look at what it takes to load a SharePoint page. I suspect some readers do not have the full picture of just how many components have to interact to load the SharePoint home page. Things are much more complex in reality than the typical architectural view that adorns SharePoint blogs. A typical SharePoint diagram will list the servers and their roles, but what about all…

  • the network gear like routers, switches, reverse proxies and firewalls that are part of the mix?
  • the VMware or Hyper-V virtual hosts that provide the virtualised servers? And
  • the storage area network and its associated paraphernalia that these virtual servers make use of for disk infrastructure?

Make no mistake people, configurations these days are hugely complex and have many moving parts. If any of the various components listed above were to fail or become a bottleneck, the performance of the entire system suffers. Therefore, we need assurance that each component has been optimised to ensure overall function.

This brings us onto the topic of latency. If you are not sure what latency is, I can guarantee that you actually do know. You see, if you have ever experienced a jittery Skype call, or your pornography is slow to load, or you have watched a roving reporter respond several seconds after being asked a question from the studio, you have experienced latency.

Now, the important point to make straight up is that latency is unavoidable because of the laws of physics. Take the example of one of the rovers that NASA sent to Mars. All radio signals to Mars travel at the speed of light (which, despite Star Trek’s best efforts to persuade us otherwise, is the absolute speed limit of the universe). The speed of light is around 300,000 kilometres per second and Mars is currently around 150 million kilometres from Earth. So doing some basic math, we find that it takes a little over 8 minutes for a signal to get from Earth to Mars.

  • 150,000,000 km ÷ 300,000 km/s = 500 seconds
  • 500 seconds ÷ 60 ≈ 8.3 minutes

In this example of latency, no matter what happens, there will always be around 8 minutes of latency between the time an instruction is sent to a rover and the time it receives and acts on it. Unless Einstein was wrong, this isn’t about to change in a hurry either.
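
If you want to play with the numbers yourself, here is a tiny Python sketch of the same calculation (my own illustration, using the rounded figures above, so treat the output as ballpark only):

```python
SPEED_OF_LIGHT_KM_PER_SEC = 300_000   # rounded, as in the text
EARTH_TO_MARS_KM = 150_000_000        # varies as the planets orbit

one_way_seconds = EARTH_TO_MARS_KM / SPEED_OF_LIGHT_KM_PER_SEC
print(f"One-way signal latency to Mars: {one_way_seconds:.0f} seconds "
      f"(~{one_way_seconds / 60:.1f} minutes)")
```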

A “lean” view of latency…

Latency is a concept that extends beyond the forces of nature. Let me give you another form of latency that I am sure you have experienced, using Microsoft as the straw man. Let’s say you have a problem with SharePoint and you log a call with Microsoft or your support provider. You call the technical support line and after twiddling your thumbs in the telephone queue for an eternity, you get an inexperienced level 1 tech, who doesn’t understand your problem at all and is hell-bent on closing your call anyway because someone higher up in the organisation actually believed that call time is an indicator of happy customers. You repeat yourself each and every time as your call is slowly routed up the tech support hierarchy. Finally, by the time you get to level 3 or 4, you get a good tech who gives you the quick answer you were looking for. The problem is that three weeks have passed to get to this point.

This is also a form of latency. But unlike the first example, it was not the laws of nature this time, but man-made processes that caused the wasted time. I will call it organisational latency. Addressing this form of latency is a multi-billion-dollar industry, and keeps an army of organisational/process improvement consultants busy trying to reduce wastage and improve customer outcomes (now you know what Lean is all about if you hear people talking about it).

So, returning to the SharePoint context – we have a lot of moving parts. We know we cannot alter the laws of physics, but how do we know whether all of the various components are working to their optimum level? Is there any man-made latency that we could reduce or eliminate?

Oh yes, indeedie there is… and to put some context to it, let’s utilise the musical genius that is the Wiggles. I found that their rendition of the old folk song “Dem Bones” serves my purpose nicely.

 

The Wiggles, teaching us about SharePoint latency 🙂

When you perform the seemingly benign task of requesting a page with your browser, an amazing number of things have to happen. The browser forms an HTTP request and then passes this to the TCP/IP stack on your PC, which takes the HTTP request and breaks it up into IP packets. These packets are passed to your network card driver, which turns the packets into Ethernet frames and sends them over the wire. Each network device (switch, router, etc.) has to process each frame or IP packet and work out where to forward it. Eventually the request finds its way to the destination server, where the Ethernet frames are stripped, the IP packets are reassembled into the original HTTP request, passed to IIS, and SharePoint then acts on it.
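
If you want to see that whole journey as a single number, here is a minimal Python sketch (not from the original post) that times a full page request from the browser’s side of the fence. The URL is a placeholder – substitute your own SharePoint home page.

```python
import time
import urllib.request

# Placeholder URL - substitute your own SharePoint home page.
URL = "http://intranet.example.com/"

def time_page_load(url, samples=5):
    """Time several full request/response round trips, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()  # pull the whole page body back across the wire
        timings.append((time.perf_counter() - start) * 1000)
    print(f"min {min(timings):.1f} ms, max {max(timings):.1f} ms, "
          f"avg {sum(timings) / len(timings):.1f} ms over {samples} samples")

time_page_load(URL)
```

Everything described below – switches, virtual hosts, SAN hops and SQL round trips – gets bundled into that one number, which is exactly why a slow page on its own tells you very little about where the latency actually lives.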

Now all I described above was the task of getting an HTTP request from a browser to a server. To see the entire picture, let’s all sing along with the Wiggles shall we? We will assume a two server deployment, utilising a Hyper-V based virtual web front end SharePoint server and a physical SQL Server. Both servers use a Storage Area Network (SAN) for disk. Cue the melody from “Dem Bones”…

  • Your PC connects to a distribution switch
  • The distribution switch is connected to the core switch
  • The core switch connects to the Hyper-V host
  • The Hyper-V host connects to the virtual Web Front End Virtual Machine

… okay so we have managed to get from our browser to the SharePoint web front end but at this point, the web front end hasn’t really done anything yet. In terms of latency, we had to get through the switches, as well as the virtualisation infrastructure, to reach the virtual SharePoint web front end box. The switches had very little latency at all – probably around 1-2 microseconds (which translates to about 0.001 to 0.002 milliseconds) for a network packet to go in one port and out the other. The virtualisation infrastructure also introduced some latency, because there is overhead in running a virtual machine within a physical machine. However, assuming it is well configured and that there aren’t too many virtual machines competing for physical resources like CPU and memory, then that latency is fairly negligible.

Now the virtual web front end server needs to actually deal with the request from your PC. This involves pulling data from the disk infrastructure, so back to the Wiggles we go…

  • The Web Front End Virtual Machine connects to the Hyper-V host
  • The Hyper-V host connects to the SAN Switch
  • The SAN Switch connects to the Storage Array
  • The Storage Array connects to the Web Front End disk
  • The Web Front End disk returns data to the SAN Switch
  • The SAN Switch returns data to the Hyper-V host
  • The Hyper-V host returns data to the Web Front End Virtual Machine

…at this point, the web front end server has retrieved any data it needs to from the disk subsystem. There was definitely latency here. The SAN switches have a similar latency to network switches which is negligible, but the physical disks on the SAN are another story (which we will get to soon). But wait a second – that just loads the stuff the web front end server stores or caches locally, as well as writing to the IIS and SharePoint logs. What about all those sexy web parts you have on the front page that aggregate the latest news feed? That stuff needs to pull information from the SharePoint content database on the SQL Server. So let’s continue, now incorporating the connection between the virtual web front end and SQL Server (Remember, I am assuming the SQL box is not virtualised).

  • The Web Front End Virtual Machine connects to the SQL box (via the network on TCP port 1433)
  • The SQL box connects to the SAN Switch
  • The SAN Switch connects to the Storage Array
  • The Storage Array connects to the SQL disk
  • The SQL disk returns data to the SAN Switch
  • The SAN Switch returns data to the SQL box
  • The SQL box returns data to the Web Front End Virtual Machine (via the network on TCP port 1433)
  • The Web Front End Virtual Machine returns the page to your PC (via the network on TCP port 80)

Now at this point, non-tech-oriented readers might be thinking, “Bloody hell! I didn’t realise there were that many interactions.” For you guys… now you know why tech guys are the way they are. Tech guys reading this would know full well that I glossed over many things. For example, I did not include the authentication process in the sequence above, nor did I describe important virtualisation aspects such as VM memory compression. On top of that, I glossed big-time over the full gamut of SAN I/O paths.

There is a form of man-made latency that can occur in any of the steps outlined above as a result of this complexity. It is very easy to overlook an important aspect, to misconfigure something or to assume the default configuration is optimal. In my consulting experience I have seen suboptimal configuration in all of the above touchpoints, but out of all of them, there is one area that is far more likely to have latency issues than any other: the disk infrastructure.

We will round out this post by taking a fairly high-level look at disk infrastructure and why it is latency prone.

Understanding disk latency

Below is a Wikipedia picture that shows the essential components of most hard drives. This type of hard drive is really not that different from its original design in 1954. It is called a rotational hard drive: the spindle rotates, while the actuator arm moves the head to the right position to read data off the platter. As you can imagine, this happens pretty fast too. Most high-end hard drives spin the platter at 15,000 RPM – dizzying, eh?

 

But to put disk performance in perspective, consider my previous example of a network switch with a 1-2 microsecond latency to process an Ethernet frame as it transits from one network port to another. By comparison, a modern hard drive takes a hell of a lot longer to do what it needs to do. As a simple example, just rotating the disk platter to the right position takes around 2 milliseconds (or 2,000 microseconds) on average. Not only is this a staggering 2,000 times slower than the network switch, but it does not take into account the time it takes for the hard drive’s read/write head to then be positioned over the sector (this is called seek time and can take anywhere between 3 and 15 milliseconds).
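
To see where that 2 millisecond figure comes from, here is a quick back-of-envelope sketch (my own illustration, not from the original post): a 15,000 RPM platter takes 60,000 ÷ 15,000 = 4 ms per revolution, and on average the sector you want is half a revolution away from the head.

```python
def avg_rotational_latency_ms(rpm):
    """On average the target sector is half a revolution away from the head."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM drive: ~{avg_rotational_latency_ms(rpm):.1f} ms "
          f"average rotational latency")

# Compare with the ~1 microsecond (0.001 ms) a switch needs to forward a frame.
print(f"15,000 RPM platter vs switch: "
      f"~{avg_rotational_latency_ms(15_000) / 0.001:,.0f}x slower")
```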

This latency is clearly problematic, and vendors mitigate it by utilising multiple sets of disks and liberal use of cache technology. Imagine putting 10 hard disks together so that when data is saved, parts of it are written to each hard disk. Now you have reduced latency because each drive is handling a smaller part instead of a single drive handling it all. It is important to note that we have done nothing about the laws-of-physics latency of each single drive (thanks Robert Bogue for pointing that out), but throughput-induced latency has been reduced by using them all. It is just like when you are out at the supermarket and there are ten check-out operators working instead of one. The wait times are much shorter because there are more check-out operators available to service the requests. (This is the essence of RAID technology and should be familiar to most readers.)
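
As a rough illustration of that supermarket effect, here is a deliberately naive striping model (a sketch of my own, ignoring per-drive seek and rotational latency entirely): each drive writes its share of the data in parallel, so the request as a whole finishes sooner.

```python
def write_time_seconds(total_mb, per_drive_mb_per_sec, drive_count):
    """Naive striping model: each drive writes its 1/N share in parallel."""
    return (total_mb / drive_count) / per_drive_mb_per_sec

for drives in (1, 2, 10):
    secs = write_time_seconds(total_mb=100, per_drive_mb_per_sec=50, drive_count=drives)
    print(f"{drives:>2} drive(s): {secs:.1f} seconds to write 100 MB")
```

The per-drive physics has not changed at all – each individual disk still takes just as long to position its head – but the work has been spread out, so the overall wait is shorter.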

But there is still more to the latency story than disks taking time to do their thing. At the operating system level, there are various layers and drivers doing stuff. I won’t go too much into this except to suggest that if the world of Class Drivers, Port Drivers, Device Miniport Drivers and Disk Subsystems rocks your world, then Jeff Hughes has a great write-up where he describes the whole Windows disk I/O system in detail.

A note on SSD

I would be remiss not to make a point about these newfangled Solid State Drives (you might have heard them referred to as SSDs). This is a newer technology for hard drives that does not employ any moving mechanical components, like platters and movable read/write heads. Solid State Drives have seriously improved performance in terms of latency, because they store the data in persistent memory. Wikipedia cites SSD latency at around 0.1 milliseconds, compared to around 5-10 milliseconds for rotational drives. The downside is that they are more expensive than traditional rotational disks. According to a May 2012 article, SSDs cost approximately US$0.65 per GB whereas traditional hard disks cost about US$0.05 per GB. Expect prices to continue to fall and for SSDs to appear in more and more solutions.

Then there are SANs

In terms of disk infrastructure and latency, most organisations utilise a Storage Area Network (SAN) topology. I previously mentioned the idea of RAID configurations that make use of multiple disks to improve latency (among other things). SANs take the RAID idea and abstract it further, as shown below.


(image credit: Orbis Solutions: http://orbissolutionsinc.com/blog/tag/storage-arrays/)

I sometimes describe SANs to people as a “fridge full of hard drives connected to multiple servers”. What the above diagram shows is that the disks are not physically connected to the servers that use them. Instead they are connected to a storage array via cables, with a switch or three in between. Each server has some disk space reserved for it on the SAN. The result is one centralised, high-performing disk array where we can take advantage of all of the disks housed within.

But it’s important to understand here that each time data is read from or written to disk, it passes across those cables and through the switches. Like an internet connection, the SAN switch and cables not only have bandwidth limitations, but are prone to misconfiguration. Imagine 50 servers writing data at the same time. If things are not well configured, the SAN switch infrastructure might become overwhelmed like a freeway during peak hour. Direct attached storage (i.e. the hard drive or RAID array is plugged into the server directly) typically has higher bandwidth. This quote from a nice sqlteam.com article on SAN performance explains it well.

For instance, if a server is equipped with two older 1-Gbps host bus adapters (HBAs), its MBps throughput would be capped at about 200MB per second no matter how powerful the rest of the SAN is. Replacing the 1-Gbps HBAs with two newer 4-Gbps HBAs or adding more HBAs may improve the throughput, if the HBAs are indeed the throughput bottleneck. But the SAN drive throughput could also be limited by the maximum throughput of the inter-switch links in the SAN switched fabric. Further down the I/O paths, the front-side adapter ports on the disk array, the cache in the disk array, the disk controllers, and the disk spindles can all become the bottleneck limiting the MBps throughput of the SAN drive.
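
To put some rough numbers behind that quote: a 1-Gbps link can move at most 1000 ÷ 8 = 125 MB per second in theory, and protocol overhead knocks that down further. Here is a small sketch of the arithmetic (the 80% efficiency factor is my own assumption for illustration, not a figure from the article):

```python
def hba_throughput_cap_mb_per_sec(link_gbps, hba_count, efficiency=0.8):
    """Rough throughput ceiling imposed by the host bus adapters.
    125 MB/s per Gbps in theory; 'efficiency' is an assumed overhead factor."""
    return link_gbps * 125 * efficiency * hba_count

print(f"2 x 1 Gbps HBAs: ~{hba_throughput_cap_mb_per_sec(1, 2):.0f} MB/s")  # ~200 MB/s
print(f"2 x 4 Gbps HBAs: ~{hba_throughput_cap_mb_per_sec(4, 2):.0f} MB/s")  # ~800 MB/s
```

That is roughly where the “capped at about 200MB per second” figure in the quote comes from, and it shows why upgrading the HBAs only helps if they really are the narrowest point in the I/O path.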

Conclusion and coming next…

Okay… at this point let’s take a breather. For the tech guys reading this post, none of what I covered may seem particularly earth-shattering, but it was important to set the context for a deeper dive into disk latency in the next couple of posts. If you are not normally of the tech persuasion, then I hope that this post has opened your eyes a little to just how complicated deployments can be and, accordingly, how hard it can sometimes be to pinpoint latency issues when they occur.

In the next post, we will take a deeper look at disk latency and its relationship to the indicators of IOPS and MBPS. We will then examine tools to measure latency and how to best use it as a lead indicator.

Until then, thanks for reading and be sure to check out my recent business book “The Heretic’s Guide to Best Practices”.

Paul Culmsee

www.sevensigma.com.au


Why I’ve been quiet…


As you may have noticed, this blog has been a bit of a dead zone lately. There are several very good reasons for this – one being that a lot of my creative energy has been going into co-writing a book – and I thought it was time to come clean on it.

So first up, just because I get asked this all the time, the book is definitely *not* “A humble tribute to the leave form – The Book”! In fact, it’s not about SharePoint per se, but rather the deeper dark arts of team collaboration in the face of really complex or novel problems.

It was late 2006 when my own career journey took an interesting trajectory, as I started getting into sensemaking and acquiring the skills necessary to help groups deal with really complex, wicked problems. My original intent was to reduce the chances of SharePoint project failure, but in learning these skills, I now find myself performing facilitation, goal alignment and sensemaking in areas miles away from IT. In the process I have been involved with projects of considerable complexity and uniqueness that make IT look pretty easy by comparison. The other fringe benefit is being able to sit in a room and listen to the wisdom of some top experts in their chosen disciplines as they work together.

Through this work and the professional and personal learning that came with it, I now have some really good case studies that use unique (and I mean, unique) approaches to tackling complex problems. I have a keen desire to showcase these and explain why our approaches worked.

My leanings towards sensemaking and strategic issues would be apparent to regular readers of CleverWorkarounds. It is therefore no secret that this blog is not really much of a technical SharePoint blog these days. The articles on branding, ROI, and capacity planning were written in 2007, just before the mega explosion of interest in SharePoint. This time around, there are legions of excellent bloggers who are doing a tremendous job of giving readers a leg-up onto this new beast known as SharePoint 2010.

(Mock-up of the “Beyond Best Practices” book cover)

So back to the book. Our tentative title is “Beyond Best Practices” and it’s an ambitious project, co-authored with Kailash Awati – the man behind the brilliant Eight to Late blog. I have been a fan of Kailash’s work for a long time now, and have always been impressed at the depth of research and effort that he puts into his writing. Kailash is a scarily smart guy with two PhDs under his belt and to this day, I do not think I have ever mentioned a paper or author to him that he hasn’t read already. In fact, usually he has read it, checked out the citations and tells me to go and read three more books!

Kailash writes with the sort of rigour that I aspire to and will never achieve, thus when the opportunity of working with him on a book came up, I knew that I absolutely had to do it and that it would be a significant undertaking indeed.

Above is a mock-up picture to try and convey where we are going with this book. See the guy on the right? Is he scratching his head in confusion, saluting, or both? (Note, this is our mock-up and the real thing may look nothing like this.)

This book dives into the seedy underbelly of organisational problem solving, and does so in a way that no other book has thus far attempted. We examine why the very notion of “best practices” often makes no sense and has such a high propensity to go wrong. We challenge some mainstream ideas by shining light on some obscure, but highly topical and interesting research that some may consider radical or heretical. To counter the somewhat dry nature of some of this research (the topics are really interesting but the style in which academics write can put insomniacs to sleep), we give it a bit of the CleverWorkarounds-style treatment and are writing in a conversational style that loses none of the rigour, but won’t have you nodding off on page 2. If you liked my posts where I use odd metaphors like boy bands to explain SharePoint site collections, the Simpsons to explain InfoPath or death metal to explain records versus collaborative document management, then you should enjoy our journey through the world of cognitive science, memetics, scientific management and Willy Wonka (yup – Willy Wonka!).

Rather than just bleat about what the problems with best-practices are, we will also tell you what you can do to address these issues. We back up this advice by presenting a series of practical case studies, each of which illustrates the techniques used to address the inadequacies of best practices in dealing with wicked problems. In the end, we hope to arm our readers with a bunch of tools and approaches that actually work when dealing with complex issues. Some of these case studies are world unique and I am very proud of them.

Now at this point in the writing, this is not just an idea with an outline and a catchy title. We have been at this for about six months, and the results thus far (some 60-70,000 words) have been very, very exciting. Initially, we really had no idea whether the combination of our writing styles would work – whether we could take the degree of depth and skill of Kailash with my low-brow humour and my quest for cheap laughs (I am just as likely to use a fart joke if it helps me get a key point across)…

… But signs so far are good so stay tuned 🙂

Thanks for reading

 

Paul Culmsee

www.sevensigma.com.au


The one best practice to rule them all – Part 1



This is a post or three that I have really been looking forward to writing, and it is a long time in the making for various reasons. Some of you, after reading it, will no doubt wonder if I have been taking magic mushrooms or something similar, but if the feedback from the SharePoint Best Practices conference is anything to go by, then maybe a couple of readers will have the same sense of realisation and clarity that I had.

I am going to tell you the first best practice that you should master. If you master this, all of the other best practices will fall into place. It goes beyond SharePoint too. Fail to do this, and all of your other best practices may not necessarily save you. In fact they can actually work against you. Hence the “Lord of the Rings” inspired title of this post.

Before we begin, I have to make a confession. I am not a 100% full time SharePoint consultant anymore. Don’t get me wrong. I still do, and will continue to perform a *heck* of a lot of SharePoint advisory and pick through the wreckage of many a chaotic installation. But I have worked hard to develop a new skill over the last year that has proven to be particularly powerful and profound in my SharePoint practice. The response of those SharePoint clients with whom I have used this craft has been overwhelmingly positive. The thing is though, this craft has started to take on a life of its own and now I am being called on to use it outside the SharePoint realm – despite SharePoint being the whole reason I found it in the first place.

So to explain this craft I really need to explain how I came to find it.

“I don’t know what I am delivering anymore”

Back in late 2006, I was the infrastructure architect at a mid-sized MOSS early adopter. This organisation came from a place of pretty low maturity around their document, knowledge and information management practices. As I have subsequently come to understand and recognise, many organisations coming from this place have a tendency to try to boil the ocean, via a phenomenon that I previously termed the “panacea effect“. At one level, a big ambitious SharePoint platform was brilliant learning for me personally because I got to put in a multi-server farm as well as the IBM SAN storage, clustered SQL, network load balanced web front end servers, extranet config, custom authentication, publishing and just about everything in between. All in all, the perfect site for the tech geek to learn the guts of SharePoint infrastructure and develop an early instinct for governance at that level.

But that wasn’t the problem area – that was actually the *tame* part of the SharePoint project. This project started to unwind pretty quickly for other reasons. Under pressure and eager to produce, the Microsoft-enamoured sponsor pushed hard. Each stakeholder had *radically* different world views of what we were doing, pinning down scope and requirements was an exercise in futility, and project time estimates were crashed by more than half because they were more than the sponsor’s original naive estimate that went to the board of directors. The thoroughly frustrated project manager said to me one day “I don’t know what I am delivering anymore”.

CleverWorkarounds’ Hindsight Rating: Stop now – just stop now.

Around this time I decided to have a chat with some of the major stakeholders because I was really worried about this project to the point that I was thinking of resigning. It seemed that the various stakeholders never actually spoke to each other, instead using the project manager as a kind of proxy. I thought that maybe a few one-on-one, more casual meetings might break a few deadlocks and frustrations.

So, sitting in a coffee place with one particular stakeholder, I was asked the question that was the catalyst for where I am today.

“Paul, can you tell me the difference between SharePoint and Skype?”

(When I told the audience this at the Best Practice conference, I was met with disbelieving laughter. I can tell you that at the time I didn’t laugh – I was so taken aback by the question I just about choked on my double-shot latte!). 

“Well”, he explained, “I can collaborate with anybody in the world using Skype for free, and even call regular land lines very cheaply. Why should I pay half a million bucks for SharePoint to collaborate?”

CleverWorkarounds’ Hindsight Rating: Project participants can hide a lack of understanding longer than you think. Have you ever been in a meeting where you are unsure or do not feel fully informed? It is very common for people to sit quietly rather than stop and say “Sorry, I don’t understand this.”

I spent a lot of time with this stakeholder after that, and little by little I was able to get across various SharePoint basics like libraries, lists, columns and views. What happened next though was that this stakeholder started to suggest we do things that were already in the project plan (he never read it originally because he didn’t understand it). Later he gave me a records-based taxonomy from a company that he used to work for in the mid nineties. It was one of those library-inspired, record-centric taxonomies like the one I described in the document management/death metal article. He had decided that all document libraries farm-wide should use 5 common site columns – no more.

He said another thing to me around this time which was also very influential.

“We have one shot to get this right. If we get this wrong, we are going to set the company back years”.

You will understand the significance of this comment soon enough…

Off to the cave…

I think I have told enough of this story that you already know the outcome. There were other stakeholders of course with their own peculiar views of the world, and there were various things we could have done better at all levels. But fundamentally, I was dealing with a guy whose understanding of the problem clearly changed the more he learned and the more he thought about the collaboration problem. All of the other stakeholders went through this thought/learning process as well.

This project was something that stuck in my mind for a long time after and I was determined to never ever let this happen again. I mean, we all know SharePoint is technically complex, but the “SharePoint vs Skype” conversation for me was a watershed moment. If I were the PM, how could I have seen this coming and mitigated it early? How could we have gotten into the implementation phase for someone to ask such a question? He sat in all the meetings with everyone else and he saw the famous SharePoint pie chart like everyone else. What was wrong with our processes? Did we need to use a best-practice methodology? Did I need to learn to train people better?

It was time to go off to the metaphorical cave and meditate for a while. (Jeremy Thake once called me the theory master – now you know why).

I dug out my PMBOK books. “Maybe the PMO was implemented too rigidly or with too much dogma?” I thought. But after re-reading that stuff I still couldn’t satisfactorily reconcile the Skype question. We spent days and days developing the project management plan – it was a good plan for its time and anticipated a lot of stuff. It was clear that at least one of the stakeholders never read it beyond a basic skim or perhaps the executive summary.

So I looked at COBiT, as it was supposed to be about controls and oversight. I really liked COBiT, especially the RACI charts and the maturity model. To this day I think it is one of the best designed and most elegantly constructed standards out there, but it suffers from being *so* high level and abstract that it is really only useful at CIO/board level. In fact COBiT really is an umbrella that sits across all of the other ones, so by itself I think it is next to useless. Thus, it really wasn’t going to be a practical help in dealing with this problem of stakeholder understanding.

What about ISO9001? I mean, it is all about quality, right? Maybe we had a quality issue? Maybe some insights were to be found there? Would a quality management plan have helped? Maybe a little bit – I mean, I learned the fun you can have with the non-conformance clauses. But the issue was *not* what to do once I found a quality problem. The fact that it had become a quality issue means that by definition, something went unnoticed or ignored and then caused some unwanted event to occur. Thus, I needed to recognise it much, much earlier than when it becomes a quality issue.

(ISO9001, if you have not yet read it, will cure even the most hard-core insomniac – guaranteed).

Hmmm, perhaps the answer lies in process improvement? Maybe if we used a best-practice methodology to map out and understand our processes, it would have resulted in a more optimal information architecture exercise. I had watched teams argue over process and accountabilities when we started talking to them about metadata and workflows during the information architecture sessions. So I hit the books on process improvement and business process modelling methodologies – a very crowded space with many standards and theories.

Three things popped up here: BPMN (Business Process Modelling Notation), the process improvement methodology Six Sigma (and its variants) and a great book by Geary Rummler called “Improving Performance: How to Manage the White Space in the Organization Chart (Jossey Bass Business and Management Series)“.

I became excited as this was definitely getting closer to what I was looking for. As I read more, I saw potential. BPMN was simply a method to create consistent, easy-to-understand process flow diagrams and in fact, one of my colleagues has become a master at this craft. But that didn’t address the ‘art’ of process improvement. I found that when trying to map a process with a group, the group would often start to recognise flaws or flat-out disagree on how the process ran within the organisation. Inevitably, as a process mapper, you would sit back and “process debates” would take over the meeting. Clearly there was a step missing.

I also liked Six Sigma’s emphasis on data-centric decision making and on measurement. I went very low-brow, bought Six Sigma for Dummies and devoured it. As I read it, I started to remember my old high school maths because Six Sigma is very analytical and data driven. Much of what Six Sigma teaches is very, very good from a philosophical standpoint, but it did seem so “epic” and geared around “big bang” change.

All of the process improvement methodologies were process-heavy and structure-heavy. I feared that the same dogma that I had seen derail the good intentions of PMBOK would affect Six Sigma in the sorts of organisations that I had involvement with. I read a great online document later that suggested Six Sigma in real life had more of a two sigma success rate, which I found unsurprising.

I also looked at Lean/Kaizen and a zillion variants. In the end I started to get lost and frustrated. Process improvement (and by association, strategy theory) is an insanely crowded place and some other time, I will write about the various fads as they have come and gone.

The Eureka Moment

Okay so I read a lot of stuff (and now have forgotten most of it). All of the methodologies and practices that I studied had some excellent parts to them, and in fact, most *encourage* you to take the bits that make logical sense for you. As I wrote in Part 8 of the “SharePoint Project Failure…” series, I found it ironic that implementation of many of these methodologies fell victim to the same sorts of “people” issues that derail the original project. The whole ‘big bang’ style approach, whether it was a software project or a best-practice methodology seemed to suffer the same sort of hit-or-miss fate.

Then in one of those times when you randomly surf Wikipedia (the fountain of all authoritative knowledge 😉 ), I came across the term “wicked problems” and the work of Horst Rittel from the early 1970s. In Rittel’s era, he was talking about a particular class of social policy and planning problems like “What shall we do about the global financial crisis?”, “What should we do about global warming?” or “What should we do to solve the Palestinian issue?”. Such problems are insanely tough to solve. But as I read about the *characteristics* of a wicked problem as described by Rittel, and subsequently by his student Conklin, I realised that both were describing *exactly* my first MOSS 2007 project.

While I am not suggesting a MOSS project is a wicked problem in the way that Rittel or Conklin envision it to be, those “characteristics” or “properties” of wicked problems were so applicable to my experience that it was scary.

Phew!

Since I have been waffling on for far too long, I’ll stop things now, and in the next post I will delve into wicked problems in more detail – both Rittel and Conklin’s definitions, as well as my own definition that I used at my SharePoint Best Practices Conference session.

 

Thanks for reading

Paul Culmsee
