Rewriting the knowledge management rulebook… The story of “Glyma” for SharePoint


“If Jeff ever leaves…”

I’m sure you have experienced the “Oh crap” feeling where you have a problem and Jeff is on vacation or unavailable. Jeff happens to be one of those people who has worked at your organisation for years and has developed such a deep working knowledge of things that it seems like he has a sixth sense about everything that goes on. As a result, Jeff is one of the informal organisational “go to guys” – the calming influence amongst all the chaos. An oft-cited refrain among staff is “If Jeff ever leaves, we are in trouble.”

In Microsoft’s case, this scenario is quite close to home. Jeff Teper, who has been an instrumental part of SharePoint’s evolution, is moving to another area of Microsoft, leaving SharePoint behind. The implications of this are significant enough that I can literally hear Bjorn Furuknap’s howls of protest all the way from here in Perth.

So, what is Microsoft to do?

Enter the discipline of knowledge management to save the day. We have SharePoint, and with all of that metadata and search, we can ask Jeff to write down his knowledge “to get it out of his head.” After all, if we can capture this knowledge, we can then churn out an entire legion of Jeffs and Microsoft’s continued SharePoint success is assured, right?

Right???

There is only one slight problem with this incredibly common scenario that often underpins a SharePoint business case… the entire premise of “getting it out of your head” is seriously flawed. As such, knowledge management initiatives have never really lived up to expectations. While I will save a detailed explanation as to why this is so for another post, let me just say that Nonaka’s SECI model has a lot to answer for as it is based on a misinterpretation of what tacit knowledge is all about.

Tacit knowledge is expert knowledge that is often associated with intuition and cannot be transferred to others by writing it down. It is the “spider senses” that experts often seem to have when they look at a problem and see things that others do not: little patterns, subtleties or anomalies that are invisible to the untrained eye. It is precisely this form of knowledge that is of the most value in organisations, yet it is the hardest to codify and the most vulnerable to knowledge drain. If tacit knowledge could truly be captured and codified in writing, then every project manager who has ever studied PMBOK would deliver flawless projects, because that body of knowledge is supposed to be the codified wisdom of many project managers and the projects they have delivered. There would also be no need for Agile coaches, Microsoft’s SharePoint documentation would result in flawless SharePoint projects, and reading Wictor’s blog would make you a SAML claims guru.

The truth of tacit knowledge is this: you cannot transfer it; you can only acquire it. This is otherwise known as the journey of learning!

Accountants are presently scratching their heads trying to figure out how to measure tacit knowledge. They call it intellectual capital, and the reason it is important to them is that most of the value of organisations today is classified on the books as “intangibles”. According to the book Balanced Scorecard, a company’s physical assets accounted for 62% of its market value in 1982, 38% of its market value in 1992 and only 21% in 2003. This is in part a result of the global shift toward knowledge economies and the resulting rise in the value of intellectual capital. Intellectual capital is the sum total of the skills, knowledge and experience of staff and is critical to sustaining competitiveness, performance and ultimately shareholder value. Organisations must therefore not only protect, but extract maximum value from their intellectual capital.

image

Now consider this. We are in an era where baby boomers are retiring, taking all of their hard-earned knowledge with them. This is often referred to as “the knowledge tsunami”, “the organisational brain drain” and the more nerdy “human capital flight”. The issue of human capital flight is a major risk area for organisations. Not only is the exodus of baby boomers an issue, but there are challenges around recruitment and retention of a younger, technologically savvy and mobile workforce with a different set of values and expectations. One of the most pressing management problems of the coming years is the question of how organisations can transfer the critical expertise and experience of their employees before that knowledge walks out the door.

The failed solutions…

After the knowledge management fad of the late 1990s, a lot of organisations did come to realise that asking experts to “write it down” only worked in limited situations. As broadband came along, enabling the rise of rich media services like YouTube, a digital storytelling movement arose in the early 2000s. Digital storytelling is the process by which people share stories and reflections while being captured on video.

Unfortunately though, digital storytelling had its own issues. Users were not prepared to sit through hours of footage of an expert explaining their craft or reflecting on a project. To address this, the material was commonly edited down into much smaller mini-documentaries lasting a few minutes – often by media production companies, so the background music was always nice and inoffensive. But this approach also commonly failed. One reason for failure was well put by David Snowden when he said, “Insight cannot be compressed”. While there was value in the edited videos, much of the rich value within the footage was lost. After all, how can one judge ahead of time what someone else will find insightful? The other problem with this approach was that people tended not to use the videos. There were few ways for users to discover these videos existed, let alone watch them.

Our Aha moment

In 2007, my colleagues and I started using a sensemaking approach called Dialogue Mapping in Perth. Since that time, we have performed dialogue mapping across a wide range of public and private sector organisations in areas such as urban planning, strategic planning, process reengineering, organisational redesign and team alignment. If you have read my blog, you would be familiar with dialogue mapping, but just in case you are not, it looks like this…

Dialogue Mapping has proven to be very popular with clients because of its ability to make knowledge more explicit to participants. This increases the chances of collective breakthroughs in understanding. During one dialogue mapping session a few years back, a soon-to-be retiring, long serving employee relived a project from thirty years prior that he realised was relevant to the problem being discussed. This same employee was spending a considerable amount of time writing procedure manuals to capture his knowledge. No mention of this old project was made in the manuals he spent so much time writing, because there was no context to it when he was writing it down. In fact, if he had not been in the room at the time, the relevance of this obscure project would never have been known to other participants.

My immediate thought at the time when mapping this participant was “There is no way that he has written down what he just said”. My next thought was “Someone ought to give him a beer and film him talking. I can then map the video…”

This idea stuck with me and I told this story to my colleagues later that day. We concluded that asking our retiring expert to write his “memoirs” was not the best use of his limited time. The dialogue mapping session illustrated plainly that much valuable knowledge was not being captured in the manuals. As a result, we seriously started to consider the value of filming this employee discussing his reflections on all of the projects he had worked on, as per the digital storytelling approach. However, rather than create ‘mini documentaries’, we would utilise the entire footage and instead visually map the rationale using Dialogue Mapping techniques. In this scenario, the map serves as a navigation mechanism and the full video content is retained. By clicking on a particular node in the map, the video is played from the time that particular point was made. We drew a mock-up of the idea, which looked like the picture below.

image

Aside from thinking the idea would be original and cool to do, we also saw several strategic advantages to this approach…

  • It allows the user to quickly find the key points in the conversation that are of value to them, while presenting the entire rationale of the discussion at a glance.
  • It significantly reduces the codification burden on the person or group with the knowledge. They are not forced to put their thoughts into writing, which enables more effective use of their time.
  • The map and video content can be linked to the in-built search and content aggregation features of SharePoint.
    • Users can enter a search from their intranet home page and retrieve not only traditional content such as documents, but now will also be able to review stories, reflections and anecdotes from past and present experts.
  • The dialogue mapping notation, when stored in a database, also lends itself to more advanced forms of queries (a rough sketch follows after this list). Consider the following examples:
    • “I would like any ideas from lessons learnt discussions in the Calgary area”
    • “What pros or cons have been made about this particular building material?”
  • The applicability of the approach is wide.
    • Any knowledge related industry could take advantage of it easily because it fits into existing information systems like SharePoint, rather than creating an additional information silo.
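To make the query idea above a little more concrete, here is a purely hypothetical sketch. It assumes each map node is stored as a record with an IBIS node type (question, idea, pro, con), a project, a region and a pointer back into the source video; none of these field names reflect Glyma’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class MapNode:
    node_type: str          # "question", "idea", "pro" or "con" (IBIS notation)
    text: str               # what was actually said
    project: str            # e.g. "Calgary lessons learnt workshop"
    region: str             # e.g. "Calgary"
    video_offset_secs: int  # where in the source video this point was made

def ideas_from_lessons_learnt(nodes, region):
    """'I would like any ideas from lessons learnt discussions in the Calgary area'"""
    return [n for n in nodes
            if n.node_type == "idea"
            and "lessons learnt" in n.project.lower()
            and n.region.lower() == region.lower()]

def arguments_about(nodes, topic):
    """'What pros or cons have been made about this particular building material?'"""
    return [n for n in nodes
            if n.node_type in ("pro", "con") and topic.lower() in n.text.lower()]
```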

This was the moment the vision for Glyma (pronounced “glimmer”) was born…

Enter Glyma…

Glyma is a software platform for ‘thought leaders’, knowledge workers, organisations, and other ‘knowledge economy participants’ to capture and trade their knowledge in a way that reduces effort but preserves rich context. It achieves this by providing a new way for users to visually capture and link their ideas with rich media such as video, documents and web sites. As Glyma is a very visually oriented environment, it’s easier to show Glyma than to talk about it.

Ted

image

What you’re looking at in the first image above are the concepts and knowledge captured from a TED talk on education, augmented with additional information from Wikipedia. The second is a map that brings together the rationale from a number of SPC14 Vegas videos on the topic of hybrid SharePoint deployments.

Glyma brings together different types of media, like geographical maps, video, audio, documents etc. and then “glues” them together by visualising the common concepts they exemplify. The idea is to reduce the burden on the expert for codifying their knowledge, while at the same time improving the opportunity for insight for those who are learning. Glyma is all about understanding context, gaining a deeper understanding of issues, and asking the right questions.

We see that depending on your focus area, Glyma offers multiple benefits.

For individuals…

As knowledge workers, our task is to gather and learn information, sift through it all, and connect the dots between the relevant pieces. We create our knowledge by weaving together all this information. This takes place through reading articles, explaining on napkins, diagramming on whiteboards and so on. But no one observes us reading, napkins get thrown away, and whiteboards are wiped clean for re-use. Our journey is too “disposable”; people only care about the “output” – that is, until someone needs to understand our “quilt of information”.

Glyma provides end users with an environment to catalogue this journey. The techniques it incorporates help knowledge workers with learning and “connecting the dots”, or as we know it, synthesising. Not only does it help us with these two critical tasks, it then provides a way for us to get recognition for that work.

For teams…

We’ve all been on the giving or receiving end of the scenario I started this post with: that call to Jeff, who has gone on holiday for a month prior to starting his promotion, because now you need the background to an issue that has arisen on your watch. Whether you were the person under pressure at the office thinking, “Jeff has left me nothing of use!”, or you are Jeff trying to enjoy your new promotion thinking, “Why do they keep on calling me!”, it’s an uncomfortable situation for all involved.

Because Glyma provides a medium and techniques that aid and enhance the learning journey, it can then act as the project memory long after the project has completed and the team members have moved on to their next challenge. The context and the lessons it captures can then be searched and used both as a historical record of what happened and, more importantly, as a tool for improving future projects.

For organisations…

As I said earlier, intangible assets now dominate the balance sheets of many organisations. Where in the past we might have valued companies based on how many widgets they sold and how much they held in inventory, nowadays intellectual capital is the key driver of value. Like any asset, organisations need to extract maximum value from intellectual capital and, in doing so, avoid repeated mistakes, foster innovation and continue to grow. Charles G. Sieloff summed this up well in the title of his paper, “If only HP knew what HP knows”.

As Glyma aids, enhances, and captures an individual’s learning journey, that journey can now be shared with others. With Glyma, learning is no longer a silo; it becomes a shared journey. Not only does it do this for individuals, it extends to group work so that the dynamics of a group’s learning are also captured. Continuous improvement of organisational processes and procedures is then possible with this captured knowledge. With Glyma, your knowledge assets become tangible.

Lemme see it!

If you have read this far, I assume that you would like to take a look. Well, as luck would have it, we put out a public Glyma site the other day that contains some of my own personal maps. The maps on the SP2013 apps model and hybrid SP2013 deployments in particular represent my own learning journey, so they should help if you want a synthesis of the pros and cons of these issues. Be sure to check the videos in the getting started area of the site, and check out the sample maps! 🙂

glymasite

I hope you like what you see. I have a ton of maps to add to this site, and very soon we will be inviting others to curate their own maps. We are also running a closed beta, so if you want to see this in your organisation, go to the site and then register your interest.

All in all, I am super proud of my colleagues at Seven Sigma for being able to deliver on this vision. I hope that this becomes a valuable knowledge resource for the SharePoint community and that you all like it. I look forward to seeing how history judges this… we think Glyma is innovative, but we are biased! 🙂

 

Thanks for reading…

Paul Culmsee

www.glyma.co

www.hereticsguidebooks.com


The cloud isn’t the problem–Part 5: Server huggers and a crisis of identity


Hi all

Welcome to my fifth post delving into the irrational world of cloud computing. After examining the not-so-obvious aspects of Microsoft, Amazon and the industry more broadly, it’s time to shift focus a little. Now the appeal of the cloud really depends on your perspective. To me, there are three basic motivations for getting in on the act…

  1. I can make a buck
  2. I can save a buck
  3. I can save a buck (and while I am at it, escape my pain-in-the-ass IT department)

If you haven’t guessed it, this post will examine #3 and look at what the cloud means for the perennial issue of the IT department and business disconnect. I recently read an article over at CIO magazine where they coined the term “Server Huggers” to describe the phenomenon I am about to describe. So to set the flavour for this discussion, let me tell you about the biggest secret in organisational life…

We all have an identity crisis (so get over it).

In organisations, there are roles that I would call transactional (i.e. governed by process and clear KPIs) and those that are knowledge-based (governed by gut feel and insight). While most roles actually entail both of these elements, most of us in SharePoint land fall into the latter camp. In fact, we spend a lot of time in meeting rooms “strategizing” the solutions that our more transactionally focused colleagues will be using to meet their KPIs. Beyond SharePoint, this also applies to Business Analysts, Information Architects, Enterprise Architects, Project Managers and pretty much anyone with the word “senior”, “architect”, “analyst” or “strategic” in their job title.

But there is a big, fat elephant in the “strategizing room” of certain knowledge worker roles that is at the root of some irrational organisational behaviour. Many of us are suffering a role-based identity crisis. To explain this, let’s pick a straw-man example of one of the most conflicted roles of all right now: Information Architects.

One challenge with the craft of IA is pace of change, since IA today looks very different from its library and taxonomic roots. Undoubtedly, it will look very different ten years from now too as it gets assailed from various other roles and perspectives, each believing their version of rightness is more right. Consider this slightly abridged quote from Joshua Porter:

Worse, the term “information architecture” has over time come to encompass, as suggested by its principal promoters, nearly every facet of not just web design, but Design itself. Nowhere is this more apparent than in the latest update of Rosenfeld and Morville’s O’Reilly title, where the definition has become so expansive that there is now little left that isn’t information architecture […] In addition, the authors can’t seem to make up their minds about what IA actually is […] (a similar affliction pervades the SIGIA mailing list, which has become infamous for never-ending definition battles.) This is not just academic waffling, but evidence of a term too broadly defined. Many disciplines often reach out beyond their initial borders, after catching on and gaining converts, but IA is going to the extreme. One technologist and designer I know even referred to this ever-growing set of definitions as the “IA land-grab”, referring to the tendency that all things Design are being redefined as IA.

You can tell rather easily when a role is suffering an identity crisis, too. It is when people in the role start to muse that the title no longer reflects what they do and call for new roles to better reflect the environment they find themselves in. Evidence for this exists further in Porter’s post; check out the passage below:

In addition, this shift is already happening to information architects, who, recognizing that information is only a byproduct of activity, increasingly adopt a different job title. Most are moving toward something in the realm of “user experience”, which is probably a good thing because it has the rigor of focusing on the user’s actual experience. Also, this is an inevitable move, given that most IAs are concerned about designing great things. IA Scott Weisbrod sees this happening too: “People who once identified themselves as Information Architects are now looking for more meaningful expressions to describe what they do – whether it’s interaction architect or experience designer.”

So while I used Information Architects as an example of how pace of change causes an identity crisis, the advent of the cloud doesn’t actually cause too many IAs (or whatever they choose to call themselves) to lose much sleep. But there are other knowledge-worker roles that have not really felt the effects of change in the same way as their IA cousins. In fact, for the better part of twenty years one group has actually benefited greatly from the pace of change. Only now is the ground under their feet starting to shift, and the resulting behaviours are starting to reflect the emergence of an identity crisis that some would say is long overdue.

IT Departments and the cloud

At a SharePoint Saturday in 2011, I was on a panel and we were asked by an attendee what effect Office 365 and other cloud based solutions might have on a traditional IT infrastructure role. The person asking was an infrastructure guy and his question was essentially about how his role might change as cloud solutions become more and more mainstream. Of course, none of the SharePoint nerds on the panel wanted to touch that question with a bargepole and all heads turned to me, since apparently I am “the business guy”. My reply was that he was sensing a change – the commoditisation of certain aspects of IT roles. Did that mean he was going to lose his job? Unlikely, but nevertheless when change is upon us, many of us tend to place more value on what we will lose compared to what we will gain. Our defence mechanisms kick in.

But let’s take this a little further: the average tech guy comes in two main personas. The first is the tech-cowboy who documents nothing, half completes projects then loses interest, is oblivious to how much they are in over their head and generally gives IT a bad name. They usually have a lot of intellectual intelligence (IQ), but not so much emotional intelligence (EQ). Ben Curry once referred to this group as “dumb smart guys.” The second persona is the conspiracy theorist who has had to clean up after such a cowboy. This person usually has more skills and knowledge than the first guy, writes documentation and generally keeps things running well. Unfortunately, they too can give IT a bad name. This is because, after having to pick up the pieces of something not of their doing, they tend to develop a mother hen reflex based on a pathological fear of being paged at 9pm to come in and recover something they had no part in causing. The aforementioned cowboys rarely last the distance, and therefore over time IT departments begin to act as risk minimisers rather than business enablers.

Now IT departments will never see it this way of course, instead believing that they enable the business because of their risk minimisation. Having spent 20 years as a paranoid conspiracy theorist, security-type IT guy, I totally get why this happens, as I was the living embodiment of this attitude for a long time. Technology is getting insanely complex, while users’ innate ability to do something really risky and dumb is increasing. Obviously, such risk needs to be managed, and accordingly a common characteristic of such an IT department is the word “no” to pretty much any question that involves introducing something new (banning iPads or espousing the evils of DropBox are the best examples I can think of right now). When I wrote about this issue in the context of SharePoint user adoption back in 2008, I had this to say:

The mother hen reflex should be understood and not ridiculed, as it is often the user’s past actions that has created the reflex. But once ingrained, the reflex can start to stifle productivity in many different ways. For example, for an employee not being able to operate at full efficiency because they are waiting 2 days for a helpdesk request to be actioned is simply not smart business. Worse still, a vicious circle emerges. Frustrated with a lack of response, the user will take matters into their own hands to improve their efficiency. But this simply plays into the hands of the mother hen reflex and for IT this reinforces the reason why such controls are needed. You just can’t trust those dog-gone users! More controls required!

The long term legacy of increasing technical complexity and risk is that IT departments become slow-moving and find it difficult to react to pace of change. Witness the number of organisations still running parts of their business on Office 2003, IE6 and Windows XP. The rest of the organisation starts to resent using old tools and the imposition of process and structure for no tangible gain. The IT department develops a reputation of being difficult to deal with and taking ages to get anything done. This disconnect begins to fester, and little by little both IT and “the business” develop a rose-tinged view of themselves (which is known as groupthink) and a misguided perception of the other.

At the end of the day though, irrespective of logic or who has the moral high ground in the debate, an IT department with a poor reputation will eventually lose. This is because IT is no longer seen as a business enabler, but as a cost-centre. Just as organisations did with the IT outsourcing fad over the last decade, organisational decision makers will read CIO magazine articles about server huggers and look longingly to the cloud, as applications become more sophisticated and more and more traditional vendors move into the space, thus legitimising it. IT will be viewed, however unfairly, as a burden where the cost is not worth the value realised. All the while, to conservative IT, the cloud represents some of their worst fears realised. Risk! Risk! Risk! Then the vicious circle of the mother-hen reflex will continue, because rogue cloud applications will be commissioned without IT knowledge or approval. Now we are back to the bad old days of the rogue MS Access or SharePoint deployments that drove the call for control-based governance in the first place!

<nerd interlude>

Now, to the nerds reading this post who find it incredibly frustrating that their organisation will happily pump money into some cloud-based flight of fancy, but whine when you want to upgrade the network: take note of this paragraph as it is really (really) important! I will tell you the simple reason why people are more willing to spend money on fluffy marketing than on IT. In the eyes of a manager who needs to make a profit, sponsoring a conference or making the reception area look nice is seen as revenue generating. Those who sign the cheques do not like to spend capital on stuff unless they can see that it directly contributes to revenue generation! Accordingly, a bunch of servers (and for that matter, a server room) are often not considered expenditure that generates revenue; they are considered overhead! Overhead is something that any smart organisation strives to reduce to remain competitive. The moral of the story? Stop arguing cloud vs. internal on the basis of which direct costs are incurred, because people will not care! You would do much better to demonstrate to your decision makers that IT is not an overhead. Depending on how strong your mother hen reflex is and how long it has been in place, that might be an uphill battle.

</nerd interlude>

Defence mechanisms…

Like the poor old Information Architect, the rules of the game are changing for IT with regards to cloud solutions. I am not sure how it will play out, but I am already starting to see the defence mechanisms kicking in. There was a CIO interviewed in the “Server Huggers” article that I referred to earlier (Scott Martin) who was hugely pro-cloud. He suggested that many CIO’s are seeing cloud solutions as a threat to the empire they have built:

I feel like a lot of CIOs are in the process of a kind of empire building.  IT empire builders believe that maintaining in-house services helps justify their importance to the company. Those kinds of things are really irrational and not in the best interest of the company […] there are CEO’s who don’t know anything about technology, so their trusted advisor is the guy trying to protect his job.

A client of mine in Sydney told me he asked his IT department about the use of hosted SharePoint for a multi-organisational project and the reply was a giant “hell no,” based primarily on fear, uncertainty and doubt. With IT, such FUD is always cloaked in areas of quite genuine risk. There *are* many core questions that we must ask cloud vendors when taking the plunge, because to not do so would be remiss (I will end this post with some of those questions). But the key issue is whether the real underlying reason behind those questions is to shut down the debate or to genuinely understand the risks and implications of moving to the cloud.

How can you tell when an IT department is using a FUD defence? Actually, it is pretty easy, because conservative IT is very predictable – they will likely try and hit you with what they think is their slam-dunk counter argument first up. Therefore, they will attempt to bury the discussion with the US Patriot Act issue. I’ve come across this issue myself, and Mark Miller at FPWeb mentioned to me that it comes up all the time when they talk to clients about SharePoint hosting. (I am going to cover the Patriot Act issue in the next post because it warrants a dedicated post.)

If the Patriot Act argument fails to dent unbridled cloud enthusiasm, the next layer of defence is to highlight cloud based security (identity, authentication and compliance) as well as downtime risk, citing examples such as the September outage of Office 365, Salesforce.com’s well-publicised outages, and the Amazon outage that took out Twitter, Reddit, Foursquare, Turntable.fm, Netflix and many, many others. The fact that many IT departments do not actually have the level of governance and assurance over their own systems that they aspire to will be conveniently overlooked.

Failing that, the last line of defence is to call into question the commercial viability of cloud providers. We talked about the issues facing the smaller players in the last post, but it is not just them. What if the provider decides to change direction and discontinue a service? Google will likely be cited, since it has a habit of axing cloud based services that don’t reach critical mass (the most recent casualty is Google Health, being retired as I write this). The risk of a cloud provider going out of business or withdrawing a service is much more serious than a software supplier failing. At least when the software is on premise, you still have the application running and can use it.

Every FUD defence is based on truth…

Now, as I stated above, all of the concerns listed are genuine things to consider before embarking on a cloud strategy. Prudent business managers and CIOs must weigh the pros and cons of a cloud offering before rushing into a deployment that may not be appropriate for their organisation. Equally though, it’s important to be able to see through a FUD defence when it’s presented. The easiest way to do this is to do some of your own investigation first.

To that end, you can save yourself a heap of time by checking out the work of Richard Harbridge. Richard did a terrific cloud talk at the most recent Share 2011 conference. You can view his slide deck here and I recommend really going through slides 48-81. He has provided a really comprehensive summary of considerations and questions to ask. Among other things, he offered a list of questions that any organisation should be asking providers of cloud services. I have listed some of them below and encourage you to check out his slide deck as it is really comprehensive and covers way more than what I have covered here.

Security, Storage, Identity & Access

  • Who will have access to my data?
  • Do I have full ownership of my data?
  • What type of employee / contractor screening do you do before you hire them?
  • How do you detect if an application is being attacked (hacked), and how is that reported to me and my employees?
  • How do you govern administrator access to the service?
  • What firewalls and anti-virus technology are in place?
  • What controls do you have in place to ensure the safety of my data while it is stored in your environment?
  • What happens to my data if I cancel my service?
  • Can I archive environments?
  • Will my data be replicated to any other datacenters around the world (if yes, which ones)?
  • Do you offer single sign-on for your services?
  • Do you offer Active Directory integration?
  • Do all of my users have to rely solely on web based tools?
  • Can users work offline?
  • Do you offer a way for me to run your application locally, and how quickly can I revert to the local installation?
  • Do you offer on-premise, web-based, or mixed environments?

Reliability & Support, Performance

  • What is your Disaster Recovery and Business Continuity strategy?
  • What is the retention period and recovery granularity?
  • Is your Cloud Computing service compliant with [insert compliance regime here]?
  • What measures do you provide to assist compliance and minimise legal risk?
  • What types of support do you offer?
  • How do you ensure we are not affected by upgrades to the service?
  • What are your SLAs, and how do you compensate when they are not met?
  • How fast is the local network?
  • What is the storage architecture?
  • How many locations do you have and how are they connected?
  • Have you published any benchmark scores for your infrastructure?
  • What happens when there is oversubscription?
  • How can I ensure CPU and memory are guaranteed?

Conclusion and looking forward…

For some organisations, the lure of cloud solutions is very seductive. From a financial perspective, it avoids a lot of up-front capital expenditure. From a time perspective, it can be deployed very quickly, and from a maintenance perspective, it takes the burden away from IT. Sounds like a winner when put that way. But the real issue is that the changing cloud paradigm potentially impacts the wellbeing of some IT professionals and IT departments, because it calls into question certain patterns and practices within established roles. It also represents a loss of control and, as I said earlier, people often place a higher value on what they will lose compared to what they will gain.

Irrespective of this, whether you are a new age cloud loving CIO or a server hugger, any decision to move to the cloud should be about real business outcomes. Don’t blindly accept what the sales guy tells you. Understand the risks as well as the benefits. Leverage the work Richard has done and ask the cloud providers the hard questions. Look for real world stories (like my second and third articles in this series) which illustrate where the services have let people down.

For some, cloud will be very successful. For others, the gap between expectations and reality will come with a thud.

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.hereticsguidebooks.com


The cloud is not the problem–Part 1: Has it been here all along?


Hiya

I have been meaning to write a post or three on cloud computing and its benefits, challenges and eventual legacy. I’ve finally had some time to do so. This series will span a few posts (not sure how many at this stage) and will focus mainly on SharePoint. In short, I think the cloud is a shining example of innovation, combined with human irrationality, poorly thought-out process and a dash of organisational dysfunction. In this first post, I will give you a little cloud history lesson, through the eyes of a slightly jaded IT infrastructure person. To that end, I will try to do the following throughout this series:

  • Educate readers on some conceptual aspects of cloud computing and why it matters
  • Highlight aspects of cloud computing that are currently being conveniently overlooked by proponents (and opponents)
  • Look at what the real challenges are, not just for organisations utilising it, but for the organisations providing cloud services
  • Highlight what the future might look like from a couple of perspectives
  • As always, take a relatively dry topic and try to make it entertaining enough that you will want to read it all the way through 🙂

So let’s roll the clock back a decade or so and set the scene…

In the beginning…

At the height of the dotcom boom in 2000, I took a high paying contract position for a miner-turned-ISP. You see, back then it was all the rage for “penny stock” mining companies – who had never actually dug anything of value out of the ground – to embrace “the interweb” by becoming an Internet Service Provider. Despite having no idea whatsoever what being an ISP entailed, they would instantly enjoy at least a fiftyfold increase in stock price and all the adulation of those dotcom investors who actually believed there was money to be made.

Lured from my stable job by the hubris-funded per-hour rate and a cooler job title, I designed and ran an ISP from late 1999 till late 2004, doing all things security, Linux, Cisco and Microsoft. Back then, the buzzword of choice was “hosting”. Of course, the dotcom bubble popped big time and the market collapsed back to cold hard reality pretty quickly. Like all organisations that rode the wave, we then had to survive the backwash of a pretty severe bear market. Accordingly, my hourly rate went down and our ISP sales guys dutifully sold “hosting solutions” to clients that were neither useful nor appropriate. The best example of this is when someone sold a hosted Exchange server to a company of 300 staff with no consideration whatsoever of bandwidth, security and authentication (remember that this was the era of Exchange 2000, immature Active Directory deployments and 1.5 Mbps/256 Kbps ADSL connections).

We actually learnt a lot from dumbass stuff like this (and we went through a seemingly endless number of sales guys as a result). By the end of the journey, we did some good work and had a few success stories. The net result of riding the highs and lows of the dotcom boom was my conclusion that if you had a public IP address and a communications rack with decent air conditioning, you were pretty much a hosting provider.

Then in 2004 I took a different job with a different company. They hired me because they had just acquired a fairly well-known “hosting provider” which had gone through some tough times. I was tasked with migrating the hosting infrastructure – and the sites hosted on it – to the parent company premises and integrating it with the existing infrastructure. So imagine my shock when, on day one, I arrived onsite to see that the infrastructure of this hosting provider was essentially a store room full of clone PCs with panels removed, sitting in a couple of communications racks, with a cheap portable fan blowing onto it all to keep it cool and with no redundant power (in fact, one power cord was sticky taped to the floor and led out of the room to the nearest outlet). As it happened, some very high profile websites ran on this infrastructure.

This period I describe as “my bitter and twisted days” as I had a limited time to somehow migrate this mess to the more robust infrastructure of the parent company. This was the period where I became a bit of an IT control freak and used to take a dim view of web developers who dared to ask me a dumb question. I also subsequently revised my view of hosting. I decided that if you had a public IP address and a comms rack with completely crap air conditioning, you were pretty much a hosting provider. After all, when you access a website, did you ever stop to consider where it physically might reside?

…and henceforth came “the cloud”

Before SharePoint 2010 came out, I used to do talks where I put up the SharePoint 2007 pie and asked people what buzzword was missing. Many hands would rise and the answer was always “cloud”. Cognisant of this, I redrew Microsoft’s marketing diagram to try and capture the essence of this new force in enterprise IT. I suggested that Microsoft would jump on the cloud big-time with SharePoint 2010. How do you think I did? 🙂

 

image

As it turned out, Microsoft for some reason opted not to use my suggested logo and instead went with that blue Frisbee with fresh buzzwords to replace the 2007 ones that had reached their saturation point. Nevertheless, the picture above did turn out to be prophetic: The era of the cloud is most definitely upon us, along with the gushing praise that often accompanies any flavour of the year technology.

Now in one sense, nothing much has changed from the days of web hosting. If you have an IP address with a webserver on the end of it, you can pretty much call yourself a cloud provider. This is because at the end of the day, we are still using the core ingredients of TCP/IP, DNS, HTTP, communications racks and supposedly good air conditioning. When you access something in “the cloud”, you have no visibility of the quality of the infrastructure on the other end. For all you know, it could be a store room being kept cool with a dodgy fan and some sticky tape :-).

But while that’s a cynical view, it is also naively simplistic. Like all fads that come and go, things are changed as a result. The truth is that there have been changes since the days of web hosting that will change the entire face of IT in the coming years.

The major difference between this era and the last is the advancement in technology beyond those core ingredients of TCP/IP, DNS and HTTP. Bandwidth has become significantly cheaper, faster and more reliable. Virtualisation of servers (and services) has not only gained momentum, but is now a mature technology. My own evidence for this is that I haven’t put SharePoint web front end servers onto non-virtualised infrastructure for a couple of years now. Add to that the fact that the tools and systems we use to build web solutions are now much more powerful and sophisticated. As a result, “cloud” applications now reflect a level of sophistication and features way beyond their web based email origins. Look at Office 365 as a case in point. Microsoft have bet big-time on this type of offering. I’m sure that most architectural diagrams currently drawn all over Microsoft whiteboards for SharePoint vNext will be all about reworking the plumbing to create feature parity between on-premise SharePoint and its cloud based equivalent.

It’s interesting stuff indeed.

Now, perhaps because I had an ISP/hosting ringside seat, I could see all of this happening way back in 2000 – more than a decade ago. Not only could I see it, I experienced the pain of early adopters trying to do it (witness the example of the hosted Exchange 2000 “solution” I started this post with). But a decade later, cloud based infrastructure now realises the sort of capabilities that I could foresee in my ISP days. We have access to near unlimited storage and scalability. With it, I can save massive time and effort getting complex systems up and running. In this fast-moving age we find ourselves in, being able to mobilise resources and be productive quickly is hugely important. Recognising this, companies like Amazon, Google and Microsoft leverage their incredible economies of scale, as well as sheer depth of technical expertise, to make some rather compelling offerings. Bean counters (i.e. CFOs and CIOs with tight budgets) suddenly realised that the cost to “jack in” to a cloud based solution is way less than the traditional up-front costs of hardware, licensing, procurement and configuration.

The cloud offers minimal entry cost because, for the most part, it is based on a pay-for-use model. You stop paying for it when you stop using it. Buying servers is forever, but the cloud is apparently not. Furthermore, the economies of scale that the big boys of the cloud space offer usually far exceed what can be done with internal IT resources anyway. This extends past sheer hardware scalability and includes security, reliability and performance monitoring. As a cloud provider customer, you will not just expect, but assume, that companies like Microsoft, Amazon and Google can use their deep pockets to hire the best of the best engineers, architects and security practitioners. Organisational decision makers look increasingly longingly at the cloud, in the face of high internal IT costs.

Even the most traditional on-premise IT vendors are getting in on the act. Consider SAP, previously a bastion of the “on-premise” model. Their American division just shelled out US$3.4 billion to buy a cloud provider called SuccessFactors (a roughly 50% premium to SuccessFactors’ share price). Why did they do this? According to Paul Hamerman:

“SAP’s cloud strategy has been struggling with time-to-market issues, and its core on-premise HR management software has been at competitive disadvantage with best-of-breed solutions in areas such as employee performance, succession planning and learning management. By acquiring SuccessFactors, SAP puts itself into a much stronger competitive position in human resources applications and reaffirms its commitment to software-as-a-service as a key business model.”

If that wasn’t enough, consider some of Gartner’s predictions for 2012 and beyond. One notable prediction is that by year-end 2016, more than 50 percent of Global 1000 companies will have stored customer-sensitive data in the public cloud. Closer to home for me, I have a client with a ten-year BHAG (a Big, Hairy, Audacious Goal). While I can’t tell you what this goal is, I can tell you that they have identified a key success metric that currently takes them around 12 months to achieve. Their BHAG is to reduce this time from 12 months to 4 weeks and achieve this within a decade. Essentially they have a time-to-market issue – similar to what Hamerman outlined with SAP. By utilising cloud technology and being able to procure the necessary scalability at the click of a button and the swipe of a credit card, I was able to save them a month almost straight away and make a massive inroad into their organisation-wide strategic goal.

So, in the rational world of key performance indicators and return on investment, and given the market trend of large, mainstream vendors going “cloud”, it would seem that we are in the midst of a revolution with unstoppable momentum. But of course, the world is not rational, is it? If it were, someone would be able to explain to me why the US still uses the imperial system when every other country (save for Liberia and Myanmar) has changed to metric (yes, my US readers, the UK is actually metric).

The irrational road ahead…

In this first post I have painted a picture of the “new reality” – the realisation of what I first saw in 2000 is now upon us. While this first post might sound like gushing praise of all things cloud, rest assured that this is not the case. I deliberately titled this post “the cloud is not the problem” because we are going to dive into the seedy underbelly of this brave new cloudy world we find ourselves in. My contention is that cloud computing is an adaptive challenge, which, by definition, questions certain established ways of doing things. Therefore it has an effect on the roles, beliefs, assumptions and values behind the established order. In the next post or three, we are going to explore some of the less rational sides of “the cloud” at a number of levels. Furthermore, the irrationality often tends to be dressed up as rationality, so we have to look behind the positive and negative straw-man arguments we are currently hearing to see what is really going on. Along the way I hope to develop your “cloud computing straw-man argument” radar, so you can smell manure when it’s inevitably dished out to you 🙂

The general breakdown of this series will be as follows:

I’ll start by chronicling my experience with Microsoft’s Software as a Service (SaaS) offering, Office 365, as well as Amazon’s Infrastructure as a Service offering, EC2. Both are terrific offerings, but both are let down by things that have nothing to do with the technology. From there we will move on to some of the existing roles and paradigms that are impacted by the move to cloud solutions, and the defence mechanisms that will be employed to counter it. I’ll end the series by taking a look at the cloud from a longer term perspective, based on the notion of systems theory (which, despite its drop-dead boring sounding premise, is actually quite interesting).

Thanks for reading

Paul Culmsee

www.sevensigma.com.au


How to use Charlie Sheen to improve your estimating…


Monte Carlo simulations are cool – very cool. In this post I am going to try and out-do Kailash Awati in explaining what they are. You see, I am one of those people whose eyes glaze over the minute you show them any form of algebra. Kailash recently wrote a post explaining Monte Carlo to the masses, but he went and used a mathematical formula (he couldn’t help himself), and thereby lost me totally. Mind you, he used the example of a drunk person playing darts. This I liked a lot, and it gave me the inspiration for this post.

So here is my attempt to explain what Monte Carlo is all about and why it is so useful.

I have previously stated that vaguely right is better than precisely wrong. If someone asks me to make an estimate on something, I offer them a ranged estimate, based on my level of certainty. Thus, for example, if you asked me to guess how many beers per day Charlie Sheen has been knocking back lately, I might offer you an estimate of somewhere between 20 and 50 pints. I am not sure of the exact number (and besides, it would vary on a daily basis anyway), so I would rather give you a range that I feel relatively confident with than a single answer that is likely to be completely off base.

Similarly, if you asked me how much a SharePoint project to “improve collaboration” would cost, I would do a similar thing. The difference between SharePoint success and Charlie Sheen’s ability to keep a TV show afloat is that with SharePoint, there are more variables to consider. For example, I would have to make ranged estimates for the cost of:

  • Hardware and licensing
  • Solution envisioning and business analysis
  • Application development
  • Implementation
  • Training and user engagement

Now here is the problem. A CFO or similar cheque signer wants certainty. Thus, if you give them a list of ranged estimates, they are not going to be overly happy about it. For a start, any return on investment analysis is, by definition, going to have to pick a single value from each of your estimates to “run the numbers”. Therefore, if we used the lower estimate (and therefore lower cost) for each variable, we would inevitably get a great return on investment. If we used the upper limit of each range, we would get a much costlier project.

So how do we reconcile this level of uncertainty?

Easy! Simply run the numbers lots and lots (and lots) of times – say, 100,000 times – picking a random value for each variable that goes into the estimate. Count the number of times your simulation produces a positive ROI compared to a negative one. Blammo – that’s Monte Carlo in a nutshell. It is worth noting that in my example, we are assuming that all values between the upper and lower limits are equally likely. Technically this is called a uniform distribution – but we will get to the distribution thing in a minute.

As a very crappy, yet simple example, imagine that if SharePoint costs over $250,000 it will be considered a failure. Below are our ranged estimates for the main cost areas:

Item | Lower Cost | Upper Cost
Hardware and licensing | $50,000 | $60,000
Solution envisioning and business analysis | $20,000 | $70,000
Application development | $35,000 | $150,000
Implementation | $25,000 | $55,000
Training and user engagement | $10,000 | $100,000
Total | $140,000 | $435,000

If you add up my lower estimates we get a total of $140,000 – well within our $250,000 limit. However if my upper estimates turn out to be true we blow out to $435,000 – ouch!

So why don’t we pick a random value for each item, add them up, and then repeat the exercise 100,000 times? Below I have shown 5 of the 100,000 simulations.

Item | Simulation 1 | Simulation 2 | Simulation 3 | Simulation 4 | [snip] | Simulation 100,000
Hardware and licensing | 57663 | 52024 | 53441 | 58432 | … | 51252
Solution envisioning and business analysis | 21056 | 68345 | 42642 | 37456 | … | 64224
Application development | 79375 | 134204 | 43566 | 142998 | … | 103255
Implementation | 47000 | 25898 | 25345 | 51007 | … | 35726
Training and User engagement | 46543 | 73554 | 27482 | 87875 | … | 13000
Total Cost | 251637 | 354025 | 192476 | 377768 | … | 267457

So according to this basic simulation, only 2 of the 5 simulations shown came in below $250,000 and were therefore a success according to my ROI criteria. In other words, we were successful only 40% of the time (2/5 = .4). By that measure, this is a risky project (and we haven’t taken into account discounting for risk either).
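For the curious, here is a minimal sketch of the simulation just described, using the cost ranges from the table above, a uniform distribution for every item and the $250,000 success threshold. It is illustrative only, not an estimating tool:

```python
import random

# Ranged estimates from the table above: (lower, upper) in dollars
estimates = {
    "Hardware and licensing": (50_000, 60_000),
    "Solution envisioning and business analysis": (20_000, 70_000),
    "Application development": (35_000, 150_000),
    "Implementation": (25_000, 55_000),
    "Training and user engagement": (10_000, 100_000),
}

BUDGET = 250_000  # anything above this is considered a failure
RUNS = 100_000    # number of simulated projects

successes = 0
for _ in range(RUNS):
    # Uniform distribution: every value between lower and upper is equally likely
    total = sum(random.uniform(low, high) for low, high in estimates.values())
    if total <= BUDGET:
        successes += 1

print(f"Came in under ${BUDGET:,} in {successes / RUNS:.0%} of {RUNS:,} simulations")
```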

“That’s it?”, I hear you say? Essentially, yes. All we are doing is running the numbers over and over again and then looking at the patterns that emerge. But that is not the key bit to understand. The most important thing to understand about Monte Carlo is probability distributions. This is the bit that people mess up, and the bit where people are far too quick to jump into mathematical formulae.

But random is not necessarily random

Let’s use Charlie Sheen again to understand probability distributions. If we were to consider the amount of crack he smokes on a daily basis, we could conclude it is between 0 grams and 120 grams. The 120g upper limit is based on what Charlie Sheen could realistically tolerate (which is probably three times the amount of normal humans). If we plotted this over time, it might look like the example below (which shows the last 31 days):

image

So to make a best guess at the amount he smokes tomorrow, should we pick random values between 0 and 120 grams? I would say not. Based on the chart above, you would be more likely to choose values from the upper end of the range (lately he has really been hitting things hard, and we all know what happens when he hangs out with starlets from the adult entertainment industry).

That’s the trick to understanding a probability distribution. If we simply chose a random value it would likely not be representative of the recent range of values. We still have to pick a value from a range of possibilities, but some values are more likely than others. We are not truly picking random values at all.

The most common probability distribution people use is the old bell curve – you probably saw it in high school. For many variables that go into a Monte Carlo simulation, it is a perfectly fine distribution. For example, the average height of a human male may be 5 foot 6. Some people will be taller and some will be shorter, but you would find more people close to this mid-point than far away from it, hence the bell shape.

Let’s see what Charlie Sheen’s distribution looks like. Since we have our range of values for each day’s crack usage, let’s divide usage into ranges of grams and see how often Charlie’s daily consumption fell into each range. The figures are below:

Amount | Daily occurrences | %
0-10g | 16 | 50%
10-20g | 6 | 19%
20-30g | 4 | 13%
30-40g | 1 | 3%
40-50g | 1 | 3%
50-60g | 0 | 0%
60-70g | 2 | 6%
70-80g | 1 | 3%
80-90g | 0 | 0%
90-100g | 1 | 3%
100-120g | 0 | 0%

As you can see, 50% of the time Charlie was not hitting the white stuff particularly hard: there were 16 occurrences where Charlie ingested less than 10 grams. What sort of curve does this make? The picture below illustrates it.

image

Interesting, huh? If we chose random numbers according to this probability distribution, chances are that 50% of the time we would get a value between 0 and 10 grams of crack being smoked or shovelled up his nasal passages. Yet when we look at the trend of the last 10 days, one could reasonably expect that tomorrow’s value will be significantly higher than zero. In fact, there were no occurrences at all of less than 10 grams in a single day in the last 10 days.

Now let’s change the date range, and instead look at Charlie’s last 9 days of crack usage. This time the distribution looks a bit more realistic based on recent trends. Since he has not been well behaved lately, there were no days at all where his crack usage was less than 10 grams. In fact 4 of the 9 occurrences were over 60 grams.

Amount      Daily occurrences   %
0-10g       0                   0%
10-20g      3                   33%
20-30g      1                   11%
30-40g      0                   0%
40-50g      1                   11%
50-60g      0                   0%
60-70g      2                   22%
70-80g      1                   11%
80-90g      0                   0%
90-100g     1                   11%
100-120g    0                   0%

[Image: histogram of Charlie Sheen’s daily crack usage over the last 9 days]

This time, utilising a different set of reference points (9 days instead of 31), we get very different “randomness”. This gets to one of the big problems with probability distributions, which Kailash tells me is called the reference class problem: how do you pick a representative sample? In some situations, completely random might actually be much better than a poorly chosen distribution.

Back to SharePoint…

So imagine that you have been asked to estimate SharePoint costs and you only have vague, ranged estimates. Let’s also assume that for each of the variables that need to be assigned an estimate, you have some idea of their distribution. For example, if you decide that SharePoint hardware and licensing really could land anywhere between $50,000 and $60,000, then pick a truly random value (a uniform distribution) from that range with each iteration of the simulation. But if you decide that it is much more likely to come in at $55,000 than at $50,000, then your “random” choice will be closer to the middle of the range more often than not – a normal distribution.
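Here is a short sketch of those two choices for the hardware and licensing variable. The $2,500 standard deviation in the normal case is an assumption I have picked purely for illustration:

```python
import random

# Choice 1: "utterly random" anywhere between $50,000 and $60,000 - a uniform distribution
uniform_draw = random.uniform(50000, 60000)

# Choice 2: "more likely to come in near $55,000" - a normal distribution centred on
# the mid-point, clamped so it cannot wander outside the believable range
normal_draw = min(max(random.gauss(55000, 2500), 50000), 60000)

print(round(uniform_draw), round(normal_draw))
```

Each iteration of the simulation would make a draw like one of these for every variable, depending on the distribution you believe that variable follows.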

So the moral of the story? Think about the sort of distribution that each variable uses. It’s not always a bell curve, and it’s not completely random either. In fact you should strive for a distribution that is the closest representation of reality. Kailash tells me that a distribution “should be determined empirically – from real data – not fitted to some mathematically convenient fiction (such as the Normal or Uniform distributions). Further, one should be absolutely certain that the data is representative of the situation that is being estimated.”

Since SharePoint estimates often carry significant uncertainty, a Monte Carlo simulation is a good way to run the numbers – especially if you want to see how several variables with different probability distributions combine to produce a result. Run the simulation enough times and you will produce a new probability distribution that represents the combination of all of these variables.
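As a rough illustration, here is what combining a few variables with different distributions might look like – the variables, ranges and distribution choices are all hypothetical, purely to show the mechanics:

```python
import random
import statistics

def one_iteration():
    # Each cost driver gets its own distribution
    hardware_licensing = random.uniform(50000, 60000)                    # no idea where in the range
    build_effort = min(max(random.gauss(120000, 20000), 80000), 200000)  # clusters around a likely figure
    training = random.triangular(10000, 40000, 15000)                    # skewed toward the low end
    return hardware_licensing + build_effort + training

results = sorted(one_iteration() for _ in range(10000))
print("Median total cost:", round(statistics.median(results)))
print("10th percentile:  ", round(results[1000]))   # approximate percentiles
print("90th percentile:  ", round(results[9000]))
```

The sorted list of results is the new probability distribution: instead of a single point estimate you get a range, and you can read off how likely you are to come in under any given budget.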

Just remember though – Charlie Sheen reliably demonstrates that things are often not predictable and that past values are no reliable indicator of future values. Thus a simulation is only as good as your probability distributions in the first place.

 

Thanks for reading

 

Paul Culmsee

www.sevensigma.com.au

 

P.S. A huge thanks to Kailash for checking this post, offering some suggestions and making sure I didn’t make an arse of myself.


A different kind of SharePoint Governance Master Class in London and Dublin


The background

Over the last three years, my career trajectory has altered somewhat: I spent half my time as a SharePoint practitioner, doing all of the things that we SharePoint practitioners do, and the other half in a role that I would call sensemaking. Essentially group facilitation work on some highly complex, non-IT problems. These ranged from city planning (envisioning and community engagement) to infrastructure delivery (think freeways, schools and hospitals), to mental health, team and relationship building, performance management, board meetings and various other scenarios.

Imagine how much of a different world this is, where a group is coming together from often very different backgrounds and base positions, to come to grips with a complex set of interlocking problems and somehow try and align enough to move forward. We cannot simply throw a “SharePoint” at these problems and think it will all be better. By their very nature, we have to collaborate on them to move forward – true collaboration in all its messy, sometimes frustrating glory.

As a result of this experience, I’ve also learned many highly effective collaborative techniques and approaches that I have never seen used in my 20+ years of being an IT practitioner. Additionally, I’ve had the opportunity to work with (and still do), some highly skilled people who I learned a huge amount from. This is “standing on the shoulders of giants” stuff. As you can imagine, this new learning has had a significant effect on how Seven Sigma now diagnoses and approaches SharePoint projects and has altered the lens through which I view problem solving with SharePoint.

It also provided me the means to pinpoint a giant blind spot in the SharePoint governance material that’s out there, and what to do about it.

The first catalyst – back injury

In January this year, my family and I went on a short holiday down to Western Australia’s wine country, the Margaret River region. On the very first day of that trip, I was at the beach, watching my kids run amok, when I totally put my back out (*sigh* such an old man). Needless to say, I could barely move for the next week or two. My family, ever concerned for my welfare, promptly left me behind at the chalet and took off each day to sample wines, food and generally do the things that tourists do.

Left to my own devices, and not overly mobile, I had little to do but ponder – and ponder I did (even more than my usual pondering – this was an Olympic-class ponder). Reflecting on all of my learning and experiences from sensemaking work, my use of it within SharePoint projects, as well as the subsequent voracious reading on a variety of topics, I came to realise that SharePoint governance is looked at through a lens that clouds some of the most critical success factors. I knew exactly how to lift that fog, and had a vision for a holistic view of SharePoint governance that simplifies it and at the same time makes it easy for people to collectively understand.

So I set to work, distilling all of this learning and experience into something coherent, rigorous and accessible. After all, SharePoint is a tool that is an enabler for “improved collaboration”, and I had spent half of my time on deeply collaborative non-IT scenarios in a way that, to my knowledge, no other SharePoint practitioner has. Since sensemaking lies in all that ‘softer’ stuff that IT is traditionally a bit weaker on, I thought I could add some dimensions to SharePoint governance in a way that could be made accessible, practical and useful.

By the end of that week I still had a sore back, but I had the core of what I wanted to do worked out, and I knew that it would be a rather large undertaking to finish it (if it ever could be finished).

The second catalyst – Beyond Best Practices

I also commenced writing a non-SharePoint book on this topic area with Kailash Awati from the Eight to Late blog, called Beyond Best Practices. This book examines why most best practices don’t work and what can be done about them. The plethora of tools, systems and best practices that are generally used to tackle organisational problems rarely help, and when people apply these methods, they often end up solving the wrong problem. After all, if best practices were truly best, then we would all follow them and projects would be delivered on time, on budget and with deliriously happy stakeholders, right?

The work and research that has gone into this book has been significant. We studied the work of many people who have recognised and written about this, as well as many case studies. The problem these authors faced is that their work challenged many widely accepted views, patterns and practices of various managerial disciplines. As a result, these ideas have been rejected, ignored or considered outright heretical, and thus languish (largely unread) in journals. The recent emergence of anything x2.0 and a renewed focus on collaboration might seem radical or new to some, but these early authors were espousing very similar things many years ago.

The third catalyst – 3grow

Some time later in the year, 3grow asked me to develop a 4-day SharePoint 2010 Governance and Information Architecture course for Microsoft NZ’s Elite program. I agreed, and used my “core” material, as well as some Beyond Best Practices ideas, to develop the course. Information Architecture is a bloody tough course to write. It would be easy to cheat and just do a feature dump of every building block that SharePoint has to offer and call that Information Architecture. But that’s the science and not the art – and the science is easy to write about. From my experience, IA is not that much different to the sensemaking work that I do, so I had a very different foundation on which to base the entire course.

The IA course took 450 man-hours to write and produced an 800-page manual (and just about killed me in the process), but the feedback from attendees surpassed all expectations. This motivated me to complete the vision I originally had for a better approach to SharePoint governance, and that has now been completed as well (with another 200 pages and a CD full of samples and other goodies).

The result

I have distilled all of this work into a master class format, which ranges from 1 to 5 days, suited to Business Analysts, Project and Program Managers, Enterprise and Information Architects, IT Managers and those in strategic roles who have to bridge the gap between organisational aspirations and the effective delivery of SharePoint solutions. I speak the way I write, so if the cleverworkarounds writing style works for you, then you will probably enjoy the manner in which the material is presented. I like rigour, but I also like to keep people awake! 🙂

One of my pet hates is when the course manual is just a printout of the slide deck with space for notes. In this master class, the manual is a book in itself and covers additional topic areas at a deeper level of detail than the class. So you will have some nice bedtime reading after attending.

Andrew Woodward has been a long-time collaborator on this work. Before we formalised this collaboration with the SamePage Alliance, we had discussed running a master class session in the UK on this material. At the same time, thanks to Michael Sampson, an opportunity arose to conduct a workshop in Ireland. As a result, you have an opportunity to be a part of these events.

Dublin

[Image: Storm Technology event banner]

The first event is terrific because it is a free event in Dublin on November 17, hosted by Storm Technology, a Microsoft Gold Partner in Dublin. As the event is free, it is by invitation only and numbers are limited. This is a one-day event, focussing on the SharePoint governance blind spots and what to do about them, but also wicked problems and Dialogue Mapping, as well as learning to look at SharePoint from outside the IT lens and translate its benefits to a wider audience (i.e. “learn to speak to your CFO”).

So if you are interested in learning how to view SharePoint governance in a new light, and are tired of governance material that rehashes the same old approaches and gives you a mountain of work that still doesn’t change results, then register your interest with Rosemary at the email address in the image above ASAP and she can reserve a spot for you. We will supply a 200-page manual, as well as a CD of sample material for attendees, including a detailed governance plan.

London

[Image: SamePage Alliance event banner]

In London on November 22 and 23, I will be running a two-day master class alongside Andrew Woodward on SharePoint Governance and Information Architecture. The first day is similar to the Ireland event, where we focus on governance holistically, shattering a few misconceptions and seeing things in a different light, before switching focus to various facets of Information Architecture for SharePoint. In essence, I have taken the detail of the 4-day New Zealand Elite course and created a single-day version (no mean feat, by the way).

Participants on this course will receive a 400-page manual, chock full of SharePoint Governance and Information Architecture goodness, as well as a CD/USB of sample material such as a SharePoint governance plan and IA maps of various types. Unlike Ireland, this is an open event, available to anyone, and you can find more detail and register at the Eventbrite site http://spiamasterclass.eventbrite.com/. In case you are wondering, this event is non-technical. Whether you have little hands-on experience with SharePoint or deep knowledge, you will find a lot of value in this event, for the very reason that the blind spots I focus on are universally applicable irrespective of your role.

Much of what you will learn is applicable to many projects beyond SharePoint, and you will come away with a slew of new approaches for handling complex projects in general.

So if you are in the UK or somewhere in Europe, look us up. It will be a unique event, and Andrew and I are very much looking forward to seeing you there!

Thanks for reading

Paul Culmsee

www.sevensigma.com.au


Share2010 – A new kind of SharePoint conference


Having spoken at the odd SharePoint event over the last three years or so, I’ve always lamented the lack of a purely business-focused SharePoint conference. Whilst the conferences I attend do cater for non-technology-oriented topics – particularly the best practice conferences – there is usually an equal or greater proportion of content aimed at the nerdier aspects of SharePoint.

Sadly though, nerds don’t often sign the cheques. Those who do sign them are rarely interested in deploying SharePoint via PowerShell, or whether sandboxed solutions are a good thing or not. They are looking for the ways and means to take SharePoint (the enabler), work out what the hell SharePoint is actually enabling, and work out whether it has done so properly.

Some time back, via a reference from Kristian Kalsing, I received a call from the organisers of the forthcoming Share2010 in Sydney, asking for feedback on what I would like to see in a good business-focused SharePoint conference. In speaking to Steve from Eventful Management and his team, it was clear that something unique was in the making here.

Fast forward several months, and after a whole lot of market research and round-table discussions with SharePoint customers (including a couple of our clients), we have a conference that puts many critical topics close to my heart front and centre, namely: governance; user engagement and adoption; business process automation and workflow; information architecture; collaboration; document and records management; resourcing and support; social networking; ROI; security and so on.

I am honoured that I was also asked to participate as a speaker at this conference, alongside the likes of Dux Sy, Erica Toelle, Andrew Jolly and Michael Sampson. You will find that the speakers in this group have one thing in common: their focus on the softer areas of SharePoint. There are also speakers from some of Australia’s leading organisations (and some international ones too), who will share their trials, tribulations and lessons learned. This is real problem/real solution type stuff and I am seriously looking forward to being part of it.

I’ll be involved in the initial festivities on the Sunday evening, conducting a special interest kickoff session called SharePoint Governance Home Truths. This session aims to present a lot of my work in a more relaxed, entertaining manner and hopefully, set a good tone for the rest of the event.

I will also be running a special event on Wednesday called “Microsoft SharePoint Governance f-Laws: Handy Hints for Those Who Question Business as Usual”. I am really excited about this. Developing the content for this session has been a labour of love for me since November last year – and is a kind of magnum opus of everything I have learned in my IT and non IT work. I have been very fortunate to work on some very large and complex non IT projects and worked with some amazingly talented people in the areas of project management, cognitive science, facilitation and community engagement. I can absolutely guarantee you that there will be many aspects to this session that would not have been seen before in one place in this distilled form. I am super excited about delivering this in full at Share2010 – there simply could not be a better conference for this type of workshop.

By the way, I used elements of this material in the SharePoint 2010 Governance and Information Architecture course that was developed for the Microsoft NZ/3Grow Elite Program. The feedback from that course speaks for itself.

The outcomes to expect for attendees of this session are:

  • Understand the SharePoint governance lens beyond an IT service delivery focus
  • Develop your ‘wicked problem’ radar and apply appropriate governance practices, tools and techniques accordingly
  • Learn how to align SharePoint projects to broad organisational goals, avoid chasing platitudes and ensure that the problem being solved is the right problem
  • Understand the relationship between governance and assurance, why both are needed and how they affect innovation
  • Understand the underlying, often hidden forces of organisational chaos that underpin projects like SharePoint

There is a large amount of content and activities in this session that have never graced CleverworkArounds. In fact, if I ever get around to posting some of the content, I could blog for months. But more important than the content, you will have a lot of practical tools to leverage as well. Attendees at my session will receive a CD containing end-to-end governance artefacts ranging from IBIS maps, goal alignment and performance framework outputs, envisioning workshop sample outputs, Information Architecture mind-maps, BPMN diagrams, wireframes, user engagement tools, ROI calculations and more.

As it happens, I collaborated on a lot of this stuff with Erica Toelle, so it is terrific that she is speaking at the event and her “Don’t reinvent the wheel” talk should not be missed, as well as her Tuesday keynote. If I ask her nicely, she might just pop a few of her goodies onto the CD as well!

You can register here for this unique event, and let’s hope that there are many more to come. There is also the opportunity for one-on-one meetings with speakers like myself as part of the deal.

Thanks for reading

 

 

Paul Culmsee

www.sevensigma.com.au


SharePoint ROI Slide Deck and Sample Scenario worksheet published


Hot off the press (okay – well, SlideShare magic), I’ve just posted my Best Practices Conference slide deck for the "speak to your CFO" session, along with the ROI spreadsheet for the PMIS scenario that I used during the demonstration. Like the "wicked problems" slide deck, the SlideShare conversion isn’t quite there, so just contact me if you want a pptx version.

…and the spreadsheet. Just remember, you scary MBA and finance types: I *know* this is a simple sheet and you can pick all sorts of holes in it. It is really for training and guidance purposes only. (Hence the obligatory "don’t come crying to me if this gets you into trouble" disclaimer below.)

THIS CODE IS PROVIDED UNDER THIS LICENSE ON AN “AS IS” BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED CODE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED CODE IS WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED CODE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER

Use at your own risk!

 


More on the Best Practice SharePoint Conference – Feb 2-4 2009 in San Diego!


Hi all

I have been extremely quiet on the blogging front lately, because I have been extremely busy, splitting my time between working on my two presentations for the upcoming Best Practices SharePoint Conference and wearing my undies on the outside (a la Superman), deep in the bowels of some unhealthy SharePoint farms, nailing various technical and governance issues and helping organisations regain some lost assurance. On top of that, I’ve also been doing a lot of non-IT related work in a group facilitation discipline.


I thought it’s about time I emerged from this big mushroom I find myself under to let you know more about what I will be speaking about, as well as some of the other speakers and topics that I am really looking forward to. Seriously, we are in the company of giants with this conference. The calibre and quality of the speakers has me wondering what the hell I am doing there!

I mean, we have all the "A list" big kids of the SharePoint world there. Gary Lapointe is a freakin’ bona fide superstar! – via his STSADM extensions, he has saved the asses of more SharePoint admins and developers than even Joel has. Robert Bogue is an even better all-rounder than Andrew Symonds (sorry, non-cricketing countries, you won’t get that analogy) and touches on a wider variety of topics than anyone else I have ever come across. Then there are the likes of Andrew Woodward, Ben Curry, Bob Mixon, Eric Shupps, fellow metalhead Mike Watson, Ruven Gotz and Todd Bleeker, just to name a few!

Somehow I have to squeeze in a beer with all of them yet stay sober enough to present. That’s a tough ask!

Anyway, both of my sessions are in the CIO stream and I think they are rather topical given the current financial crisis crap that is happening around the world.

My first session is called "How to avoid SharePoint becoming a wicked problem". This is a pet topic of mine – something that I have spent a lot of time on, and have been developing new skills in (hence the aforementioned facilitation work). For the record, I didn’t make up the term "wicked problem" – it has been a subject of academic research since it was first coined in the early 1970s. This session is going to cover a lot of what I have learned on this topic, including how to spot SharePoint wickedness early, recognise it for what it is, and apply the *right* sort of tools and techniques to mitigate it.

I do worry that people will find some of my stuff a little too left field, but I do have the results to attest to the value and power of these techniques, and I am really looking forward to sharing my methods and comparing them with what has worked for other presenters and attendees.

The second session covers good old SharePoint Return on Investment (ROI). I’m one of those people who believe most things can be measured or quantified. I’ve always wanted to return to my series on "How to Speak to your CFO" and continue down that road. Given we have entered a once-in-a-lifetime era of falling profits, plummeting asset prices, reduced budgets, costlier finance and great uncertainty, my quest to bring a lasting peace to the cold war between managers and geeks moves to San Diego 🙂

My aim for this session is twofold: to help non-SharePoint people understand where some of SharePoint’s hidden costs lie, and to show SharePoint people the basic financial tools for ROI modelling. I will also explain how to build an ROI decision model and provide a scenario that we will try out some different assumptions on.

As for the rest of the veritable *buffet* of topics – where do you start? First up, I am torn between Bill’s "Aligning your Information and Findability Architectures using SharePoint Server 2007 Technologies" and Yoda Bogue’s "Selling Governance in your Organization". If I go to Bill’s session, then I’ll definitely be attending Robert’s Governing Development in SharePoint session.

In the afternoon, it gets even harder! You have "Transform the My Site into an Information Hub" by Mark Eichenberger, Bob Mixon’s "Learn why Taxonomies are the Most Important Part of any Document or Information Asset Management System", "How to Facilitate the Government out of Governance" by Virgin Carrol, and "Nuts and Bolts Governance – Practical Application of the Concepts".

.. and that’s just day one!

Seriously people, no matter what sort of SharePoint sub-disciplines push your buttons, you are going to get extreme value for money here. You will come away with an amazing amount of material that will result in real and tangible cost savings across various areas of the SharePoint realm.

If you live in California or anywhere in the US – there is no excuse 🙂 If *I* have to spend 25+ hours cooped up in a plane just to get there and survive the jet-lag to present, then you should come on down and join the fun.

Hope to see you there!

Paul Culmsee
