On the decay (or remarkable recurrence) of knowledge


“That’s only 10%…”

One of my mentors, who is mentioned in the book I wrote with Kailash, is Darryl, a veteran project manager in the construction and engineering industry. He has been working as a project manager for more than 30 years, is a fellow of the Institute of Engineers and marks exams at the local university for those studying a Master's degree in Project Management. His depth of knowledge and experience is abundantly clear when you start working with him, and I have learned more about collaborative project delivery from him than from anyone else.

Recently I was talking with him and he said something really interesting. He was telling some stories from the early days of alliancing-based project delivery in Australia (alliancing is a highly interesting collaborative project governance approach that we devote a chapter to in our book). He stated that alliancing at its core is the application of good project management practice. Now, I know Darryl pretty well and knew what he meant by that, but I commented to him that when you say the words "project management practice," some would associate that statement with (among other things) a well-developed Gantt chart listing activities with names, tasks and times.

His reply was unsurprising: "At best, that's only 1/10th of what project management is really about."

Clearly Darryl has a much deeper and more holistic view of what project management is than many other practitioners I've worked with. Darryl argues that those who criticise project management are actually criticising a small subset of the discipline, based on their less-than-complete view of what the discipline entails. Thus, by definition, the remedies they propose are misinformed or solve a problem that has already been solved.

Whether you agree with Darryl or not, there is a pattern here that occurs continually in organisation-land. Fanboys of a particular methodology, framework, model or practice (me included) will waste no time dumping on whatever they have grown to dislike and swear that their "new approach" addresses the gaps. Those with a more holistic view, like Darryl, might argue that the crusaders aren't really inventing anything new and that if a gap exists, it's a gap in the knowledge of those doing the criticising.

As Ambrose Bierce said, "There is nothing new under the sun but there are lots of old things we don't know."

From project management to systems thinking…

Now with that in mind, here’s a little anecdote. A few weeks back I joined a Design Thinking group on LinkedIn. I had read about Design Thinking during its hype phase a couple of years ago and my immediate thought was “Isn’t this just systems thinking reinvented?” You see, I more or less identify myself as a bit of a pragmatic systems thinker, in that I like to broaden a discussion, but I also actually get shit done. So I was curious to understand how design thinkers see themselves as different from systems thinkers.

I followed several threads on the LinkedIn group as the question had been discussed a few times. Unfortunately, no-one could really put their finger on the difference. Eventually I found a recent paper by Pourdehnad, Wexler and Wilson which went into some detail on the two disciplines and offered some distinctions. I won’t bother you with the content, except to say it was a good read, and left me with the following choices about my understanding of systems and design thinking:

  • That my understanding of systems thinking is wrong and I am in fact a design thinker after all
  • That I am indeed a systems thinker and design thinking is just systems thinking with a pragmatic bent

Of course, being a biased human, I naturally believe the latter point is more correct.

From systems to #stoos

Like the Snowbird retreat that spawned the agile manifesto, the recent stoos movement has emerged from a group of individuals who came together to discuss problems they perceive in existing management structures and paradigms. Now this would have been an exhilarating and inspiring event to be at – a bunch of diverse people finding emergent new understandings of organisations and how they ought to be run. Much tacit learning would have occurred.

But a problem is that one has to have been there to truly experience it. Any published output from this gathering cannot convey the vibe and learning (the tacit punch) that one would get from experiencing the event in the flesh. This is the effect of codifying knowledge into written form. Both Kailash and I were fully cognisant of this when we read the material on the stoos website and knew that, for us, some of it would cover old ground. Nevertheless, my instinctive first reaction to what I read was "I bet someone will complain that this is just design thinking reinvented."

Guess what… a short time later, that's exactly what happened. Someone tweeted that very assertion! Presumably this opinion was offered by a self-identified design thinker who felt that the stoos crowd was reinventing the wheel that design thinkers had so painstakingly put together. My immediate urge was to be a smartarse and send back a tweet telling this person that design thinking is just pragmatic systems thinking anyway, so he was just as guilty as the #stoos crowd. I then realised I might be found guilty of the same thing, and someone might inform me of some "deeper knowing" than systems thinking. Nevertheless, I couldn't resist and sent a tweet to that effect.

The decay (or remarkable recurrence) of knowledge…

(At this point I discussed this topic with Kailash and have looped him into the conversation)

Both of us see a pattern of narrow focus or plain misinterpretation of what has come before. As a result, there seems to be a tendency to reinvent the wheel and slap a new label on it, claiming it to be unique or profound. We wonder, therefore, how many of the ideas of new groups or movements are truly new.

Any corpus of knowledge is a bunch of memes – "ideas, behaviours or styles that spread from person to person within a culture." Indeed, entire disciplines such as project management can be viewed as a bunch of memes that have been codified into a body of knowledge. Some memes are "sticky," in that they are more readily retained and communicated, while others get left behind. However, stickiness is no guarantee of rightness. Two examples of such memes that we covered in our book are the waterfall methodology and the PERT scheduling technique. Though both have murky origins and are of questionable utility, they are considered stock standard in the PM world, at least in certain circles. While it would take us too far afield to recount the story here (and we would rather you read our book), the point is that some techniques are widely taught and used despite being deeply flawed. Clearly the waterfall meme had strong evolutionary characteristics of survival, while the story of its rather nuanced beginnings has been lost until recently.

A person indoctrinated in a standard business school curriculum sees real-life situations through the lens of the models (or memes!) he or she is familiar with. To paraphrase a well-known saying – if one is familiar only with a hammer, every problem appears as a nail. Sometimes (not often enough!) the wielder of the metaphorical hammer eventually realises that not all problems yield to hammering. In other words, the models they used to inform their actions were incomplete, or even incorrect. They then cast about for something new and thus begin a quest for a new understanding. In the present-day world one doesn't have to search too hard, because there are several convenient corpuses of knowledge to choose from. Each supplies ready-made models of reality that make more sense than the last and, as an added bonus, one can even get a certification to prove that one has studied it.

However, as demonstrated above with the realisation that not all problems yield to hammering, reality can truly be grasped only through experience, not models. It is experience that highlights the difference between the real-world and the simplistic one that is captured in models. Reality consists of complex, messy situations and any attempt to capture reality through concepts and models will always be incomplete. In the light of this it is easy to see why old knowledge is continually rediscovered, albeit in a different form. Since models attempt to grasp the ungraspable, they will all contain many similarities but will also have some differences. The stoos movement, design thinking and systems thinking are rooted in the same reality, so their similarities should not be surprising.

Coming back to Darryl – his view of project management, informed by 30 years' experience, includes a whole bunch of memes and models that, for whatever reason, tend to be less sticky than the ones we all know so well. Why certain memes are less successful than others in being replicated from person to person is interesting in its own right and has been discussed at length in our book. For now, we'll just say that those who come up with new labels to reflect their new understandings are paradoxically wise and narrow-minded at the same time. They are wise in that they are seeking better models to understand the reality they encounter, but at the same time they are likely trashing some worthwhile ones too. Reality is multifaceted and cannot be captured in any particular model, so the finders of a new truth should take care that they do not get carried away by their own hyperbole.

Thanks for reading

Paul Culmsee (with Kailash Awati)

www.hereticsguidebooks.com


The cloud isn’t the problem–Part 5: Server huggers and a crisis of identity

This entry is part 5 of 6 in the series Cloud

Hi all

Welcome to my fifth post delving into the irrational world of cloud computing. After examining the not-so-obvious aspects of Microsoft, Amazon and the industry more broadly, it's time to shift focus a little. The appeal of the cloud really depends on your perspective. To me, there are three basic motivations for getting in on the act…

  1. I can make a buck
  2. I can save a buck
  3. I can save a buck (and while I am at it, escape my pain-in-the-ass IT department)

If you haven't guessed it, this post will examine #3 and look at what the cloud means for the perennial disconnect between the IT department and the business. I recently read an article over at CIO magazine where they coined the term "Server Huggers" to describe the phenomenon I am about to describe. So to set the flavour for this discussion, let me tell you about the biggest secret in organisational life…

We all have an identity crisis (so get over it).

In organisations, there are roles that I would call transactional (i.e. governed by process and clear KPIs) and those that are knowledge-based (governed by gut feel and insight). Whilst most roles actually entail both of these elements, most of us in SharePoint land are in the latter camp. In fact, we spend a lot of time in meeting rooms "strategising" the solutions that our more transactionally focused colleagues will be using to meet their KPIs. Beyond SharePoint, this also applies to Business Analysts, Information Architects, Enterprise Architects, Project Managers and pretty much anyone with the word "senior", "architect", "analyst" or "strategic" in their job title.

But there is a big, fat elephant in the "strategising room" of certain knowledge-worker roles that is at the root of some irrational organisational behaviour. Many of us are suffering a role-based identity crisis. To explain this, let's pick a straw-man example of one of the most conflicted roles of all right now: Information Architects.

One challenge with the craft of IA is pace of change, since IA today looks very different from its library and taxonomic roots. Undoubtedly, it will look very different ten years from now too as it gets assailed from various other roles and perspectives, each believing their version of rightness is more right. Consider this slightly abridged quote from Joshua Porter:

Worse, the term “information architecture” has over time come to encompass, as suggested by its principal promoters, nearly every facet of not just web design, but Design itself. Nowhere is this more apparent than in the latest update of Rosenfeld and Morville’s O’Reilly title, where the definition has become so expansive that there is now little left that isn’t information architecture […] In addition, the authors can’t seem to make up their minds about what IA actually is […] (a similar affliction pervades the SIGIA mailing list, which has become infamous for never-ending definition battles.) This is not just academic waffling, but evidence of a term too broadly defined. Many disciplines often reach out beyond their initial borders, after catching on and gaining converts, but IA is going to the extreme. One technologist and designer I know even referred to this ever-growing set of definitions as the “IA land-grab”, referring to the tendency that all things Design are being redefined as IA.

You can tell when a role is suffering an identity crisis rather easily too. It is when people with the current role start to muse that the title no longer reflects what they do and call for new roles to better reflect the environment they find themselves in. Evidence for this exists further in Porter’s post. Check out the line I marked with bold below:

In addition, this shift is already happening to information architects, who, recognizing that information is only a byproduct of activity, increasingly adopt a different job title. Most are moving toward something in the realm of "user experience", which is probably a good thing because it has the rigor of focusing on the user's actual experience. Also, this is an inevitable move, given that most IAs are concerned about designing great things. IA Scott Weisbrod sees this happening too: "People who once identified themselves as Information Architects are now looking for more meaningful expressions to describe what they do – whether it's interaction architect or experience designer."

So while I used Information Architects as an example of how pace of change causes an identity crisis, the advent of the cloud doesn't actually cause too many IAs (or whatever they choose to call themselves) to lose much sleep. But there are other knowledge-worker roles that have not really felt the effects of change in the same way as their IA cousins. In fact, for the better part of twenty years, one group has actually benefited greatly from the pace of change. Only now is the ground under their feet starting to shift, and the resulting behaviours are starting to reflect the emergence of an identity crisis that some would say is long overdue.

IT Departments and the cloud

At a SharePoint Saturday in 2011, I was on a panel and we were asked by an attendee what effect Office 365 and other cloud-based solutions might have on a traditional IT infrastructure role. The person asking was an infrastructure guy, and his question was essentially about how his role might change as cloud solutions become more and more mainstream. Of course, all of the SharePoint nerds on the panel didn't want to touch that question with a bargepole, and all heads turned to me, since apparently I am "the business guy". My reply was that he was sensing a change – the commoditisation of certain aspects of IT roles. Did that mean he was going to lose his job? Unlikely, but nevertheless when change is upon us, many of us tend to place more value on what we will lose compared to what we will gain. Our defence mechanisms kick in.

But let's take this a little further: the average tech guy comes in two main personas. The first is the tech cowboy who documents nothing, half-completes projects then loses interest, is oblivious to how much he is in over his head and generally gives IT a bad name. They usually have a lot of intellectual intelligence (IQ), but not so much emotional intelligence (EQ). Ben Curry once referred to this group as "dumb smart guys." The second persona is the conspiracy theorist who had to clean up after such a cowboy. This person usually has more skills and knowledge than the first guy, writes documentation and generally keeps things running well. Unfortunately, they too can give IT a bad name. This is because, after having to pick up the pieces of something not of their doing, they tend to develop a mother hen reflex based on a pathological fear of being paged at 9pm to come in and recover something they had no part in causing. The aforementioned cowboys rarely last the distance, and therefore over time IT departments begin to act as risk minimisers rather than business enablers.

Now IT departments will never see it this way of course, instead believing that they enable the business because of their risk minimisation. Having spent 20 years as a paranoid, conspiracy-theorist, security-type IT guy, I totally get why this happens, as I was the living embodiment of this attitude for a long time. Technology is getting insanely complex, while users' innate ability to do really risky and dumb things is increasing. Obviously, such risk needs to be managed and accordingly, a common characteristic of such an IT department is the word "no" to pretty much any question that involves introducing something new (banning iPads or espousing the evils of DropBox are the best examples I can think of right now). When I wrote about this issue in the context of SharePoint user adoption back in 2008, I had this to say:

The mother hen reflex should be understood and not ridiculed, as it is often the user's past actions that have created the reflex. But once ingrained, the reflex can start to stifle productivity in many different ways. For example, an employee not being able to operate at full efficiency because they are waiting two days for a helpdesk request to be actioned is simply not smart business. Worse still, a vicious circle emerges. Frustrated with a lack of response, the user will take matters into their own hands to improve their efficiency. But this simply plays into the hands of the mother hen reflex, and for IT this reinforces the reason why such controls are needed. You just can't trust those dog-gone users! More controls required!

The long-term legacy of increasing technical complexity and risk is that IT departments become slow-moving and find it difficult to react to the pace of change. Witness the number of organisations still running parts of their business on Office 2003, IE6 and Windows XP. The rest of the organisation starts to resent using old tools and the imposition of process and structure for no tangible gain. The IT department develops a reputation for being difficult to deal with and taking ages to get anything done. This disconnect begins to fester, and little by little both IT and "the business" develop a rose-tinted view of themselves (known as groupthink) and a misguided perception of the other.

At the end of the day though, irrespective of logic or who has the moral high ground in the debate, an IT department with a poor reputation will eventually lose. This is because IT is no longer seen as a business enabler, but as a cost centre. Just as they did with the IT outsourcing fad over the last decade, organisational decision makers will read CIO magazine articles about server huggers and look longingly to the cloud, as applications become more sophisticated and more and more traditional vendors move into the space, thus legitimising it. IT will be viewed, however unfairly, as a burden where the cost is not worth the value realised. All the while, to conservative IT, the cloud represents some of their worst fears realised. Risk! Risk! Risk! Then the vicious circle of the mother-hen reflex will continue, because rogue cloud applications will be commissioned without IT knowledge or approval. Now we are back to the bad old days of rogue MS Access or SharePoint deployments that drove the call for control-based governance in the first place!

<nerd interlude>

Now, to the nerds reading this post who find it incredibly frustrating that their organisation will happily pump money into some cloud-based flight of fancy but whine when you want to upgrade the network: take note of this paragraph, as it is really (really) important! I will tell you the simple reason why people are more willing to spend money on fluffy marketing than on IT. In the eyes of a manager who needs to make a profit, sponsoring a conference or making the reception area look nice is seen as revenue generating. Those who sign the cheques do not like to spend capital on stuff unless they can see that it directly contributes to revenue generation! Accordingly, a bunch of servers (and for that matter, a server room) are often not considered expenditure that generates revenue, but are instead considered overhead! Overhead is something that any smart organisation strives to reduce to remain competitive. The moral of the story? Stop arguing cloud vs. internal on what direct costs are incurred, because people will not care! You would do much better to demonstrate to your decision makers that IT is not an overhead. Depending on how strong your mother hen reflex is and how long it has been in place, that might be an uphill battle.

</nerd interlude>

Defence mechanisms…

Like the poor old Information Architect, the rules of the game are changing for IT with regards to cloud solutions. I am not sure how it will play out, but I am already starting to see the defence mechanisms kicking in. There was a CIO interviewed in the “Server Huggers” article that I referred to earlier (Scott Martin) who was hugely pro-cloud. He suggested that many CIO’s are seeing cloud solutions as a threat to the empire they have built:

I feel like a lot of CIOs are in the process of a kind of empire building.  IT empire builders believe that maintaining in-house services helps justify their importance to the company. Those kinds of things are really irrational and not in the best interest of the company […] there are CEO’s who don’t know anything about technology, so their trusted advisor is the guy trying to protect his job.

A client of mine in Sydney told me he asked his IT department about the use of hosted SharePoint for a multi-organisational project, and the reply was a giant "hell no," based primarily on fear, uncertainty and doubt. With IT, such FUD is always cloaked in areas of quite genuine risk. There *are* many core questions that we must ask cloud vendors when taking the plunge, because not to do so would be remiss (I will end this post with some of those questions). But the key issue is whether the real underlying reason behind those questions is to shut down the debate or to genuinely understand the risks and implications of moving to the cloud.

How can you tell an IT department is likely using a FUD defence? Actually, it is pretty easy, because conservative IT is very predictable – they will likely try and hit you with what they think is their slam-dunk counter-argument first up. Therefore, they will attempt to bury the discussion with the US Patriot Act issue. I've come across this issue myself, and Mark Miller at FPWeb mentioned to me that it comes up all the time when they talk about SharePoint hosting to clients. (I am going to cover the Patriot Act issue in the next post because it warrants a dedicated post.)

If the Patriot Act argument fails to dent unbridled cloud enthusiasm, the next layer of defence is to highlight cloud-based security (identity, authentication and compliance) as well as downtime risk, citing examples such as the September outage of Office 365, SalesForce.com's well-publicised outages, and the Amazon outage that took out Twitter, Reddit, Foursquare, Turntable.fm, Netflix and many, many others. The fact that many IT departments do not actually have the level of governance and assurance of their own systems that they aspire to will be conveniently overlooked.

Failing that, the last line of defence is to call into question the commercial viability of cloud providers. We talked about the issues facing the smaller players in the last post, but it is not just them. What if the provider decides to change direction and discontinue a service? Google will likely be cited, since it has a habit of axing cloud-based services that don't reach critical mass (the most recent casualty is Google Health, being retired as I write this). The risk of a cloud provider going out of business or withdrawing a service is much more serious than a software supplier failing: at least when the application is on premise, you still have it running and can use it.

Every FUD defence is based on truth…

Now, as I stated above, all of the concerns listed above are genuine things to consider before embarking on a cloud strategy. Prudent business managers and CIOs must weigh the pros and cons of a cloud offering before rushing into a deployment that may not be appropriate for their organisation. Equally though, it's important to be able to see through a FUD defence when it's presented. The easiest way to do this is to do some of your own investigations first.

To that end, you can save yourself a heap of time by checking out the work of Richard Harbridge. Richard did a terrific cloud talk at the most recent Share 2011 conference. You can view his slide deck here, and I recommend really going through slides 48-81, which provide a comprehensive summary of considerations. Among other things, he offers a list of questions that any organisation should be asking providers of cloud services. I have listed some of them below and encourage you to check out his slide deck, as it covers far more than what I have covered here.

Security, Storage, Identity & Access

  • Who will have access to my data?
  • Do I have full ownership of my data?
  • What type of employee/contractor screening do you do before you hire them?
  • How do you detect if an application is being attacked (hacked), and how is that reported to me and my employees?
  • How do you govern administrator access to the service?
  • What firewalls and anti-virus technology are in place?
  • What controls do you have in place to ensure the safety of my data while it is stored in your environment?
  • What happens to my data if I cancel my service?
  • Can I archive environments?
  • Will my data be replicated to any other datacenters around the world (if yes, then which ones)?
  • Do you offer single sign-on for your services?
  • Do you offer Active Directory integration?
  • Do all of my users have to rely solely on web-based tools?
  • Can users work offline?
  • Do you offer a way for me to run your application locally, and how quickly can I revert to the local installation?
  • Do you offer on-premise, web-based, or mixed environments?
Reliability, Support & Performance

  • What is your Disaster Recovery and Business Continuity strategy?
  • What is the retention period and recovery granularity?
  • Is your cloud computing service compliant with [insert compliance regime here]?
  • What measures do you provide to assist compliance and minimise legal risk?
  • What types of support do you offer?
  • How do you ensure we are not affected by upgrades to the service?
  • What are your SLAs, and how do you compensate when they are not met?
  • How fast is the local network?
  • What is the storage architecture?
  • How many locations do you have and how are they connected?
  • Have you published any benchmark scores for your infrastructure?
  • What happens when there is oversubscription?
  • How can I ensure CPU and memory are guaranteed?

Conclusion and looking forward…

For some organisations, the lure of cloud solutions is very seductive. From a cost perspective, it saves a lot of capital expenditure. From a time perspective, it can be deployed very quickly, and from a maintenance perspective, it takes the burden away from IT. Sounds like a winner when put that way. But the real issue is that the changing cloud paradigm potentially impacts the wellbeing of some IT professionals and IT departments, because it calls into question certain patterns and practices within established roles. It also represents a loss of control and, as I said earlier, people often place a higher value on what they will lose compared to what they will gain.

Irrespective of this, whether you are a new age cloud loving CIO or a server hugger, any decision to move to the cloud should be about real business outcomes. Don’t blindly accept what the sales guy tells you. Understand the risks as well as the benefits. Leverage the work Richard has done and ask the cloud providers the hard questions. Look for real world stories (like my second and third articles in this series) which illustrate where the services have let people down.

For some, cloud will be very successful. For others, the gap between expectations and reality will come with a thud.

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.hereticsguidebooks.com


Why can’t people find stuff on the intranet?–Final summary


Hi

Those of you who get an RSS feed of this blog might have noticed it was busy over the last week. This is because I pushed out four blog posts that showed my analysis, using IBIS, of a detailed linear discussion on LinkedIn. To save people getting lost in the analysis, I thought I'd quickly post a bit of an executive summary of the exercise.

To set context, issue mapping is a technique for visually capturing rationale. It is graphically represented using a simple but powerful visual structure called IBIS (Issue Based Information System). IBIS allows all elements and rationale of a conversation to be captured in a manner that can be easily reflected upon. Unlike prose, which is linear, visually representing argument structure helps people form a better mental model of the nature of a problem or issue. Even better, a conversation captured this way makes it significantly easier to identify emergent themes or key aspects of an issue.
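To make that structure concrete, here is a minimal sketch of how a fragment of such a discussion hangs together as an IBIS tree. The `Node` type and `render` function are hypothetical illustrations of the idea only, not part of any IBIS tool or of the maps linked below:

```python
# A minimal sketch of an IBIS map as a data structure. The Node type and
# render() function are hypothetical illustrations, not any tool's API.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str              # "Question", "Idea", "Pro" or "Con"
    text: str
    children: list = field(default_factory=list)

def render(node, depth=0):
    """Flatten the map into an indented outline, one node per line."""
    lines = ["  " * depth + f"[{node.kind}] {node.text}"]
    for child in node.children:
        lines.extend(render(child, depth + 1))
    return lines

# A fragment of the LinkedIn discussion captured in IBIS form:
question = Node("Question", "Why can't users find content on the intranet?", [
    Node("Idea", "Poor information architecture", [
        Node("Pro", "Inconsistent vocabulary and acronyms"),
    ]),
    Node("Idea", "Inadequate governance", [
        Node("Pro", "Search analytics are not regularly reviewed"),
    ]),
])

print("\n".join(render(question)))
```

Because every response attaches to an explicit question or idea rather than sitting in a linear thread, the emergent themes can be read straight off the second level of the tree.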

You can find out all about IBIS and Dialogue Mapping in my new book, at the Cognexus site or the other articles on my blog.

The challenge…

On the Intranet Professionals group on LinkedIn recently, the following question was asked:

What are the main three reasons users cannot find the content they were looking for on intranet?

In all, there were more than 60 responses from various people with some really valuable input. I decided that it might be an interesting experiment to capture this discussion using the IBIS notation to see if it makes it easier for people to understand the depth of the issue/discussion and reach a synthesis of root causes.

I wrote 4 posts, each building on the last, until I had covered the full conversation. For each post, I supplied an analysis of how I created the IBIS map and then exported the maps themselves. You can follow those below:

Part 1 analysis: http://www.cleverworkarounds.com/2012/01/15/why-cant-users-find-stuff-on-the-intranet-in-ibis-synthesispart-1/
Part 2 analysis: http://www.cleverworkarounds.com/2012/01/15/why-cant-users-find-stuff-on-the-intranet-an-ibis-synthesispart-2/
Part 3 analysis: http://www.cleverworkarounds.com/2012/01/16/why-cant-users-find-stuff-on-the-intranet-an-ibis-synthesispart-3/
Part 4 analysis: http://www.cleverworkarounds.com/2012/01/16/why-cant-users-find-stuff-on-the-intranet-an-ibis-synthesispart-4/

Final map: http://www.cleverworkarounds.com/maps/findstuffpart4/Linkedin_Discussion__192168031326631637693.html

For what it's worth, the summary of themes from the discussion was that there were five main reasons for users not finding what they are looking for on the intranet.

  1. Poor information architecture
  2. Issues with the content itself
  3. People and change aspects
  4. Inadequate governance
  5. Lack of user-centred design

Within these areas, or “meta-themes”, there were various sub-issues. These are grouped below.

Poor information architecture

· Vocabulary and labelling issues
  - Inconsistent vocabulary and acronyms
  - Not using the vocabulary of users
  - Documents have no naming convention
· Poor navigation
· Lack of metadata
  - Tagging does not come naturally to employees
· Poor structure of data
  - Organisation-structure focus instead of user-task focus
  - The intranet’s lazy over-reliance on search

Issues with the content itself

· Old content not deleted
· Too much information of little value
· Duplicate or “near duplicate” content
· Information does not exist or is in an unrecognisable form

People and change aspects

· People with different backgrounds, language, education and biases all creating content
· Too much “hard drive” thinking
· People not knowing what they want
· Lack of motivation for contributors to make information easier to use
· Google-inspired inflated expectations of intranet search functionality
· Adopting social media from a hype-driven motivation

Inadequate governance

· Lack of governance/training around metadata and tagging
· Not regularly reviewing search analytics
· Poor and/or low-cost search engine deployed
· Search engine not set up properly or used to full potential
· Lack of “before the fact” coordination with business communications and training
· Comms and intranet don’t listen and learn from all levels of the business
· Ambiguous, under-resourced or misplaced intranet ownership
· The wrong content is being managed
· There are easier alternatives available

Lack of user-centred design

· Content is structured according to the view of the owners rather than the audience
· Not accounting for two types of visitors: task-driven and browse-based
· No social aspects to search
· Not making the search box available enough
· A failure to offer an entry-level view
· Not accounting for people who do not know what they are looking for versus those who do
· Not soliciting feedback from a user on a failed search about what was being looked for

Now that you have seen the final output, be sure to visit the maps and analysis to read about the journey of how this summary emerged. One thing is certain: it took me far longer to write about the process than to actually do it!

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.hereticsguidebooks.com


Why can’t users find stuff on the intranet? An IBIS synthesis–Part 4


Hi, and welcome to my final post on the LinkedIn discussion about why users cannot find what they are looking for on intranets. This time the emphasis is on synthesis… so let’s get the last few comments done, shall we?

Michael Rosager • @ Simon. I agree.
Findability and search can never be better than the content available on the intranet.
Therefore, non-existing content should always be number 1
Some content may not be published with the terminology or language used by the users (especially on a multilingual intranet). The content may lack the appropriate meta tags. – Or maybe you need to adjust your search engine or information structure. And there can be several other causes…
But the first thing that must always be checked is whether they sought information / data is posted on the intranet or indexed by the search engine.

Rasmus Carlsen • in short:
1: Too much content (that nobody really owns)
2: Too many local editors (with less knowledge of online-stuff)
3: Too much “hard-drive-thinking” (the intranet is like a shared drive – just with a lot of colors = a place you keep things just to say that you have done your job)

Nick Morris • There are many valid points being made here and all are worth considering.
To add a slightly different one I think too often we arrange information in a way that is logical to us. In large companies this isn’t necessarily the same for every group of workers and so people create their own ‘one stop shop’ and chaos.
Tools and processes are great but somewhere I believe you need to analyse what information is needed\valued and by whom and create a flexible design to suit. That is really difficult and begins to touch on how organisations are structured and the roles and functions of employees.

Taino Cribb • Hi everyone
What a great discussion! I have to agree to any and all of the above comments. Enabling users to find info can definately be a complicated undertaking that involves many facets. To add a few more considerations to this discussion:
Preference to have higher expectations of intranet search and therefore “blame” it, whereas Google is King – I hear this too many times, when users enter a random (sometimes misspelled) keyword and don’t get the result they wish in the first 5 results, therefore the “search is crap, we should have Google”. I’ve seen users go through 5 pages of Google results, but not even scroll down the search results page on the intranet.
Known VS Learned topics – metadata and user-tagging is fantastic to organise content we and our users know about, but what about new concepts where everyone is learning for the first time? It is very difficult to be proactive and predict this content value, therefore we often have to do so afterwards, which may very well miss our ‘window of opportunity’ if the content is time-specific (ie only high value for a month or so).
Lack of co-ordination with business communications/ training etc (before the fact). Quite often business owners will manage their communications, but may not consider the search implications too. A major comms plan will only go so far if users cannot search the keywords contained in that message and get the info they need. Again, we miss our window if the high content value is valid for only a short time.
I very much believe in metadata, but it can be difficult to manage in SP2007. Its good to see the IM changes in SP2010 are much improved.

Most of the next four comments covered old ground (a sure sign the conversation was now fairly well saturated). Nick says he is making “a slightly different” point, but I think the issue of structure not suiting a particular audience had been covered previously. I thought Taino’s reply was interesting because she focused on the issue of not accounting for known vs. learned topics and the notion of a “window of opportunity” in relation to appropriate tagging. Perhaps this reply was inspired by what Nick was getting at? In any event, adding it was a line call between governance and information architecture, and for now I chose the latter (I have a habit of changing my mind with this stuff :-).

image_thumb[12]

I also liked Taino’s point about user expectations of the “Google experience” and her examples, and I loved Rasmus’s earlier point about “hard-drive thinking” (I’m nicking that one for my own clients, Rasmus :-). Both of these are clearly people issues, so I added them as examples under that particular theme.

image_thumb[14]

Finally, I added Taino’s “lack of co-ordination” comments as another example of inadequate governance.

image_thumb[18]

Anne-Marie Low • The one other thing I think missing from here (other than lack of metadata, and often the search tool itself) is too much content, particularly out of date information. I think this is key to ensuring good search results, making sure all the items are up to date and relevant.

Andrew Wright • Great discussion. My top 3 reasons why people can’t find content are:
* Lack of meta data and it’s use in enabling a range of navigation paths to content (for example, being able to locate content by popularity, ownership, audience, date, subject, etc.) See articles on faceted classification:
http://en.wikipedia.org/wiki/Faceted_classification
and
Contextual integration
http://cibasolutions.typepad.com/wic/2011/03/contextual-integration-how-it-can-transform-your-intranet.html#tp
* Too much out-of-date, irrelevant and redundant information
See slide 11 from the following presentation (based on research of over 80 intranets)
http://www.slideshare.net/roowright/intranets2011-intranet-features-that-staff-love
* Important information is buried too far down in the hierarchy
Bonus 2 reasons 🙂
* Web analytics and measures not being used to continuously improve how information is structured
* Over reliance on Search instead of Browsing – see the following article for a good discussion about this
Browse Versus Search: Stumbling into the Unknown
http://idratherbewriting.com/2010/05/26/browse-versus-search-organizing-content-9/

Both Anne-Marie and Andrew make good points, and Andrew supplies some excellent links too, but all of these issues were already covered in the map, so nothing more was added from this part of the discussion.

Juan Alchourron • 1) that particular, very important content, is not yet on the intranet, because “the” director don’t understand what the intranet stands for.
2) we’re asuming the user will know WHERE that particular content will be placed on the intranet : section, folder and subfolder.
3) bad search engines or not fully configured or not enough SEO applied to the intranet

John Anslow • Nowt new from me
1. Search ineffective
2. Navigation unintuitive
3. Useability issues
Too often companies organise data/sites/navigation along operational lines rather than along more practical means, team A is part of team X therefore team A should be a sub section of team X etc. this works very well for head office where people tend to have a good grip of what team reports where but for average users can cause headaches.
The obvious and mostly overlooked method of sorting out web sites is Multi Variant Testing (MVT) and with the advent of some pretty powerful tools this is no longer the headache that it once was, why not let the users decide how they want to navigate, see data, what colour works best, what text encourages them to follow what links, in fact how it works altogether?
Divorcing design, usability, navigation and layout from owners is a tough step to take, especially convincing the owners but once taken the results speak for themselves.

Most of these points had already been well discussed, but I realised I had never made a reference to John’s point about organisational structures versus task-based structures for intranets. I had previously captured rationale around structures being inappropriate, so I added this as another example to that argument within information architecture…

image

Edwin van de Bospoort • I think one of the main reasons for not finding the content is not poor search engines or so, but simply because there’s too much irrelevant information disclosed in the first place.
It’s not difficult to start with a smaller intranet, just focussing on filling out users needs. Which usually are: how do I do… (service-orientated), who should I ask for… (corporate facebok), and only 3rd will be ‘news’.
So intranets should be task-focussed instead if information-focussed…
My 2cnts 😉

Steven Kent • Agree with Suzanne’s suggestion “Old content is not deleted and therefore too many results/documents returned” – there can be more than one reason why this happens, but it’s a quick way to user frustration.

Maish Nichani • It is interesting to see how many of us think metadata and structure are key to finding information on the intranet. I agree too. But come to think of it, staff aren’t experts in information management. It’s all very alien to them. Not too long ago, they had their desktops and folders and they could find their information when they wanted. All this while it was about “me and my content”. Now we have this intranet and shared folders and all of a sudden they’re supposed to be thinking about how “others” would like to find and use the information. They’ve never done this before. They’ve never created or organized information for “others”. Metadata and structure are just “techie” stuff that they have to do as part of their publishing, but they don’t know why they’re doing it or for what reason. They real problem, in my opinion, is lack of empathy.

Barry Bassnett • * in establishing a corporate taxonomy.1. Lack of relevance to the user; search produces too many documents.3. Not training people in the concept that all documents are not created by the individual for the same individual but as a document that is meant to be shared. e.g. does anybody right click PDFs to add metadata to its properties? Emails with a subject line stat describe what is in it.

Luc de Ruijter • @Maish. Good point about information management.
Q: Who’d be responsible to oversee the management of information?
Shouldn’t intranet managers/governors have that responsibility?
I can go along with (lack of) empathy as an underlying reason why content isn’t put away properly. This is a media management legacy reason: In media management content producers never had to have empathy with participating users, for there were only passive audiences.
If empathy is an issue. Then it proves to me that communication strategies are still slow to pick up on the changes in communication behaviour and shift in mediapower, in the digital age.
So if we step back from technological reasons for not finding stuff (search, meta, office automation systems etc.) another big reason looks around the corner of intranet management: those responsible for intranet policies and strategy.

Most of this part of the discussion covers ground already represented in the map, although I can see a preoccupation here with content and its relevance. Maish makes a couple of good points. First, staff are not experts in information management and don’t tend to think about how someone else might wish to find the information later. He concludes that the real problem is a lack of empathy. I liked this and felt it was a nice supporting argument for the conjecture that “people issues” are a major theme in this discussion, so I added it as a pro.

image

 

Now we come to an interesting part of the conversation (for me, anyway): Terry throws a curveball question. (Side note: curveball questions are usually asked with genuine intent, but tend to have a negative effect on live meetings. Dialogue Mapping loves curveball questions, as it is often able to deflect their negative impact.)

Terry Golding • Can I play devils advocate and ask WHY you feel meta data is so vital? Dont misunderstand me I am not saying that it is not important, but I cant help feeling that just saying meta data as a reason for not finding things is rather a simplification. Let me ask it another way, what is GOOD meta data, can you give examples please ?

Luc de Ruijter • @Terry. Good questions which can have many answers (see all comments above where you’ll find several answers already). Why do library books have labels on their covers? Those labels are in fact metadata (avant la lettre) which help library people ordering their collection, and clients to find titles. How do you create tag clouds which offer a more intuitive and user centered way to navigate a website/blog? By tagging all content with (structured) meta tags.Look around a bit and you’ll see that metadata are everywhere and that they serve you in browsing and retrieving content. That’s why metadata are vital these days.I think there are no strict right and good meta structures. Structures depend on organisational contexts. Some metastructures are very complex and formal (see comments about taxonomies above), others are quite simple.Metadata can enable users to browse information blocks. By comparisson navigation schemes can only offer rigid sender driven structures to navigate to pages.

Andrew Wright • @Terry. Meta data enables content to be found in a number of different ways – not just one as is typical of paper based content (and many intranets as well unfortunately).
For instance, if you advertise a house for sale you may have meta data about the house such as location, number of rooms and price. This then allows people to locate the house using this meta data (eg. search by number of bedrooms, price range, location). Compare this with how houses are advertised in newspapers (ie. by location only) and you can see the benefits of meta data.
For a good article about the benefits of meta data, read Card Sorting Doesn’t Cut the Custard:
http://www.zefamedia.com/websites/card-sorting-doesnt-cut-the-custard/
To read a more detailed example about how meta data can be applied to intranets, read:
Contextual integration: how it can transform your intranet
http://cibasolutions.typepad.com/wic/2011/03/contextual-integration-how-it-can-transform-your-intranet.html

Terry questions the notion of metadata, so I framed his question as a con against the previous metadata arguments. Both Luc and Andrew answer, and I think the line that most succinctly captures the essence of that answer is Andrew’s “Meta data enables content to be found in a number of different ways”. So I reframed it slightly as a pro supporting the notion that lack of metadata is one of the reasons users can’t find stuff on the intranet.
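Andrew’s real-estate analogy is easy to make concrete: with metadata, the same records can be reached along several different axes rather than just one. Here is a minimal sketch of faceted filtering; the data and field names are invented for illustration.

```python
# Faceted filtering: the same records become findable along several axes
# (location, bedrooms, price) instead of one. Data is illustrative only.

houses = [
    {"location": "Perth",  "bedrooms": 3, "price": 450_000},
    {"location": "Sydney", "bedrooms": 4, "price": 900_000},
    {"location": "Perth",  "bedrooms": 4, "price": 650_000},
]

def facet_search(records, **criteria):
    """Return records matching every supplied facet exactly."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# The same dataset answers very different questions:
print(facet_search(houses, location="Perth"))               # two matches
print(facet_search(houses, location="Perth", bedrooms=4))   # one match
```

A newspaper listing, by contrast, hard-codes a single axis (location), which is exactly the “one navigation path” problem Andrew describes.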

image

Next is yours truly…

Paul Culmsee • Hi all
Terry a devils advocate flippant answer to your devils advocate question comes from Corey Doctrow with his dated, but still hilarious essay on the seven insurmountable obstacles to meta-utopia 🙂 Have a read and let me know what you think.
http://www.well.com/~doctorow/metacrap.htm
Further to your question (and I *think* I sense the undertone behind your question)…I think that the discussion around metadata can get a little … rational and as such, rational metadata metaphors are used when they are perhaps not necessarily appropriate. Yes metadata is all around us – humans are natural sensemakers and we love to classify things. BUT usually the person doing the information architecture has a vested interest in making the information easy for you. That vested interest drives the energy to maintain the metadata.
In user land in most organisations, there is not that vested interest unless its on a persons job description and their success is measured on it. For the rest of us, the energy required to maintain metadata tends to dissipate over time. This is essentially entropy (something I wrote about in my SharePoint Fatigue Syndrome post)
http://www.cleverworkarounds.com/2011/10/12/sharepoint-fatigue-syndrome/

Bob Meier • Paul, I think you (and that metacrap post) hit the nail on the head describing the conflict between rational, unambiguous IA vs. the personal motivations and backgrounds of the people tagging and consuming content. I suspect it’s near impossible to develop a system where anyone can consistently and uniquely tag every type of information.
For me, it’s easy to get paralyzed thinking about metadata or IA abstractly for an entire business or organization. It becomes much easier for me when I think about a very specific problem – like the library book example, medical reports, or finance documents.

Taino Cribb • @Terry, brilliant question – and one which is quite challenging to us that think ‘metadata is king’. Good on you @Paul for submitting that article – I wouldn’t dare start to argue that. Metadata certainly has its place, in the absence of content that is filed according to an agreed taxonomy, correctly titled, the most recent version (at any point in time), written for the audience/purpose, valued and ranked comparitively to all other content, old and new. In the absence of this technical writer’s utopia, the closest we can come to sorting the wheat from the chaff is classifcation. It’s not a perfect workaround by any means, though it is a workaround.
Have you considered that the inability to find useful information is a natural by-product of the times? Remember when there was a central pool to type and file everything? It was the utopia and it worked, though it had its perceived drawbacks. Fast forward, and now the role of knowledge worker is disseminated to the population – people with different backgrounds, language, education and bias’ all creating content.
It is no wonder there is content chaos – it is the price we pay for progress. The best we as information professionals can do is ride the wave and hold on the best we can!

Now, my reply to Terry was essentially about the previously discussed issue of lack of motivation on the part of users to make their information easy to use. I added a pro to that existing idea, capturing my point that users who are not measured on accurate metadata have little incentive to put in the extra effort. Taino then refers to the pace of change more broadly with her “natural by-product of the times” comment. This made me realise my meta-theme of “people aspects” was not encompassing enough, so I retitled it “people and change aspects” and added two of Taino’s points as supporting arguments for it.

image

At this point I stopped, as enough had been captured and the conversation had definitely reached saturation point. It was time to look at what we had…

For those interested, the final map had 139 nodes.

The second refactor

At this point it was time to sit back and look at the map, to see whether my emergent themes were correct and to consolidate any conversational chaff. Almost immediately, the notion of “content” started to bubble to the surface of my thinking. A lot of the conversation and reiteration by various people related to the content being searched in the first place. I had some of that captured under information architecture, and in light of the final map, I felt this wasn’t right: information architecture topics dominated the maps, with 55 nodes, compared to 34 for people and change and 31 for governance.
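Those node counts are the kind of thing that falls out easily once a map is treated as data. A small sketch of tallying nodes per theme follows; the nested structure is illustrative only and is not the export format of any particular mapping tool.

```python
# Tally the size of each top-level theme in a map to see which themes
# dominate. The nested structure below is illustrative only; it is not
# the export format of Compendium or any other mapping tool.

example_map = {
    "Information architecture": {"children": [
        {"children": []},
        {"children": [{"children": []}]},
    ]},
    "People and change": {"children": [{"children": []}]},
}

def subtree_size(node):
    """Count a node plus all of its descendants."""
    return 1 + sum(subtree_size(child) for child in node["children"])

# Exclude the theme node itself so we count only its supporting rationale.
sizes = {theme: subtree_size(root) - 1 for theme, root in example_map.items()}
print(sizes)  # {'Information architecture': 3, 'People and change': 1}
```

A lopsided tally like the real one (55 vs. 34 vs. 31) is a useful prompt to ask whether an oversized theme is hiding a theme of its own, which is exactly what happened with content here.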

Accordingly, I took all of the captured rationale related to content and made it its own meta-theme as shown below…

image

Within the “Issues with the content being searched” map are the following nodes…

image

I also did a bit of fine-tuning here and there, and overall I was pretty happy with the map in its current form.

The root causes

If you have followed my synthesis of the dialogue from the discussion, it boiled down to five key recurring themes.

  1. Poor Information Architecture
  2. Issues with the content itself
  3. People and change aspects
  4. Inadequate governance
  5. Lack of user-centred design

I took the completed maps, exported the content to Word, and then pared things back further. This allowed me to create the summary below:

Poor Information Architecture

· Vocabulary and labelling issues
  - Inconsistent vocabulary and acronyms
  - Not using the vocabulary of users
  - Documents have no naming convention
· Poor navigation
· Lack of metadata
  - Tagging does not come naturally to employees
· Poor structure of data
  - Organisation-structure focus instead of user-task focus
  - The intranet’s lazy over-reliance on search

Issues with the content itself

· Old content not deleted
· Too much information of little value
· Duplicate or “near duplicate” content
· Information does not exist or is in an unrecognisable form

People and change aspects

· People with different backgrounds, language, education and biases all creating content
· Too much “hard drive” thinking
· People not knowing what they want
· Lack of motivation for contributors to make information easier to use
· Google-inspired inflated expectations of intranet search functionality
· Adopting social media from a hype-driven motivation

Inadequate governance

· Lack of governance/training around metadata and tagging
· Not regularly reviewing search analytics
· Poor and/or low-cost search engine deployed
· Search engine not set up properly or used to full potential
· Lack of “before the fact” coordination with business communications and training
· Comms and intranet don’t listen and learn from all levels of the business
· Ambiguous, under-resourced or misplaced intranet ownership
· The wrong content is being managed
· There are easier alternatives available

Lack of user-centred design

· Content is structured according to the view of the owners rather than the audience
· Not accounting for two types of visitors: task-driven and browse-based
· No social aspects to search
· Not making the search box available enough
· A failure to offer an entry-level view
· Not accounting for people who do not know what they are looking for versus those who do
· Not soliciting feedback from a user on a failed search about what was being looked for

The final maps

The final map can be found here (for those who truly like full context, I included an “un-chunked” map which would look terrific printed on a large plotter). Below, however, is a summary as best I can manage in blog-post format (click to enlarge). For a decent view of proceedings, visit this site.

Poor Information Architecture

part4map1

Issues with the content itself

part4map2

People and change aspects

part4map3

Inadequate governance

part4map4

Lack of user-centred design

part4map5

Thanks for reading. As an epilogue, I will post a summary with links to all maps and discussion.

Paul Culmsee

www.sevensigma.com.au


Why can’t users find stuff on the intranet? An IBIS synthesis–Part 3


Hi all

This is the third post in a quick series that attempts to use IBIS to analyse an online discussion. The map is getting big now but, luckily, we are halfway through the discussion and will have most of the rationale captured by the end of this post. We finished part 2 with a summary map that grouped the identified reasons why it is hard to find information on intranets into core themes. So far, four themes have emerged. In this post we will see whether any more emerge, and fully flesh out the existing ones. Below is our starting point for part 3.

part3map1_thumb5

The next two responses garnered more nodes in the map than most others, which I think is a testament to the quality of their input to the discussion. First up, Dan…

Dan Benatan • Having researched this issue across many diffferent company and departmental intranets, my most frequent findings are:
1. A complete lack of user-centred design. Content that many members of the organization need to access is structured according to the view of the content owners rather than the audience. This should come as no surprise, it remains the biggest challenge in public websites.
2. A failure to offer an entry level view. Much of the content held on departmental intranets is at a level of operational detail that is meaningless to those outside the team. The information required is there, but it is buried so deep in the documents that people outside the team can’t find it.
3. The intranet’s lazy over-reliance on search. Although many of us have become accustomed to using Google as our primary entry point to find content across the web, we may do this because we know we have no hope of finding the content through traditional navigation. The web is simply far too vast. We do not, however, rely purely on search once we are in the website we’ve chosen. We expect to be able to navigate to the desired content. Navigation offers context and enables us to build an understanding of the knowledge area as we approach the destination. In my research I found that most employees (>70%) try navigation first because they feel they understand the company well enough to know where to look.
4. Here I agree with many of the comments posted above. Once the user does try search, it still fails. The search engine returns too many results with no clear indication of their relative validity. There is a wealth of duplicate content on most intranets and , even worse, there is a wealth of ‘near duplicate’ content; some of which is accurate and up-to-date and much that is neither. The user has no easy way to know which content to trust. This is where good intranet management and good metadata can help.

My initial impression was that this was an excellent reply; Dan’s experience shone through it. I thought it one of the best contributions to the discussion thus far. Let’s see what I added, shall we?

First up, Dan returned to the user experience issue, which was one of the themes that had emerged. I liked his wording, so I changed the theme node “Inadequate user experience design” to Dan’s framing of “Lack of user-centred design”, which I thought was better put. I then added his point about content being structured to the world view of the owner rather than the audience. His second point about an “entry level view” relates to the first, in the sense that both are user-centred design issues, so I added the entry-level view as an example…

image_thumb14

I added Dan’s point about the intranet’s lazy over-reliance on search to the information architecture theme. I did this because he was discussing the relationship between navigation and search, and navigation had already come up as an information architecture issue.

image_thumb23

Dan’s final point about too many results returned was already covered previously, but he added a lot of valuable arguments around it. I restructured that section of the map somewhat and incorporated his input.

image_thumb6

Next we have Rob, who also made a great contribution (although not as concise as Dan’s).

Rob Faulkner • Wow… a lot of input, and a lot of good ideas. In my experience there can be major liabilities with all of these more “global” concepts, however.
No secret… Meta data is key for both getting your site found to begin with, as well as aiding in on-site search. The weak link in this is the “people aspect” of the exercise, as has been alluded to. I’ve worked on interactive vehicles with ungodly numbers of pages and documents that rely on meta data for visibility and / or “findability” (yes, I did pay attention in English class once in a while… forgive me), and the problem — more often than not — stems from content managers either being lazy and doing a half ass job of tagging, if at all, or inconsistency of how things are tagged by those that are gung-ho about it. And, as an interactive property gets bigger, so too does the complexity tagging required to make it all work. Which circles back to freaking out providers into being lazy on the one hand, or making it difficult for anyone to get it “right” on the other. Vicious circle. Figure that one out and you win… from my perspective.
Another major issue that was also alluded to is organization. For an enterprise-class site, thorough taxonomy / IA exercises must be hammered out by site strategists and THEN tested for relevance to target audiences. And I don’t mean asking targets what THEY want… because 9 times out of 10 you’re either going to get hair-brained ideas at best, or blank stares at worst. You’ve just got to look at the competitive landscape to figure out where the bar has been set, what your targets are doing with your product (practical application, OEMing, vertical-specific use, etc… then Test the result of your “informed” taxonomy and IA to ensure that it does, in fact, resonate with your targets once you’ve gotten a handle on it.
Stemming from the above, and again alluded to, be cautious about how content is organized in order to reflect how your targets see it, not how internal departments handle it. Most of the time they are not one in the same. Further, you’ve got to assume that you’re going to have at least two types of visitors… task-driven and browse-based. Strict organization by product or service type may be in order for someone that knows what they’re looking for, but may not mean squat to those that don’t. Hence, a second axis of navigation that organizes your solutions / products by industry, pain point, what keeps you up at night, or whatever… will enable those that are browsing, or researching, a back door into the same ultimate content. Having a slick dynamic back-end sure helps pull this off
Finally, I think a big mistake made across all verticals is what the content consists of to begin with. What you may think is the holy grail, and the most important data or interactive gadget in the world may not mean a hill-of-beans to the user. I’ve conducted enough focus groups, worldwide, to know that this is all typically out of alignment. I never cease to be amazed at exactly what it is that most influences decision makers.
I know a lot of this was touched upon by many of you. Sorry about that… damn thread is just getting too long to go back and figure out exactly who said what!
Cheers…

Now Rob was the first to explicitly mention “people aspects”, and I immediately realised that this was the real theme the earlier point about “Lack of motivation on the part of contributors…” was getting at. So I restructured the map so that “people aspects” became the key theme, with the previous “lack of motivation” point as an example of it. I then added Rob’s other examples.

image_thumb27
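The restructure just described is essentially a re-parenting operation on the map: an existing answer is demoted to become one example of a broader theme. As a rough sketch of the idea (my own illustration, not how issue-mapping tools such as Compendium actually store maps; the dict-of-lists representation and the exact node labels are assumptions):

```python
# Map represented as parent label -> list of child node labels
map_nodes = {
    "Why can't users find stuff?": ["Lack of motivation on the part of contributors"],
}

def promote_to_theme(tree, root, old_node, theme, extra_examples=()):
    """Replace old_node under root with a broader theme node,
    keeping old_node (plus any new examples) as children of that theme."""
    tree[root].remove(old_node)
    tree[root].append(theme)
    tree[theme] = [old_node, *extra_examples]
    return tree

promote_to_theme(
    map_nodes,
    "Why can't users find stuff?",
    "Lack of motivation on the part of contributors",
    "People aspects",
    extra_examples=["Half-hearted or inconsistent tagging"],
)
```

After the call, “People aspects” hangs off the root question and the original “lack of motivation” node sits beneath it as one example.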

After making his points around people aspects, Rob then covers some areas already well covered (metadata, content organisation), so I did not add any more nodes. But at the end he added a point about browse-oriented vs. search-oriented users, which I did add to the user-centred design discussion.

image_thumb33

Rob also made a point about users who know what they want when searching for information vs. those who do not. (In information architecture terms, this is called “known-item seeking” vs. “exploratory seeking”.) That had not been covered previously, so I added it to the Information Architecture discussion.

image_thumb31

Finally, I captured Rob’s point about the wrong content being managed in the first place. This is a governance issue, since the best information architecture or user experience design won’t matter a hoot if the right content isn’t being made available to begin with.

image_thumb32

Hans Leijström • Great posts! I would also like to add lack of quality measurements (e.g. number of likes, useful or not) and the fact that the intranets of today are not social at all…

Caleb Lamz • I think everyone has provided some great reasons for users not being able to find what they are looking for. I lean toward one of the reasons Bob mentions above – many intranets are simply not actively managed, or the department managing it is not equipped to do so.
Every intranet needs a true owner (no matter where it falls in the org chart) that acts a champion of the user. Call it the intranet manager, information architect, collaboration manager, or whatever you want, but their main job needs to be make life easier for users. Responsibilities include doing many of the things mentioned above like refining search, tweaking navigation, setting up a metadata structure, developing social tools (with a purpose), conducting usability tests, etc.
Unfortunately, with the proliferation of platforms like SharePoint, many IT departments roll out a system with no true ownership, so you end up with content chaos.

There was no need to add anything from Hans, as he was reiterating a previous comment about analytics that had already been captured. Caleb makes a good point about ownership of the content/intranet, which is a governance issue in my book, so I added his contribution there…

image

Dena Gazin • @Suzanne. Yes, yes, yes – a big problem is old content. Spinning up new sites (SharePoint) and not using, or migrating sites and not deleting old or duplicative content. Huge problem! I’m surprised more people didn’t mention this. Here’s my three:
1. Metadata woes (@Erin – lack of robust metadata does sound better as improvements can be remedied on multiple levels)
2. Old or duplicate content (Data or SharePoint Governance)
3. Poorly configured search engine
Bonus reason: Overly complicated UIs. There’s a reason people like Google. Why do folks keep trying to mess up a good thing? Keep it as simple as you possibly can. Create views for those who need more. 80/20 rule!

Dena’s points are a reiteration of previous ones, but I did like her “there is a reason people like Google” point, which I considered a nice supporting argument for the entire user-centred design theme.

image

Next up we have another group of discussions. What is interesting here is that there is some disagreement – and a lot of prose – but not much information was added to the map from it.

Luc de Ruijter • @Rob. Getting information and metastructures in place requires interactions with the owners of information. I doubt whether they are lazy or blank staring people – I have different experiences with engaging users in preparing digital working environments. People may stare back at you when you offer complete solutions they can say “yea” or “nay” to. And this is still common practice amogst Communication specialists (who like creating stuff themselves first and then communicate about them to others later). And if colleagues stare blank at your proposal, they obviously are resisting change and in need of some compelling communciation campaign…
Communication media legacy models are a root cause for failing intranets.
Tagging is indeed a complex excercise. And we come from a media-age in which fun predominated and we were all journalists and happy bunnies writing post after post, page after page, untill the whole cluttered intranet environment was ready again for a redesign.
Enterprise content is not media content, but enterprise content. Think about it (again please 🙂 ). If you integrate the storage process of enterprise content into the “saving as” routine, you’ll have no problems anymore with keeping your content clean and consistent. All wil be channeled through consistent routines. This doesn’t kill adding free personal meta though, it just puts the content in a enterprise structure. Think enterprise instead of media and tagging solutions are for grabs.
I agree that working on taxonomies can become a drag. Leadership and vison can speed up the process. And mandate of course.
I believe that the whole target rationale behind websites is part of the Communication media legacy we need to loose in order to move forward in better communication services to eployees. Target-thinking hampers the construction of effectve user centered websites, for it focusses on targets, persona, audiences, scenario’s and the whole extra paper media works.
While users only need flexibility, content in context, filters and sorting options. Filtering and sorting are much more effective than adding one navigation tree ater another. And they require a 180° turn in conventional communciation thinking.
@Caleb. Who manages an intranet?
Is that a dedicated team of intranet managers, previously known as content managers, previously known as communciation advisors, previously known as mere journalists? Or is intranet a community affair in which the community IS the manager of content? Surely you want a metamodel to be managed by a specialist. And make general management as much a non-silo activity as possible. Collaboration isn’t confined to silo’s so intranet shouldn’t be either.
A lot of intranets are run by a small group of ‘experts’ whose basic reasoning is that intranet is a medium like an employee magazine. If you want content management issues, start making such groups responsible for intranet.
In my experience intranets work once you integrate them in primary processes. Itranet works for you if you make intranet part of your work. De-medialise the intranet and you have more chance for sustainable success.
Rolling out Sharepoints is a bit like rolling back time. We’ll end up somewhere where we already were in 2001, when digital IC was IT policy. The fact that we are turning back to that situation is a good and worrying illustration of the fact that strategy on digital communications is lacking in the Communications department – otherwise they wouldn’t loose out to IT.
@Dena. I think your bonus reason is a specific Sharepoint reason. Buy Sharepoint and get a big bonus bag of bad design stuff with it – for free! An offer you can’t refuse 🙂

Luc de Ruijter • @Dena. My last weeks tweet about search: Finding #intranet content doesn’t start with #search #SEO. It starts with putting information in a content #structure which is #searchable. Instead of configuring your search engine, think about configuring your content first.

Once again Luc is playing the devil’s advocate role with some broader musings. I might have been able to add some of this to the map, but it was mostly going over old ground, or musings not directly related to the question being asked. This time around, Rob takes issue with some of his points and Caleb agrees…

Rob Faulkner • @Luc, Thanks for your thoughtful response, but I have to respectfully disagree with you on a few points. While my delivery may have been a bit casual, the substance of my post is based on experience.
First of all, my characterizations of users being 1) lazy or 2) blank staring we’re not related to the same topic. Lazy: in reference to tagging content. Blank Staring: related to looking to end users for organizational direction.
Lazy, while not the most diplomatic means of description, I maintain, does occur. I’ve experienced it, first hand. A client I’m thinking of is a major technology, Fortune 100 player with well over 100K tech-focused, internet savvy (for the most part) employees. And while they are great people and dedicated to their respective vocation, they don’t always tag documents and / or content-chunks correctly. It happens. And, it IS why a lot of content isn’t being located by targets — internally or externally. This is especially the case when knowledge or content management grows in complexity as result of content being repurposed for delivery via different vehicles. It’s not as simple as a “save as” fix. This is why I find many large sites that provide for search via pre-packed variables, — i.e. drop-downs, check-boxes, radio-buttons, etc — somewhat suspect, because if you elect to also engage in keyword index search you will, many times, come up with a different set of results. In other words, garbage in, garbage out. That being said, you asked “why,” not “what to do about it” and they are two completely different topics. I maintain that this IS definitely a potential “why.”
As far as my “blank stare” remark, it had nothing to do with the above, which you tied it to… but I am more than fluent in engaging and empowering content owners in the how’s and why’s of content tagging without confusing them or eliciting blank stares. While the client mentioned above is bleeding-edge, I also have vast experience with less tech-sophisticated entities — i.e. 13th-century country house hotels — and, hence, understand the need to communicate with contributors appropriate to what will resonate with them. This is Marketing 101.
In regard to the real aim of my “blank stare” comment, it is very germane to the content organization conversation in that it WILL be one of your results if you endeavour to ask end-users for direction. It is, after all, what we as experts should be bringing to the table… albeit being qualified by user sounding boards.
Regarding my thoughts on taxonomy exercises… I don’t believe I suggested it was a drag, at all. The fact is, I find this component of interactive strategy very engaging… and a means to create a defensible, differentiated marketing advantage if handled with any degree of innovation.
In any event, I could go on and on about this post and some of the assumptions, or misinterpretations, you’ve made, but why bother? When I saw your post initially, it occurred to me you were looking for input and perhaps insight into what could be causing a problem you’re encountering… hence the “why does this happen” tone. Upon reviewing the thread again, it appears you’re far more interested in establishing a platform to pontificate. If you want to open a discussion forum you may want to couch your topic in more of a “what are your thoughts about x, y, z?”… rather than “what could be causing x, y, z?” As professionals, if we know the causes we’re on track to address the problem.

Caleb Lamz • I agree with Rob, that this thread has gone from “looking for input” to “a platform to pontificate”. You’re better off making this a blog post rather than asking for input and then making long and sometimes off the cuff remarks on what everyone else has graciously shared. It’s unproductive to everyone when you jump to conclusions based on the little information that other users can provide in a forum post.

Luc de Ruijter • The list:
Adopting social media from a hype-driven motivation (lack of coherence)
big problem with people just PDFing EVERYTHING instead of posting HTML pages
Comms teams don’t listen and learn from all levels of the business
Content is not where someone thought it would be or should be or its not called what they thought it was called or should be called.
content is titled poorly
content managers either being lazy and doing a half ass job of tagging
content they are trying to find is out of date, cannot be trusted or isn’t even available on the intranet.
Documents have no naming convention
failure to offer an entry level view
inconsistency of how things are tagged
Inconsistent vocabulary and acronyms
info is organised by departmental function rather than focussed on end to end business process.
information being searched does not actually exist or exists only in an unrecognisable form and therefore cannot be found!
intranet’s lazy over-reliance on search
intranets are simply not actively managed, or the department managing it is not equipped to do so.
intranets of today are not social at all
just too much stuff
Lack of content governance, meta-data and inconsistent taxonomy, resulting in poor search capability.
Lack of measuring and feedback on (quality, performance of) the intranet
Lack of metadata
lack of motivation on the part of contributors to make their information easy to use
lack of quality measurements (e.g. number of likes, useful or not
lack of robust metadata
lack of robust metadata, resulting in poor search results;
lack of user-centred design
main navigation is poor
not fitting the fact that there are at least two types of visitors… task-driven and browse-based
Not making the search box available enough
Old content is not deleted and therefore too many results/documents returned
Old or duplicate content (Data or SharePoint Governance)
Overly complicated UIs
Poor navigation, information architecture and content sign-posting
Poorly configured search engine
proliferation of platforms like SharePoint
relevance of content (what’s hot for one is not for another)
Search can’t find it due to poor meta data
Search engine is not set up correctly
search engine returns too many results with no clear indication of their relative validity
structure is not tailored to the way the user thinks

Luc de Ruijter • This discussion has produced a qualitative and limited list of root causes for not finding stuff. I think we can all work with this.
@Rob & @Caleb My following question is always what to do after digesting and analysing information. I’m after solutions, that;s why I asked about root causes (and not symptoms). Reading all the comments triggers me in sharing some points of view. Sometimes that’s good to fuel the conversation sometimes. For if there is only agreement, there is no problem. And if there is no problem, what will we do in our jobs? If I came across clerical, blame it on Xmas.
Asking the “what to do with this input?” is pehaps a question for another time.

The only thing I added to the map from this entire exchange was Rob’s point about the lack of social aspects to search. I thought this was interesting because of an earlier assertion that applying social principles to an intranet causes more silos. It seems Luc and Rob have differing opinions on this point.

image

Where are we at now?

At this point, we are almost at the end of the discussion. In this post, I added 25 nodes against 10 comments. Nevertheless, we are not done yet. In part 4 I will conclude the synthesis of the discussion and produce a final map. I’ll also export the map to MS Word, summarising the discussion as it happened. Like the last three posts, you can click here to see the maps exported in more detail.
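Exporting a map to a linear document boils down to walking the tree and indenting each node by its depth. A minimal sketch of that traversal (my own illustration with invented theme labels; the actual export to Word was done by the mapping tool, not by code like this):

```python
def to_outline(tree, node, depth=0):
    """Render a nested map as an indented text outline,
    one line per node, two spaces per level of depth."""
    lines = ["  " * depth + node]
    for child in tree.get(node, []):
        lines.extend(to_outline(tree, child, depth + 1))
    return lines

# A tiny fragment of the themes, purely for illustration
themes = {
    "Why can't users find stuff?": ["Information architecture", "People aspects"],
    "Information architecture": ["Lack of metadata"],
}
outline = to_outline(themes, "Why can't users find stuff?")
```

Running this yields an outline in which each child appears beneath its parent at one deeper level of indentation.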

Four major themes have emerged: information architecture, people aspects, inadequate governance and lack of user-centred design. The summary maps for each of these areas are below (click to enlarge):

Information Architecture

part3map2[5]

People aspects

part3map3[5]

Inadequate Governance

part3map4[5]

Lack of user-centred design

part3map5[5]

Thanks for sticking with me thus far – almost done now…

Paul Culmsee

CoverProof29

www.sevensigma.com.au



Why can’t users find stuff on the intranet? An IBIS synthesis–Part 2


Hi all

This is the second post in a quick series that attempts to use IBIS to analyse an online discussion. Strange as it may sound, I believe that issue mapping and IBIS is one of the purest forms of information architecture you can do. This is because, as a mapper, you are creating a navigable mental model of speech as it is uttered live. This post is semi-representative of that: I am creating an IBIS-based issue map, but I’m not interacting live with participants. Nevertheless, imagine, if you will, sitting in a room with a group of stakeholders answering the question of why users cannot find what they are looking for on the intranet. Can you see its utility in creating shared understanding of a multifaceted issue?

Where we left off…

We finished the previous discussion with a summary map that identified several reasons why it is hard to find information on intranets. In this post we will continue our examination of this topic. What you will notice is that the number of nodes I capture is significantly smaller than in part 1. This is because some topics start to become saturated, and people’s contributions repeat what has already been captured. In part 1, I captured 55 nodes from the first 11 replies to the question. In this post, I capture an additional 33 nodes from the next 15 replies.

image_thumb72

So without further ado, let’s get into it!

Suzanne Thornley • Just another few to add (sorry 5 not 3 :-):
1. Search engine is not set up correctly or used to full potential
2. Old content is not deleted and therefore too many results/documents returned
3. Documents have no naming convention and therefore it is impossible to clearly identify what they are and if they are current.
4. Not just a lack of metadata but also a lack of governance/training around metadata/meta tagging so that less relevant content may surface because the tagging and metadata is better.
5. Poor and/or low cost search engine is deployed in the mistaken belief that users will be happy/capable of finding content by navigating through a complex intranet structure.

Suzanne offered five additional ideas to the original map from where we last left off. She was also straight to the point, which always makes a mapper’s job of expressing things in IBIS easier. You might notice that I reversed “Old content is not deleted and therefore too many results/documents returned” in the resulting map. This is because I felt that old content not being deleted was one of a number of arguments supporting the idea that too many results are returned.

image_thumb[26]

My first map refactor

With the addition of Suzanne’s contributions, I felt it was a good time to take stock and adjust the map. First up, a lot of topics were starting to revolve around the notions of information architecture, governance and user experience design. So I grouped the themes of vocabulary, lack of metadata, excessive results and issues around the structure of data under a meta-theme of “information architecture”. I similarly grouped a number of answers under “governance” and “user experience design”. These, for me, seemed to be the three meta-themes emerging so far…

For the trainspotters, Suzanne’s comment about document naming conventions was added to the “Vocabulary and labelling issues” sub-map. You can’t see it here because I collapsed the detail so you can see the full picture of the themes as they stand at this point.

part2map1

Patrik Bergman • Several of you mention the importance of adding good metadata. Since this doesn’t come natural to all employees, and the wording they use can differ – how do you establish a baseline for all regarding how to use metadata consistently? I have seen this in a KM product from Layer 2 for example, but it can of course be managed without this too, but maybe to a higher cost, or?

Patrik’s comment was a little hard to map. I captured his point that metadata does not come naturally to employees as a pro supporting the idea that lack of metadata is an example of poor information architecture. The other points I opted to leave off, because they were not really related to the core question of why people can’t find stuff on the intranet.

image_thumb[7]

Luc de Ruijter • @Patrik. Metadata are crucial. I’ve been using them since 2005 (Tridion at that time).You can build a lot of functionality with it. And it requires standardisation. If everyone adds his own meta, this will not enable you to create solutions. You can standardize anything in any CMS. So use your CMS to include metadata. If you have a DMS the same applies. (DMS are a more logical tool for intranets, as most enterprise content exists as documenst. Software such as LiveLink can facilitate adding meta in the save as process. You just have to tick some fields before you can save a document on to the intranet.)
@Suzanne. There’s been a lot of buzz about governance. You don’t need governance over meta, you just need a sound metastructure (and a dept of function to manage it – such as library of information management). Basically a lot of ‘governance’ can be automated instead of being discussed all the time :-).

Like Patrik’s comment, much of what Luc said here wasn’t really related to the question at hand, or had been captured already. But I did acknowledge his contribution to the governance debate, and he specifically argued against Suzanne’s point about lack of governance around metadata tagging.

image_thumb[11]
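Luc’s argument that much metadata “governance” can be automated with a sound metastructure has a simple mechanical core: validate each document’s tags against a controlled vocabulary at save time, so inconsistent values never enter the system. A hedged sketch of the idea (field names and vocabulary values are invented for illustration; products such as LiveLink or SharePoint implement this in their own ways):

```python
# A controlled vocabulary: the only values a tagger may choose from
VOCABULARY = {
    "department": {"finance", "hr", "engineering"},
    "doc_type": {"minutes", "policy", "report"},
}

def validate_metadata(metadata):
    """Return a list of problems with a document's metadata,
    empty if every required field uses an allowed value."""
    problems = []
    for field, allowed in VOCABULARY.items():
        value = metadata.get(field)
        if value is None:
            problems.append(f"missing required field: {field}")
        elif value not in allowed:
            problems.append(f"{field}={value!r} not in controlled vocabulary")
    return problems
```

In a “save as” routine of the kind Luc describes, the document would only be accepted once this check returns an empty list.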

Next we have a series of answers, but you will notice that most of them reiterate points that have already been made.

Patrik Bergman • Thanks Luc. It seems SharePoint gives us some basic metadata handling, but perhaps we need something strong in addition to SharePoint later.

Simon Evans • My top three?
1) The information being searched does not actually exist or exists only in an unrecognisable form and therefore cannot be found!
2) As Karen says above, info is organised by departmental function rather than focussed on end to end business process.
3) Lack of metadata as above

Mahmood Ahmad • @Simon evan. I want to also add Poor Information Structure in the list. Therefore Information Management should be an important factor.

Luc de Ruijter • @Patrik. Sharepoint 2010 is the first version that does something with it. Ms is a bit slow in pushing the possibilities with it.
@Simon @Mahmood Let’s say that information structure is the foundation for an intranet (or any website), and that a lack of metadata is only a symptom of a bad foundation?

Patrik Bergman • Good thing we use the 2010 version then 😀 I will see how good it handles it, and see if we need additional software.

Erin Dammen • I believe 1) lack of robust metadata, resulting in poor search results; 2) structure is not tailored to the way the user thinks; 3) lack of motivation on the part of contributors to make their information easy to use (we have a big problem with people just PDFing EVERYTHING instead of posting HTML pages.) I like that in SP 2010, users have the power to add their own keywords and flag pages as "I like it." Let your community do some of the legwork, I think it helps!

Simon’s first point, that the information searched for may not exist or may not be in the right format, was new, so that was captured under governance. (After all, it’s hard to architect information when it isn’t there!)

image_thumb[16]

I also added Erin’s third point about lack of motivation on the part of contributors. I mulled over this and decided it was a new theme, so I added it to the root question, rather than trying to make it fit into information architecture, governance or user experience design. I also captured her point on letting the community do the legwork through user tagging (known as folksonomy).

image_thumb[19]
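Erin’s folksonomy point – letting the community do the legwork – has a simple mechanical core: if many users tag the same page, the most frequent tags become de facto metadata. A rough sketch of that tallying idea (the data is invented, and SharePoint 2010’s keyword feature works differently under the hood):

```python
from collections import Counter

# Tags applied to one page by different users
user_tags = [
    "expenses", "expense-report", "expenses", "travel",
    "expenses", "expense-report",
]

def consensus_tags(tags, min_count=2):
    """Keep only tags that at least min_count users agreed on,
    most popular first."""
    counts = Counter(tags)
    return [tag for tag, n in counts.most_common() if n >= min_count]
```

Here one-off tags like "travel" are filtered out, while tags several users converge on survive as community-endorsed metadata.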

Luc de Ruijter • @all. The list of root causes remains small. This is not surprising (it would be really worrying if the list of causes would be a long list). And it is good to learn that we encounter the same (few but not so easy to solve) issues.
Still, in our line of work these root causes lack overall attention. What could be the reason for that? 🙂
@Erin Motivation is not the issue, I think; and facilitation is. If it is easier to PDF everything, than everyone will do so. And apparently everyone has the tools to do so. (If you don’t want people to PDF stuff, don’t offer them the quick fix.)
If another method of sharing documents is easier, then people will migrate. How easy is it to find PDF’s through search? How easy is it to add metadata to PDF’s? And are colleagues explained why consistent(!) meta is so relevant? Can employees add their own meta keywords? How do you maintain the quality and integrity of your keywords?
Of course it depends on your professional usergroup whether professionals will use "I like" buttons. Its a bit on the Facebook consumer edge if you’d ask me. Very en vogue perhaps, but in my view not so business ‘like’.

Luc, who is playing the devil’s advocate role as this discussion progresses, provides three counter-arguments to Erin’s argument around user motivation. They are all captured as cons.

image_thumb[21]

Steven Osborne • 1) Its not there and never was
2) Its there but inactive so can no longer be accessed
3) Its not where someone thought it would be or should be or its not called what they thought it was called or should be called.

Marcus Hamilton-Mills • 1) The main navigation is poor
2) The content is titled poorly (e.g internal branding, uncommon wording, not easy to differentiate from other content etc.)
3) Search can’t find it due to poor meta data

patrick c walsh • 1) Navigation breaks down because there’s too much stuff
2) There’s too much crap content hidden away because there’s just too much stuff
and
3) er…there’s just too much stuff

Mark Smith • 1. Poor navigation, information architecture and content sign-posting
2. Lack of content governance, meta-data and inconsistent taxonomy, resulting in poor search capability.
3. The content they are trying to find is out of date, cannot be trusted or isn’t even available on the intranet

Luc de Ruijter • @Steven Had a bit of a laugh there
@all Am I right in making the connection between
– the huge amount of content is an issue
– that internal branding causes confusion (in labeling and titles).
and
the fact that – in most cases – these causes can be back tracked to the owners of intranet, the comms department? They produce most content clutter.
Or am I too quick in drawing that conclusion?

Now the conversation is really starting to saturate. Most of the contributions above are already captured in the map, so I only added two nodes: Patrick’s point about navigation (an information architecture issue) and the problem of too much information.

image_thumb[25]

Where are we at now?

We will end part 2 with a summary below. Like the first post, you can click here to see the maps exported in more detail. In part 3, the conversation got richer again, so the maps will change once again.

Until then, thanks for reading

Paul Culmsee

CoverProof29

www.sevensigma.com.au

part2map2



Why can’t users find stuff on the intranet? An IBIS synthesis–Part 1


Hi

There was an interesting discussion on the Intranet Professionals group on LinkedIn recently where Luc De Ruijter asked the question:

What are the main three reasons users cannot find the content they were looking for on intranet?

As you can imagine there were a lot of responses, and a lot more than three answers. As I read through them, I thought it might be a good exercise to use IBIS (the language behind issue mapping) to map the discussion and see what the collective wisdom of the group has to say. So in these posts, I will illustrate the utility of IBIS and Issue mapping for this work, and make some comments about the way the conversation progressed.

So what is IBIS and Issue/Dialogue Mapping?

Issue Mapping captures the rationale behind a conversation or dialogue—the emergent ideas and solutions that naturally arise from robust debate. This rationale is graphically represented using a simple, but powerful, visual structure called IBIS (Issue Based Information System). This allows all elements and rationale of a conversation, and subsequent decisions, to be captured in a manner that can be easily reflected upon.

The elements of the IBIS grammar are below. Questions give rise to ideas, or potential answers. Ideas have pros or cons arguing for or against those ideas.

image_thumb81
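The grammar above can also be expressed as a small data structure: questions hold ideas, and ideas hold pros and cons. Here is a minimal sketch (Python is purely my choice for illustration, and the sample content is paraphrased from the discussion; issue-mapping tools like Compendium do not store maps this way):

```python
from dataclasses import dataclass, field

# The IBIS node types: a question is answered by ideas,
# and each idea carries pros and cons arguing for or against it.
@dataclass
class Argument:
    text: str
    kind: str  # "pro" or "con"

@dataclass
class Idea:
    text: str
    arguments: list = field(default_factory=list)

@dataclass
class Question:
    text: str
    ideas: list = field(default_factory=list)

# A tiny fragment of the discussion expressed in IBIS form
q = Question("Why can't users find content on the intranet?")
metadata = Idea("Lack of metadata")
metadata.arguments.append(
    Argument("Thousands of documents called 'agenda and minutes' defeat any search engine", "pro"))
q.ideas.append(metadata)
```

Every contribution in the dialogue lands somewhere in this structure, which is what makes the map a complete record of the conversation.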

Dialogue Mapping is essentially Issue Mapping a conversation live, where the mapper is also a facilitator. When it is done live it is powerful stuff. As participants discuss a problem, they watch the IBIS map unfold on the screen. This allows participants to build shared context, identify patterns in the dialogue and move from analysis to synthesis in complex situations. What makes this form of mapping compelling is that everything is captured. No idea, pro or con is ignored. In a group scenario, this is an extremely efficient way of meeting what social psychologist Hugh Mackay says is the first of the ten human desires that drive us: the desire to be taken seriously. Once an idea is mapped, the idea and the person who put it forth are taken seriously. This process significantly reduces “wheel spinning” in meetings, where groups get caught up in a frustrating tangled mess of going over the same old ground. It also allows the dialogue to move more effectively to decision points (commitments) around a shared understanding.

In this case, though, the discussion was a long one on a LinkedIn group, so we do not get the benefit of mapping live. Instead, I will create a map representing the conversation as it progresses and make some comments here and there…

So let’s kick off with the first reply from Bob Meier.

Bob Meier • Don’t know if these are top 3, but they’re pretty common find-ability issues:
1. Lack of metadata. If there are 2000 documents called “agenda and minutes” then a search engine, fancy intranet, or integrated social tool won’t help.
2. Inconsistent vocabulary and acronyms. If you’ve branded the expense report system with some unintuitive name (e.g. a vendor name like Concur) then I’ll scan right past a link looking for “expense reports” or some variation.
3. Easier alternatives. If it’s easier for me to use phone/email/etc. to find what I want, then I won’t take the time to learn how to use the intranet tools. Do grade schools still teach library search skills? I don’t think many companies do…

In IBIS this was fairly straightforward. Bob listed his three answers with some supporting arguments. I reworded his supporting argument for point 2, but otherwise the map pretty much reflects what was said…

image_thumb2

Nigel Williams (LION) • I agree with Bob but I’d add to point two not speaking our user base’s language. How many companies offer a failure to find for example (i.e.if you fail to find something in a search you submit a brief form which pops up automatically stating what you were looking for and where you expected to find it? Lots of comms and intranet teams are great at telling people and assuming we help them to learn but don’t listen and learn from all levels of the business.
If I make that number 1 I’ll also add:
2) Adopting social media because everyone else is, not because our business or users need it. This then ostracises the technophobics and concerns some of our less confident regular users. They then form clans of anti-intranetters and revert to tried and tested methods pre-intranet (instant messaging, shared drives, email etc.)
3) Not making the search box available enough. I’m amazed how many users in user testing say they’ve never noticed search hidden in the top right of the banner – “ebay has their’s in the middle of the screen, so does Google. Where’s ours?” is a typical response. If you have a user group at your mercy ask them to search for an item on on Google, then eBay, then Amazon, then finally your intranet. Note whether they search in the first three and then use navigation (left hand side or top menu) when in your intranet.

Nigel starts out by supporting Bob’s answers, so I add them as pros in the map. Having done this though, I can already see some future conversational patterns. Bob’s two supporting arguments for “not using the vocabulary of users” are actually two related issues: one is about user experience and the other is about user engagement/governance. Nevertheless, I have mapped it as he stated it at this point and we will see what happens.

image_thumb3

Luc de Ruijter • @Bob. I recognise your first 2 points. The third however might be a symptom or result, not a cause. Or is it information skills you are refering to?
How come metadata are not used? Clearly there is a rationale to put some effort in this?
@Nigel. Is the situation in which Comm. depts don’t really listen to users a reason for not finding stuff? Or would it be a lack of rapport with users before and while building intranets? Is the cause concepetual, rather than editorial for instance?
(I’m really looking for root causes, the symptoms we all know from daily experience).
Adding more media is something we’ve seen for years indeed. Media tend to create silo’s.
Is your third point about search or about usability/design?

In following sections I will not reproduce the entire map in the blog post – just relevant sections.

In this part of the conversation, Luc doesn’t add any new answers to the root question, but queries three that have been put forward thus far. Note that at this point I believe one of Luc’s replies addresses a different question: Bob’s “easier alternatives” point was never about metadata, yet Luc asks “how come metadata are not used?”. I have added it to the map here, changing the framing from a rhetorical question to an action. Had I been facilitating this conversation live, I would have clarified that point before committing it to the map.

image_thumb27

Luc also indicates that the issue around communications and intranet teams not listening might be due to a lack of rapport.

image_thumb25

Finally, he adds an additional argument for why social media may not be the utopia it is made out to be: adding more media channels creates more information silos. He also argues against the entire notion on the grounds that this is a usability issue rather than a search issue.

image_thumb80

Nigel Williams (LION) • Hi Luc, I think regarding Comms not listening that it is two way. If people are expecting to find something with a certain title or keyword and comms aren’t recognising this (or not providing adequate navigation to find it) then the item is unlikely to be found.
Similarly my third point is again both, it is an issue of usability but if that stops users conducting searches then it would impact daily search patterns and usage.

I interpret this reply as Nigel arguing against Luc’s assertion that a lack of rapport is the reason intranet and comms teams don’t listen to and learn from all levels of the user base.

image_thumb34

Nigel finishes by arguing that even if social media issues are usability issues, they might still impede search and the idea is therefore valid.

image_thumb84

Bob Meier • I really like Nigel’s point about the importance of feedback loops on Intranets, and without those it’s hard to build a system that’s continually improving. I don’t have any data on it, but I suspect most companies don’t regularly review their search analytics even if they have them enabled. Browse-type searching is harder to measure/quantify, but I’d argue that periodic usability testing can be used in place of path analysis.
I also agree with Luc – my comment on users gravitating from the Intranet to easier alternatives could be a symptom rather than a cause. However, I think it’s a self-reinforcing symptom. When you eliminate other options for finding information, then the business is forced to improve the preferred system, and in some cases that can mean user training. Not seeing a search box is a great example of something that could be fixed with a 5-minute Intranet orientation.
If I were to replace my third reason, I’d point at ambiguous or mis-placed Intranet ownership . Luc mentions Communications departments, but in my experience many of those are staffed for distributing executive announcements rather than facilitating collective publishing and consumption. I’ve seen many companies where IT or HR own the Intranet, and I think the “right” department varies by company. Communications could be the right place depending on how their role is defined.

Bob makes quite a number of points in this answer, across various elements of the unfolding discussion. First, he makes a point about analytics: a lack of feedback loops makes it hard to build a system that continually improves.

image_thumb45

In terms of the discussion around easier alternatives, Bob offers some strategies to mitigate the issue. He notes that there are training implications when eliminating the easier alternatives.

image_thumb41

Finally, Bob identifies issues around the ownership of the intranet as another answer to the original question of people not being able to find stuff on the intranet. He also lists a couple of common examples.

image_thumb69

Karen Glynn • I think the third one listed by Bob is an effect not a cause.
Another cause could be data being structured in ways that employees don’t understand – that might be when it is structured by departments, so that users need to know who does what before they can find it, or when it is structured by processes that employees don’t know about or understand. Don’t forget intranet navigations trends are the opposite to the web – 80% of people will try and navigate first rather than searching the intranet.

In this answer, Karen starts by agreeing with Luc’s point about “easier alternatives” being a symptom rather than a cause, so there is no need to add it to the map as it is already there. However, she provides a new answer to the original question: the structure of information (this, by the way, is called top-down information architecture, and it was bound to come up in this discussion eventually). She also claims that 80% of people will navigate before searching on an intranet. I wonder if you can tell what will happen next? 🙂

image_thumb49

Luc de Ruijter • @Nigel Are (customer) keywords the real cause for not finding stuff? In my opinion this limits the chalenge (of building effective intranet/websites) to building understandable navigation patters. But is navigation the complete story? Where do navigation paths lead users to?
@Bob Doesn’t an investiment in training in order to have colleagues use the search function sound a bit like attacking the symptom? Why is search not easy to locate in the first place? I’d argue you’re looking at a (functional) design flaw (cause) for which the (where is the search?) training is a mere remedy, but not a solution.
@Karen You mention data. How does data relate to what we conventionally call content, when we need to bring structure in it?
Where did you read the 80% intranet-users navigate before searching?

Okay, so this is the first time thus far that I have done a little map restructuring. In the discussion so far, we had two ideas offered around the common notion of vocabulary. In this reply, Luc asks, “Are (customer) keywords the real cause for not finding stuff?” I wasn’t sure which vocabulary issue he was referring to, so this prompted me to create a “meta idea” called “vocabulary and labelling issues”, with the two ideas cited thus far as examples. This allowed me to capture the essence of Luc’s comment as a con against the core idea of issues around vocabulary and labelling.

image_thumb52

Luc then calls into question Bob’s suggestion of training users and eliminating the easier alternatives. Prior to Luc’s counter-arguments, I had structured Bob’s argument like this:

image_thumb58

To capture Luc’s argument effectively, I restructured the original argument and made a consolidated idea to “eliminate other options and provide training”. This allowed me to capture Luc’s counter argument as shown below.

image_thumb55

Finally, Luc asks Karen for the source of her contention that 80% of users navigate intranets rather than using the search engine first up.

image_thumb61

In this final bit of banter for now, the next three replies did not add many nodes to the map, so I have grouped them below…

Karen Glynn • Luc, the info came from the Neilsen group.

Helen Bowers • @Karen Do you know if the Neilsen info is available for anyone to look at?

Karen Glynn • I don’t know to be honest – it was in one of the ‘paid for’ reports if I remember correctly.

Luc de Ruijter • @Karen. OK in that case, could you provide us with the title and page reference of the source? Than it can become usable as a footnote (in a policy for instance).Thanks
Reasons so far for not finding stuff:
1. Lack of metadata (lack of content structure).
2. Inconsistent vocabulary and acronyms (customer care words).
3. Adopting social media from a hype-driven motivation (lack of coherence)
4. Bad functional design (having to search for the search box)
5. Lack of measuring and feedback on (quality, performance of) the intranet
6. Silo’s. Site structures suiting senders instead of users

So for all that banter, here is what I added to what had already been captured.

image_thumb65

Where are we at?

At this point, let’s take a breath and summarise what has been discussed so far. Below is the summary map with the core answers to the question thus far. I have deliberately tucked the detail away into sub-maps so you can see what is emerging. Please note I have not synthesised this map yet (well… not too much anyway). I’ll do that in the next post.

image_thumb72

If you want to see the entire map as it currently stands, take a look at the final image at the very bottom of this post (click to enlarge). I have also exported the entire map so far for you to view things in more context. Please note that the map will change significantly as we continue to capture and synthesise the rationale, so expect it to change quite a bit as we unpack the rest of the discussion.

Thanks for reading

Paul Culmsee

CoverProof29

www.sevensigma.com.au

Map25


The cloud is not the problem-Part 4: Industry shakeout and playing with the big kids…

This entry is part 4 of 6 in the series Cloud

Hi all

Welcome to the fourth post about the adaptive change that cloud computing is having on practitioners, paradigms and organisations. The previous two posts looked at the dodgier side of two of the industry’s biggest players, Microsoft and Amazon. While I have highlighted some dumb issues with both, I nevertheless have to acknowledge their resourcing, scalability and ability to execute. On that point of ability to execute, in this post we will broaden our view to the cloud industry as a whole and the inevitable consolidation that is, and will continue to be, taking place.

Now to set the scene: many people know that in the early twentieth century there were a great number of US car manufacturers. Can you guess how many car manufacturers have gone defunct before and since that time?

…Fifty?

…One Hundred?

Not even close…

What if I told you that there have been over 1,700!

Here is another interesting stat. The table below shows the decades in which manufacturers went bankrupt or ceased operations, along with the average lifespan of the companies that folded in each decade.

Decade   # defunct   Avg years in operation
1870s        4                 5
1880s        2                 1
1890s        5                 1
1900s       88                 3
1910s      660                 3
1920s      610                 4
1930s      276                 5
1940s       42                 7
1950s       13                14
1960s       33                10
1970s       11                19
1980s        5                37
1990s        5                16
2000s        3                49
2010s        5                42
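The “over 1,700” figure is easy to sanity-check by summing the defunct counts per decade (the decade labels below are mine):

```python
# Defunct US car manufacturers per decade, from the table above.
defunct = {
    "1870s": 4, "1880s": 2, "1890s": 5, "1900s": 88, "1910s": 660,
    "1920s": 610, "1930s": 276, "1940s": 42, "1950s": 13, "1960s": 33,
    "1970s": 11, "1980s": 5, "1990s": 5, "2000s": 3, "2010s": 5,
}
total = sum(defunct.values())
print(total)  # 1762 -- comfortably "over 1700"
```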

Now, you would expect the bulk of closures to be depression-era, but note that the depression did not start until the late 1920s; during the boom times that preceded it, 660 manufacturers went to the wall in the 1910s alone – a worse result!

The pattern of consolidation

image

What I think the table above shows is the classic pattern of industry consolidation after an initial phase of innovation and expansion: over time, the many are gobbled up by the few. As the number of players consolidates, those who remain grow bigger, with more resources and economies of scale. This in turn creates barriers to entry for new participants. Accordingly, the rate of attrition slows, but largely because there are fewer players left in the industry. Those that remain continue to fight their battles, but the battles now take longer. Nevertheless, as time goes on, the players consolidate further.

If we applied a cloud/web-hosting paradigm to the table above, I would equate the dotcom bust of 2000 with the depression era of the 1920s and 1930s. With cloud computing, I actually think we are in the 1960s and onwards right now. The largest of the large players have now made big bets on the cloud and entered the market in a big, big way. For more than a decade, other companies hosted Microsoft technology, with Microsoft showing little interest beyond selling licenses through them. Now Microsoft themselves are also a hosting provider. Does that mean most of the hosting providers will share the fate of Netscape? Or will they survive the dance with Goliath as Citrix and VMware have?

For those who are not Microsoft or Amazon…

image

Imagine you have been hosting SharePoint solutions for a number of years. Depending on your size, you probably own racks or a cage in someone else’s data centre, or a small data centre yourself. You have some high-end VMware gear underpinning your hosting offerings, and you offer both managed SharePoint (i.e. a basic site-collection subscription with no custom stuff, à la Office 365) and dedicated virtual machines for those who want more control (à la Amazon). You have dutifully paid your service-provider licensing to Microsoft, and have IT engineers on staff, some SharePoint specialists, a helpdesk and some dodgy sales guys – all standard stuff, and life is good. You had a crack at implementing SharePoint multi-tenancy, but found it all a bit too fiddly and complex.

Then Amazon comes along and shakes things up with its IaaS offerings. They are cost-competitive, have more data centres in more regions, higher capacity, more fault tolerance, a wider variety of services, and can scale more than you can. Their ability to execute in terms of offering new services is impossible to keep up with. In short, they slowly but relentlessly take a chunk of the market and continue to grow. You naturally counter by pushing the legitimate line that you specialise in SharePoint, so customers investing in such a complex tool are in much more trusted hands with you than with Amazon.

But suddenly the game changes again. The very vendor whose product you provide cloud-based SharePoint services for now bundles it with Exchange and Lync and offers Active Directory integration (yeah, yeah, I know there was BPOS, but no one actually heard of that). Suddenly the argument that you are a safer option than Amazon is shot down by the fact that Microsoft themselves now offer what you do. So whose hands are safer? The small hosting provider with limited resources, or the multinational with billions of dollars in the bank that develops the product? Furthermore, given Microsoft’s advantage in being able to mobilise resources with deep product knowledge, they have a richer managed-service offering than you can muster (i.e. they offer multi-tenancy :).

This puts you in a bit of a bind, as you are being assailed at both ends: Amazon trumps your capabilities at the IaaS end and is encroaching on your space, while Microsoft is assailing the SaaS end. How does a small fish survive in a pond with the big ones? In my opinion, the mid-tier SharePoint cloud providers will have to reinvent themselves.

The adaptive change…

So for the mid-tier SharePoint cloud provider grappling with a play area reduced by the big kids encroaching, there is only one option: be really, really good in the areas the big kids are not good at. In SharePoint terms, this means going to places many don’t really want to go: bolstering support offerings and moving up the SharePoint stack.

You see, a SharePoint hosting provider traditionally takes one of two approaches. They provide a managed service where the customer cannot mess with things too much (i.e. site-collection admin access only). For those who need more than that, they offer a virtual machine and wash their hands of any maintenance or governance beyond ensuring that the infrastructure is fast and backed up. Until now, cloud providers could get away with this, and the reason they take this approach should be obvious to anyone who has implemented SharePoint: if you don’t maintain operational governance controls, things can rapidly get out of hand. Who wants to deal with all that “people crap”? Besides, that’s a different skill set from the one required to run and maintain cloud services at the infrastructure layer.

Some cloud providers will kick and scream about this and delude themselves into thinking that hosting and cloud services are their core business. For those who think this, I have news for you: the big boys think these are their core business too, and they are going to do it better than you. This is now commodity stuff, and a by-product of commoditisation is that many SharePoint consultancies are now cloud providers anyway! They sign up with Microsoft or Amazon and can provide a highly scalable SharePoint cloud service with all the value-added services further up the SharePoint stack. In short, they combine their SharePoint expertise with Microsoft’s or Amazon’s scale.

Now, on the issue of support: Amazon has no specific SharePoint skills and never will – they are first and foremost a compelling IaaS offering. Microsoft’s support? Go and re-read part 2 if you want to see that. It seems that no matter how big the multinational, level 1 tech support is always level 1 tech support.

So what strategies can a mid-tier provider take to stay competitive in this rapidly commoditising space? I think the answer is to go premium and go niche:

  • Provide brilliant support. If I call you, day or night, I expect to speak to a SharePoint person straight away. I want to get to know them on a first name basis and I do not want to fight the defence mechanism of the support hierarchy.
  • Partner with SharePoint consultancies or acquire consulting resources. The latter allows you to do some vertical integration yourself and broaden your market and offerings. A potential KPI for any SharePoint cloud provider should be that no support person ever says “sorry that’s outside the scope of what we offer.”
  • Develop skills in the tools and systems that surround SharePoint or invest in SharePoint areas where skills are lacking. Examples include Project Server, PerformancePoint, integration with GIS, Records management and ERP systems. Not only will you develop competencies that few others have, but you can target particular vertical market segments who use these tools.
  • (Controversial?) Dump your own infrastructure and use Amazon in conjunction with another IaaS provider. You just can’t compete with their scale and price point. Using them will likely save costs; combining them with a second provider lets you play the resiliency card; and best of all… you can offer VPC 🙂

Conclusion

In the last two posts, we looked at areas where both Microsoft and Amazon sometimes struggle to come to grips with the SharePoint cloud paradigm. In this post, we looked at the other cloud providers, who must come to grips with competing against these two giants – who are clearly looking to eke out as much value as they can from the cloud pie. Whether or not you agree with my suggested strategy (Rackspace appears to), the pattern of the auto industry serves as an interesting parallel to the cloud computing marketplace. Is the relentless consolidation a good thing? Probably not in the long term (we will tackle that issue in the last post of this series). In the next post, we shift our focus away from the cloud providers themselves and turn our gaze to internal IT departments – who, until now, have had it pretty good. As you will see, a big chunk of the irrational side of cloud computing comes from this area.

 

Thanks for reading

Paul Culmsee

www.sevensigma.com.au


The cloud is not the problem–Part 3: When silos strike back…

This entry is part 3 of 6 in the series Cloud

What can Ikea fails tell us about cloud computing?

My next-door neighbour is a builder. When he moved next door, the house was an old piece of crap. Within six months, he had completely renovated it himself, adding two bedrooms, an underground garage and all sorts of cool stuff. I, on the other hand, bought my house because it was in a good location, someone had already renovated it, and all we had to do was move in. The reason was simple: I had a new baby and, more importantly, me and power tools do not mix. I just don’t have the skills, nor the time, to do what my neighbour did.

You can probably imagine what would happen if I tried to renovate my house the way my neighbour did. It would turn out like the Ikea fails in the video. Many SharePoint installs tend to look like the video too. Moral of the story? Sometimes it is better to get something pre-packaged than to do it yourself.

In the last post, we examined the “Software as a Service” (SaaS) model of cloud computing in the form of Office 365. Other popular SaaS providers include SlideShare, Salesforce, Basecamp and Tom’s Planner, to name a few. Most SaaS applications are browser-based and not as feature-rich or complex as their on-premise competition. The SaaS model, therefore, is a bit like buying a kit home: no user of these services ever touches the underlying cloud infrastructure used to provide the solution, nor do they have a full mandate to tweak and customise to their heart’s content. SaaS is basically predicated on the notion that someone else will do a better set-up job than you, and on the old 80/20 rule about which features of an application actually get used.

Some people may regard the restrictions of SaaS as a good thing – particularly if they have dealt with the consequences of one too many unproductive customisation efforts. As many SharePointers know, the more you customise SharePoint, the less resilient it gets. Restricting the sorts of customisations that can be done might therefore, in many circumstances, be a wise thing to do.

Nevertheless, this actually goes against the genetic traits of pretty much every Australian male walking the planet. The reason is simple: no matter how lacking our skills or how inappropriate our tools and training, we always want to do it ourselves. This brings me to our next cloud provider, Amazon, and their Infrastructure as a Service (IaaS) model of cloud-based services. This is the ultimate DIY solution for those of us who find that SaaS cramps our style. Let’s take a closer look, shall we?

Amazon in a nutshell

Okay, I have to admit that as an infrastructure guy, I am genetically predisposed to liking Amazon’s cloud offerings. Why? Well, as an infrastructure guy, I am like my neighbour who renovated his own house: I’d rather do it all myself because I have acquired the skills to do so. So, for any server-hugging infrastructure people out there wondering what they have been missing out on – read on… you might like what you see.

Now, first up, it’s easy for new players to get a bit intimidated by Amazon’s bewildering array of offerings, with brand names that make no sense to anybody but Amazon: EC2, VPC, S3, ECU, EBS, RDS, AMIs, Availability Zones – sheesh! So I am going to ignore all of the confusing brand names, hope that you have heard of virtual machines, and assume that you or your tech geeks know all about VMware. The simplest way to describe Amazon is VMware on steroids. Amazon’s service essentially allows you to create virtual machines within Amazon’s “cloud” of large data centres around the world. As I stated earlier, the official cloud terminology Amazon is traditionally associated with is Infrastructure as a Service (IaaS). This is where, instead of providing ready-made applications like SaaS, a cloud vendor rents out lower-level IT infrastructure: virtualised servers, storage and networking.

Put simply, utilising Amazon, one can deploy virtual servers with one’s choice of operating system, applications, memory, CPU and disk configuration. Like any good “all you can eat” buffet, one is spoilt for choice. One simply chooses an Amazon Machine Image (AMI) as a base for creating a virtual server. You can choose one of Amazon’s pre-built AMIs (base installs of Windows Server or Linux) or an image from the community-contributed list of over 8,000 base images. Pretty much any vendor selling a turn-key solution (such as those all-in-one virus-scanning/security solutions) has likely created an AMI. Microsoft have also gotten in on the Amazon act and created AMIs for you, optimised by their product teams. Want SQL Server 2008 the way Microsoft would install it? Choose the Microsoft Optimized Base SQL Server 2008R2 AMI, which “contains scripts to install and optimize SQL Server 2008R2 and accompanying services including SQL Server Analysis services, SQL Server Reporting services, and SQL Server Integration services to your environment based on Microsoft best practices.”

The series of screenshots below shows the basic idea. After signing up, use the “Request instance wizard” to create a new virtual server by first choosing an AMI. In the example below, I have shown the default Amazon AMIs under “Quick start” as well as the community AMIs.

image61_thumb1
Amazon’s default AMIs
image3_thumb
Community-contributed AMIs

From the list above, I have chosen Microsoft’s “Optimized SQL Server 2008 R2 SP1” from the community AMIs and clicked “Select”. Now I can choose the CPU and memory configuration. Hmm, how does a 16-core server with 60 GB of RAM sound? That ought to do it… 🙂

image13_thumb

Now, I won’t go through the full process of commissioning virtual servers, but suffice to say that you can choose which geographic location within Amazon’s cloud this server will reside in, and after 15 minutes or so your virtual server will be ready to use. It can be assigned a public IP address, firewalled, and then remotely managed like any other server. This can all be done programmatically too: you can talk to Amazon via web services to start, monitor, terminate, etc. as many virtual machines as you want, which allows you to scale your infrastructure on the fly, and very quickly. There are no long procurement times, and you only pay for the servers currently running. If you shut them down, you stop paying.
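To make “programmatically” a little more concrete, a request to launch instances boils down to a handful of parameters. The sketch below shows the general shape of such a request; the AMI ID is a made-up placeholder, and with a real AWS SDK client (e.g. boto3) you would hand a dict like this to `ec2.run_instances(**params)`:

```python
# Illustrative only: the parameter shape of an EC2 "run instances" call.
# The AMI ID below is a placeholder, not a real image.
params = {
    "ImageId": "ami-xxxxxxxx",   # hypothetical AMI identifier
    "InstanceType": "m1.large",  # CPU/memory sizing
    "MinCount": 1,
    "MaxCount": 10,              # scale out by asking for more instances
    "Placement": {"AvailabilityZone": "us-east-1a"},  # geographic location
}
# With an SDK client this would be: ec2.run_instances(**params)
# Terminating the instances stops the hourly billing.
print(params["MaxCount"])  # 10
```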

But what makes it cool…

Now I am sure some of you are thinking, “big deal… any virtual machine hoster can do that.” I agree – and when I first saw this capability, I just saw it as a larger-scale VMware/Xen-type deployment. But what really made me sit up and take notice was Amazon’s Virtual Private Cloud (VPC) functionality. The super-duper short version of VPC is that it allows you to extend your corporate network into the Amazon cloud. It does this by letting you define your own private network and connecting to it via site-to-site VPN technology. To see how it works diagrammatically, check out the image below.

image_thumb13

Let’s use an example to understand the basic idea. Say your internal IP address range at the office is 10.10.10.0 to 10.10.10.255 (a /24 for the geeks). With VPC you tell Amazon, “I’d like a new IP address range of 10.10.11.0 to 10.10.11.255”. You are then prompted to give Amazon the public IP address of your internet router. The screenshots below show what happens next:

image_thumb14

image6_thumb

The first screenshot asks you to choose what type of router you have at your end: available choices are Cisco, Juniper, Yamaha, Astaro and generic. The second screenshot shows a sample configuration that is downloaded. Any Cisco-trained person reading this will recognise what is going on here: this is an automatically generated configuration to be added to an organisation’s edge router to create an IPSEC tunnel. In other words, we have extended the corporate network itself into the cloud. Any service can be run on such a network – not just SharePoint. For smaller organisations wanting the benefits of off-site redundancy without the cost of a separate datacentre, this is a very cost-effective option indeed.

For the Cisco geeks: the actual configuration is two GRE tunnels that are IPSEC encrypted. BGP is used for route-table exchange, so Amazon can learn which routes to tunnel back to your on-premise network. Furthermore, Amazon lets you manage firewall settings at the Amazon end too, so you have an additional layer of defence beyond your IPSEC router.
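For those playing along at home, Python’s standard `ipaddress` module can verify the addressing in the example above: the office /24 and the VPC /24 are separate, non-overlapping ranges, which is what lets BGP exchange routes between them cleanly:

```python
import ipaddress

office = ipaddress.ip_network("10.10.10.0/24")  # on-premise LAN
vpc = ipaddress.ip_network("10.10.11.0/24")     # range requested from Amazon

print(office.num_addresses)  # 256 addresses in each /24
print(office.overlaps(vpc))  # False: the ranges don't collide
```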

This is called Virtual Private Cloud (VPC), and when configured properly it is very powerful. Note the “P” is for private: no server deployed to this subnet is internet-accessible unless you choose it to be. This lets you extend your internal network into the cloud and gain all the provisioning, redundancy and scalability benefits without direct exposure to the internet. As an example, I did a hosted SharePoint extranet where we used SQL log shipping of the extranet content databases back to a DMZ network for redundancy. Try doing that on Office 365!

This sort of functionality shows that Amazon is a mature, highly scalable and flexible IaaS offering. They have been in the business for a long time, and it shows: their full suite of offerings is far more expansive than I can possibly cover here. My Amazon experiences will accordingly be the subject of a more in-depth blog post or two in future. But for now I will force myself to stop so the non-technical readers don’t get too bored. 🙂

So what went wrong?

So after telling you how impressive Amazon’s offering is, what could possibly go wrong? As with the Office 365 issue covered in part 2: absolutely nothing with the technology. To understand why, I need to explain Amazon’s pricing model.

Amazon offers a couple of ways to pay for servers (called instances in Amazon-speak). An on-demand instance is charged a per-hour price while the server is running: the more powerful the server in terms of CPU, memory and disk, the more you pay. To give you an idea, a Windows box with 8 CPUs and 16 GB of RAM running in Amazon’s “US East” region will set you back $0.96 per hour (as of 27/12/11). Do the basic maths and that equates to around $8,409 per year, or $25,228 over three years. (Yeah, I agree that’s high – even when you consider that you get all the trappings of a highly scalable and fault-tolerant datacentre.)

On the other hand, a reserved instance involves making a one-time payment and, in turn, receiving a significant discount on the hourly charge for that instance. Essentially, if you are going to run an Amazon server on a 24*7 basis for more than 18 months or so, a reserved instance makes sense as it saves considerable cost over the long term. The same server would only cost you $0.40 per hour if you pay an up-front $2800 for a 3-year term. Total cost: $13312 over three years – much better.
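For those who like to check the math, the comparison is easy to sketch. Here is a quick sanity check of the two payment models using the hourly rates and up-front fee quoted above (these were the US-east figures as of 27/12/11; Amazon's actual prices vary by region and have changed since):

```python
# Sanity-check Amazon EC2 on-demand vs reserved instance pricing
# for a server running 24*7 over a three-year term.
# Rates are the figures quoted in this post, not current prices.
HOURS_PER_YEAR = 24 * 365  # 8760

on_demand_rate = 0.96        # $/hour, on-demand instance
reserved_rate = 0.40         # $/hour, with a 3-year reserved instance
reserved_upfront = 2800.00   # one-time payment for the 3-year term

# On-demand: pure hourly charge for three years of 24*7 running
on_demand_3yr = on_demand_rate * HOURS_PER_YEAR * 3

# Reserved: one-time fee plus the discounted hourly charge
reserved_3yr = reserved_upfront + reserved_rate * HOURS_PER_YEAR * 3

print(round(on_demand_3yr))  # ~25229 - the "over $25000" figure above
print(round(reserved_3yr))   # 13312 - roughly half the on-demand cost
```

The numbers line up with the totals quoted in the text (give or take a rounding dollar on the on-demand figure).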

So with that scene set, consider this scenario: back at the start of 2011, a client of mine consolidated all of their SharePoint cloud services to Amazon from a variety of other hosting providers. They did this for a number of reasons, but it basically boiled down to the fact that they had 1) outgrown the SaaS model and 2) a growing number of clients. As a result, requirements from clients were getting more complicated and beyond what most of the hosting providers could cater for. They also received irregular and inconsistent support from their existing providers, as well as some unexpected downtime that reduced confidence. In short, they needed to consolidate their cloud offering and manage their own servers. They were developing custom SharePoint solutions, needed to support federated claims authentication and required disaster recovery assurance to mitigate the risk of going 100% cloud. Amazon’s VPC offering in particular seemed ideal, because it allowed full control of the servers in a secure way.

Now, making this change was not something we undertook lightly. We spent considerable time researching Amazon’s offerings, trying to understand all the acronyms as well as their fine print. (For what it’s worth, I used IBIS as the basis to develop an assessment and the map of my notes can be found here.) As you are about to see though, we did not check well enough.

Back when we initially evaluated the VPC offering, it was available in only a few Amazon sites (two locations in the USA) and the service was still in beta. This caused us a bit of a dilemma at the time because of the risk of relying on a beta service, but we were reassured when Amazon confirmed that VPC would eventually be available in all of their datacentres. We stress tested the service for a few weeks, it remained stable, and we developed and tested a disaster recovery strategy involving SQL log shipping and a standby farm. We also purchased reserved instances from Amazon, since these servers were going to be there for the long haul, pre-paying to reduce the hourly rates. Quite a complex configuration was provisioned in only two days and we were amazed by how easy it all was.

Things hummed along in this fashion for 9 months and the world was a happy place. We were delighted when Amazon notified us that VPC had come out of beta and was now available in any of Amazon’s datacentres around the world. We had only used the US datacentre because it was the only location available at the time; now we wanted to transfer the services to Singapore. My client contacted Amazon about some finer points of such a move and was informed that they would have to pay for their reserved instances all over again!

What the?

It turns out that reserved instances are not transferable! Essentially, Amazon were telling us that although we had paid for a three-year reserved instance and only used it for 9 months, moving the servers to a new region would mean paying all over again for another 3-year reserve. According to Amazon’s documentation, each reserved instance is associated with a specific region, which is fixed for the lifetime of the reserved instance and cannot be changed.

“Okay,” we answered, “we can understand that in circumstances where people move to another cloud provider. But in our case we were not.” We had used around 1/3rd of the reserved instance, so surely Amazon should pro-rata the unused amount and offer that as a credit when we re-purchase reserved instances in Singapore? After all, we would still be hosting with Amazon, so overall they would not be losing any revenue at all. On the contrary, we would be paying them more, because we would have to sign up for an additional 3 years of reserve when we moved the services.

So we asked Amazon whether that could be done. “Nope,” came back the answer from Amazon’s not-so-friendly billing team, complete with one of those trite and grossly insulting “Sorry for any inconvenience this causes” sign-offs. After more discussions, it seems that internally within Amazon, each region (or datacentre within each region) is its own profit centre. Therefore, in typical silo fashion, the US datacentre does not want to pay money to the Singapore operation, as that would mean the revenue we paid would no longer be recognised against them.

Result? The customer is screwed, all because the Amazon fiefdoms don’t like sharing the contents of the till. But hey – the regional managers get their bonuses, right? 🙁

Conclusion

Like part 2 of this cloud computing series, this is not a technical issue. Amazon’s cloud service in our experience has been reliable and performed well. In this case, we are turned off by the fact that their internal accounting procedures create a situation that is not great for customers who wish to remain loyal to them. In a post about the danger of short-termism and ignoring legacy, I gave the example of how dumb it is for organisations to think they are measuring success based on how long it takes to close a helpdesk call. When such a KPI is used, those in support roles have little choice but to artificially close calls even when users’ problems have not been solved, because that is how they are deemed to be performing well. The reality is that rather than measuring happy customers, this KPI simply rewards the helpdesk operators who have managed to game the system by getting callers off the phone as soon as they can.

I feel that Amazon are treating this as an internal accounting issue, irrespective of client outcomes. Amazon will lose the business of my client because of this, since they have enough servers hosted that the financial impost of paying all over again is much more than the cost of transferring to a different cloud provider. While VPC and automated provisioning of virtual servers is cool and all, at the end of the day many hosting providers can offer this if you ask them. Although it might not be as slick and fancy as Amazon’s automated configuration, it is nonetheless very doable and the other providers are playing catch-up. Like Apple, Amazon are enjoying the benefits of being first to market with their service, but as competition heats up, others will rapidly bridge the gap.

Thanks for reading

Paul Culmsee

