What does project management mean to me – a Project Manager’s sermon


The "message"

What does project management mean to me?

Well if you want me to give you the ‘glass half empty’ perspective, it’s easy. What project management means to me is a confused discipline where practitioners routinely do really dumb shit in its name.

Sermon over – go forth and spread the word…

Okay, that was fairly blunt, so I had better elaborate and perhaps take a more positively framed ‘glass half full’ approach to this sermon. To do that, I need to tell you about the legacy that Cleo magazine has left on society.

When I was younger, I used to skim through magazines like Cosmopolitan and Cleo while waiting in line at the checkout. Now you might think that I must really be in touch with my feminine side in admitting this, but no: the reality is that raw testosterone was the motivator. You see, Cleo would have headline articles like “10 great sex tips (in 50 words or less)” or “The 10 things he doesn’t want in bed.” Titles like that dangled the possibility in front of me of finally understanding women, with the added bonus of developing Olympic-class skills in the bedroom. Of course, each and every time, the actual article never lived up to the catchy title. I rarely learnt anything new; in fact the “10 things” were usually pretty banal, self-evident and left me none the wiser.

So as our collective attention spans diminish via constant exposure to “5 steps to success with [insert buzzword here]” articles (articles designed more for search engine placement than for actually informing an audience), the curse of Cleo-like catchy titles telling you stuff of little value is now so commonplace that it is hard to suss out the stuff that really makes a difference to outcomes.

So where does one go to find the answers in project management? Should aspiring Project Managers master the dark arts of their craft by learning everything there is to learn from one or more of the bibles that contain the word “BoK”? On the surface it would seem so, given all of the past collective wisdom that is claimed to be codified therein – as well as the shiny certification proving that one has passed the multiple-choice exam on its contents.

But wait…which BoK is best? After all, we have multiple to choose from, with each making their own claims to the truth. Some BoKs even reject the key tenets of other BoKs, arguing that theirs offers a better answer. Of course, this leads to an endless stream of debate by their respective proponents as to which is really best and who is really the wisest. Not to mention that over time, new and updated BoKs emerge like phoenixes from the ashes of older BoKs. (Sometimes they are so cool that they don’t even claim to be a BoK at all).

It’s little wonder that Project Management is a confused discipline. No matter where you turn, someone is bound to tell you that you are doing it wrong.

While I am on the subject of doing it wrong, let’s poke a stick into one of the well-known project management hornets’ nests: the “Waterfall vs. Scrum” argument. We all know that any self-respecting Scrum guy will not miss an opportunity to tell you about the evils of Waterfall – and for the most part they are right too, as Waterfall has a dubious history of which most people are not aware.

But that is not the reason I chose this particular topic, even though it is much loved by PMs who spend endless hours filling up the forums of various LinkedIn groups with discussion. I chose it simply because it’s fun to mess with Scrum guys – particularly the zealots. So if you have a “scrumdamentalist” in your midst, try this question on them:

Would Waterfall work if one could create an environment where all parties—as soon as they become aware of something that might affect a project materially—communicate it to all other parties involved in the project in a full, sincere and open way?

I have posed this question to many Scrum people. Most will think about it for some time, before answering a grudging “possibly” or “I don’t see why not.” Try it yourself… it’s fun pulling the rug out from under their firmly held convictions.

The best answer I have ever got to this question was from Chris Chapman – an Agile coach from Toronto. He gave me what I think is the perfect answer when he astutely observed that in the environment I described, Waterfall would actually not exist in the first place!

Therein lies the heart of my sermon. I contend that the endless debates over the efficacy of methods, tools and even BoKs are answering the wrong question! Don’t worry though, Project Management is neither the first nor the last discipline to lean its ladder against the wrong wall in this regard. To explain, let me introduce you to the work (and genius) of J Richard Hackman.

From the late sixties, Hackman spent his career researching and teaching about team performance, leadership effectiveness, and the design of self-managing teams and organizations. He died in 2013, and one of the last papers he published was called “From causes to conditions in group research.” In this swansong paper, Hackman explained how he spent years examining the factors that made teams work really well. He studied hundreds of teams (not just project teams mind you, but sporting teams, orchestras and flight crews), with the aim of distilling the causes of success. Each time Hackman thought he had the causes figured out he would create a model, plug his model into a stats program, and work with real teams to see if the application of his model led to better performance.

On the surface, this approach seems like a logical thing to do. After all, if we can work out the magic levers that cause team success, then organisations would surely work better because they can start to pull those same levers. This is precisely the value proposition offered by the aforementioned BoKs as well.

When Hackman applied the models he lovingly researched and developed, he found they did not make a significant difference in outcomes. Being an academic, he did what most academics do: he spent years trying to refine his models and then re-tested them against real life teamwork situations. But this didn’t work either; his models got no closer to predicting or influencing outcomes in a reliable fashion. Reality, it seemed, never fitted the models he developed.

At this point, Hackman began to question whether he was looking at the problem through the right lens. He wondered if trying to determine the causes of team efficacy by looking at successful teams retrospectively and then codifying these into causal models was the best approach. So he changed his focus and asked himself a very different question – a question that every project management practitioner (and project team member) should be asking themselves:

What are the enabling conditions that need to exist that give rise to great outcomes?

Now if, at this point, you think that this is the same question as “What are the causes of successful projects?”, you would be mistaken. Think about the BoKs and consider this: if you have ever argued with someone about whether a tool, methodology or some process is great or completely sucks, eventually someone will say something like “Well it can work for the right organisation.”

The implicit point here is that depending on the conditions, something that works for one organisation may completely suck for another (thereby invalidating the notion of a “best” practice). The genius of Hackman is that he challenges us to stop arguing about whether one methodology or model is better than the other and focus on what the enabling conditions are instead. Think about it – if project managers and developers did this, we would be able to avoid low value arguments like earned value management versus burndown charts.

In the case of Hackman, he re-examined all of his work on teams and boiled it down to six essential conditions, arguing that irrespective of what else you did or what methodology you used, having these conditions tended to lead to better results. Hackman did not rank any one condition over any other, instead arguing that all were needed for teams to have a greater chance of being high performing. The conditions are:

  1. A real team: Interdependence among members, clear boundaries distinguishing members from non-members and moderate stability of membership over time
  2. A compelling purpose: A purpose that is clear, challenging, and consequential. It energizes team members and fully engages their talents
  3. Right people: People with task expertise, the ability to self-organise, and skill in working collaboratively with others
  4. Clear norms of conduct: The team understands clearly what behaviours are, and are not, acceptable
  5. A supportive organisational context: The team has the resources it needs and the reward system provides recognition and positive consequences for excellent team performance
  6. Appropriate coaching: The right sort of coaching is provided to the team at the right time

There is more to that list than what I am covering here, and it is important to note that I’m not saying that Hackman’s conditions are *the* conditions. But I would contend that they are a pretty good start. Look at Hackman’s conditions for teams above and think about your projects and how you manage them. Did you have these conditions in place when you started? If you had them, would it have led to better outcomes?

I believe that it is a huge mistake to attribute success or failure of projects to methods, processes and models used to manage them rather than the conditions in which those processes operate. As long as this attribution error persists, people will continue to get suckered into B-grade verbal-slugfests about whether method X is better than method Y.

What exacerbates this “causes over conditions” problem is that enabling conditions rarely get codified in procedures, governance models, bodies of knowledge or certifications. As a result, the very factors that lead to success (the conditions) are entirely absent from the models that we use. My contention is that most organisations, when delivering projects, do not have the right enabling conditions in place to begin with. If your organisation has a blame culture, then chances are that any process, no matter how noble its design or intent, has the potential to become a blame apportionment mechanism or a responsibility avoidance mechanism.

So Hackman, despite looking at the different disciplines of teamwork and leadership, gives us an important clue about what ails project management and how we might improve it. Focus on enabling conditions rather than attributing causes!

Let’s get back to Chris Chapman’s answer to my Waterfall question. His assertion that Waterfall would not exist in the conditions I described holds a less obvious lesson. That is, the way project management tools or methods are used will affect conditions as well. So if you have ever said to yourself “I can’t believe that I am being forced to follow this wrong-headed process” chances are you have been on the receiving end of negative conditions created by application of a process. (So in this sense the agile dudes have it right.)

So what does project management mean to me? In short, focusing on creating the enabling conditions for great performance, and then getting out of the way!

 

Thanks for reading

Paul  Culmsee

 

Epilogue:

In this sermon I cannot hope to cover all of the things I would like to cover but never fear, Kailash Awati and I already piled some of our thoughts into 420 pages of goodness known as the book “The Heretics Guide to Best Practices” – so if you like what you read here, you will really like what is in there.

Acknowledgements:

As usual I have to thank Kailash for reviewing this post and making it suck much less than it did 🙂

Further reading:

  1. My series on rethinking SharePoint maturity: Don’t let the title put you off – the material here further explores the conditions for great project performance: http://www.cleverworkarounds.com/2013/08/19/rethinking-sharepoint-maturity-part-1-conditions-over-causes/
  2. Jon Whitty’s brilliant work on memeplexes and reconceptualising project management: http://espace.library.uq.edu.au/eserv/UQ:8801/sjw_ijpm_05.pdf
  3. Pretty much anything on Kailash Awati’s blog, but in particular his sermon in this flashblog: http://eight2late.wordpress.com/2013/09/25/what-project-management-means-to-me-a-metalogue/
  4. Stephen Duffield’s work on project knowledge management and risk: http://www.invictaprojects.com.au/pmlessonslearnedblog/?p=850

P.S. This post is published as part of the first ever project management related global blogging initiative to publish a post on a common theme at exactly the same time. Over seventy bloggers from Australia, Canada, Colombia, Denmark, France, Italy, Mexico, Poland, Portugal, Singapore, South Africa, Spain, the UK and the USA have committed to make a blogging contribution, and the fruit of their labor is now (literally NOW) available all over the web. The complete list of all participating blogs is found here, so please go and check them out!


How to filter on a Managed Metadata column via REST in SharePoint 2013


Pardon the pun, but I just had a ‘clever workaround’ moment with SharePoint’s OData/REST implementation when it comes to filtering list items based on taxonomy (managed metadata) columns. Now I do not consider myself a developer, so this article is probably a little verbose for some readers, but it should be helpful to power users or IT pros.

Here is an example term set called FilterDemo. You can see two levels of hierarchy.

[Screenshot: the FilterDemo term set, showing two levels of hierarchy]

Take the scenario of a custom list (called TestFilter) with a managed metadata column (called FilterDemo) that links to the above term set. Let’s also assume there are 3 entries in it as follows:

Title | FilterDemo
------|------------
A     | A1
B     | B3
C     | Category A

Using the wonders of the REST API, I am able to get access to all items in the list via the following URL:

http://site/_api/web/lists/getbytitle('TestFilter')/Items

If you execute that, and IE has “feed reading view” turned off, you will get back lots of scary looking XML. If you collapse it though, you will see three entry tags in it – one for each item in the TestFilter list.

<?xml version="1.0" encoding="utf-8" ?>
<feed xml:base="http://site/_api/" xmlns="http://www.w3.org/2005/Atom" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:georss="http://www.georss.org/georss" xmlns:gml="http://www.opengis.net/gml">
  <id>a0dd3649-27b9-4d8d-90f8-243e9622b158</id>
  <title />
  <updated>2013-09-23T01:44:35Z</updated>
+ <entry m:etag="&quot;2&quot;">
+ <entry m:etag="&quot;3&quot;">
+ <entry m:etag="&quot;1&quot;">
</feed>

Using more wonders of REST (and OData), I can change the URL to filter my results so that I only get matching items back. For example, here I am filtering on items where the Title field equals ‘A’:

http://site/_api/web/lists/getbytitle('TestFilter')/Items?$filter=Title eq 'A'

Now we get back just the one entry matching that criterion…

<?xml version="1.0" encoding="utf-8" ?>
<feed xml:base="http://site/_api/" xmlns="http://www.w3.org/2005/Atom" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:georss="http://www.georss.org/georss" xmlns:gml="http://www.opengis.net/gml">
  <id>9dad763d-743f-4ffb-b26b-e53c0f6e1f7e</id>
  <title />
  <updated>2013-09-23T02:19:02Z</updated>
+ <entry m:etag="&quot;2&quot;">
</feed>
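
As an aside, you can issue the same filtered GET from script instead of the browser address bar. Below is a minimal TypeScript sketch (not production code): it assumes it runs in the browser on a page within the same SharePoint site, so the logged-in user’s credentials are sent automatically, and the function name is just something I made up for illustration.

async function getItemsByTitle(siteUrl: string, listTitle: string, title: string) {
  // Same REST endpoint as above, with the OData $filter in the query string
  const url = `${siteUrl}/_api/web/lists/getbytitle('${listTitle}')/Items` +
    `?$filter=Title eq '${title}'`;
  const response = await fetch(url, {
    // Ask for JSON instead of the ATOM XML shown above
    headers: { "Accept": "application/json;odata=verbose" },
    credentials: "include", // send the current user's cookies/auth
  });
  if (!response.ok) {
    throw new Error(`Request failed: HTTP ${response.status}`);
  }
  return (await response.json()).d.results; // array of matching list items
}

// Usage: getItemsByTitle("http://site", "TestFilter", "A").then(console.log);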

Okay, so there is nothing earth-shattering about what I just did above, and it’s well documented in various places. But look what happens when I try to filter items in the list based on the FilterDemo column, which is managed metadata based…

http://site/_api/web/lists/getbytitle('TestFilter')/Items?$filter=FilterDemo eq 'A1'

Boom! The browser returns an error. If I do the same thing using Fiddler to look at the trace, it reports an HTTP/1.1 400 Bad Request error.

So I start digging and come across articles from Phil Harding and Serge Luca informing me that taxonomy columns are unsupported via REST. I got my hopes up when I came across an Andrew Connell article on filtering lookup fields – since behind the scenes a taxonomy field is actually a lookup field – but the comments section seemed to confirm that this wasn’t doable. All seemed lost…

But in reading MSDN’s REST articles, I had a vague recollection that CAML queries could be passed via REST. I knew that using CAML it was indeed possible to filter taxonomy columns, and I proved it using CAML Designer 2013, connecting to the TestFilter list and filtering it successfully using the following XML…

<Where>
   <Eq>
      <FieldRef Name='FilterDemo' />
      <Value Type='TaxonomyFieldType'>A1</Value>
   </Eq>
</Where>

So, armed with this knowledge, I came across an MSDN forum thread where a tantalising clue was offered. Christophe Humbert asked whether CAML queries could be done via the REST API and Erik C. Jordan provided this nugget of wisdom:

I was able to get the following to work:

POST https://<site>/_api/web/Lists/GetByTitle('[list name]')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query>[other CAML query elements]</Query></View>"}

Editor’s note: I will be a little verbose from this point on, in case you are not a developer or overly familiar with REST.

This approach looked like exactly what I needed and I thought it was worth a shot. But since the remedy is an HTTP POST rather than a GET, I couldn’t do it with Internet Explorer, so I loaded up Fiddler and used its Composer function. I crafted the following POST with an empty CAML query as a test…

http://site/_api/Web/lists/getByTitle('TestFilter')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}

 

[Screenshot: the POST request composed in Fiddler]

And this is the response I got…

HTTP/1.1 411 Length Required

A quick bit of googling, and I realise that some HTTP requests require the use of a ‘Content-Length‘ field in the HTTP header. The standard states that “Any Content-Length greater than or equal to zero is a valid value”, so I tried zero, as shown below:

[Screenshot: the request with a Content-Length header added]

And this time I get the response:

HTTP/1.1 403 FORBIDDEN (The security validation for this page is invalid and might be corrupted. Please use your web browser’s Back button to try your operation again).

Another quick bit of googling and I discover that I am missing another required HTTP header in my POST request. This is called X-RequestDigest and it holds something called the form digest. The form digest improves SharePoint security because it is tied to a specific user and site, and is only valid for a limited time frame. You need to request a form digest and then pass it back to SharePoint on subsequent calls. To get hold of one, you have to make another REST call which generates it: a POST request with an empty body to http://site/_api/contextinfo, extracting the value of the “d:FormDigestValue” node in the information returned. In Fiddler it looks like the following…

[Screenshot: the empty-body POST to _api/contextinfo in Fiddler]

[Screenshot: the contextinfo response with FormDigestValue highlighted]

If you look at the returned content from calling the _api/contextinfo method above, you can see I have highlighted FormDigestValue. In Fiddler, copy this value into the Request Headers section of the Composer and retry the CAML request:

[Screenshot: the X-RequestDigest header added in the Fiddler Composer]

Now when you execute the request, you get data!

HTTP/1.1 200 OK

If you look at the raw results in Fiddler, you will see a whole bunch of scary XML. If you examine the results using the XML parser built into Fiddler, as shown in the image below, you will see very similar output to the original REST request that started this article – 3 entries in the list. It worked!

[Screenshot: Fiddler’s XML view showing all three entries returned]
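
Incidentally, if you would rather fetch the form digest from script than from Fiddler, here is a minimal sketch under the same assumptions as before (TypeScript running in the browser as an authenticated user; getFormDigest is a name I made up):

async function getFormDigest(siteUrl: string): Promise<string> {
  // A POST with an empty body to _api/contextinfo, exactly as described above
  const response = await fetch(`${siteUrl}/_api/contextinfo`, {
    method: "POST",
    headers: { "Accept": "application/json;odata=verbose" },
    credentials: "include",
  });
  if (!response.ok) {
    throw new Error(`contextinfo failed: HTTP ${response.status}`);
  }
  const data = await response.json();
  // The JSON equivalent of the d:FormDigestValue node highlighted above
  return data.d.GetContextWebInformation.FormDigestValue;
}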

So now let’s add our CAML query into the XML and see if we can make it work. Recall that I successfully tested this query via the following CAML…

<Where> <Eq> <FieldRef Name='FilterDemo' /> <Value Type='TaxonomyFieldType'>A1</Value> </Eq> </Where>

So I construct the following URL and paste it into the Fiddler Composer:

http://site/_api/Web/lists/getByTitle('TestFilter')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><Where> <Eq> <FieldRef Name='FilterDemo' /> <Value Type='TaxonomyFieldType'>A1</Value> </Eq> </Where> </Query></View>"}

 

With great excitement, I clicked “Execute” and received….

HTTP/1.1 400 Bad Request

Ah crap! Unfortunately I could not find a single example of this form of REST query to SharePoint anywhere, but I got a hint about the problem from Fiddler itself: it wasn’t happy with my request at all, showing it in red.

[Screenshot: Fiddler showing the failed request in red]

Clearly I was doing something wrong, and being a non-developer, I figured I wasn’t encoding things properly. After some trial and error, I worked out that spaces were the issue. Where I was able to remove them I did, and those that I couldn’t, I encoded like so:

http://site/_api/Web/lists/getByTitle('TestFilter')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><Where><Eq><FieldRef%20Name='FilterDemo'/><Value%20Type='TaxonomyFieldType'>A1</Value></Eq></Where></Query></View>"}

At this point Fiddler stopped showing me an angry red colour and I clicked the Execute button. Woohoo! It works! Below you can see a single matching entry, just like my earlier example where I filtered on the Title column using the $filter parameter.

[Screenshot: Fiddler’s XML view showing the single matching entry]

Expanding the XML indeed confirms it has matched term A1. 🙂

[Screenshot: the expanded XML entry showing term A1]
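
To round things off, here is the whole dance in one sketch – same assumptions as the earlier snippets (TypeScript in the browser, an authenticated user, and the TestFilter list and FilterDemo column from this article; the function names are mine). Note that encodeURIComponent sidesteps the space-encoding issue I hit above, and the browser sets Content-Length for you.

async function getItemsByTaxonomy(siteUrl: string, listTitle: string, term: string) {
  // Step 1: grab a form digest via an empty-body POST to _api/contextinfo
  const digestResponse = await fetch(`${siteUrl}/_api/contextinfo`, {
    method: "POST",
    headers: { "Accept": "application/json;odata=verbose" },
    credentials: "include",
  });
  const digest = (await digestResponse.json()).d.GetContextWebInformation.FormDigestValue;

  // Step 2: POST the CAML query to GetItems with the digest attached
  const viewXml =
    "<View><Query><Where><Eq><FieldRef Name='FilterDemo'/>" +
    `<Value Type='TaxonomyFieldType'>${term}</Value></Eq></Where></Query></View>`;
  const url = `${siteUrl}/_api/web/lists/getbytitle('${listTitle}')` +
    `/GetItems(query=@v1)?@v1=${encodeURIComponent(JSON.stringify({ ViewXml: viewXml }))}`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Accept": "application/json;odata=verbose",
      "X-RequestDigest": digest, // the form digest from step 1
    },
    credentials: "include",
  });
  if (!response.ok) {
    throw new Error(`GetItems failed: HTTP ${response.status}`);
  }
  return (await response.json()).d.results;
}

// Usage: getItemsByTaxonomy("http://site", "TestFilter", "A1").then(console.log);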

Conclusion

I was happy to find a way to use REST to filter a list based on a taxonomy column, and I’m sure this method offers some interesting opportunities in various other scenarios.

In my company Seven Sigma, we have a worn-out post-it note that has the words “Alpha SharePoint Developer” written on it. This gets stuck to the office of whoever does the coolest coding trick and I’m happy to report that this little effort netted me the Alpha developer prize for the first time ever, principally because I then used this approach with SharePoint Designer 2013 workflows and it worked really well. In fact it worked so well that I have decided that using this with the new capabilities of SPD workflows warrants a blog series of its own.

Until then, I hope that this approach works for you and happy REST’ing!

 

Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com

www.sevensigma.com.au


The world’s biggest Project Management rant is happening on September 25 (#PMFlashblog)


Just a quick FYI..

A while back, Shim Marom (an Australian Project Management blogger) invited me to participate in an idea he had. He wanted to get a bunch of PM bloggers to write a sermon on the same topic and release the posts at the same time. So far, more than 70 bloggers have jumped on board, so there will be a lot of reading to do on that day! The topic of the sermon is “What does project management mean to me” and the posts will be promoted via the #PMFlashblog hashtag. With a topic like that, how could I resist?

Given I am one of 70 people (and one of the dumber ones), I thought I’d better do a good rant of a sermon, so I have spent a bit of time (and beer) reflecting on what project management means to me. My Heretics Guide partner in crime Kailash Awati is also participating, as are some of our favourite bloggers and authors in this area.

It should be great!

Paul Culmsee

www.hereticsguidebooks.com


Troubleshooting PerformancePoint Dashboard Designer Data Connectivity to SharePoint Lists


[Screenshot: Dashboard Designer SharePoint list data source with Per-user identity selected]

Here is a quick note regarding PerformancePoint Dashboard Designer connecting to SharePoint lists utilising per-user identity on a single server. The screenshot above shows what I mean. I have told Dashboard Designer to use a SharePoint list as a data source on a site called “http://myserver” and chosen the “Per-user identity” option to connect with the credentials of the currently logged in user, rather than a shared user or something configured in the secure store (which is essentially another shared user scenario). Per-user identity makes a lot of sense most of the time for SharePoint lists, because SharePoint already controls the permissions to this data. If you have granted users access to the site in question already, the other options are overkill – especially in a single server scenario where Kerberos doesn’t have to be dealt with.

But when you try this for the first time, a common complaint is to see an error message like the one below.

[Screenshot: Dashboard Designer error message]

This error is likely caused by the Claims to Windows Token Service (C2WTS) not being started. In case you are not aware of how things work behind the scenes, once a user is authenticated to SharePoint, claims authentication is used for inter/intra farm authentication between SharePoint services. Even when classic authentication is used when a user hits a SharePoint site, SharePoint converts it to a claims identity when it talks to service applications. This is all done for you behind the scenes and is all fine and dandy when SharePoint is talking to other SharePoint components, but if the service application needs to talk to an external data source that does not support claims authentication – say a SQL Database – then the Claims to Windows Token Service is used to convert the claims back to a standard Windows token.

Now based on what I just said, you might expect that PerformancePoint should use claims authentication when connecting to a SharePoint list as a data source – after all, we are not leaving the confines of SharePoint, and I just told you that claims authentication is used for inter/intra farm authentication between SharePoint services. But this is not the case. When PerformancePoint connects to a SharePoint list (and that connection is set to per-user identity as above), it converts the user’s claim back to a Windows token first.

Now the error above is quite common and well documented. The giveaway is when you see event log ID 1137 and, in the details of the log, the message: Monitoring Service was unable to retrieve a Windows identity for “MyDomain\User”. Verify that the web application authentication provider in SharePoint Central Administration is the default windows Negotiate or Kerberos provider. This is a pretty strong indicator that the C2WTS has not been configured or is not started.

A lesser known issue is one I came across the other day. In this case, the Claims to Windows Token Service was provisioned, started and running as a local administrator on the box. In Dashboard Designer, this was the message:

[Screenshot: Dashboard Designer error message]

With this error, you will see event log ID 1101 and, in the details of the log, something like this:

PerformancePoint Services could not connect to the specified data source. Verify that either the current user or Unattended Service Account has read permissions to the data source, depending on your security configuration. Also verify that all required connection information is provided and correct.

System.Net.WebException: The remote name could not be resolved: ‘myserver’
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.ListService.GetListCollection()
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.SpListDataSourceProvider.GetCubeNameInfos()

PerformancePoint Services error code 201.

This issue has a misleading message: it suggests that the server name “myserver” could not be resolved, which might have you chasing a name resolution issue that is not the true root cause. This one is primarily due to a misconfiguration – most likely Active Directory group policy messing with server settings. A better hint to the true cause of this issue can be found in the security event log (assuming you have set the server audit policy to audit failures of “privilege use”, which is not enabled by default).

[Screenshot: the security event log entry]

The event ID to look for is 4673, and the Task Category is called “Sensitive Privilege Use”. A failure will be logged for the user account running the Claims to Windows Token Service:

Level:         Information
Keywords:      Audit Failure
User:          N/A
Computer:      myserver.mydomain.com
Description:
A privileged service was called.

Subject:
Security ID:        mydomain\sp_c2wts
Account Name:        sp_c2wts
Account Domain:        mydomain

Service Request Information:
Privileges:        SeTcbPrivilege

The key bit of the log above is the “Service Request Information” section: “SeTcbPrivilege” means “Act as part of the operating system”.

Now the root cause of this issue is clear: the Claims to Windows Token Service is missing the “Act as part of the operating system” right, which is one of its key requirements. You can find this setting by opening Local Security Policy and choosing Local Policies -> User Rights Assignment. Add the Claims to Windows Token Service user account to this right. If you cannot do so, then it is governed by an Active Directory policy and you will have to go and talk to your Active Directory admin to get it done.

[Screenshot: Local Security Policy – User Rights Assignment]

Hope this helps someone

Paul Culmsee

 

P.S. Below are the full event log details. I’ve only put them here for search engines, so feel free to stop here 🙂

 

Log Name:      Application
Source:        Microsoft-SharePoint Products-PerformancePoint Service
Date:          3/09/2013 10:37:12 PM
Event ID:      1101
Task Category: PerformancePoint Services
Level:         Error
Keywords:
User:          domain\sp_service
Computer:      SPAPP.mydomain.com
Description:
PerformancePoint Services could not connect to the specified data source. Verify that either the current user or Unattended Service Account has read permissions to the data source, depending on your security configuration. Also verify that all required connection information is provided and correct.

System.Net.WebException: The remote name could not be resolved: ‘sharepointaite’
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.ListService.GetListCollection()
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.SpListDataSourceProvider.GetCubeNameInfos()

PerformancePoint Services error code 201.

 

 

 

 

Log Name:      Application
Source:        Microsoft-SharePoint Products-PerformancePoint Service
Date:          3/09/2013 10:18:39 PM
Event ID:      1137
Task Category: PerformancePoint Services
Level:         Error
Keywords:
User:          domain\sp_service
Computer:      SPAPP.mydomain.com
Description:
The following data source cannot be used because PerformancePoint Services is not configured correctly.

Data source location: http://sharepoijntsite/Shared Documents/8_.000
Data source name: New Data Source

Monitoring Service was unable to retrieve a Windows identity for “domain\user”.  Verify that the web application authentication provider in SharePoint Central Administration is the default windows Negotiate or Kerberos provider.  If the user does not have a valid active directory account the data source will need to be configured to use the unattended service account for the user to access this data.

Exception details:
System.InvalidOperationException: Could not retrieve a valid Windows identity. —> System.ServiceModel.EndpointNotFoundException: The message could not be dispatched because the service at the endpoint address ‘net.pipe://localhost/s4u/022694f3-9fbd-422b-b4b2-312e25dae2a2’ is unavailable for the protocol of the address.

Server stack trace:
at System.ServiceModel.Channels.ConnectionUpgradeHelper.DecodeFramingFault(ClientFramingDecoder decoder, IConnection connection, Uri via, String contentType, TimeoutHelper& timeoutHelper)
at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.SendPreamble(IConnection connection, ArraySegment`1 preamble, TimeoutHelper& timeoutHelper)
at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.DuplexConnectionPoolHelper.AcceptPooledConnection(IConnection connection, TimeoutHelper& timeoutHelper)
at System.ServiceModel.Channels.ConnectionPoolHelper.EstablishConnection(TimeSpan timeout)
at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade)
at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at Microsoft.IdentityModel.WindowsTokenService.S4UClient.IS4UService_dup.UpnLogon(String upn, Int32 pid)
at Microsoft.IdentityModel.WindowsTokenService.S4UClient.CallService(Func`2 contractOperation)
at Microsoft.SharePoint.SPSecurityContext.GetWindowsIdentity()
— End of inner exception stack trace —
at Microsoft.SharePoint.SPSecurityContext.GetWindowsIdentity()
at Microsoft.PerformancePoint.Scorecards.ServerCommon.ConnectionContextHelper.SetContext(ConnectionContext connectionContext, ICredentialProvider credentials)

Log Name:      Security
Source:        Microsoft-Windows-Security-Auditing
Date:          4/09/2013 8:53:04 AM
Event ID:      4673
Task Category: Sensitive Privilege Use
Level:         Information
Keywords:      Audit Failure
User:          N/A
Computer:      SPAPP.mydomain.com
Description:
A privileged service was called.

Subject:
Security ID:        domain\sp_c2wts
Account Name:        sp_c2wts
Account Domain:        DOMAIN
Logon ID:        0xFCE1

Service:
Server:    NT Local Security Authority / Authentication Service
Service Name:    LsaRegisterLogonProcess()

Process:
Process ID:    0x224
Process Name:    C:\Windows\System32\lsass.exe

Service Request Information:
Privileges:        SeTcbPrivilege


Rethinking SharePoint Maturity Part 5: From Conditions to Actionable Lessons Learnt


Hi all

Welcome to part five of my quest to improve people’s awareness and understanding of what SharePoint maturity is really all about. For those new to this series of articles, we have traversed a bit of territory to get here, and during the journey there has not been a single SharePoint site column, content type or site collection in sight. In fact, I have not touched any of the topics that many would traditionally view as a sign of SharePoint maturity. Instead, I have been taking readers on a fun-filled journey examining three nerdy, yet highly interesting areas of research in team development, collaboration and organisational learning. Along the way we defaced the Mona Lisa, looked at SharePoint through holes in slices of Swiss cheese and channelled the number of the beast.

After all that, we ended part 4 by arriving at the odd looking diagram below…

[Diagram: the CALL model, drawn as slices operating at individual, team and organisation levels]

What you are looking at is something called the CALL model, which stands for “Conditions to Actionable Lessons Learnt”. I originally developed the model with Dialogue Mapping and knowledge management in mind – essentially to help my clients do a better job of integrating double loop learning into their projects. However, it soon became apparent that it was valuable in various SharePoint contexts too.

Single vs. double loop learning

In the previous paragraph I made a reference to “double loop learning” without explaining what it was, so let’s quickly make amends, because it is interesting stuff. The idea of single and double loop learning has been around for close to 40 years – Chris Argyris came up with it in 1974. To explain, let’s bring back our trusty SharePoint 2010 governance poster that I trash-talked in the first and fourth articles…

If you have not seen this poster before, it represents what Microsoft believe to be the focus areas for SharePoint 2010 governance. Many people – consultants in particular – will take the information in this poster for granted and create SharePoint governance plans that try to cover off the various areas it suggests. Everyone will feel good because they have ticked all the boxes of this authoritative fountain of SharePoint wisdom.

Then, if SharePoint fails to live up to expectations, many will look at the poster and wonder which areas they did not adequately cover. They will study the poster, search Google or Wikipedia for better definitions of the terms listed, and then make another attempt, trying to do an even better job of implementing the wisdom contained therein.

This, my friends, is a shining example of single loop learning. Single loop learning, as described in this article, “seems to be present when goals, values, frameworks and, to a significant extent, strategies are taken for granted. The emphasis is on techniques and making techniques more efficient.” In single loop learning, the fundamental premise of a course of action remains unchanged. All of the energy of learning is directed to making sure “we get it right this time.” In short, in a single loop learning scenario, repeated attempts are made at solving the same issue, but no-one questions the underlying premise of the strategy.

Now in case you haven’t noticed, I spent the first three posts of this series “questioning the underlying premise” of the above SharePoint governance poster, so in effect I’ve been introducing you to the notion of double loop learning already. Double loop learning involves taking a deeper look at what is going on: having attempted to achieve a goal on different occasions, the goal itself may be modified, re-framed or rejected in the light of the experience gained in trying to achieve it. Think about it – if double loop learning actually happened in organisations, people would never say things like “well, that’s always how we have done it here.”

I see a lot of single loop learning in SharePoint land, and I want to help people break out of their existing framing of the issue – compassionately, of course 🙂

Enter the CALL Model

So getting back to my CALL model, I propose it as a multi-purpose tool that can be used for various SharePoint related stuff. It is based on the Swiss cheese risk management model – a metaphor which suggests most strategies have gaps that create risk, analogous to holes in slices of Swiss cheese. In terms of the SharePoint governance poster, think of each of the areas it covers as slices of cheese. The key to this model is that it assumes no single defence layer is sufficient to mitigate risk. It also implies that if risk mitigation strategies are set up with all the holes lined up, there is a systematic flaw, since a problem could progress all the way through and adversely affect the organisation. Accordingly, the Swiss cheese model encourages a more balanced view of how risk is managed.

You can think of the CALL model as a SharePoint optimised Swiss cheese model. CALL extends the Swiss cheese model by incorporating cutting edge research in enabling team performance (Hackman), collaboration (Wilder) and knowledge management (Duffield). It outlines 8 actionable areas (Swiss cheese slices) that operate at the individual, team and organisation levels. These focus areas can be thought of as enabling conditions that mitigate risk, as well as focus areas for identification and application of lessons learnt. In other words, my contention is that for SharePoint maturity, you should strive to create these 8 conditions and then consider them when evaluating project performance.

[Diagram: the CALL model redrawn, showing a project’s path through the 8 focus areas]

The image above is another drawing of the model, minus the pretty colours I used earlier. In this version, I am showing how the path of a SharePoint project flows through these 8 areas. Note how the arrow from left to right deviates, because we are seeking to use the areas to mitigate risk via defence in depth. But when it comes to applying learnings from a project (the arrow now moves from right to left to close the loop), the flow is designed to be smooth and unencumbered, to ensure that the opportunity for double loop learning takes place.

Here is a description of each of the 8 focus areas:

Skills and expertise

This focus area is concerned with ensuring individuals are selected with the right skills and task expertise to perform their role in delivery and operation of services. In SharePoint this is critical because of the technical depth and breadth of the product. Want to deploy SharePoint 2013 request routing in dedicated mode? Go see Spence so he can tell you not to. Want to learn how the new WOPI protocol works with Office Web Apps? Sign a cheque for Wictor to help you.

Skills are closely associated with high IQ. In other words, specialist skills require smart, dedicated people. This area therefore also incorporates ensuring staff have appropriate qualifications and certifications, that education, training and ongoing development practices are properly targeted, and that individuals are willing to learn new skills and are proactive in keeping themselves up-skilled. (In other words, all of the hallmarks of those brilliantly talented people who completed the now defunct Microsoft Certified Masters program.)

Collaborative Maturity

Ever heard of the term “dumb smart guy”? Usually it is someone who is intellectually smart, but has all the emotional maturity (EQ) of a potato. Collaborative maturity is all about ensuring that individuals have skills in working collaboratively with others. It signifies a willingness to listen, empathy, mutual respect, understanding and trust. Collaboratively mature people have a tolerance for ambiguity and the ability to engage in genuine dialogue to reach compromise. They also see collaboration as being in their self-interest and develop deep ties with colleagues in order to work interdependently.

Being in the IT industry, I’m not sure if this person actually exists, but hey – this description gives us all something to aspire to!

Role clarity

Role clarity is concerned with ensuring that the role of each team member is understood by everyone within the team and that it is clear how much authority is vested in each role. This in turn provides task clarity, fosters interdependency among the team and reduces process loss. (Process loss is the difficulty in knowing who is doing what and how it is done.) Where roles are clear and understood, team members are appropriately appointed to tasks according to their capacity (see “skills and expertise” above) and character (see “collaborative maturity” above).

Goal clarity

Goal clarity relates to purpose and direction: goal alignment between members of a team is essential for good team performance. A compelling purpose energizes team members, orients them toward their collective objective, fully engages their talents and motivates them to resolve conflicts. A compelling purpose should be underpinned by concrete, attainable goals and objectives, both short and long term. Knowing where you are heading focuses the team’s energy into directed, meaningful activity. This also helps build team efficacy, which is the belief within teams of their ability to solve problems and deliver great solutions. On the other hand, lack of goal clarity is one of the classic symptoms of wicked problems.

Participation safety and decision influence

Ever been on a project that’s taking on water, but nobody seems willing to listen? Ever had critically important topics go undiscussed because they are simply taboo and unmentionable? It is not fun – and little breakthrough thinking or innovation can exist without participation safety and decision influence. When a team has a high level of participation safety, members feel safe to share ideas, raise unpopular views or opinions, or speak their truth to one another. This reduces groupthink and social loafing, encourages breakthrough thinking, and can lead to a collaborative team and a collaborative organisational culture. There are countless case studies of major disasters (such as the Deepwater Horizon oil spill) where a culture of “only tell me the good news” prevented critical information from being raised that could have averted the issue. In fact, ‘communication’ (or lack of it) is probably the most commonly cited project failure factor.

Having said all that, while participation safety is critical, the ideas that team members put forward need to influence the direction and outcome of the team. A manager who says “my door is always open” but then ignores feedback creates dissonance among team members, because what is espoused is not practised. The simple fact of the matter is that a key element of peak performance is providing an environment safe enough for team members to speak their truths, to be rewarded for doing so, and for truth telling to actually influence direction.

Enabling Technology

Technology underpins all aspects of organisational systems and projects, and provides the means to generate leaps in the performance and capabilities of users and, more broadly, in team and organisational productivity. Technology at its best facilitates the delivery of timely, relevant information for decision making, co-ordination and collaboration. Thus it is critical that technology does not get in the way of delivering value. How often have you worked on a project where you have been forced to use technologies that stifle productivity, create frustration and reduce collaboration between team members? How often has that technology been SharePoint?

Enabling Process

How often have you said to yourself “I can’t believe I have to follow this braindead process”? Process is the glue that provides the rules of behaviour in delivering on goals and, like technology, underpins all aspects of organisational systems and projects and is a key part of performance and productivity. It is critical that process, like technology, is always driven by purpose and does not get in the way of delivering value. Inappropriate process can make a huge difference in how team members interact with stakeholders and each other.

Enabling Resources

Enabling resources is concerned with the financial, material and human input necessary to develop and sustain delivery of services. Put simply, even the best teams with the most compelling direction can falter if they are under-resourced. It is critical that sufficient funds, staff, materials and time are provided to get the job done.

Applications for the CALL model

The CALL model can be used in many ways, given its heritage of Hackman, Wilder and Duffield. Examples include:

  • A model for performing SharePoint governance health check/assessment
  • A model for assessing the makeup of a SharePoint team
  • A model for assessing the complexity of a proposed SharePoint solution
  • A model for assessing departmental readiness for SharePoint
  • A model for developing SharePoint Business Continuity planning

It is worthwhile noting that Hackman developed a team performance instrument called the Team Diagnostic Survey based around his 6 enabling conditions. Since the CALL model is so closely aligned with Hackman, it should be usable in a similar fashion. The same goes for the Wilder research on collaboration, which developed an instrument called the Collaboration Factors Inventory.

So given the source material, the CALL model also has applicability in the areas of:

  • A set of enabling conditions to establish in order to develop high performing teams
  • A set of enabling conditions for the successful collaborative delivery of projects
  • A focus area for the identification of risks (and opportunities) on organisational initiatives
  • A framework for the systematic capture of project lessons learnt
  • A framework for assessing change and other organisational initiatives
  • A governance maturity model
  • A knowledge management and organisational learning maturity model

Conclusion

The CALL model reflects a synthesis of three highly rigorous research efforts. All three seemed to gel really well when they were put into the melting pot and I was pleased with the result. In the next post, I will show you how I have used the CALL model when developing a SharePoint Business Continuity Strategy for a client. I’ll also talk about how I have used it in lessons learnt workshops.

Thanks for reading

 

Paul Culmsee

www.hereticsguidebooks.com

[Image: The Heretics Guide to Best Practices book cover]
