Consequences of complexity – the evilness of the SharePoint 2010 User Profile Service

A few months back I posted a relatively well-behaved rant about the ridiculously complex User Profile Service Application of SharePoint 2010. I think this component in particular epitomises SharePoint 2010’s awful combination of “design by committee” clunkiness and sheltered-from-the-real-world Microsoft product manager groupthink, which seems to rate success on the number of half-baked features packed in, as opposed to how well those features install, integrate with other products and function properly in real-world scenarios.

Now, truth be told, until yesterday I had an unblemished record with the User Profile Service – I was able to successfully provision it first time at every site I visited (and no, I did not resort to running it all as administrator). Of course, we all have Spence to thank for this with his rational guide. Nevertheless, I am seriously starting to think that I should write the irrational guide as a sort of bizarro version of Spence’s articles, which combines his rigour with some mega-ranting ;-).

So what happened to blemish my perfect record? Bloody Active Directory policies – that’s what.

In case you didn’t know, SharePoint uses a scaled-down, pre-release version of Forefront Identity Manager. Presumably the logic was to allow more flexibility by two-way syncing to various directory services, thereby saving the SharePoint team development time and effort, as well as being able to tout yet another cool feature to the masses. Of course, the trade-off that was overlooked is the insane complexity introduced as a result. I’m sure if you asked Microsoft’s support staff what they think of the UPS, they would tell you it has not worked out overly well. Whether that feedback has made its way back to the hallowed ground of the open-plan cubicles of SharePoint product development I can only guess. But I theorise that if Microsoft made their SharePoint devs accountable for providing front-line tech support for their components, they would suddenly understand why conspiracy-theorist support and infrastructure guys act the way they do.

Anyway, I had better suppress my desire for an all-out rant and tell you the problem and the fix. The site in question was actually a fairly simple set-up: a two-server farm and a single AD forest. About the only thing of significance beyond an absolutely stock-standard setup was that the Active Directory NetBIOS name did not match the Active Directory fully qualified domain name. But this is a well-known issue, well covered by TechNet and Spence. A quick bit of PowerShell goodness and some AD permission configuration sorts the issue.

Yet when I provisioned the User Profile Service Application and then tried to start the User Profile Synchronisation Service on the server (the big, scary step that strikes fear into practitioners), I hit the sadly common “stuck on starting” error. The ULS logs told me utterly nothing of significance – even when I turned the debug juice to full throttle. The ever-helpful Windows event logs showed me Event ID 3:

ForeFront Identity Manager,
Level: Error

.Net SqlClient Data Provider: System.Data.SqlClient.SqlException: HostId is not registered
at Microsoft.ResourceManagement.Data.Exception.DataAccessExceptionManager.ThrowException(SqlException innerException)
at Microsoft.ResourceManagement.Data.DataAccess.RetrieveWorkflowDataForHostActivator(Int16 hostId, Int16 pingIntervalSecs, Int32 activeHostedWorkflowDefinitionsSequenceNumber, Int16 workflowControlMessagesMaxPerMinute, Int16 requestRecoveryMaxPerMinute, Int16 requestCleanupMaxPerMinute, Boolean runRequestRecoveryScan, Boolean& doPolicyApplicationDispatch, ReadOnlyCollection`1& activeHostedWorkflowDefinitions, ReadOnlyCollection`1& workflowControlMessages, List`1& requestsToRedispatch)
at Microsoft.ResourceManagement.Workflow.Hosting.HostActivator.RetrieveWorkflowDataForHostActivator()
at Microsoft.ResourceManagement.Workflow.Hosting.HostActivator.ActivateHosts(Object source, ElapsedEventArgs e)

The most common cause of this message is the NETBIOS issue I mentioned earlier, but in my case chasing it proved fruitless. I also took Spence’s advice and installed the February 2011 cumulative update for SharePoint 2010, but to no avail. Every time I provisioned the UPS sync service, I received the above persistent error – many, many, many times. 🙁

For what it’s worth, forget googling the above error, because it is a bit of a red herring and the results will likely point you to the wrong places.

In my case, the key to the resolution lay in understanding my previously documented issue with the UPS and self-signed certificate creation. This time, I noticed that the certificates were successfully created before the above error happened, yet MIISCLIENT showed that no configuration had been written to Forefront Identity Manager at all. Then I remembered that the SharePoint User Profile Service Application talks to Forefront over HTTP on port 5725. As soon as I remembered that HTTP was the communication mechanism, I had a strong suspicion about where the problem was – I have seen this sort of crap before…

I wondered if some stupid proxy setting was getting in the way. Back in the halcyon days of SharePoint 2003, I hit a similar issue when scheduling SMIGRATE tasks: if the account used to run SMIGRATE was configured to use a proxy server, the task would fail. To find out if this was the case here, a quick run of the GPRESULT tool revealed that there was a proxy configuration script applied at the domain level for all users. We then logged in interactively as the farm account (given that to provision the UPS it needs to be an Administrator anyway, this was not a problem), disabled all proxy configuration via Internet Explorer, and tried again.

Blammo! The service provisions and we are cooking with gas! It was the bloody proxy server. Reconfigure group policy and all is good.


The moral of the story is this: any time Windows components communicate with each other via HTTP, there is always a chance that some AD-induced dumbass proxy setting might get in the way. If not that, then stateful security apps that inspect HTTP traffic, or even a corrupted proxy cache (as happened in this case). The ULS logs will never tell you much here, because the problem is not SharePoint per se, but the registry configuration enforced by policy.

So, to ensure that you are not affected by this, configure all SharePoint servers to be excluded from proxy access, or configure the SharePoint farm account not to use a proxy server at all. (Watch for certificate-revocation-related slowness if you do this, though.)

Finally, I called this post “consequences of complexity” because the root cause of this sort of problem is very tricky to identify. With so many variables in the mix, how the hell can people figure this sort of stuff out?

Seriously Microsoft, you need to adjust your measures of success to include resiliency of the platform!


Thanks for reading

Paul Culmsee


Australian SharePoint Conference Community Challenge – how we did it

I recently participated in the Australian and New Zealand community SharePoint conferences and had a blast. First up, I was given the opportunity to deliver the day-2 keynote at the Australian conference, where I spoke about SharePoint governance home truths. It received very positive feedback, and a lot of people told me that it really made them rethink their governance approach. In fact, in the New Zealand session, as I was going through some of the common mistakes people make, I could see people cringing as they realised they were guilty as charged. One attendee buried her head in her hands when I started talking about the “buffet of platitudes”. (What is the “buffet of platitudes”, you ask? Come to my class to find out! 🙂)

The community challenge in Australia was a real highlight. This was a new addition to the conference, where a group of conference attendees delivered a SharePoint solution for a not-for-profit organisation. WorkVentures was the organisation selected, and the challenge progressed over three sessions facilitated by SharePoint community leaders. Session one (Define and Design) was a business session which aimed to work through WorkVentures’ high-level requirements for an intranet, their aims for what they hoped it would achieve, and what they wanted included.

This post was written on the assumption that you are familiar with some of Seven Sigma’s methods. If not, then we suggest you stop and read a couple of foundational posts first – especially if these maps do not mean much to you.

The Importance of Goal Alignment…

Nick Hadlee was supposed to chair this define-and-design session, but was unable to get to the Australian conference due to the earthquake in Christchurch. As a result, I ended up inheriting the role, so I roped in Andrew Jolly to help me, because we have a lot in common and work in a similar way. User surveys had been conducted with WorkVentures staff and management, which gave some insight into potential focus areas for SharePoint. Even so, I had no way of knowing whether those potential focus areas made strategic sense. To resolve this, we examined WorkVentures’ 2009 Annual Report to understand their core purpose, strategic focus areas and various business units. After all, it is all well and good to develop some SharePoint functionality, but if you can’t see how that component helps achieve strategic objectives, how do you know it is the right thing to do?

The annual report proved to be a goldmine. It stated that WorkVentures had embarked on an enterprise improvement strategy before SharePoint and the community challenge were on the radar. This enterprise improvement plan, incorporating quality management, IT, HR and business strategy development, gave us the context to position SharePoint as an enabler that fitted within the plan.

Andrew wasn’t due to fly into Sydney until the evening before the conference, so the day before, Debbie Ireland and I visited WorkVentures on-site, meeting with the CEO, CFO and Marketing Director. The purpose of this visit was to ensure a shared understanding of how the SharePoint community challenge outputs aligned to the WorkVentures vision, purpose and strategic focus areas. From this conversation, which I mapped, some really interesting stories enabled them to pinpoint one of the key success factors for any SharePoint implementation at WorkVentures – “bridging silos”.

Ultimately, we identified four key areas of strategic focus for SharePoint that aligned to WorkVentures’ strategic goals. Below is a screenshot of the end-to-end alignment in map format. This map was used during the “define and design” conference session to help focus attendees on the purpose of SharePoint for this organisation, as well as noting the key areas we would have to do well to consider SharePoint a success.


Stories that led to the goal

Lawrence Luk, the CFO of WorkVentures, told Debbie and me several captivating stories that surfaced the bridging-silos area of focus. One interesting facet of WorkVentures was that staff from the whole organisation came together only once per year – at the Christmas party. This is because each WorkVentures “division” or “business unit” is in effect a separate mini-company, with different goals, customers, vertical markets and regulatory requirements. Thus the silo problem wasn’t a negative one in the sense of dysfunctional “culture” being to blame; it was simply that each business unit didn’t have a lot in common with the others. The silo effect was a by-product, not something driven by negative behaviours.

A great example of this was one particular business unit, Connect IT. It solicits organisations to donate old PCs, which provides skills-development opportunities for disadvantaged people by teaching them how to refurbish these PCs. The refurbished PCs are then sold at low cost. A KPI for this program is the number of organisations donating old PCs to WorkVentures to sustain Connect IT. Lawrence had the experience where WorkVentures’ financial auditors, who had been doing the books for the previous two years, asked him why they hadn’t been approached to donate PCs, as they had some to give. Lawrence realised that he had almost missed a great opportunity to help the Connect IT division achieve one of its key KPIs. Furthermore, the auditors should never have had to ask. Instead, all WorkVentures staff should have this core KPI instilled and internalised so that they can proactively seek out opportunities to help the other business units.

Another couple of interesting contextual facets illustrated that there were other forms of silo that went beyond a purely divisional basis:

  • Most back-office staff had never been to the Campbelltown office, where all of the “coal face” work took place with the community.
  • English was a second language for many staff.
  • Not all staff had their own PCs.

These stories catalysed the conversation towards many other examples of missed opportunities, where one business unit had the means to make a massive difference to the results of another. On reflection, it was realised that the nature of WorkVentures’ business units, being so independent of each other, inevitably had a silo effect. There was a lack of organisation-wide awareness of each unit’s core KPIs, hence bridging (not breaking) these silos became a key theme. If SharePoint was to have a long-lasting, successful legacy, then it had to play a part in addressing this issue.

The define and design session live…

From there, with invaluable help from Andrew Jolly, we planned and then executed the requirements session with a conference audience of around one hundred people. We split the session up into several areas and the map below shows how we structured it.

After Microsoft did their intro, Debbie explained the context of the community challenge via a short PowerPoint presentation. I then took the chair and explained the vision and areas of focus map (the image above) and stressed to the audience that they were going to be participating in this session as well. I also stressed that no matter what solutions or ideas they came up with, they had to justify them against the four key focus areas, which I went through.


Then we got down to business: I dialogue mapped, with Andrew and me co-facilitating. We decided to focus people’s attention on the core goal of bridging silos as a topic area in itself, and asked the audience how SharePoint could indeed bridge silos. We utilised three of the examples that Lawrence gave us and then leveraged the wisdom of the (large) crowd to solicit ideas. Below is the dialogue map that shows the richness of this discussion (click to enlarge). You will see in this map that for each story told to us by Lawrence, we asked the question “How could we mitigate this with SharePoint?”. Asking the question this way helped the audience to see SharePoint as an enabler to a greater end – and not a tool looking for a problem to solve.



Given that we only had around 45 minutes to work with, Andrew and I could only spend around 15 minutes on the bridging silos area. But the map above shows that a lot of very valuable rationale from the audience was captured. The real benefit though was focusing the audience onto the broader goals and how SharePoint could enable them. This was critical to do, because now we had to switch focus from the lofty world of goal alignment to focusing on how SharePoint building blocks could be used to achieve specific ends.

We examined how SharePoint could augment the existing newsletter-based method of disseminating information within WorkVentures. We showed the audience what a sample WorkVentures newsletter looked like and reviewed some of the key contextual aspects of newsletters within WorkVentures in terms of their creation, management, reach and format. We reminded the audience about the importance of bridging silos and then called for ideas as to how SharePoint could improve the dissemination of news. What was particularly great about this session was that audience members began to relate SharePoint ideas back to the key focus areas and identify some of the governance aspects that would be required to make them work.

For example, if you look at the map below (click to enlarge), one of the ideas for the newsletter was a fairly technical one: leveraging Word Automation Services to extract list or story items and create a PDF. At first glance one might think “wow, that’s fairly heavy” (not to mention quite nerdy), but the justification for this idea was that it would still cater for those WorkVentures users who do not have a PC, and therefore no access to the portal. Another idea was to have back-office staff create the content, on the basis that in doing so, they would get a better feel for the coal-face issues they do not normally see. When you think about it, this idea is not about SharePoint at all, but more a strategy for how SharePoint should be adopted and the accountabilities for doing so (i.e. a governance approach!).

The key point here is that in both of these examples, audience members were clearly relating their ideas back to the previously established goals, which in turn were aligned to the WorkVentures vision, purpose and key strategic focus areas. Not bad for a couple of hours’ work, eh? 🙂


With the little time we had left, we also looked at site navigation and structure, where the audience resolved that WorkVentures would be best served by a hybrid navigation model that was primarily functionally driven (i.e. task-based navigation), but then divided into divisional areas (as opposed to a navigation model driven purely by organisational structure).

As you can see below, we made a point of always showing the four areas of focus for SharePoint overall, to ensure that decisions made were informed by them.



I have to say that given the timeframe and constraints, I think we did a great job of developing a shared vision for SharePoint, showing how it fitted into WorkVentures’ organisational and strategic context, and then focusing a diverse audience into looking at SharePoint building blocks through that lens. The dialogue maps were very rich, with some terrific ideas, and WorkVentures staff were thrilled to see the alignment of SharePoint to their strategic goals.

I use similar methods for non-IT projects too, and I think that if we had had a week to work with WorkVentures, we would have created something really special. Nevertheless, from my point of view the community challenge is a terrific idea, I enjoyed being a part of it, and I have to offer special thanks to Debbie and Andrew in particular for helping to make this a really great mini-engagement. Hopefully we can do it all again next year.

Thanks for reading

Paul Culmsee


MOSS World

Wow, Christian Buckley doesn’t waste any time. First up, watch this video then I will explain:

Now, I am writing this at 8am, and around 10pm last night I recorded this vid with him; it already has photos, a montage and credits. I don’t think he slept.

The story behind this goes back a bit. When Dux first told me he was going to do a rap – I think two years ago or more, at a Best Practices Conference – and showed me the lyrics to “SharePoint is Nice Nice Baby”, I thought “no, that will never work because it’s framed too nicely”. (Of course I was completely wrong, and Dux stole the show and continues to do so.)

Anyway, to me Mad World (the Gary Jules version) seemed the perfect song for SharePoint because it gave me the right sort of subtlety to work with my cynical sense of humour. Much of the lyrics were written years back, and I sent them to Ruven Gotz and Peter Serzo, who both sent back some mods. Somehow by chance the subject came up again in Wellington, New Zealand with Christian Buckley. Christian happens to be a great singer who has played in bands in the past. Our hotel lobby had an old piano, so the scene was set. A bit of Lennon/McCartney-esque collaboration (Culmsee/Buckley, of course) on the lyrics and we were ready to go.

We did it in one take, with Mark Miller providing magnificent cinematography (the flowers fade-in and fade-out was sheer gold).

In case you are interested, here are the full lyrics as far as I remember them… enjoy!


All around me are new interfaces

Shared workspaces, my-site places

Run by admins who can’t tie their laces

Page loads snail pace, backups no trace


Out of box is just so boring, i think its kind of sad

My dreams of coding c-sharp are the best I ever had

I find it hard to tell you, the root hive’s really toast

When coders put in SharePoint its a really really

Mad world, Mad World


Metrosexuals try to make it facebook

Where’s my hairspray? where’s my hairspray?

Using designer to create a workflow

No-one follows, no-one follows


Out of box is just so boring, i think its kind of sad

My dreams of coding c-sharp are the best I’ve ever had

I find it hard to tell you, the root hive’s really toast

When coders put in SharePoint its a really really

Mad world, Mad World


Content database is like Godzilla

8 hour backups, 8 hour backups

Mess with settings inside web.config

Where’s my homepage? where’s my homepage?


Out of box is just so boring, i think its kind of sad

Code monkeys blame their admins but their memory leaks suck bad

I find it hard to tell you, the sequel’s out of space

When tech-geeks put in SharePoint its a very very

Mad world, MOSS world


How to use Charlie Sheen to improve your estimating…

Monte Carlo simulations are cool – very cool. In this post I am going to try to out-do Kailash Awati in explaining what they are. You see, I am one of those people whose eyes glaze over the minute you show me any form of algebra. Kailash recently wrote a post explaining Monte Carlo to the masses, but he went and used a mathematical formula (he couldn’t help himself), and thereby lost me totally. Mind you, he used the example of a drunk person playing darts. This I did like a lot, and it gave me the inspiration for this post.

So here is my attempt to explain what Monte Carlo is all about and why it is so useful.

I have previously stated that vaguely right is better than precisely wrong. If someone asks me to make an estimate on something, I offer them a ranged estimate, based on my level of certainty. For example, if you asked me to guess how many beers per day Charlie Sheen has been knocking back lately, I might offer you an estimate of somewhere between 20 and 50 pints. I am not sure of the exact number (and besides, it would vary on a daily basis anyway), so I would rather give you a range that I feel relatively confident with than a single answer that is likely to be completely off base.

Similarly, if you asked me how much a SharePoint project to “improve collaboration” would cost, I would do a similar thing. The difference between SharePoint success and Charlie Sheen’s ability to keep a TV show afloat is that with SharePoint, there are more variables to consider. For example, I would have to make ranged estimates for the cost of:

  • Hardware and licensing
  • Solution envisioning and business analysis
  • Application development
  • Implementation
  • Training and user engagement

Now here is the problem. A CFO or similar cheque-signer wants certainty. Thus, if you give them a list of ranged estimates, they are not going to be overly happy about it. For a start, any return-on-investment analysis is, by definition, going to have to pick a single value from each of your estimates to “run the numbers”. If we used the lower estimate (and therefore lower cost) for each variable, we would inevitably get a great return on investment. If we used the upper limit of each range, we would get a much costlier project.

So how do we reconcile this level of uncertainty?

Easy! Simply run the numbers lots and lots (and lots) of times – say, 100,000 times – picking a random value for each variable that goes into the estimate. Count the number of times that your simulation yields a positive ROI compared to a negative one. Blammo – that’s Monte Carlo in a nutshell. It is worth noting that in my example, we are assuming that all values between the upper and lower limits are equally likely. Technically this is called a uniform distribution – but we will get to the distribution thing in a minute.

As a very crappy, yet simple example, imagine that if SharePoint costs over $250,000 it will be considered a failure. Below are our ranged estimates for the main cost areas:

| Item | Lower Cost | Upper Cost |
|---|---|---|
| Hardware and licensing | $50,000 | $60,000 |
| Solution envisioning and business analysis | $20,000 | $70,000 |
| Application development | $35,000 | $150,000 |
| Implementation | $25,000 | $55,000 |
| Training and user engagement | $10,000 | $100,000 |
| Total | $140,000 | $435,000 |

If you add up my lower estimates we get a total of $140,000 – well within our $250,000 limit. However if my upper estimates turn out to be true we blow out to $435,000 – ouch!

So why don’t we pick a random value from each item, add them up, and then repeat the exercise 100,000 times? Below I have shown 5 of the 100,000 simulations.

| Item | Simulation 1 | Simulation 2 | Simulation 3 | Simulation 4 | [snip] | Simulation 100,000 |
|---|---|---|---|---|---|---|
| Hardware and licensing | 57,663 | 52,024 | 53,441 | 58,432 | … | 51,252 |
| Solution envisioning and business analysis | 21,056 | 68,345 | 42,642 | 37,456 | … | 64,224 |
| Application development | 79,375 | 134,204 | 43,566 | 142,998 | … | 103,255 |
| Implementation | 47,000 | 25,898 | 25,345 | 51,007 | … | 35,726 |
| Training and user engagement | 46,543 | 73,554 | 27,482 | 87,875 | … | 13,000 |
| Total Cost | 251,637 | 354,025 | 192,476 | 377,768 | … | 267,457 |

So according to this basic simulation, only 2 of the 5 runs shown came in below $250,000 and were therefore a success according to my ROI criteria. That means we were successful only 40% of the time (2/5 = 0.4). By that measure, this is a risky project (and we haven’t taken discounting for risk into account either).
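For the curious, this whole exercise can be sketched in a few lines of Python. This is my own minimal sketch, not part of the original workings: the ranges come from the table above, every value in each range is treated as equally likely (a uniform distribution), and the seed is fixed so the run is repeatable.

```python
import random

# Ranged estimates (lower, upper) in dollars, taken from the table above
estimates = {
    "Hardware and licensing": (50_000, 60_000),
    "Solution envisioning and business analysis": (20_000, 70_000),
    "Application development": (35_000, 150_000),
    "Implementation": (25_000, 55_000),
    "Training and user engagement": (10_000, 100_000),
}

BUDGET = 250_000   # anything over this counts as a failure
RUNS = 100_000

def simulate(runs=RUNS, seed=42):
    rng = random.Random(seed)
    successes = 0
    for _ in range(runs):
        # Pick a random value from each range (uniform) and total them
        total = sum(rng.uniform(lo, hi) for lo, hi in estimates.values())
        if total <= BUDGET:
            successes += 1
    return successes / runs

print(f"Fraction of runs under budget: {simulate():.1%}")
```

Interestingly, over the full 100,000 runs the success rate comes out at roughly one in five, rather than the 40% suggested by the five hand-picked samples above – which is rather the point of running the numbers many times.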

“That’s it?”, I hear you say? Essentially, yes. All we are doing is running the numbers over and over again and then looking at the patterns that emerge. But that is not the key bit. The most important thing for understanding Monte Carlo properly is probability distributions. This is the bit that people mess up, and the bit where people are far too quick to jump into mathematical formulae.

But random is not necessarily random

Let’s use Charlie Sheen again to understand probability distributions. If we were to consider the amount of crack he smokes on a daily basis, we could conclude it is between 0 grams and 120 grams. The 120g upper limit is based on what Charlie Sheen could realistically tolerate (which is probably three times the amount of normal humans). If we plotted this over time, it might look like the example below (which covers the last 31 days):


So to make a best guess at the amount he smokes tomorrow, should we pick random values between 0 and 120 grams? I would say not. Based on the chart above, you would be more likely to choose values from the upper end of the range (lately he has really been hitting things hard, and we all know what happens when he hangs out with starlets from the adult entertainment industry).

That’s the trick to understanding a probability distribution. If we simply chose a truly random value, it would likely not be representative of the recent range of values. We still pick a value from a range of possibilities, but some values are more likely than others. We are not truly picking random values at all.

The most common probability distribution people use is the old bell curve – you probably saw it in high school. For many variables that go into a Monte Carlo simulation, it is a perfectly fine distribution. For example, the average height of a human male might be 5 foot 6. Some people will be taller and some shorter, but you would find more people close to this mid-point than far away from it – hence the bell shape.

Let’s see what Charlie Sheen’s distribution looks like. Since we have a range of values for each day’s crack usage, let’s divide usage into bands of grams and count how many days fall into each. The figures are below:

| Amount | Daily occurrences | % |
|---|---|---|
| 0-10g | 16 | 50% |
| 10-20g | 6 | 19% |
| 20-30g | 4 | 13% |
| 30-40g | 1 | 3% |
| 40-50g | 1 | 3% |
| 50-60g | 0 | 0% |
| 60-70g | 2 | 6% |
| 70-80g | 1 | 3% |
| 80-90g | 0 | 0% |
| 90-100g | 1 | 3% |
| 100-120g | 0 | 0% |

As you can see, 50% of the time Charlie was not hitting the white stuff particularly hard – there were 16 days where Charlie ingested less than 10 grams. What sort of curve does this make? The picture below illustrates it.


Interesting, huh? If we chose random numbers according to this probability distribution, then 50% of the time we would get a value between 0 and 10 grams of crack being smoked or shovelled up his nasal passages. Yet when we look at the trend of the last 10 days, one could reasonably expect tomorrow’s value to be significantly higher than zero. In fact, there were no occurrences at all of less than 10 grams in a single day in the last 10 days.
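If you wanted to sample “random” values that respect an empirical distribution like this one, a simple approach is to weight each band by how often it was observed. Here is a minimal Python sketch of my own (the band midpoints are a simplification I have chosen, and zero-count bands are omitted):

```python
import random

# (band midpoint in grams, days observed) from the table above;
# bands with zero observations are left out
bands = [(5, 16), (15, 6), (25, 4), (35, 1), (45, 1),
         (65, 2), (75, 1), (95, 1)]

midpoints = [m for m, _ in bands]
weights = [days for _, days in bands]

rng = random.Random(1)
# choices() picks each midpoint in proportion to its weight, so about
# half of all samples land in the 0-10g band (16 of the 32 tabulated days)
samples = rng.choices(midpoints, weights=weights, k=10_000)

share_under_10g = sum(s < 10 for s in samples) / len(samples)
print(f"Share of samples under 10g: {share_under_10g:.0%}")
```

This is the same idea as the table: the sampling is still random, but the weights make some values far more likely than others.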

Now let’s change the date range and instead look at Charlie’s last 9 days of crack usage. This time the distribution looks a bit more realistic based on recent trends. Since he has not been well behaved lately, there were no days at all where his crack usage was under 10 grams. In fact, 4 of the 9 days were over 60 grams.

| Amount | Daily occurrences | % |
|---|---|---|
| 0-10g | 0 | 0% |
| 10-20g | 3 | 33% |
| 20-30g | 1 | 11% |
| 30-40g | 0 | 0% |
| 40-50g | 1 | 11% |
| 50-60g | 0 | 0% |
| 60-70g | 2 | 22% |
| 70-80g | 1 | 11% |
| 80-90g | 0 | 0% |
| 90-100g | 1 | 11% |
| 100-120g | 0 | 0% |


This time, using a different set of reference points (9 days instead of 31), we get very different “randomness”. This gets to one of the big problems with probability distributions, which Kailash tells me is called the reference class problem: how do you pick a representative sample? In some situations, completely random might actually be much better than a poorly chosen distribution.

Back to SharePoint…

So imagine that you have been asked to estimate SharePoint costs and you only have vague, ranged estimates. Let’s also assume that for each of the variables that needs an estimate, you have some idea of its distribution. For example, if you decide that SharePoint hardware and licensing really could be utterly random between $50,000 and $60,000, then pick a truly random value (a uniform distribution) from the range with each iteration of the simulation. But if you decide that it’s much more likely to come in at $55,000 than at $50,000, then your “random” choice should land closer to the middle of the range more often than not – a normal distribution.
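To make that contrast concrete, Python’s standard library happens to offer both options: `random.uniform` for the “utterly random” case, and `random.triangular` as a simple stand-in for a mid-weighted, bell-ish distribution. (I am using the triangular distribution purely because it is the simplest built-in way to favour the middle of a range; this is my own illustration, not part of the original post.)

```python
import random
from statistics import mean

rng = random.Random(7)
N = 100_000

# "Utterly random": every value between $50k and $60k is equally likely
uniform_draws = [rng.uniform(50_000, 60_000) for _ in range(N)]

# Mid-weighted: $55k is the most likely outcome and the tails thin out
tri_draws = [rng.triangular(50_000, 60_000, 55_000) for _ in range(N)]

def share_in_middle(draws, lo=53_000, hi=57_000):
    # What fraction of draws landed within $2k of the midpoint?
    return sum(lo <= d <= hi for d in draws) / len(draws)

print(f"Uniform mean:    ${mean(uniform_draws):,.0f}")
print(f"Triangular mean: ${mean(tri_draws):,.0f}")
print(f"Uniform draws in $53k-$57k:    {share_in_middle(uniform_draws):.0%}")
print(f"Triangular draws in $53k-$57k: {share_in_middle(tri_draws):.0%}")
```

Both sets of draws average around $55,000, but the triangular draws cluster towards the middle of the range (roughly 64% land within $2,000 of the mode, versus about 40% for uniform) – which is exactly the behaviour described above.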

So the moral of the story? Think about the sort of distribution that each variable follows. It’s not always a bell curve, and it’s not completely random either. In fact, you should strive for a distribution that is the closest representation of reality. Kailash tells me that a distribution “should be determined empirically – from real data – not fitted to some mathematically convenient fiction (such as the Normal or Uniform distributions). Further, one should be absolutely certain that the data is representative of the situation that is being estimated.”

Since SharePoint estimates often carry significant uncertainty, a Monte Carlo simulation is a good way to run the numbers – especially if you want to see how many variables with different probability distributions combine to produce a result. Run the simulation enough times and you will produce a new probability distribution that represents the combination of all of these variables.

Just remember though – Charlie Sheen reliably demonstrates that things are often not predictable and that past values are no reliable indicator of future values. Thus, a simulation is only as good as your probability distributions in the first place.


Thanks for reading


Paul Culmsee


P.S. A huge thanks to Kailash for checking this post, offering some suggestions and making sure I didn’t make an arse of myself.
