Help me visualise the pros and cons of hybrid SharePoint 2013…


Like it or not, there is a tectonic shift going on in the IT industry right now, driven primarily by the availability of a huge variety of services hosted in the cloud. Over the last few years, organisations have increasingly procured services that are not hosted locally, much to the chagrin of many a server-hugging IT guy who, understandably, sees various risks in entrusting your fate to someone else.

We all know that Microsoft had a big focus on trying to reach feature parity between on-premises SharePoint 2013 and Office 365. In other words, with cloud computing as a centrepiece of their strategy, Microsoft’s aim with SharePoint 2013 was for stuff that works on premises but also works on Office 365 without too much modification. While SharePoint 2013 made significant inroads into meeting this goal (apps model developers might beg to differ), the big theme to emerge was that feature parity was a relatively small part of the puzzle. What has happened since the release of SharePoint 2013 is that many organisations are much more interested in hybrid scenarios. That is, utilising on-premises SharePoint along with cloud-hosted SharePoint and its associated capabilities like OneDrive and Office Web Applications.

So while it is great that SharePoint Online can do the same things as on-prem, it all amounts to naught if the two cannot integrate well. Without decent integration, we are left with a lot of manual work to maintain what is effectively two separate SharePoint farms, and we all know what excessive manual maintenance brings over time…

Microsoft to their credit have been quick to recognise that hybrid is where the real action is at, and have been addressing this emerging need with a ton of published material, as well as adding new hybrid functionality with service packs and related updates. But if you have read the material, you can attest that there is a lot of it and it spans many topic areas (authentication alone is a complex area in itself). In fact, the sheer volume and pace of material released by Microsoft show that hybrid is a huge and very complex topic, which begs a really critical question…

Where are we now at with hybrid? Is it a solid enough value proposition for organisations?

This is a question that a) I might be able to help you answer and b) you can probably help me answer…

Visualising complex topics…

A few months back, I started issue mapping all of the material I could get my hands on related to hybrid SharePoint deployments. If you are not aware, Issue mapping is a way of visualising rationale and I find it a brilliant personal learning tool. It allows me to read complex articles and boil them down to the core questions, answers, pros and cons of the various topics. The maps are easy to read for others, and they allow me to make my critical thinking visible. As a result, clients also like these maps because they provide a single integrated place where they can explore topics in an engaging, visual way, instead of working their way through complex whitepapers.

If you wish to jump straight in and have a look around, click here to access my map on Hybrid SharePoint 2013 deployments. You will need to sign in using a Facebook or Gmail ID to do so. But be sure to come back and read the rest of this post, as I need your help…

But for the rest of you, if you are wondering what my hybrid SharePoint map looks like without jumping straight in, check out the screenshots below. The tool I am using is called Glyma (pronounced "glimmer"), which allows these maps to be developed and consumed using SharePoint itself. First up, we have a very simple map, showing the topic we are discussing.

[Screenshot: a simple map showing the topic under discussion]

If you click the plus sign next to the “Hybrid SharePoint deployments” idea node, we can see that I mapped all of the various hybrid pros and cons I have come across in my readings and discussions. Given that hybrid SharePoint is a complex topic, there are lots of pros and cons as shown in the partial image below…

[Screenshot: a partial view of the expanded pros and cons]

Many of the pros and cons can be expanded further, which delves deeper into the topics. A single click expands one node level, and a double click expands the entire branch. To illustrate, consider the image below. One of the cons is around the many search-related caveats with hybrid that can easily trip people up. I have expanded the con node and the sub question below it. Also notice that one of the idea nodes has an attachment icon. I will get to that in a moment…

[Screenshot: the expanded search-related con node and the sub question beneath it]

As I mentioned above, one of the idea nodes titled “SPO search sometimes has delays on how long it takes for new content to appear in the index” has an attachment icon as well as more nodes below it. Let’s click that attachment icon and expand that node. It turns out that I picked this up when I read Chris O’Brien’s excellent article, and so I have attached his original article to that node. Now you can read the full detail of his article for yourself, as well as understand how his article fits into a broader context.

[Screenshot: Chris O’Brien’s article embedded in the attached node]

It is not just written content either. If I move further up the map, you will see some nodes have videos tagged to them. When Microsoft released the videos from the 2014 Vegas conference, I found all sorts of interesting nuggets of information that were not in the whitepapers. Below is an example of how I tagged one of the Vegas videos to one of my nodes.

[Screenshot: a Vegas conference video tagged to one of the nodes]

 

A call to action…

SharePoint hybrid is a very complex topic and right now has a lot of material scattered around the place. This map allows people, both technical and non-technical, to grasp the issue in a more strategic, bigger-picture way, while still providing the necessary detail to aid implementation.

I continually update this map as I learn more about this topic from various sources, and that is where you come in. If you have had to work around a curly issue, or if you have had a massive win with a hybrid deployment, get in touch and let me know about it. It can be a reference to an article, a Skype conversation or anything else; the Glyma system can accommodate many sources of information.

More importantly, would you like to help me curate the map on this topic? After all, things move fast and the SharePoint community rarely stands still. So if you are up to speed on this topic or have expertise to share, get in touch with me. I can give you access to this map to help with its ongoing development. With the right meeting of the minds, this map could turn into an incredibly valuable information resource for a great many people.

So get in touch if you want to put your expertise out there…

 

Thanks for reading

 

 

Paul Culmsee




Confessions of a (post) SharePoint architect: The dangers of dial tone governance…

This entry is part 8 of 10 in the series confessions

Hi all and welcome to the next exciting instalment of my confessions from my work as a SharePoint architect and beyond. This is the eighth post and my last for 2012, so I will get straight into it.

To recap, along the way we have examined five f-laws so far.

Now, as a preamble to today’s mini-rant, I need to ‘fess up. I know this might come as a shock, but there was once a time when I was not the sweet, kind-hearted, gentle soul who pens these articles. In my younger days, I used to judge my self-worth on my level of technical knowledge. As a result, I knew my stuff, but was completely oblivious to how much of a pain in the ass I was to everyone but geeks who judged themselves similarly. Met anyone like that in IT?

This brings me onto my next SharePoint governance f-law – one that highlights a common blind spot that many IT people have in their approach to SharePoint governance.

F-Law 6: Geeks are far less important than they think they are

All disciplines and organisational departments have a particular slant on reality that places them at the centre of that reality. If this were not true, then departments would not spend so much time bitching about other departments and I would have no Dialogue Mapping work. The IT department is no better or worse in this regard than any other department, except that the effects of their particular slant on reality can be more pronounced and far-reaching for everyone else. Why? Because the IT slant on reality sometimes looks like a version of Neo from The Matrix. Many, if not most, people in IT have a little Neo inside of them.

[Image: Neo from The Matrix]

We all know Neo – an uber hero. He is wise, blessed with super powers, can manipulate your very reality and is a master of all domains of knowledge. Neo is also your last hope because if he goes down, we all go down. Therefore, everything Neo does – no matter how over the top or what the consequences are – is necessary to save the world from evil.

All of the little Neos in IT have a few things in common with bullet stopping big Neo above. Firstly, little Neo has also been entrusted with ensuring that the environment is safe from the forces of evil. Secondly, Little Neo can manipulate the reality that everybody else experiences. And finally, little Neo is often the last hope when things go bad. But that is where the similarities end because big Neo has two massive advantages over little Neo. First, big Neo was a master of a lot of domains of knowledge because he had the convenience of being able to learn any new subject by downloading it into his brain. Little Neo does not have this convenience, yet many little Neos still think they are all-knowing and wise. Secondly, big Neo was never mentally scarred from a really bad tequila bender…

Bad tequila bender? What the…

Never again…

Years ago when I was young and dumb, I was at a party drinking some tequila using the lemon and salt method. My brother-in-law thought it would be hilarious to switch my tequila shots with vodka double shots. Unfortunately for me, I didn’t notice because the lemon and salt masked the taste. I downed a heap of vodkas and the net result was not pretty at all. Although I wasn’t quite as unfortunate as the guy in the picture below, I wasn’t that far off. As a result, to this day I cannot bring myself to drink tequila or vodka, and the smell of it makes me feel sick with painful memories best left suppressed.

[Image: the aftermath of a serious drinking session]

I’m sure many readers can relate to a story like this. Most people have had a similar experience from an alcohol bender, eating a dodgy oyster or accidentally drinking tap water in a place like Bali. So take a moment to reflect on your absolute worst experience until you feel clammy and queasy. Feeling nauseous? Well guess what – there is something even worse…

Anyone who has ever worked in a system administrator role or any sort of infrastructure support will know the feeling of utter dread when the after-hours pager goes off, alerting you to some sort of problem with the IT infrastructure on which the business depends. Like many, I have lived through disaster recovery scenarios and let me tell you – they are not fun. Everyone turns to little Neo to save the day. It is high pressure and stressful trying to get things back on track, with your boss breathing down your neck, knowing that the entire company is severely degraded until you get things back online.

Now while that is bad enough, the absolute nightmare scenario for every little Neo in IT is having to pick up the pieces of something not of their doing in the first place. In other words, somehow a non-production system morphed into production and nobody bothered to tell Little Neo. In this situation, not only is there the pressure to get things back as quickly as possible, but Little Neo has no background knowledge of the system being recovered, has no documentation on what to do, never backed it up properly and yet the business expects it back pronto.

So what do you expect will happen in the aftermath of a situation like the one I described above? Like my aversion to tequila, little Neo will develop a pathological desire to avoid reliving that sort of pain and stress. It will be an all-consuming focus, overriding or trivialising other considerations. Governance for little Neo is all about avoiding risk and, just like big Neo, any actions – no matter how over the top or what the consequences are – will be deemed necessary to ensure that risk is mitigated. Consequently, a common characteristic of lots of little Neos is the classic conservative IT department that defaults to “No” for pretty much any question that involves introducing something new. Accordingly, governance material will abound with service delivery aspects such as lovingly documented physical and logical architecture, performance testing regimes, defined universal site templates, defined security permissions/policies, allowed columns, content types and branding/styling standards.

Now all of this is nice and needs to be done. But there is a teeny problem. This quest to reduce risk has the opposite effect. It actually increases it because little Neo’s notion of governance is just one piece of the puzzle. It is the “dial tone” of SharePoint governance.

The thing about dial tone…

What is the first thing you hear when you pick up the phone to make a call? The answer of course is dial tone.

Years ago, Ruven Gotz asked me if I had ever picked up the phone, heard dial tone and thought “Ah, dial tone… Those engineers down at the phone company are doing a great job. I ought to bake them a cake to thank them.” Of course, my answer was “No” and if anyone ever answered “Yes” then I suspect they have issues.

This highlights an oft-overlooked issue that afflicts all Neos. Being a hero is a thankless job. The reality is that the vast majority of the world could not care less that there is dial tone because it is expected to be there – a minimum condition of satisfaction that underpins everything else. In fact, the only time they notice dial tone is when it’s not there.

Yet, when you look at the vast majority of SharePoint governance material online, it could easily be described as “dial tone governance.” It places the majority of focus on the dial tone (service delivery) aspects of SharePoint and as a result, de-emphasises much more important factors of governance. Little Neo, unfortunately, has a governance bias that is skewed towards dial tone.

Keen eyed readers might be thinking that dial tone governance is more along the lines of what quality assurance is trying to do. I agree. Remember in part 2 of this series, I explained that the word ‘govern’ means to steer. We aim to steer the energy and resources available for the greatest benefit to all. Assurance, according to the ISO9000 family of standards for quality management, provides confidence in a product’s suitability for its intended purpose. It is a set of activities intended to ensure that requirements are satisfied in a systematic, reliable fashion. Dial tone governance is all about assurance, but the key word for me in the previous sentence is “intended purpose.”

Dial tone governance is silent on “intended purpose”, which provides opportunities for platitudes to fester and for governance to become a self-fulfilling prophecy.

and finally for 2012…

So, all of this leads to a really important question. If most people do not care about dial tone governance, then what do they care about?

As it happens, I’m in a reasonable position to be able to answer that question as I’ve had around 200 people around the world do it for me. This is because in my SharePoint Governance and Information Architecture Class, the first question I ask participants is “What is the hardest thing about SharePoint delivery?”

The question makes a lot of sense when you consider that the hardest bits of SharePoint usually translate to the highest risk areas for SharePoint. Accordingly, governance efforts should be focused in those areas. So in the next post in this series, I will take you through all the answers I have received to this question. This is made easier because I dialogue mapped the discussions, so I have built up a nice corpus of knowledge that we can go through and unpack the key issues. What is interesting about the answers is that no matter where I go, or whatever the version of SharePoint, the answers I get have remained extremely consistent over the years I have run the class.

Thanks for reading…

 

Paul Culmsee

p.s I am on vacation for all of January 2013 so you will not be getting the next post till early Feb




The cloud isn’t the problem–Part 6: The pros and cons of patriotism

This entry is part 6 of 6 in the series Cloud

Hi all and welcome to my 6th post on the weird and wonderful world of cloud computing. The recurring theme in this series has been to point out that the technological aspects of cloud computing have never really been the key issue. Instead, I feel it is everything else around the technology, ranging from immature process, through to the effects of the industry shakeout and consolidation, through to the adaptive change required for certain IT roles. To that end, in the last post we had fun at the expense of server huggers and the typical defence mechanisms they use to scare the rest of the organization into fitting into their happy-place world of in-house managed infrastructure. In that post I noted how you can tell an IT FUD defence because risk-averse IT will almost always try to use their killer argument up-front to bury the discussion. For many server huggers or risk-averse IT, the killer defence is the US Patriot Act issue.

Now just in case you have never been hit with the “…ah but what about the Patriot Act?” line and have no idea what the Patriot Act is all about, let me give you a nice metaphor. It is basically a legislative version of the “Men in Black” movies. Why Men in Black? Because in those movies, Will Smith and Tommy Lee Jones had the ability to erase the memories of anyone who witnessed any extra-terrestrial activity with that silvery little pen-like device. With the Patriot Act, US law enforcement now has a similar instrument. Best of all, theirs doesn’t need batteries – it is all done on paper.

[Image: the Men in Black memory eraser]

In short, the Patriot Act provides a means for U.S. law enforcement agencies to seek a court order allowing access to the personal records of anyone without their knowledge, provided that it is in relation to an anti-terrorism investigation. This act applies to pretty much any organisation that has any kind of presence in the USA, and the rationale behind introducing it was to make it much easier for agencies to conduct terrorism investigations and better co-ordinate their efforts. After all, in the reflection and lessons learnt from the 9/11 tragedy, the need for better inter-agency co-ordination was a recurring theme.

The implication of this act for cloud computing should be fairly clear. Imagine our friendly MIBs Will Smith (Agent J) and Tommy Lee Jones (Agent K) bursting into Google’s headquarters, all guns blazing, forcing them to hand over their customers’ data. Then when Google staff start asking too many questions, they zap them with the memory eraser gizmo. (Cue Tommy Lee Jones stating “You never saw us and you never handed over any data to us.”)

Scary huh? It’s the sort of scenario that warms the heart of the most paranoid server hugger, because surely no-one in their right mind could mount a credible counter-argument to that sort of risk to the confidentiality and integrity of an organisation’s sensitive data.

But at the end of the day, cloud computing is here to stay and will no doubt grow. Therefore we need to unpack this issue and see what lies behind the rhetoric on both sides of the debate. Thus, I decided to look into the Patriot Act a bit further to understand it better. Of course, it should be clear that I am not a lawyer, and these are just my own opinions from my research and synthesis of various articles, discussion papers and interviews. My personal conclusion is that all the hoo-hah about the Patriot Act is overblown. Yet in stating this, I have to also state that we are more or less screwed anyway (and always were). As you will see later in this post, there are great counter-arguments that pretty much dismantle any anti-cloud arguments that are FUD based, but be warned – in using these arguments, you will demonstrate just how much bigger this thing is than cloud computing and get a sense of the broader scale of the risk.

So what is the weapon?

The first thing we have to do is understand some specifics about the Patriot Act’s memory erasing device. Within the vast scope of the act, the two areas of greatest concern in relation to data are the National Security Letter and the Section 215 order. Both provide authorities access to certain types of data and I need to briefly explain them:

A National Security Letter (NSL) is a type of subpoena that permits certain law enforcement agencies to compel organisations or individuals to provide certain types of information like financial and credit records, telephone and ISP records (Internet searches, activity logs, etc). Now NSLs existed prior to the Patriot Act, but the act loosened some of the controls that previously existed. Prior to the act, the information being sought had to be directly related to a foreign power or the agent of a foreign power – thereby protecting US citizens. Now, all agencies have to do is assert that the data being sought is relevant in some way to any international terrorism or foreign espionage investigation.

Want to see what an NSL looks like? Check out this redacted one from Wikipedia.

A Section 215 Order is similar to an NSL in that it is an instrument that law enforcement agencies can use to obtain data. It is also similar to NSLs in that it existed prior to the Patriot Act – except back then it was called a FISA Order, named after the Foreign Intelligence Surveillance Act that enacted it. The type of data available under a Section 215 Order is more expansive than what you can eke out of an NSL, but a Section 215 Order does require a judge to let you get hold of it (i.e. there is some judicial oversight). In this case, the FBI obtains a 215 order from the Foreign Intelligence Surveillance Court, which reviews the application. What the Patriot Act did differently to the FISA Order was to broaden the definition of what information could be sought. Under the Patriot Act, a Section 215 Order can relate to “any tangible things (including books, records, papers, documents, and other items).” If these are believed to be relevant to an authorised investigation they are fair game. The act also eased the requirements for obtaining such an order. Previously, the FBI had to present “specific articulable facts” that provided evidence that the subject of an investigation was a “foreign power or the agent of a foreign power.” From my reading, there is now no requirement for evidence and the reviewing judge therefore has little discretion. If the application meets the requirements of Section 215, they will likely issue the order.

So now that we understand the two weapons that are being wielded, let’s walk through the key concerns being raised.

Concern 1: Impacted cloud providers can’t guarantee that sensitive client data won’t be turned over to the US government

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

This concern stems from the “loosening” of previous controls on both NSLs and Section 215 Orders. NSLs, for example, require no probable cause or judicial oversight at all, meaning that the FBI can issue these at their own volition. Now it is important to note that they could do this before the Patriot Act came into being too, but back then the parameters for usage were much stricter. Section 215 Orders, on the other hand, do have judicial oversight, but that oversight has also been watered down. Additionally, the breadth of information that can be collected is now greater. Add to that the fact that both NSLs and Section 215 Orders almost always include a compulsory non-disclosure or “gag” order, preventing notification to the data owner that this has even happened.

This concern is not only valid but it has happened and continues to happen. Microsoft has already stated that it cannot guarantee customers would be informed of Patriot Act requests and, furthermore, has disclosed that it has complied with Patriot Act requests. Amazon and Google are in the same boat. Google has also disclosed that it has handed data stored in European datacenters back to U.S. law enforcement.

Now some of you – particularly if you live or work in Europe – might be wondering how this could happen, given the European Union’s strict privacy laws. Why is it that these companies have complied with the US authorities regardless of those laws?

That’s where the gag orders come in – which brings us onto the second concern.

Concern 2: The reach of the act goes beyond US borders and bypasses foreign legislation on data protection for affected providers

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

The example of Google – a US company – handing over data in its EU datacentres to US authorities highlights that the Patriot Act is more pervasive than one might think. In terms of who the act applies to, a terrific article put out by Alex C. Lakatos put it really well when he said:

Furthermore, an entity that is subject to US jurisdiction and is served with a valid subpoena must produce any documents within its “possession, custody, or control.” That means that an entity that is subject to US jurisdiction must produce not only materials located within the United States, but any data or materials it maintains in its branches or offices anywhere in the world. The entity even may be required to produce data stored at a non-US subsidiary.

Think about that last point – “non-US subsidiary”. This gives you a hint of how pervasive this is. So in terms of jurisdiction and whether an organisation can be compelled to hand over data and be subject to a gag order, the list is expansive. Consider these three categories:

  • US based company? Absolutely: that alone takes out Apple, Amazon, Dell, EMC (and RSA), Facebook, Google, HP, IBM, Symantec, LinkedIn, Salesforce.com, McAfee, Adobe, Dropbox and Rackspace
  • Subsidiary company of a US company (incorporated anywhere else in the world)? It seems so.
  • Non-US company that has any form of US presence? It also seems so. Now we are talking about Samsung, Sony, Nokia, RIM and countless others.

The crux of this argument about bypassing is the gag order provisions. If the US company, subsidiary or regional office of a non US company receives the order, they may be forbidden from disclosing anything about it to the rest of the organisation.

Concern 3: Potential for abuse of Patriot Act powers by authorities

CleverWorkArounds short answer:

Yes this is true and it has happened already.

CleverWorkArounds long answer:

Since the Patriot Act came into place, there was a marked increase in the FBI’s use of National Security Letters. According to this New York Times article, there were 143,000 requests between 2003 and 2005. Furthermore, according to a report from the Justice Department’s Inspector General in March 2007, as reported by CNN, the FBI was guilty of “serious misuse” of the power to secretly obtain private information under the Patriot Act. I quote:

The audit found the letters were issued without proper authority, cited incorrect statutes or obtained information they weren’t supposed to. As many as 22% of national security letters were not recorded, the audit said. “We concluded that many of the problems we identified constituted serious misuse of the FBI’s national security letter authorities,” Inspector General Glenn A. Fine said in the report.

The Liberty and Security Coalition went into further detail on this. In a 2009 article, they list some of the specific examples of FBI abuses:

  • FBI issued NSLs when it had not opened the investigation that is a predicate for issuing an NSL;
  • FBI used “exigent letters” not authorized by law to quickly obtain information without ever issuing the NSL that it promised to issue to cover the request;
  • FBI used NSLs to obtain personal information about people two or three steps removed from the subject of the investigation;
  • FBI has used a single NSL to obtain records about thousands of individuals; and
  • FBI retains almost indefinitely the information it obtains with an NSL, even after it determines that the subject of the NSL is not suspected of any crime and is not of any continuing intelligence interest, and it makes the information widely available to thousands of people in law enforcement and intelligence agencies.

Concern 4: Impacted cloud providers cannot guarantee continuity of service during investigations

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

An oft-overlooked side effect of all of this is that other organisations can be adversely affected. One aspect of cloud computing scalability that we talked about in part 1 is that of multitenancy. Now consider a raid on a datacenter. If cloud services are shared between many tenants, innocent tenants who had nothing whatsoever to do with the investigation can potentially be taken offline. Furthermore, the hosting provider may be gagged from explaining to these affected parties what is going on. Ouch!

An example of this happening was reported in the New York Times in mid 2011 and concerned Curbed Network, a New York blog publisher. Curbed, along with some other companies, had their service disrupted after an F.B.I. raid on their cloud provider’s datacenter. They were taken down for 24 hours because the F.B.I.’s raid on the hosting provider seized three enclosures which, unfortunately enough, included the gear they ran on.

Ouch! Is there any coming back?

As I write this post, I wonder how many readers are surprised and dismayed by my four risk areas. The little security guy in me says: if you are, then that’s good! It means I have made you more aware than you were previously, which is a good thing. I also wonder if some readers by now are thinking to themselves that their paranoid server huggers are right?

To decide this, let’s now examine some of the counter-arguments to the Patriot Act issue.

Rebuttal 1: This is nothing new – Patriot Act is just amendments to pre-existing laws

One common rebuttal is that the Patriot Act legislation did not fundamentally alter the right of the government to access data. This line of argument was presented in August 2011 by Microsoft legal counsel Jeff Bullwinkel on Microsoft Australia’s GovTech blog. After all, it was reasoned, the areas frequently cited for concern (NSLs and Section 215/FISA orders) were already there to begin with. Quoting from the article:

In fact, U.S. courts have long held that a company with a presence in the United States is obligated to respond to a valid demand by the U.S. government for information – regardless of the physical location of the information – so long as the company retains custody or control over the data. The seminal court decision in this area is United States v. Bank of Nova Scotia, 740 F.2d 817 (11th Cir. 1984) (requiring a U.S. branch of a Canadian bank to produce documents held in the Cayman Islands for use in U.S. criminal proceedings)

So while the Patriot Act might have made it easier in some cases for the U.S. government to gain access to certain end-user data, the right was always there. Again quoting from Bullwinkel:

The Patriot Act, for example, enabled the U.S. government to use a single search warrant obtained from a federal judge to order disclosure of data held by communications providers in multiple states within the U.S., instead of having to seek separate search warrants (from separate judges) for providers that are located in different states. This streamlined the process for U.S. government searches in certain cases, but it did not change the underlying right of the government to access the data under applicable laws and prior court decisions.

Rebuttal 2: Section 215 orders are not often used and there are significant limitations on the data you can get using an NSL

Interestingly, it appears that the more powerful section 215 orders have not been used that often in practice. The best article to read to understand the detail is one by Alex Lakatos. According to him, less than 100 applications for section 215 orders were made in 2010. He says:

In 2010, the US government made only 96 applications to the Foreign Intelligence Surveillance Courts for FISA Orders granting access to business records. There are several reasons why the FBI may be reluctant to use FISA Orders: public outcry; internal FBI politics necessary to obtain approval to seek FISA Orders; and, the availability of other, less controversial mechanisms, with greater due process protections, to seek data that the FBI wants to access. As a result, this Patriot Act tool poses little risk for cloud users.

So while Section 215 orders seem less used, NSLs seem to be used a dime a dozen – which I suppose is understandable since you don’t have to deal with a pesky judge and all that annoying due process. But the downside of NSLs from a law enforcement point of view is that the sort of data accessible via an NSL is somewhat limited. Again quoting from Lakatos (with emphasis mine):

While the use of NSLs is not uncommon, the types of data that US authorities can gather from cloud service providers via an NSL is limited. In particular, the FBI cannot properly insist via a NSL that Internet service providers share the content of communications or other underlying data. Rather […] the statutory provisions authorizing NSLs allow the FBI to obtain “envelope” information from Internet service providers. Indeed, the information that is specifically listed in the relevant statute is limited to a customer’s name, address, and length of service.

The key point is that the FBI has no right to content via an NSL. This fact may not stop the FBI from having a try at getting that data anyway, but it seems that savvy service providers are starting to wise up to exactly what information an NSL applies to. This final quote from the Lakatos article summarises the point nicely and, at the same time, offers cloud providers a strategy to mitigate the risk to their customers.

The FBI often seeks more, such as who sent and received emails and what websites customers visited. But, more recently, many service providers receiving NSLs have limited the information they give to customers’ names, addresses, length of service and phone billing records. “Beginning in late 2009, certain electronic communications service providers no longer honored” more expansive requests, FBI officials wrote in August 2011, in response to questions from the Senate Judiciary Committee. Although cloud users should expect their service providers that have a US presence to comply with US law, users also can reasonably ask that their cloud service providers limit what they share in response to an NSL to the minimum required by law. If cloud service providers do so, then their customers’ data should typically face only minimal exposure due to NSLs.

Rebuttal 3: Too much focus on cloud data – there are other significant areas of concern

This one for me is a perverse slam-dunk counter argument that puts the FUD defence of a server hugger back in its box. The reason it is perverse is that it opens up the debate that for some server huggers, may mean that they are already exposed to the risks they are raising. You see, the thing to always bear in mind is that the Patriot Act applies to data, not just the cloud. This means that data, in any shape or form is susceptible in some circumstances if a service provider exercises some degree of control over it. When you consider all the applicable companies that I listed earlier in the discussion like IBM, Accenture, McAfee, EMC, RIM and Apple, you then start to think about the other services where this notion of “control” might come into play.

What about if you have outsourced your IT services and management to IBM, HP or Accenture? Are they running your datacentres? Are your executives using Blackberry services? Are you using an outsourced email spam and virus scanning filter supplied by a security firm like McAfee? Using federated instant messaging? Performing B2B transactions with a US based company?

When you start to think about all of the other potential touch-points where control over data is exercised by a service provider, things start to look quite disturbing. We previously established that pretty much any organisation with a US interest (whether US owned or not) falls under Patriot Act jurisdiction and may be gagged from disclosing anything. So sure… cloud applications are a potential risk, but it may well be that any one of these companies providing services regarded as “non cloud” might receive an NSL or Section 215 order with a gag provision, ordering them to hand over some data in their control. In the case of an outsourced IT provider, how can you be sure that the data is not straight out of your very own datacenter?

Rebuttal 4: Most other countries have similar laws

It also turns out that many other jurisdictions have similar types of laws. Canada, the UK, most countries in the EU, Japan and Australia are some good examples. If you want to dig into this, examine Clive Gringa’s article on the UK’s Regulation of Investigatory Powers Act 2000 (RIPA) and an article published by the global law firm Linklaters (a SharePoint site incidentally), on the legislation of several EU countries.

In the UK, RIPA governs the prevention and detection of acts of terrorism, serious crime and “other national security interests”. It is available to security services, police forces and authorities who investigate and detect these offences. The act regulates interception of the content of communications as well as envelope information (who, where and when). France has a bunch of acts which I won’t bore you too much with, but after 9/11, it instituted act 2001-1062 of 15 November 2001, which strengthens the powers of French law enforcement agencies. Now agencies can order anyone to provide them with data relevant to an inquiry and, furthermore, the data may relate to a person other than the one subject to the disclosure order.

The Linklaters article covers Spain and Belgium too and the laws are similar in intent and power. They specifically cite a case study in Belgium where the shoe was very much on the other foot. US company Yahoo was fined for not co-operating with Belgian authorities.

The court considered that Yahoo! was an electronic communication services provider (ESP) within the meaning of the Belgian Code of Criminal Procedure and that the obligation to cooperate with the public prosecutor applied to all ESPs which operate or are found to operate on Belgian territory, regardless of whether or not they are actually established in Belgium

I could go on citing countries and legal cases but I think the point is clear enough.

Rebuttal 5: Many countries co-operate with US law enforcement under treaties

So if the previous rebuttal argument that other countries have similar regimes in place is not convincing enough, consider this one. Let’s assume that data is hosted by a major cloud services provider with absolutely zero presence in, or contacts with, the United States. There is still a possibility that this information may be accessible to the U.S. government if needed in connection with a criminal case. The means by which this can happen is via international treaties relating to legal assistance. These are called Mutual Legal Assistance Treaties (MLATs).

As an example, the US and Australia have had a longstanding bilateral arrangement. This provides for law enforcement cooperation between the two countries and, under this arrangement, either government can potentially gain access to data located within the territory of the other. To give you an idea of what such a treaty might look like, consider the scope of the Australia–US one. The scope of assistance is wide and I have emphasised the more relevant items:

  • (a) taking the testimony or statements of persons;
  • (b) providing documents, records, and other articles of evidence;
  • (c) serving documents;
  • (d) locating or identifying persons;
  • (e) transferring persons in custody for testimony or other purposes;
  • (f) executing requests for searches and seizures and for restitution;
  • (g) immobilizing instrumentalities and proceeds of crime;
  • (h) assisting in proceedings related to forfeiture or confiscation; and
  • (i) any other form of assistance not prohibited by the laws of the Requested State.

For what it’s worth, if you are interested in the boundaries and limitations of the treaty, it states that the “Central Authority of the Requested State may deny assistance if”:

  • (a) the request relates to a political offense;
  • (b) the request relates to an offense under military law which would not be an offense under ordinary criminal law; or
  • (c) the execution of the request would prejudice the security or essential interests of the Requested State.

Interesting huh? Even if you host in a completely independent country, better check the treaties they have in place with other countries.

Rebuttal 6: Other countries are adjusting their laws to reduce the impact

The final rebuttal to the whole Patriot Act argument that I will cover is that things are moving fast and countries are moving to mitigate the issue regardless of the points and counterpoints that I have presented here. Once again I will refer to an article from Alex Lakatos, who provides a good example. Lakatos writes that the EU may re-write their laws to ensure that it would be illegal for the US to invoke the Patriot Act in certain circumstances.

It is anticipated, however, that at the World Economic Forum in January 2012, the European Commission will announce legislation to repeal the existing EU data protection directive and replace it with a more robust framework. The new legislation might, among other things, replace EU/US Safe Harbor regulations with a new approach that would make it illegal for the US government to invoke the Patriot Act on a cloud-based or data processing company in efforts to acquire data held in the European Union. The Member States’ data protection agency with authority over the company’s European headquarters would have to agree to the data transfer.

Now Lakatos cautions that this change may take a while before it actually turns into law, but nevertheless is something that should be monitored by cloud providers and cloud consumers alike.

Conclusion

So what do you think? Are you enlightened and empowered or confused and jaded?

I think that the Patriot Act issue is obviously a complex one that is not well served by arguments based on fear, uncertainty and doubt. The risks are real and there are precedents that demonstrate those risks. Scarily, it doesn’t take much digging to realise that those risks are more widespread than one might initially consider. Thus, if you are going to play the Patriot Act card for FUD reasons, or if you are making a genuine effort to mitigate the risks, you need to look at all of the touch points where a service provider might exercise a degree of control. They may not be where you think they are.

In saying all of this, I think this examination highlights some strategies that can be employed by cloud providers and cloud consumers alike. Firstly, if I were a cloud provider, I would state my policy about how much data will be given up when confronted by an NSL (since that has clear limitations). Many providers may already do this, so to turn it around to the customer, it is incumbent on cloud consumers to confirm with their providers where they stand. I don’t know if there is that much value in asking a cloud provider if they are exempt from the reach of the Patriot Act. Maybe it’s better to assume they are affected and instead ask them how they intend to mitigate their customers’ downstream risks.

Another obvious strategy for organisations is to encrypt data before it is stored on cloud infrastructure. While that is likely not going to be an option in a software-as-a-service model like Office 365, it is certainly an option in the infrastructure and platform-as-a-service models like Amazon and Azure. That would reduce the impact of a Section 215 Order being executed, as the cloud provider is unlikely to have the ability to decrypt the data.
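
To illustrate the idea, here is a minimal sketch of my own (the class, method and file paths are made up for the example) of what encrypting a file with a locally held key before it ever leaves your network might look like in .NET. Key management is the hard part and is deliberately not shown.

// Minimal sketch: encrypt a file with AES using a key that never leaves the premises.
// The cloud provider only ever stores ciphertext. Names and paths are illustrative only.
using System;
using System.IO;
using System.Security.Cryptography;

static class ClientSideEncryption
{
    public static void EncryptFile(string inputPath, string outputPath, byte[] key)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;      // e.g. a 256-bit key held in an on-premises key store
            aes.GenerateIV();

            using (var output = File.Create(outputPath))
            {
                // Store the IV alongside the ciphertext so the file can be decrypted later on-premises.
                output.Write(aes.IV, 0, aes.IV.Length);

                using (var encryptor = aes.CreateEncryptor())
                using (var cryptoStream = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
                using (var input = File.OpenRead(inputPath))
                {
                    input.CopyTo(cryptoStream);
                }
            }
        }
    }
}

The resulting file can then be pushed to Amazon or Azure storage as opaque bytes, so an order served on the provider would yield ciphertext only.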

Finally (and to sound like a broken record), a little information management governance would not go astray here. Organisations need to understand what data is appropriate for what range of cloud services. This is security 101 folks and if you are prudent in this area, cloud shouldn’t necessarily be big and scary.

Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com

www.sevensigma.com.au

p.s Now do not for a second think this article is exhaustive, as this stuff moves fast. So always do your research and do not rely on an article on some guy’s blog that may be out of date before you know it.




An obscure “failed to create the configuration database” issue…


Hi all

You would think that after years of installing SharePoint in various scenarios, I would be able to get past step 3 in the configuration wizard (the step that creates the configuration database). But today I almost got nailed by an issue that – while in hindsight dead-set obvious – was rather difficult to diagnose.

Basically it was a straightforward two-server farm installation. The installer account had local admin rights on the web front end server and sysadmin rights on the SQL box. SQL was a dedicated named instance accessed via an alias. I was tutoring the install while an engineer did the driving and as soon as we hit step 3, blammo! – the installation failed, claiming that the configuration database could not be created.

[Screenshot: the configuration wizard failure message]

Looking a little deeper into the log, the error message stated:

An error occurred while getting information about the user svc-spfarm at server mydomain.com: Access is denied

Hmm.. After double checking all the obvious things (SQL dbcreator and securityadmin on the farm account, group policy interference, etc) it was clear this was something different. The configuration database was successfully created on the SQL server, although the permissions of the farm account had not been applied. This proved that SQL permissions were appropriate. Clearly this was an issue around authentication and active directory.

There were very few reports of similar symptoms online and the closest I saw was a situation where the person ran the SharePoint configuration wizard using the local machine administrator account by mistake, rather than a domain account. Of course, the local account had no rights to access Active Directory and the wizard had failed because it had no way to verify the SharePoint farm account in AD to grant it permissions to the configuration database. But in this case we were definitely using a valid domain account.

As part of our troubleshooting, we opted to explicitly give the farm account “Log on as a service” rights (since this is needed for provisioning the user profile service later anyhow). It was then we saw some really bizarre behaviour. We were unable to find the SharePoint farm account in Active Directory. Any attempt to add the farm account to the “Log on as a service” right would not resolve and therefore we could not assign that right. We created another service account to test this behaviour and the same thing happened. This immediately smelt like an issue with Active Directory replication – where the domain controller being accessed was not replicating with the other domain controllers. A quick repadmin check and we ascertained that all was fine.

Hmm…

At this point, we started experimenting with various accounts, old and new. We were able to conclude that irrespective of the age of the account, some accounts could be found in Active Directory no problem, whereas others could not be. Yet those that could not be found were valid and working on the domain.
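
As an aside, if you ever need to repeat this sort of check, a throwaway console sketch along the following lines is one way to test which accounts the current security context can actually resolve in Active Directory. To be clear, this is a hypothetical helper of my own (the account names are placeholders), not something the configuration wizard runs, and it uses the LDAP-based System.DirectoryServices.AccountManagement API rather than the NetUserGetInfo call that appears in the error log.

// Hypothetical check (not part of SharePoint): can the account running this program
// resolve the listed service accounts in Active Directory?
using System;
using System.DirectoryServices.AccountManagement;

class CheckServiceAccounts
{
    static void Main()
    {
        // Placeholder names - substitute your own farm and service accounts.
        string[] accounts = { "svc-spfarm", "svc-sptest" };

        using (var domain = new PrincipalContext(ContextType.Domain))
        {
            foreach (string account in accounts)
            {
                // FindByIdentity returns null when the account cannot be seen from this
                // security context, even if the account exists and works on the domain.
                using (var user = UserPrincipal.FindByIdentity(domain, account))
                {
                    Console.WriteLine("{0}: {1}", account,
                        user != null ? "resolvable" : "NOT resolvable from this context");
                }
            }
        }
    }
}

Running something like this as the installer account (and again as a regular domain account) would have shortcut a fair bit of our head scratching.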

Finally one of the senior guys in the organisation realised the problem. In their AD topology, there was an OU for all service accounts. The permissions of that OU had been modified from the default. The “Domain users” group did not have any access to that OU at all. This prevented service accounts from being enumerated by regular domain accounts (a security design they had adopted some time back). Interestingly, even service accounts that live in this OU cannot enumerate any other accounts in that OU, including themselves.

This caused several problems with SharePoint. First the configuration wizard could not finish because it needed to assign the farm account permissions to the config and central admin databases. Additionally, the farm account would not be able to register managed accounts if those accounts were stored in this OU.

Fortunately, when they created this setup, they made a special group called “Enumerate Service Account OU”. By adding the installer account and the farm account to this group all was well.

I have to say, I thought I had seen most of the ways Active Directory configuration might trip me up – but this was a first. Anyone else seen this before?

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.hereticsguidebooks.com

p.s The error log detail is below….


 

Log Name:      Application
Source:        SharePoint 2010 Products Configuration Wizard
Date:          1/02/2012 2:22:52 PM
Event ID:      104
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      Mycomputer
Description:
Failed to create the configuration database.
An exception of type System.InvalidOperationException was thrown.  Additional exception information: An error occurred while getting information about the user svc-spfarm at server mydomain: Access is denied
System.InvalidOperationException: An error occurred while getting information about the user svc-spfarm at server mydomain
   at Microsoft.SharePoint.Win32.SPNetApi32.NetUserGetInfo1(String server, String name)
   at Microsoft.SharePoint.Administration.SPManagedAccount.GetUserAccountControl(String username)
   at Microsoft.SharePoint.Administration.SPManagedAccount.Update()
   at Microsoft.SharePoint.Administration.SPProcessIdentity.Update()
   at Microsoft.SharePoint.Administration.SPApplicationPool.Update()
   at Microsoft.SharePoint.Administration.SPWebApplication.CreateDefaultInstance(SPWebService service, Guid id, String applicationPoolId, SPProcessAccount processAccount, String iisServerComment, Boolean secureSocketsLayer, String iisHostHeader, Int32 iisPort, Boolean iisAllowAnonymous, DirectoryInfo iisRootDirectory, Uri defaultZoneUri, Boolean iisEnsureNTLM, Boolean createDatabase, String databaseServer, String databaseName, String databaseUsername, String databasePassword, SPSearchServiceInstance searchServiceInstance, Boolean autoActivateFeatures)
   at Microsoft.SharePoint.Administration.SPWebApplication.CreateDefaultInstance(SPWebService service, Guid id, String applicationPoolId, IdentityType identityType, String applicationPoolUsername, SecureString applicationPoolPassword, String iisServerComment, Boolean secureSocketsLayer, String iisHostHeader, Int32 iisPort, Boolean iisAllowAnonymous, DirectoryInfo iisRootDirectory, Uri defaultZoneUri, Boolean iisEnsureNTLM, Boolean createDatabase, String databaseServer, String databaseName, String databaseUsername, String databasePassword, SPSearchServiceInstance searchServiceInstance, Boolean autoActivateFeatures)
   at Microsoft.SharePoint.Administration.SPAdministrationWebApplication.CreateDefaultInstance(SqlConnectionStringBuilder administrationContentDatabase, SPWebService adminService, IdentityType identityType, String farmUser, SecureString farmPassword)
   at Microsoft.SharePoint.Administration.SPFarm.CreateAdministrationWebService(SqlConnectionStringBuilder administrationContentDatabase, IdentityType identityType, String farmUser, SecureString farmPassword)
   at Microsoft.SharePoint.Administration.SPFarm.CreateBasicServices(SqlConnectionStringBuilder administrationContentDatabase, IdentityType identityType, String farmUser, SecureString farmPassword)
   at Microsoft.SharePoint.Administration.SPFarm.Create(SqlConnectionStringBuilder configurationDatabase, SqlConnectionStringBuilder administrationContentDatabase, IdentityType identityType, String farmUser, SecureString farmPassword, SecureString masterPassphrase)
   at Microsoft.SharePoint.Administration.SPFarm.Create(SqlConnectionStringBuilder configurationDatabase, SqlConnectionStringBuilder administrationContentDatabase, String farmUser, SecureString farmPassword, SecureString masterPassphrase)
   at Microsoft.SharePoint.PostSetupConfiguration.ConfigurationDatabaseTask.CreateOrConnectConfigDb()
   at Microsoft.SharePoint.PostSetupConfiguration.ConfigurationDatabaseTask.Run()
   at Microsoft.SharePoint.PostSetupConfiguration.TaskThread.ExecuteTask()




The cloud isn’t the problem–Part 5: Server huggers and a crisis of identity

This entry is part 5 of 6 in the series Cloud

Hi all

Welcome to my fifth post that delves into the irrational world of cloud computing. After examining the not-so-obvious aspects of Microsoft, Amazon and the industry more broadly, it’s time to shift focus a little. Now the appeal of the cloud really depends on your perspective. To me, there are three basic motivations for getting in on the act…

  1. I can make a buck
  2. I can save a buck
  3. I can save a buck (and while I am at it, escape my pain-in-the-ass IT department)

If you haven’t guessed it, this post will examine #3 and look at what the cloud means for the perennial issue of the IT department and business disconnect. I recently read an article over at CIO magazine where they coined the term “Server Huggers” to describe the phenomenon I am about to describe. So to set the flavour for this discussion, let me tell you about the biggest secret in organisational life…

We all have an identity crisis (so get over it).

In organizations, there are roles that I would call transactional (i.e. governed by process and clear KPIs) and those that are knowledge-based (governed by gut feel and insight). Whilst most roles actually entail both of these elements, most of us in SharePoint land are the latter. In fact, we actually spend a lot of time in meeting rooms “strategizing” the solutions that our more transactionally focused colleagues will be using to meet their KPIs. Beyond SharePoint, this also applies to Business Analysts, Information Architects, Enterprise Architects, Project Managers and pretty much anyone with the word “senior”, “architect”, “analyst” or “strategic” in their job title.

But there is a big, fat elephant in the “strategizing room” of certain knowledge worker roles that is at the root of some irrational organisational behaviour. Many of us are suffering a role-based identity crisis. To explain this, let’s pick a straw-man example of one of the most conflicted roles of all right now: Information Architects.

One challenge with the craft of IA is the pace of change, since IA today looks very different from its library and taxonomic roots. Undoubtedly, it will look very different ten years from now too, as it gets assailed from various other roles and perspectives, each believing their version of rightness is more right. Consider this slightly abridged quote from Joshua Porter:

Worse, the term “information architecture” has over time come to encompass, as suggested by its principal promoters, nearly every facet of not just web design, but Design itself. Nowhere is this more apparent than in the latest update of Rosenfeld and Morville’s O’Reilly title, where the definition has become so expansive that there is now little left that isn’t information architecture […] In addition, the authors can’t seem to make up their minds about what IA actually is […] (a similar affliction pervades the SIGIA mailing list, which has become infamous for never-ending definition battles.) This is not just academic waffling, but evidence of a term too broadly defined. Many disciplines often reach out beyond their initial borders, after catching on and gaining converts, but IA is going to the extreme. One technologist and designer I know even referred to this ever-growing set of definitions as the “IA land-grab”, referring to the tendency that all things Design are being redefined as IA.

You can tell when a role is suffering an identity crisis rather easily too. It is when people with the current role start to muse that the title no longer reflects what they do and call for new roles to better reflect the environment they find themselves in. Evidence for this exists further in Porter’s post. Check out the line I marked with bold below:

In addition, this shift is already happening to information architects, who, recognizing that information is only a byproduct of activity, increasingly adopt a different job title. Most are moving toward something in the realm of “user experience”, which is probably a good thing because it has the rigor of focusing on the user’s actual experience. Also, this as an inevitable move, given that most IAs are concerned about designing great things. IA Scott Weisbrod, sees this happening too: “People who once identified themselves as Information Architects are now looking for more meaningful expressions to describe what they do – whether it’s interaction architect or experience designer

So while I used Information Architects as an example of how pace of change causes an identity crisis, the advent of the cloud doesn't actually cause too many IAs (or whatever they choose to call themselves) to lose much sleep. But there are other knowledge-worker roles that have not really felt the effects of change in the same way as their IA cousins. In fact, for the better part of twenty years one group has actually benefited greatly from the pace of change. Only now is the ground under their feet starting to shift, and the resulting behaviours are starting to reflect the emergence of an identity crisis that some would say is long overdue.

IT Departments and the cloud

At a SharePoint Saturday in 2011, I was on a panel and we were asked by an attendee what effect Office 365 and other cloud based solutions might have on a traditional IT infrastructure role. The person asking was an infrastructure guy and his question was essentially about how his role might change as cloud solutions become more and more mainstream. Of course, all of the SharePoint nerds on the panel didn't want to touch that question with a bargepole and all heads turned to me since apparently I am "the business guy". My reply was that he was sensing a change – the commoditisation of certain aspects of IT roles. Did that mean he was going to lose his job? Unlikely, but nevertheless when change is upon us, many of us tend to place more value on what we will lose compared to what we will gain. Our defence mechanisms kick in.

But let's take this a little further: the average tech guy comes in two main personas. The first is the tech-cowboy who documents nothing, half completes projects then loses interest, is oblivious to how much they are in over their head and generally gives IT a bad name. They usually have a lot of intellectual intelligence (IQ), but not so much emotional intelligence (EQ). Ben Curry once referred to this group as "dumb smart guys." The second persona is the conspiracy theorist who had to clean up after such a cowboy. This person usually has more skills and knowledge than the first guy, writes documentation and generally keeps things running well. Unfortunately, they too can give IT a bad name. After having to pick up the pieces of something not of their doing, they tend to develop a mother hen reflex based on a pathological fear of being paged at 9pm to come in and recover something they had no part in causing. The aforementioned cowboys rarely last the distance and therefore, over time, IT departments begin to act as risk minimisers rather than business enablers.

Now IT departments will never see it this way of course, instead believing that they enable the business because of their risk minimisation. Having spent 20 years being a paranoid conspiracy theorist, security-type IT guy, I totally get why this happens, as I was the living embodiment of this attitude for a long time. Technology is getting insanely complex, while users' innate ability to do something really risky and dumb is increasing. Obviously, such risk needs to be managed and accordingly, a common characteristic of such an IT department is the word "no" to pretty much any question that involves introducing something new (banning iPads or espousing the evils of DropBox are the best examples I can think of right now). When I wrote about this issue in the context of SharePoint user adoption back in 2008, I had this to say:

The mother hen reflex should be understood and not ridiculed, as it is often the user’s past actions that has created the reflex. But once ingrained, the reflex can start to stifle productivity in many different ways. For example, for an employee not being able to operate at full efficiency because they are waiting 2 days for a helpdesk request to be actioned is simply not smart business. Worse still, a vicious circle emerges. Frustrated with a lack of response, the user will take matters into their own hands to improve their efficiency. But this simply plays into the hands of the mother hen reflex and for IT this reinforces the reason why such controls are needed. You just can’t trust those dog-gone users! More controls required!

The long term legacy of increasing technical complexity and risk is that IT departments become slow-moving and find it difficult to react to pace of change. Witness the number of organisations still running parts of their business on Office 2003, IE6 and Windows XP. The rest of the organisation starts to resent using old tools and the imposition of process and structure for no tangible gain. The IT department develops a reputation of being difficult to deal with and taking ages to get anything done. This disconnect begins to fester, and little by little both IT and “the business” develop a rose-tinged view of themselves (which is known as groupthink) and a misguided perception of the other.

At the end of the day though, irrespective of logic or who has the moral high ground in the debate, an IT department with a poor reputation will eventually lose. This is because IT is no longer seen as a business enabler, but as a cost centre. Just as organisations did with the IT outsourcing fad over the last decade, organisational decision makers will read CIO magazine articles about server huggers and look longingly to the cloud, as applications become more sophisticated and more and more traditional vendors move into the space, thus legitimising it. IT will be viewed, however unfairly, as a burden where the cost is not worth the value realised. All the while, to conservative IT, the cloud represents some of their worst fears realised. Risk! Risk! Risk! Then the vicious circle of the mother-hen reflex will continue because rogue cloud applications will be commissioned without IT knowledge or approval. Now we are back to the bad old days of rogue MS Access or SharePoint deployments that drove the call for control-based governance in the first place!

<nerd interlude>

Now to the nerds reading this post who find it incredibly frustrating that their organisation will happily pump money into some cloud-based flight of fancy, but whine when you want to upgrade the network: I want you to take note of this paragraph as it is really (really) important! I will tell you the simple reason why people are more willing to spend money on fluffy marketing than on IT. In the eyes of a manager who needs to make a profit, sponsoring a conference or making the reception area look nice is seen as revenue generating. Those who sign the cheques do not like to spend capital on stuff unless they can see that it directly contributes to revenue generation! Accordingly, a bunch of servers (and for that matter, a server room) are often not considered expenditure that generates revenue, but are instead considered overhead! Overhead is something that any smart organisation strives to reduce to remain competitive. The moral of the story? Stop arguing cloud vs. internal on what direct costs are incurred, because people will not care! You would do much better to demonstrate to your decision makers that IT is not an overhead. Depending on how strong your mother hen reflex is and how long it has been in place, that might be an uphill battle.

</nerd interlude>

Defence mechanisms…

Like the poor old Information Architect, the rules of the game are changing for IT with regards to cloud solutions. I am not sure how it will play out, but I am already starting to see the defence mechanisms kicking in. There was a CIO interviewed in the “Server Huggers” article that I referred to earlier (Scott Martin) who was hugely pro-cloud. He suggested that many CIO’s are seeing cloud solutions as a threat to the empire they have built:

I feel like a lot of CIOs are in the process of a kind of empire building.  IT empire builders believe that maintaining in-house services helps justify their importance to the company. Those kinds of things are really irrational and not in the best interest of the company […] there are CEO’s who don’t know anything about technology, so their trusted advisor is the guy trying to protect his job.

A client of mine in Sydney told me he asked his IT department about the use of hosted SharePoint for a multi-organisational project and the reply back was a giant "hell no," based primarily on fear, uncertainty and doubt. With IT, such FUD is always cloaked in areas of quite genuine risk. There *are* many core questions that we must ask cloud vendors when taking the plunge, because to not do so would be remiss (I will end this post with some of those questions). But the key issue is whether the real underlying reason behind those questions is to shut down the debate, or to genuinely understand the risks and implications of moving to the cloud.

How can you tell an IT department is likely using a FUD defence? Actually, it is pretty easy, because conservative IT is very predictable – they will likely hit you with what they think is their slam-dunk counter argument first up. Therefore, they will attempt to bury the discussion with the US Patriot Act issue. I've come across this issue myself, and Mark Miller at FPWeb mentioned to me that it comes up all the time when they talk to clients about SharePoint hosting. (I am going to cover the Patriot Act issue in the next post because it warrants a dedicated post).

If the Patriot Act argument fails to dent unbridled cloud enthusiasm, the next layer of defence is to highlight cloud based security (identity, authentication and compliance) as well as downtime risk, citing examples such as the September outage of Office 365, SalesForce.com’s well publicized outages, the Amazon outage that took out Twitter, Reddit, Foursquare, Turntable.fm, Netflix and many, many others. The fact that many IT departments do not actually have the level of governance and assurance of their systems that they aspire to will be conveniently overlooked. 

Failing that, the last line of defence is to call into question the commercial viability of cloud providers. We talked about the issues facing the smaller players in the last post, but it is not just them. What if the provider decides to change direction and discontinue a service? Google will likely be cited, since it has a habit of axing cloud based services that don't reach enough critical mass (the most recent casualty is Google Health, being retired as I write this). The risk of a cloud provider going out of business or withdrawing a service is much more serious than when a software supplier fails. At least when the software is on-premises you still have the application running and can keep using it.

Every FUD defence is based on truth…

Now as I stated above, all of these concerns are genuine things to consider before embarking on a cloud strategy. Prudent business managers and CIOs must weigh the pros and cons of a cloud offering before rushing into a deployment that may not be appropriate for their organisation. Equally though, it's important to be able to see through a FUD defence when it's presented. The easiest way to do this is to do some of your own investigation first.

To that end, you can save yourself a heap of time by checking out the work of Richard Harbridge. Richard did a terrific cloud talk at the most recent Share 2011 conference. You can view his slide deck here and I recommend really going through slides 48-81, as they provide a comprehensive summary of considerations and cover way more than what I have covered here. Among other things, he offered a list of questions that any organisation should be asking providers of cloud services. I have listed some of them below.

Security, storage, identity & access:

  • Who will have access to my data?
  • Do I have full ownership of my data?
  • What type of employee / contractor screening do you do before you hire them?
  • How do you detect if an application is being attacked (hacked), and how is that reported to me and my employees?
  • How do you govern administrator access to the service?
  • What firewalls and anti-virus technology are in place?
  • What controls do you have in place to ensure the safety of my data while it is stored in your environment?
  • What happens to my data if I cancel my service?
  • Can I archive environments?
  • Will my data be replicated to any other datacentres around the world (if yes, then which ones)?
  • Do you offer single sign-on for your services?
  • Active Directory integration?
  • Do all of my users have to rely solely on web based tools?
  • Can users work offline?
  • Do you offer a way for me to run your application locally, and how quickly can I revert to the local installation?
  • Do you offer on-premise, web-based, or mixed environments?

Reliability, support & performance:

  • What is your Disaster Recovery and Business Continuity strategy?
  • What is the retention period and recovery granularity?
  • Is your cloud computing service compliant with [insert compliance regime here]?
  • What measures do you provide to assist compliance and minimize legal risk?
  • What types of support do you offer?
  • How do you ensure we are not affected by upgrades to the service?
  • What are your SLAs and how do you compensate when they are not met?
  • How fast is the local network?
  • What is the storage architecture?
  • How many locations do you have and how are they connected?
  • Have you published any benchmark scores for your infrastructure?
  • What happens when there is over-subscription?
  • How can I ensure CPU and memory are guaranteed?

Conclusion and looking forward…

For some organisations, the lure of cloud solutions is very seductive. From a financial perspective, it avoids a lot of capital expenditure. From a time perspective, it can be deployed very quickly, and from a maintenance perspective, it takes the burden away from IT. Sounds like a winner when put that way. But the real issue is that the changing cloud paradigm potentially impacts the wellbeing of some IT professionals and IT departments because it calls into question certain patterns and practices within established roles. It also represents a loss of control and, as I said earlier, people often place a higher value on what they will lose compared to what they will gain.

Irrespective of this, whether you are a new age cloud loving CIO or a server hugger, any decision to move to the cloud should be about real business outcomes. Don’t blindly accept what the sales guy tells you. Understand the risks as well as the benefits. Leverage the work Richard has done and ask the cloud providers the hard questions. Look for real world stories (like my second and third articles in this series) which illustrate where the services have let people down.

For some, cloud will be very successful. For others, the gap between expectations and reality will come with a thud.

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.hereticsguidebooks.com


The cloud is not the problem-Part 4: Industry shakeout and playing with the big kids…

This entry is part 4 of 6 in the series Cloud

Hi all

Welcome to the fourth post about the adaptive change that cloud computing is forcing on practitioners, paradigms and organisations. The previous two posts took a look at the dodgier side of two of the industry's biggest players, Microsoft and Amazon. While I have highlighted some dumb issues with both, I nevertheless have to acknowledge their resourcing, scalability, and ability to execute. On that point of ability to execute, in this post we are going to widen the focus to the cloud industry more broadly and the inevitable consolidation that is, and will continue, taking place.

Now to set the scene, a lot of people know that in the early twentieth century, there were a lot of US car manufacturers. I wonder if you can take a guess at the number of defunct car manufacturers there have been before and after that time.

…Fifty?

…One Hundred?

Not even close…

What if I told you that there were over 1700!

Here is another interesting stat. The table below shows, decade by decade, how many manufacturers went bankrupt or ceased operations, along with the average shelf life of the companies that folded in each decade.

Decade    # defunct    Avg years in operation
1870's    4            5
1880's    2            1
1890's    5            1
1900's    88           3
1910's    660          3
1920's    610          4
1930's    276          5
1940's    42           7
1950's    13           14
1960's    33           10
1970's    11           19
1980's    5            37
1990's    5            16
2000's    3            49
2010's    5            42

Now, you would expect that the bulk of closures would have been in the depression era, but note that the depression did not start until the late 1920's, and during the boom times that preceded it, 660 manufacturers went to the wall – a worse result!

The pattern of consolidation

image

What I think the above table shows is the classic pattern of industry consolidation after an initial phase of innovation and expansion, where over time, the many are gobbled by the few. As the number of players consolidate, those who remain grow bigger, with more resources and economies of scale. This in turn creates barriers to entry for new participants. Accordingly, the rate of attrition slows down, but that is more due to the fact that there are fewer players in the industry. Those that are left continue to fight their battles, but now those battles take longer. Nevertheless, as time goes on, the number of players consolidate further.

If we applied a cloud/web hosting paradigm to the above table, I would equate the dotcom bust of 2000 with the depression era of the 1920's and 1930's. I actually think with cloud computing, we are in the 1960's and onwards right now. The largest of the large players have now made big bets on the cloud and have entered the market in a big, big way. For more than a decade, other companies hosted Microsoft technology, with Microsoft showing little interest beyond selling licenses via them. Now Microsoft themselves are also the hosting provider. Does that mean most of the hosting providers will share the fate of Netscape? Or will they manage to survive the dance with Goliath like Citrix or VMware have?

For those who are not Microsoft or Amazon…

image

Imagine you have been hosting SharePoint solutions for a number of years. Depending on your size, you probably own racks or a cage in someone else's data centre, or you own a small data centre yourself. You have some high end VMware gear to underpin your hosting offerings and you do both managed SharePoint (i.e. offer a basic site collection subscription with no custom stuff – ala Office 365) and you offer dedicated virtual machines for those who want more control (ala Amazon). You have dutifully paid your service provider licensing to Microsoft, have IT engineers on staff, some SharePoint specialists, a helpdesk and some dodgy sales guys – all standard stuff and life is good. You had a crack at implementing SharePoint multi tenancy, but found it all a bit too fiddly and complex.

Then Amazon comes along and shakes things up with their IaaS offerings. They are cost competitive, have more data centres in more regions, a higher capacity, more fault tolerance, a wider variety of services and can scale more than you can. Their ability to execute in terms of offering new services is impossible to keep up with. In short, they slowly but relentlessly take a chunk of the market and continue to grow. So, you naturally counter by pushing the legitimate line that you specialise in SharePoint, and as a result customers are in much more trusted hands than Amazon, when investing on such a complex tool as SharePoint.

But suddenly the game changes again. The very vendor whose product you host now bundles cloud SharePoint with Exchange, Lync and Active Directory integration (yeah, yeah, I know there was BPOS but no-one actually heard of that). Suddenly the argument that you are a safer option than Amazon is shot down by the fact that Microsoft themselves now offer what you do. So whose hands are safer? The small hosting provider with limited resources, or the multinational with billions of dollars in the bank who develops the product? Furthermore, given Microsoft's ability to mobilise people with deep product knowledge, they have a richer managed service offering than you can provide (i.e. they offer multi tenancy :).

This puts you in a bit of a bind, as you are getting assailed at both ends. Amazon trumps your capabilities at the IaaS end and is encroaching on your space, while Microsoft is assailing the SaaS end. How does a small fish survive in a pond with the big ones? In my opinion, the mid-tier SharePoint cloud providers will have to reinvent themselves.

The adaptive change…

So for the mid-tier SharePoint cloud provider grappling with the fact that their play area is reduced because of the big kids encroaching, there is only one option. They have to be really, really good in areas the big kids are not good at. In SharePoint terms, this means they have to go to places many don’t really want to go: they need to bolster their support offerings and move up the SharePoint stack.

You see, traditionally a SharePoint hosting provider tends to take two approaches. They provide a managed service where the customer cannot mess with it too much (i.e. Site collection admin access only). For those who need more than that, they will offer a virtual machine and wipe their hands of any maintenance or governance, beyond ensuring that  the infrastructure is fast and backed up. Until now, cloud providers could get away with this and the reason they take this approach should be obvious to anyone who has implemented SharePoint. If you don’t maintain operational governance controls, things can rapidly get out of hand. Who wants to deal with all that “people crap”? Besides, that’s a different skill set to typical skills required to run and maintain cloud services at the infrastructure layer.

So some cloud providers will kick and scream about this, and delude themselves into thinking that hosting and cloud services are their core business. For those who think this, I have news for you. The big boys think these are their core business too and they are going to do it better than you. This is now commodity stuff and a by-product of commoditisation is that many SharePoint consultancies are now cloud providers anyway! They sign up to Microsoft or Amazon and are able to provide a highly scalable SharePoint cloud service with all the value added services further up the SharePoint stack. In short, they combine their SharePoint expertise with Microsoft/Amazon’s scale.

Now on the issue of support, Amazon has no specific SharePoint skills and they never will. They are first and foremost a compelling IaaS offering. Microsoft’s support? … go and re-read part 2 if you want to see that. It seems that no matter the big multinational, level 1 tech support is always level 1 tech support.

So what strategies can a mid-tier provider take to stay competitive in this rapidly commoditising space? I think one is to go premium and go niche.

  • Provide brilliant support. If I call you, day or night, I expect to speak to a SharePoint person straight away. I want to get to know them on a first name basis and I do not want to fight the defence mechanism of the support hierarchy.
  • Partner with SharePoint consultancies or acquire consulting resources. The latter allows you to do some vertical integration yourself and broaden your market and offerings. A potential KPI for any SharePoint cloud provider should be that no support person ever says “sorry that’s outside the scope of what we offer.”
  • Develop skills in the tools and systems that surround SharePoint or invest in SharePoint areas where skills are lacking. Examples include Project Server, PerformancePoint, integration with GIS, Records management and ERP systems. Not only will you develop competencies that few others have, but you can target particular vertical market segments who use these tools.
  • (Controversial?) Dump your infrastructure and use Amazon in conjunction with another IaaS provider. You just can't compete with their scale and price point. If you use them you will likely save costs; combined with a second provider you can play the resiliency card, and best of all … you can offer VPC 🙂

Conclusion

In the last two posts we looked at some of the areas where both Microsoft and Amazon sometimes struggle to come to grips with the SharePoint cloud paradigm. In this post, we took a look at the other cloud providers, who now have to compete with these two giants as they eke out as much value as they can from the cloud pie. Whether or not you agree with my suggested strategy (Rackspace appears to), the pattern of the auto industry serves as an interesting parallel to the cloud computing marketplace. Is the relentless consolidation a good thing? Probably not in the long term (we will tackle that issue in the last post in this series). In the next post, we are going to shift our focus away from the cloud providers themselves, and turn our gaze to internal IT departments – who until now, have had it pretty good. As you will see, a big chunk of the irrational side of cloud computing comes from this area.

 

Thanks for reading

Paul Culmsee

www.sevensigma.com.au


The cloud is not the problem–Part 3: When silos strike back…

This entry is part 3 of 6 in the series Cloud

What can Ikea fails tell us about cloud computing?

My next door neighbour is a builder. When he moved next door, the house was an old piece of crap. Within 6 months, he completely renovated it himself, adding in two bedrooms, an underground garage and all sorts of cool stuff. On the other hand, I bought my house because it was a good location, someone had already renovated it and all we had to do was move in. The reason for this was simple: I had a new baby and more importantly, me and power tools do not mix. I just don’t have the skills, nor the time to do what my neighbour did.

You can probably imagine what would happen if I tried to renovate my house the way my neighbour did. It would turn out like the Ikea fails in the video. Many SharePoint installs tend to end up looking like that video too. Moral of the story? Sometimes it is better to get something pre-packaged than to do it yourself.

In the last post, we examined the "Software as a Service" (SaaS) model of cloud computing in the form of Office 365. Other popular SaaS providers include SlideShare, Salesforce, Basecamp and Tom's Planner, to name a few. Most SaaS applications are browser based and not as feature rich or complex as their on-premises competition. The SaaS model is therefore a bit like buying a kit home. In SaaS, no user of these services ever touches the underlying cloud infrastructure used to provide the solution, nor do they have a full mandate to tweak and customise to their heart's content. SaaS is basically predicated on the notion that someone else will do a better set-up job than you, and on the old 80/20 rule about which features of an application are actually used.

Some people may regard the restrictions of SaaS as a good thing – particularly if they have dealt with the consequences of one too many unproductive customisation efforts previously. As many SharePointers know, the more you customise SharePoint, the less resilient it gets. Thus, restricting what sort of customisations can be done might, in many circumstances, be a wise thing to do.

Nevertheless, this actually goes against the genetic traits of pretty much every Australian male walking the planet. The reason is simple: no matter how lacking our skills or how inappropriate our tools and training, we always want to do it ourselves. This brings me onto our next cloud provider: Amazon, and their Infrastructure as a Service (IaaS) model of cloud based services. This is the ultimate DIY solution for those of us who find SaaS cramps our style. Let's take a closer look shall we?

Amazon in a nutshell

Okay, I have to admit that as an infrastructure guy, I am genetically predisposed to liking Amazon's cloud offerings. Why? Well, as an infrastructure guy, I am like my neighbour who renovated his own house: I'd rather do it all myself because I have acquired the skills to do so. So for any server-hugging infrastructure people out there wondering what they have been missing out on, read on… you might like what you see.

Now first up, it's easy for new players to get a bit intimidated by Amazon's bewildering array of offerings with brand names that make no sense to anybody but Amazon… EC2, VPC, S3, ECU, EBS, RDS, AMI's, Availability Zones – sheesh! So I am going to ignore all of their confusing brand names, hope that you have heard of virtual machines and assume that you or your tech geeks know all about VMware. The simplest way to describe Amazon is VMware on steroids. Amazon's service essentially allows you to create virtual machines within Amazon's "cloud" of large data centres around the world. As I stated earlier, the official cloud terminology that Amazon is traditionally associated with is Infrastructure as a Service (IaaS). This is where, instead of providing ready-made applications like SaaS, a cloud vendor provides lower level IT infrastructure for rent. This consists of stuff like virtualised servers, storage and networking.

Put simply, with Amazon you can deploy virtual servers with your choice of operating system, applications, memory, CPU and disk configuration. Like any good "all you can eat" buffet, one is spoilt for choice. One simply chooses an Amazon Machine Image (AMI) to use as a base for creating a virtual server. You can choose one of Amazon's pre-built AMI's (base installs of Windows Server or Linux) or you can choose an image from the community contributed list of over 8000 base images. Pretty much any vendor out there who sells a turn-key solution (such as those all-in-one virus scanning/security solutions) has likely created an AMI. Microsoft have also gotten in on the Amazon act and created AMI's for you, optimised by their product teams. Want SQL 2008 the way Microsoft would install it? Choose the Microsoft Optimized Base SQL Server 2008R2 AMI which "contains scripts to install and optimize SQL Server 2008R2 and accompanying services including SQL Server Analysis services, SQL Server Reporting services, and SQL Server Integration services to your environment based on Microsoft best practices."

The series of screen shots below shows the basic idea. After signing up, use the “Request instance wizard” to create a new virtual server by choosing an AMI first. In the example below, I have shown the default Amazon AMI’s under “Quick start” as well as the community AMI’s.

image
Amazon's default AMI's
image
Community contributed AMI's

From the list above, I have chosen Microsoft’s “Optimized SQL Server 2008 R2 SP1” from the community AMI’s and clicked “Select”. Now I can choose the CPU and memory configurations. Hmm how does a 16 core server sound with 60 gig of RAM? That ought to do it… 🙂

image

Now I won't go through the full description of commissioning virtual servers, but suffice to say that you can choose which geographic location within Amazon's cloud this server will reside in, and after 15 minutes or so, your virtual server will be ready to use. It can be assigned a public IP address, firewall restricted and then remotely managed as per any other server. This can all be done programmatically too. You can talk to Amazon via web services to start, monitor, terminate, etc. as many virtual machines as you want, which allows you to scale your infrastructure on the fly and very quickly. There are no long procurement times and you only pay for the servers that are currently running. If you shut them down, you stop paying.
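To give you a feel for what "programmatically" means, here is a minimal sketch using the AWS SDK for Python (boto3), which is newer tooling than what was around when I did this work; the AMI ID, region and instance size below are placeholders rather than recommendations. It starts a server, waits for it to boot, prints its public IP and then terminates it:

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # launch a single instance from a chosen AMI (placeholder ID and size)
    instance = ec2.create_instances(
        ImageId="ami-12345678",
        InstanceType="m5.xlarge",
        MinCount=1,
        MaxCount=1,
    )[0]

    instance.wait_until_running()   # typically a few minutes
    instance.reload()               # refresh attributes such as the public IP
    print(instance.id, instance.public_ip_address)

    # when you no longer need it, terminate it and the hourly charges stop
    instance.terminate()

The same handful of calls can be scripted to spin up (and tear down) a whole farm on demand, which is the scalability point I am making above.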

But what makes it cool…

Now I am sure that some of you might be thinking "big deal… any virtual machine hoster can do that." I agree – and when I first saw this capability I just saw it as a larger scale VMware/Xen type deployment. But what really made me sit up and take notice was Amazon's Virtual Private Cloud (VPC) functionality. The super-duper short version of VPC is that it allows you to extend your corporate network into the Amazon cloud. It does this by allowing you to define your own private network and connect to it via site-to-site VPN technology. To see how it works diagrammatically, check out the image below.

image

Let's use an example to understand the basic idea. Let's say your internal IP address range at your office is 10.10.10.0 to 10.10.10.255 (a /24 for the geeks). With VPC you tell Amazon "I'd like a new IP address range of 10.10.11.0 to 10.10.11.255". You are then prompted to tell Amazon the public IP address of your internet router. The screenshots below show what happens next:

image

image

The first screenshot asks you to choose what type of router is at your end. Available choices are Cisco, Juniper, Yamaha, Astaro and generic. The second screenshot shows a sample configuration that is downloaded. Now any Cisco trained person reading this will recognise what is going on here. This is the automatically generated configuration to be added to an organisation's edge router to create an IPSEC tunnel. In other words, we have extended our corporate network itself into the cloud. Any service can be run on such a network – not just SharePoint. For smaller organisations wanting the benefits of off-site redundancy without the costs of a separate datacentre, this is a very cost effective option indeed.

For the Cisco geeks, the actual configuration is two GRE tunnels that are IPSEC encrypted. BGP is used for route table exchange, so Amazon can learn what routes to tunnel back to your on-premise network. Furthermore Amazon allows you to manage firewall settings at the Amazon end too, so you have an additional layer of defence past your IPSEC router.

This is called Virtual Private Cloud (VPC) and when configured properly it is very powerful. Note the "P" is for private. No server deployed to this subnet is internet accessible unless you choose it to be. This allows you to extend your internal network into the cloud and gain all the provisioning, redundancy and scalability benefits without direct exposure to the internet. As an example, I did a hosted SharePoint extranet where we used SQL log shipping of the extranet content databases back to a DMZ network for redundancy. Try doing that on Office 365!
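For the scripting-inclined, the same VPC plumbing can be driven through the API as well. The sketch below again uses boto3 (newer tooling than existed back then; the CIDR block and router IP are the made-up values from my example) to create the VPC, register the office router as a "customer gateway" and print the VPN configuration Amazon hands back for your edge device:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # the new 10.10.11.x range that will live inside Amazon
    vpc = ec2.create_vpc(CidrBlock="10.10.11.0/24")["Vpc"]

    # your office's public internet router becomes the "customer gateway"
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
    )["CustomerGateway"]

    # Amazon's end of the tunnel, attached to the new VPC
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])

    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
    )["VpnConnection"]

    # this is the router configuration Amazon generates for your end of the tunnel
    print(vpn["CustomerGatewayConfiguration"])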

This sort of functionality shows that Amazon is a mature, highly scalable and flexible IaaS offering. They have been in the business for a long time and it shows because their full suite of offerings is much more expansive than what I can possibly cover here. Accordingly my Amazon experiences will be the subject of a more in-depth blog post or two in future. But for now I will force myself to stop so the non-technical readers don’t get too bored. 🙂

So what went wrong?

So after telling you how impressive Amazon's offering is, what could possibly go wrong? As with the Office 365 issue covered in part 2, the answer is: absolutely nothing to do with the technology. To understand why, I need to explain Amazon's pricing model.

Amazon offer a couple of ways to pay for servers (called instances in Amazon speak). An on-demand instance is calculated based on a per-hour price while the server is running. The more powerful the server is in terms of CPU, memory and disk, the more you pay. To give you an idea, Amazon’s pricing for a Windows box with 8CPU’s and 16GB of RAM, running in Amazon’s “US east” region will set you back $0.96 per hour (as of 27/12/11). If you do the basic math for that, it equates to around $8409 per year, or $25228 over three years. (Yeah I agree that’s high – even when you consider that you get all the trappings of a highly scalable and fault tolerant datacentre).

On the other hand, a reserved instance involves making a one-time payment and in return, you receive a significant discount on the hourly charge for that instance. Essentially, if you are going to run an Amazon server on a 24*7 basis for more than 18 months or so, a reserved instance makes sense as it saves considerable cost over the long term. The same server would only cost you $0.40 per hour if you pay an up-front $2800 for a 3 year term. Total cost: $13312 over three years – much better.
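If you want to sanity-check those numbers yourself, the arithmetic is simple enough to script (using the rates quoted above; your own rates will obviously differ):

    HOURS_PER_YEAR = 24 * 365

    on_demand_rate = 0.96       # $/hour for the 8 CPU / 16GB Windows box quoted above
    reserved_rate = 0.40        # $/hour with a 3 year reserved instance
    reserved_upfront = 2800.00  # one-off payment for the 3 year reservation

    on_demand_3yr = on_demand_rate * HOURS_PER_YEAR * 3
    reserved_3yr = reserved_upfront + reserved_rate * HOURS_PER_YEAR * 3

    print(round(on_demand_3yr))  # ~25229
    print(round(reserved_3yr))   # ~13312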

So with that scene set, consider this scenario: back at the start of 2011, a client of mine consolidated all of their SharePoint cloud services to Amazon from a variety of other hosting providers. They did this for a number of reasons, but it basically boiled down to the fact that they had 1) outgrown the SaaS model and 2) had a growing number of clients. As a result, requirements from clients were getting more complicated and beyond what most of the hosting providers could cater for. They also received irregular and inconsistent support from their existing providers, as well as some unexpected downtime that reduced confidence. In short, they needed to consolidate their cloud offering and manage their own servers. They were developing custom SharePoint solutions, needed to support federated claims authentication and required disaster recovery assurance to mitigate the risk of going 100% cloud. Amazon's VPC offering in particular seemed ideal, because it allowed full control of the servers in a secure way.

Now making this change was not something we undertook lightly. We spent considerable time researching Amazon's offerings, trying to understand all the acronyms as well as their fine print. (For what it's worth, I used IBIS as the basis to develop an assessment and the map of my notes can be found here). As you are about to see though, we did not check well enough.

Back when we initially evaluated the VPC offering, it was only available in very few Amazon sites (two locations in the USA only) and the service was still in beta. This caused us a bit of a dilemma at the time because of the risk of relying on a beta service. But we were reassured when Amazon confirmed that VPC would eventually be available in all of their datacentres. We also stress tested the service for a few weeks, it remained stable, and we developed and tested a disaster recovery strategy involving SQL log shipping and a standby farm. We also purchased reserved instances from Amazon since these servers were going to be there for the long haul, so we pre-paid to reduce the hourly rates. Quite a complex configuration was provisioned in only two days and we were amazed by how easy it all was.

Things hummed along for 9 months in this fashion and the world was a happy place. We were delighted when Amazon notified us that VPC had come out of beta and was now available in any of Amazon’s datacentres around the world. We only used the US datacentre because it was the only location available at the time. Now we wanted to transfer the services to Singapore. My client contacted Amazon about some finer points on such a move and was informed that they would have to pay for their reserved instances all over again!

What the?

It turns out, reserved instances are not transferrable! Essentially, Amazon were telling us that although we paid for a three year reserved instance, and only used it for 9 months, to move the servers to a new region would mean we have to pay all over again for another 3 year reserve. According to Amazon’s documentation, each reserved instance is associated with a specific region, which is fixed for the lifetime of the reserved instance and cannot be changed.

"Okay," we answer, "we can understand that in circumstances where people move to another cloud provider. But in our case we are not." We had used around a third of the reserved instance term. So surely Amazon should pro-rata the unused amount, and offer that as a credit when we re-purchase reserved instances in Singapore? I mean, we will still be hosting with Amazon, so overall, they will not be losing any revenue at all. On the contrary, we will be paying them more, because we will have to sign up for an additional 3 years of reserve when we move the services.

So we ask Amazon whether that can be done. "Nope," comes back the answer from Amazon's not-so-friendly billing team, with one of those trite and grossly insulting "Sorry for any inconvenience this causes" ending sentences. After more discussions, it seems that internally within Amazon, each region or datacentre within each region is its own profit centre. Therefore, in typical silo fashion, the US datacentre does not want to pay money to the Singapore operation, as that would mean the revenue we paid would no longer be recognised against them.

Result? Customer is screwed, all because the Amazon fiefdoms don't like sharing the contents of the till. But hey – the regional managers get their bonuses, right? 🙁

Conclusion

Like part 2 of this cloud computing series, this is not a technical issue. Amazon's cloud service in our experience has been reliable and performed well. In this case, we are turned off by the fact that their internal accounting procedures create a situation that is not great for customers who wish to remain loyal to them. In a post about the danger of short termism and ignoring legacy, I gave the example of how dumb it is for organisations to think they are measuring success based on how long it takes to close a helpdesk call. When such a KPI is used, those in support roles have little choice but to artificially close calls even when users' problems have not been solved, because that is how they are deemed to be performing well. The reality is that rather than measuring happy customers, this KPI simply rewards the helpdesk operators who have managed to game the system by getting callers off the phone as soon as they can.

I feel that Amazon are treating this as an internal accounting issue, irrespective of client outcomes. Amazon will lose the business of my client because of this, since they have enough servers hosted that the financial impost of paying all over again is much more than the cost of transferring to a different cloud provider. While VPC and automated provisioning of virtual servers is cool and all, at the end of the day many hosting providers can offer this if you ask them. Although it might not be as slick and fancy as Amazon's automated configuration, it nonetheless is very doable and the other providers are playing catch-up. Like Apple, Amazon are enjoying the benefits of being first to market with their service, but as competition heats up, others will rapidly bridge the gap.

Thanks for reading

Paul Culmsee


Troubleshooting SharePoint (People) Search 101


I've been nerding it up lately SharePoint-wise, doing the geeky things that geeks like to do, like ADFS and claims authentication. So in between trying to get my book fully edited ready for publishing, I might squeeze out the odd technical SharePoint post. Today I had to troubleshoot a broken SharePoint people search for the first time in a while. I thought it was worth explaining the crawl process a little and talking about the most likely ways in which it will break for you, in the order of likelihood as I see it. There are articles out on this topic, but none that I found are particularly comprehensive.

Background stuff

If you consider yourself a legendary IT pro or SharePoint god, feel free to skip this bit. If you prefer a more gentle stroll through SharePoint search land, then read on…

When you provision a search service application as part of a SharePoint installation, you are asked for (among other things) a Windows account to use for the search service. Below is the point in the GUI based configuration where this is done. First up we choose to create a search service application, and then we choose the account to use for the "Search Service Account". By default this is the account that will do the crawling of content sources.

image    image

Now the search service account is described as so: “.. the Windows Service account for the SharePoint Server Search Service. This setting affects all Search Service Applications in the farm. You can change this account from the Service Accounts page under Security section in Central Administration.”

Reading this suggests that the Windows service ("SharePoint Server Search 14") would run under this account. The reality is that the SharePoint Server Search 14 service runs as the farm account. You can see the pre and post provisioning status below. First up, I show below a freshly installed SharePoint where the SharePoint Server Search 14 service is disabled, with service credentials of "Local Service".

image

The next set of pictures show the Search Service Application provisioned according to the following configuration:

  • Search service account: SEVENSIGMA\searchservice
  • Search admin web service account: SEVENSIGMA\searchadminws
  • Search query and site settings account: SEVENSIGMA\searchqueryss

You can see this in the screenshots below.

image

image    image

Once the service has been successfully provisioned, we can clearly see the “Default content access account” is based on the “Search service account” as described in the configuration above (the first of the three accounts).

image

Finally, as you can see below, once provisioned, it is the SharePoint farm account that is running the search windows service.

image

Once you have provisioned the Search Service Application, the default content access account (in my case SEVENSIGMA\searchservice) is granted "Full Read" access to all web applications via Web Application User Policies, as shown below. This way, no matter how draconian the permissions of site collections are, the crawler account will have the access it needs to crawl the content, as well as the permissions of that content. You can verify this by looking at any web application in Central Administration (except the Central Administration web application itself) and choosing "User Policy" from the ribbon. You will see in the policy screen that the "Search Crawler" account has "Full Read" access.

image

image

In case you are wondering why the search service needs to crawl the permissions of content, as well as the content itself, it is because it uses these permissions to trim search results for users who do not have access to content. After all, you don’t want to expose sensitive corporate data via search do you?

There is another, more subtle configuration change performed by the Search Service. Once the evilness known as the User Profile Service has been provisioned, the Search Service Application will grant the Search Service Account a specific permission to the User Profile Service. SharePoint is smart enough to do this regardless of whether the User Profile Service Application is installed before or after the Search Service Application.

The specific permission by the way, is “Retrieve People Data for Search Crawlers” permission as shown below:

image    image

Getting back to the title of this post, this is a critical permission, because without it, the Search Server will not be able to talk to the User Profile Service to enumerate user profile information. The effect of this is empty People Search results.

How people search works (a little more advanced)

Right! Now that the cool kids have joined us (those who skipped the first section), let's take a closer look at SharePoint People Search in particular. This section delves a little deeper, but fear not, I will try and keep things relatively easy to grasp.

Once the Search Service Application has been provisioned, a default content source called – originally enough – "Local SharePoint Sites" is created. Any web applications that exist (and any that are created from here on in) will be listed here. An example of a freshly minted SharePoint server with a single web application shows the following configuration in the Search Service Application:

image

Now hopefully http://web makes sense. Clearly this is the URL of the web application on this server. But you might be wondering what sps3://web is? I will bet that you have never visited an sps3:// site using a browser either. For good reason too, as it wouldn't work.

This is a SharePointy thing – or more specifically, a Search Server thing. That funny protocol part of what looks like a URL refers to a connector. A connector allows Search Server to crawl other data sources that don't necessarily use HTTP, like some native, binary data source. People can develop their own connectors if they feel so inclined, and a classic example is the Lotus Notes connector that Microsoft supply with SharePoint. If you configure SharePoint to use its Lotus Notes connector (and by the way – it's really tricky to do), you would see a URL in the form of:

notes://mylotusnotesbox

Make sense? The protocol part of the URL allows the search server to figure out what connector to use to crawl the content. (For what it's worth, there are many others out of the box. If you want to see all of the connectors then check the list here).

But the one we are interested in for this discussion is SPS3:, which accesses SharePoint user profiles and supports people search functionality. The way this particular connector works is that when the crawler accesses the SPS3 connector, it in turn calls a special web service at the host specified. The web service is called spscrawl.asmx and in my example configuration above, it would be http://web/_vti_bin/spscrawl.asmx

The basic breakdown of what happens next is this:

  1. Information about the web site that will be crawled is retrieved (the GetSite method is called, passing in the site from the URL – i.e. the "web" of sps3://web)
  2. Once the site details are validated, the service enumerates all of the user profiles
  3. For each profile, the GetItem method is called, which retrieves all of the user profile properties for that user. This is added to the index and tagged with a content class of "urn:content-class:SPSPeople" (I will get to this in a moment)

Now admittedly this is the simple version of events. If you really want to be scared (or get to sleep tonight) you can read the actual SPS3 protocol specification PDF.

Right! Now let's finish this discussion with the notion of contentclass. The SharePoint search crawler tags all crawled content according to its class. The name of this "tag" – or in correct terminology, "managed property" – is contentclass. By default SharePoint has a People Search scope, which essentially limits the search to only returning content tagged with the People contentclass.

image

Now to make it easier for you, Dan Attis listed all of the content classes that he knew of back in SharePoint 2007 days. I’ll list a few here, but for the full list visit his site.

  • “STS_Web” – Site
  • "STS_List_850" – Page Library
  • “STS_List_DocumentLibrary” – Document Library
  • “STS_ListItem_DocumentLibrary” – Document Library Items
  • “STS_ListItem_Tasks” – Tasks List Item
  • “STS_ListItem_Contacts” – Contacts List Item
  • “urn:content-class:SPSPeople” – People

(why some properties follow the uniform resource name format I don't know *sigh* – geeks huh?)
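If you want a quick way to check whether anything with the People content class has actually made it into your index, you can query for it directly. Here is a minimal sketch in Python using the requests and requests_ntlm packages, and assuming the out-of-the-box Search RSS page (/_layouts/srchrss.aspx) is available in your farm; the host name is the one from my example and the account is any hypothetical user who can run searches:

    import requests
    from requests_ntlm import HttpNtlmAuth   # pip install requests requests_ntlm

    # restrict the query to items tagged with the People content class
    query = 'contentclass:"urn:content-class:SPSPeople"'

    resp = requests.get(
        "http://web/_layouts/srchrss.aspx",
        params={"k": query},
        auth=HttpNtlmAuth("SEVENSIGMA\\someuser", "password"),  # placeholder credentials
    )

    print(resp.status_code)
    print(resp.text.count("<item>"), "people results returned")

Zero results here, while regular content searches work fine, is the classic symptom of the crawler not being able to talk to the User Profile Service.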

So that was easy Paul! What can go wrong?

Now that we know that although the protocol handler is SPS3, it still ultimately uses HTTP as the underlying communication mechanism and calls a web service, we can start to think of all the ways it can break on us. Let's now take a look at common problem areas in order of commonality:

1. The Loopback issue.

This has been done to death elsewhere and most people know it. What people don't know so well is that the loopback fix was introduced to prevent an extremely nasty security vulnerability known as a replay attack that came out a few years ago. Essentially, if you make an HTTP connection to your server, from that server, using a name that does not match the name of the server, then the request will be blocked with a 401 error. In terms of SharePoint people search, the sps3:// handler is created when you create your first web application. If that web application happens to have a name that doesn't match the server name, then the HTTP request to the spscrawl.asmx web service will be blocked due to this issue.

As a result your search crawl will not work and you will see an error in the logs along the lines of:

  • Access is denied: Check that the Default Content Access Account has access to the content or add a crawl rule to crawl the content (0x80041205)
  • The server is unavailable and could not be accessed. The server is probably disconnected from the network.   (0x80040d32)
  • ***** Couldn’t retrieve server http://web.sevensigma.com policy, hr = 80041205 – File:d:\office\source\search\search\gather\protocols\sts3\sts3util.cxx Line:548

There are two ways to fix this. The quick way (DisableLoopbackCheck) and the right way (BackConnectionHostNames). Both involve a registry change and a reboot, but one of them leaves you much more open to exploitation. Spence Harbar wrote about the differences between the two some time ago and I recommend you follow his advice.
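You can confirm that the loopback check is what is biting you by calling the crawl web service from the server itself with the crawl account's credentials. A minimal sketch (Python with the requests and requests_ntlm packages; the URL and account are the ones used in this post, and the password is obviously a placeholder):

    import requests
    from requests_ntlm import HttpNtlmAuth   # pip install requests requests_ntlm

    # run this ON the SharePoint server, using the default content access account
    url = "http://web/_vti_bin/spscrawl.asmx"
    resp = requests.get(url, auth=HttpNtlmAuth("SEVENSIGMA\\searchservice", "password"))

    # 200 means authentication worked; a 401 here (when the same request works
    # from another machine) is the classic signature of the loopback check
    print(resp.status_code)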

(As a slightly related side note, I hit an issue with the User Profile Service a while back where it gave an error: "Exception occurred while connecting to WCF endpoint: System.ServiceModel.Security.MessageSecurityException: The HTTP request was forbidden with client authentication scheme ‘Anonymous’. —> System.Net.WebException: The remote server returned an error: (403) Forbidden". In this case I needed to disable the loopback check even though I was using the server name with no alternative aliases or fully qualified domain names. I asked Spence about this one and it seems that the DisableLoopBack registry key addresses more than the SMB replay vulnerability.)

2. SSL

If you add a certificate to your site and mark the site as HTTPS (by using SSL), things change. In the example below, I installed a certificate on the site http://web, removed the binding to http (or port 80) and then updated SharePoint’s alternate access mappings to make things a HTTPS world.

Note that the reference to SPS3://WEB is unchanged, and that there is also a reference still to HTTP://WEB, as well as an automatically added reference to HTTPS://WEB

image

So if we were to run a crawl now, what do you think will happen? Certainly we know that HTTP://WEB will fail, but what about SPS3://WEB? Let's run a full crawl and find out, shall we?

Checking the logs, we have the unsurprising error “the item could not be crawled because the crawler could not contact the repository”. So clearly, SPS3 isn’t smart enough to work out that the web service call to spscrawl.asmx needs to be done over SSL.

image

Fortunately, the solution is fairly easy. There is another connector, identical in function to SPS3 except that it is designed to handle secure sites: "SPS3s". We simply change the configuration to use this connector (and while we are there, remove the reference to HTTP://WEB).

image

Now we retry a full crawl and check for errors… Wohoo – all good!

image

It is also worth noting that there is another SSL related issue with search. The search crawler is a little fussy with certificates. Most people have visited secure web sites that warn about a problem with the certificate, which looks like the image below:

image

Now when you think about it, a search crawler doesn’t have the luxury of asking a user if the certificate is okay. Instead it errs on the side of security and, by default, will not crawl a site if the certificate is invalid in some way. The crawler is also fussier than a regular browser. For example, it doesn’t much like wildcard certificates, even if the certificate is trusted and valid (although all modern browsers accept them).

To alleviate this issue, you can change a setting in the Search Service Application: go to Farm Search Administration -> Ignore SSL warnings and tick “Ignore SSL certificate name warnings”.

image  image

image

The implication of this change is that the crawler will now accept any old certificate that encrypts website communications.

3. Permissions and the legacy of account changes

Let’s assume that we made a configuration mistake when we provisioned the Search Service Application. The search service account (which is the default content access account) is incorrect and we need to change it to something else. Let’s see what happens.

In the search service application management screen, click on the default content access account to change its credentials. In my example I have changed the account from SEVENSIGMA\searchservice to SEVENSIGMA\svcspsearch.

image
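(The same change can be scripted if you prefer – a rough sketch, assuming the new account is SEVENSIGMA\svcspsearch:)

  # Change the default content access (crawl) account
  $password = Read-Host "Password for SEVENSIGMA\svcspsearch" -AsSecureString
  $ssa = Get-SPEnterpriseSearchServiceApplication
  Set-SPEnterpriseSearchServiceApplication -Identity $ssa `
      -DefaultContentAccessAccountName "SEVENSIGMA\svcspsearch" `
      -DefaultContentAccessAccountPassword $password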

Having made this change, let’s review the effect in the Web Application User Policy and the User Profile Service Application permissions. Note that the user policy for the old search crawl account remains, but the new account has had an entry automatically created. (Now you know why you end up with multiple accounts with the display name of “Search Crawling Account”.)

image

Now let’s check the User Profile Service Application. Here things are different! The entry below still refers to the *old* account SEVENSIGMA\searchservice, and the new account has not been granted the required “Retrieve People Data for Search Crawlers” permission!

image

 

image

If you traipsed through the ULS logs, you would see this:

Leaving Monitored Scope (Request (GET:https://web/_vti_bin/spscrawl.asmx)). Execution Time=7.2370958438429 c2a3d1fa-9efd-406a-8e44-6c9613231974
mssdmn.exe (0x23E4) 0x2B70 SharePoint Server Search FilterDaemon e4ye High FLTRDMN: Errorinfo is "HttpStatusCode Unauthorized The request failed with HTTP status 401: Unauthorized." [fltrsink.cxx:553] d:\office\source\search\native\mssdmn\fltrsink.cxx
mssearch.exe (0x02E8) 0x3B30 SharePoint Server Search Gatherer cd11 Warning The start address sps3s://web cannot be crawled. Context: Application ‘Search_Service_Application’, Catalog ‘Portal_Content’ Details: Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled. (0x80041205)

To correct this issue, manually grant the crawler account the “Retrieve People Data for Search Crawlers” permission in the User Profile Service. As a reminder, this is done via the Administrators icon in the “Manage Service Applications” ribbon.

image
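If you would rather script it, something like the following sketch should do the trick (the TypeName filter and the account name are examples – adjust both for your farm):

  # Grant the new crawl account "Retrieve People Data for Search Crawlers" on the User Profile Service Application
  $upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like "User Profile*" }
  $security  = Get-SPServiceApplicationSecurity $upa -Admin
  $principal = New-SPClaimsPrincipal -Identity "SEVENSIGMA\svcspsearch" -IdentityType WindowsSamAccountName
  Grant-SPObjectSecurity $security $principal "Retrieve People Data for Search Crawlers"
  Set-SPServiceApplicationSecurity $upa -Admin $security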

Once this is done, run a full crawl and verify the result in the logs.

4. Missing root site collection

A less common issue that I once encountered is when the web application being crawled is missing a root site collection. In other words, while there are site collections defined using a managed path, such as http://WEB/SITES/SITE, there is no site collection defined at HTTP://WEB.

The crawler does not like this at all, and you get two different errors depending on whether the SPS or HTTP connector is used.

  • SPS:// – Error in PortalCrawl Web Service (0x80042617)
  • HTTP:// – The item could not be accessed on the remote server because its address has an invalid syntax (0x80041208)

image

The fix for this should be fairly obvious: create a root site collection for the web application and re-run the crawl.
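A one-liner along these lines does the job (the URL, owner and template below are examples):

  # Create a root site collection so the crawler has something to hit at the web application root
  New-SPSite -Url "https://web" -OwnerAlias "SEVENSIGMA\spadmin" -Template "STS#0" -Name "Portal Root"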

5. Alternate Access Mappings and Contextual Scopes

SharePoint guru (and my squash nemesis) Nick Hadlee recently posted about a problem where contextual search scopes return no results. If you are wondering what they are, Nick explains:

Contextual scopes are a really useful way of performing searches that are restricted to a specific site or list. The “This Site: [Site Name]”, “This List: [List Name]” are the dead giveaways for a contextual scope. What’s better is contextual scopes are auto-magically created and managed by SharePoint for you so you should pretty much just use them in my opinion.

The issue is that when the alternate access mapping (AAM) settings for the default zone on a web application do not match your search content source, the contextual scopes return no results.

I came across this problem a couple of times recently and the fix is really pretty simple – check your alternate access mapping (AAM) settings and make sure the host header that is specified in your default zone is the same url you have used in your search content source. Normally SharePoint kindly creates the entry in the content source whenever you create a web application but if you have changed around any AAM settings and these two things don’t match then your contextual results will be empty. Case Closed!

Thanks Nick
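If you want to sanity-check this from PowerShell rather than clicking through Central Administration, here is a rough sketch (it assumes the default “Local SharePoint sites” content source):

  # Compare the default zone AAM entries with the search content source start addresses
  Get-SPAlternateURL | Where-Object { $_.Zone -eq "Default" }

  $ssa = Get-SPEnterpriseSearchServiceApplication
  (Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites").StartAddresses

If the host headers in the two lists don’t line up, that’s your culprit.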

6. Active Directory Policies, Proxies and Stateful Inspection

A particularly insidious way to have problems with Search (and not just people search) is via Active Directory policies. For those of you who don’t know what AD policies are, they basically allow geeks to go on a power trip with users’ desktop settings. Consider the image below. Essentially, an administrator can enforce a massive array of settings for all PCs on the network. Such is the extent of what can be controlled that I can’t fit it into a single screenshot. What is listed below is but a small portion of what a control-freak administrator has at their disposal (mwahahaha!)

image

Common uses of policies include restricting certain desktop settings to maintain consistency, as well as enforcing Internet Explorer settings such as the proxy server and security settings like the trusted sites list. One common issue with a policy-defined proxy server in particular is that the search service account ends up with its profile modified to use the proxy server.

The result of this is that now the proxy sits between the search crawler and the content source to be crawled as shown below:

Crawler —–> Proxy Server —–> Content Source

Now even though the crawler does not use Internet Explorer per se, proxy settings aren’t actually specific to Internet Explorer. Internet Explorer, like the search crawler, uses wininet.dll. Wininet is a module that contains Internet-related functions used by Windows applications, and it is this component that utilises proxy settings.

Sometimes people will troubleshoot this issue by using telnet to connect to the HTTP port (e.g. “telnet web 80”). But telnet does not use the wininet component, so it is not a valid test. Telnet will happily report that the web server is listening on port 80 or 443, but that tells you nothing about what happens when the crawler tries to access that port via the proxy. Furthermore, even if the crawler and the content source are on the same server, the result is the same: as soon as the crawler attempts to index a content source, the request will be routed to the proxy server. Depending on the vendor and configuration of the proxy server, various things can happen, including:

  • The proxy server cannot handle the NTLM authentication and passes back a 400 error code to the crawler
  • The proxy server has funky stateful inspection that restricts the allowed HTTP verbs and interferes with the crawl

For what it’s worth, it is not just proxy settings that can interfere with the HTTP communications between the crawler and the crawled. I have also seen security software get in the way by monitoring HTTP communications and pre-emptively terminating connections or modifying the content of the HTTP request. The effect is that the results passed back to the crawler are not what it expects, and the crawler naturally reports that it could not access the data source with suitably weird error messages.

Now the very thing that makes this scenario hard to troubleshoot is also the tell-tale sign for it: nothing will be logged in the ULS logs, nor in the IIS logs for the search service. This is because the errors will be logged in the proxy server or the overly enthusiastic stateful security software.

If you suspect the problem is a proxy server issue, but do not have access to the proxy server to check its logs, the best way to troubleshoot is to temporarily grant the search crawler account enough access to log into the server interactively. Open Internet Explorer and manually check the proxy settings (or read them straight from the registry, as sketched after the list below). If you confirm a policy-based proxy setting, you might be able to temporarily disable it and retry a crawl (until the next AD policy refresh reapplies the settings). The ideal way to cure this problem is to ask your friendly Active Directory administrator to do one of the following:

  • Remove the proxy altogether from the SharePoint server (watch for certificate revocation slowness as a result)
  • Configure an exclusion in the proxy settings for the AD policy so that the content sources for crawling are not proxied
  • Create a new AD policy specifically for the SharePoint box so that the default settings apply to the rest of the domain member computers.
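Going back to the troubleshooting step above: the WinINet proxy values can be read straight from the registry while logged on as the crawl account, which saves clicking through Internet Explorer. A quick sketch (the key path is the standard per-user Internet Settings location):

  # Run while logged on interactively as the crawl account
  $inet = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
  Get-ItemProperty -Path $inet | Select-Object ProxyEnable, ProxyServer, ProxyOverride, AutoConfigURL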

If you suspect the issue might be overly zealous stateful inspection, temporarily disable all security-type software on the server and retry a crawl. Just remember that if you have no logs on the server being crawled, chances are it’s not being crawled and you have to look elsewhere.

7. Pre-Windows 2000 Compatible Access Group

In an earlier post of mine, I hit an issue where search would yield no results for a regular user, but a domain administrator could happily search SP2010 and get results. Another symptom associated with this particular problem is a pair of recurring errors in the event log – Event IDs 28005 and 4625.

  • ID 28005 shows the message “An exception occurred while enqueueing a message in the target queue. Error: 15404, State: 19. Could not obtain information about Windows NT group/user ‘DOMAIN\someuser’, error code 0x5”.
  • The 4625 error would complain “An account failed to log on. Unknown user name or bad password status 0xc000006d, sub status 0xc0000064” or else “An Error occured during Logon, Status: 0xc000005e, Sub Status: 0x0”

If you turn up the debug logs inside SharePoint Central Administration for the “Query” and “Query Processor” functions of “SharePoint Server Search”, you will get an error: “AuthzInitializeContextFromSid failed with ERROR_ACCESS_DENIED. This error indicates that the account under which this process is executing may not have read access to the tokenGroupsGlobalAndUniversal attribute on the querying user’s Active Directory object. Query results which require non-Claims Windows authorization will not be returned to this querying user.”

image

The fix is to add your search service account to the “Pre-Windows 2000 Compatible Access” group. The issue is that SharePoint 2010 re-introduced something that was in SP2003 – an API call to a function called AuthzInitializeContextFromSid. Apparently it was not used in SP2007, but it’s back for SP2010. This particular function requires a certain permission in Active Directory, and the “Pre-Windows 2000 Compatible Access” group happens to have the right required to read the “tokenGroupsGlobalAndUniversal” Active Directory attribute described in the debug error above.
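If you have the Active Directory PowerShell module handy, the fix itself is a one-liner (the account name is an example):

  # Requires the ActiveDirectory module (RSAT) and sufficient AD permissions
  Import-Module ActiveDirectory
  Add-ADGroupMember -Identity "Pre-Windows 2000 Compatible Access" -Members "svcspsearch"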

8. Bloody developers!

Finally, Patrick Lamber blogs about another cause of crawler issues. In his case, someone developed a custom web part that threw an exception when the site was crawled. For whatever reason, this exception did not get thrown when the site was viewed normally via a browser. As a result, no pages or content on the site could be crawled, because all the crawler would see, no matter what it clicked, was the dreaded “An unexpected error has occurred”. When you think about it, any custom code that takes action based on browser parameters such as locale or language might cause an exception like this – and therefore cause the crawler some grief.

In Patrick’s case there was a second issue as well. His team had developed a custom HTTPModule that did some URL rewriting. As Patrick states, “The indexer seemed to hate our redirections with the Response.Redirect command. I simply removed the automatic redirection on the indexing server. Afterwards, everything worked fine”.

In this case Patrick was using a multi-server farm with a dedicated index server, allowing him to remove the HTTP module for that one server. In smaller deployments you may not have this luxury. So apart from the obvious opportunity to bag programmers :-), this example nicely shows that it is easy for third-party applications or custom code to break search. What is important for developers to realise is that client web browsers are not the only thing that loads SharePoint pages.

If you are not aware, the User Agent string identifies the type of client accessing a resource. This is the means by which sites figure out what browser you are using. A quick look at the User Agent sent by SharePoint Server 2010 search reveals that it identifies itself as “Mozilla/4.0 (compatible; MSIE 4.01; Windows NT; MS Search 6.0 Robot)”. At the very least, test any custom user interface code such as web parts against this string, and check the crawl logs whenever the crawler indexes any custom-developed components.
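A cheap way to smoke-test custom code is to request the page while impersonating the crawler’s User Agent string. A sketch follows (Invoke-WebRequest needs PowerShell 3.0 or later, and the URL is an example):

  # Request a page the way the crawler identifies itself and look for the dreaded error page
  $ua = "Mozilla/4.0 (compatible; MSIE 4.01; Windows NT; MS Search 6.0 Robot)"
  $response = Invoke-WebRequest -Uri "https://web/pages/default.aspx" -UserAgent $ua -UseDefaultCredentials
  $response.StatusCode
  $response.Content -match "An unexpected error has occurred"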

Conclusion

Well, that’s pretty much my list of gotchas. No doubt there are lots more, but hopefully this slightly more detailed exploration of them might help some people.

 

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.spgovia.com


Consequences of complexity–the evilness of the SharePoint 2010 User Profile Service


Hiya

A few months back I posted a relatively well-behaved rant over the ridiculously complex User Profile Service Application of SharePoint 2010. I think this component in particular epitomises SharePoint 2010’s awful combination of “design by committee” clunkiness and sheltered-from-the-real-world Microsoft product manager groupthink, which seems to rate success on the number of half-baked features packed in, as opposed to how well those features install, integrate with other products and function properly in real-world scenarios.

Now truth be told, until yesterday I had an unblemished record with the User Profile Service – I had successfully provisioned it first time at every site I have visited (and no, I did not resort to running it all as administrator). Of course, we all have Spence to thank for this with his rational guide. Nevertheless, I am strongly starting to think that I should write the irrational guide as a sort of bizarro version of Spence’s articles, combining his rigour with some mega-ranting ;-).

So what happened to blemish my perfect record? Bloody Active Directory policies – that’s what.

In case you didn’t know, SharePoint uses a scaled-down, pre-release version of Forefront Identity Manager. Presumably the logic was to allow more flexibility by two-way syncing to various directory services, thereby saving the SharePoint team development time and effort, as well as being able to tout yet another cool feature to the masses. Of course, the trade-off that the programmers overlooked is the insane complexity they introduced as a result. I’m sure if you asked Microsoft’s support staff what they think of the UPS, they would tell you it has not worked out overly well. Whether that feedback has made its way back to the hallowed ground of the open-plan cubicles of SharePoint product development I can only guess. But I theorise that if Microsoft made their SharePoint devs accountable for providing front-line tech support for their components, they would suddenly understand why conspiracy theorist support and infrastructure guys act the way they do.

Anyway, I’d better suppress my desire for an all-out rant and tell you the problem and the fix. The site in question was actually a fairly simple set-up: a two-server farm and a single AD forest. About the only thing of significance in an otherwise stock-standard setup was that the Active Directory NetBIOS name did not match the Active Directory fully qualified domain name. But this is well known and well covered by TechNet and Spence. A quick bit of PowerShell goodness and some AD permission configuration sorts the issue.
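(If you haven’t seen it, the PowerShell goodness in question essentially boils down to flipping the NetBIOSDomainNamesEnabled property on the User Profile service application before the first synchronisation connection is created – a minimal sketch, not the full TechNet procedure:)

  # Must be set before the first profile synchronisation connection is created
  $upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like "User Profile*" }
  $upa.NetBIOSDomainNamesEnabled = $true
  $upa.Update()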

Yet when I provisioned the User Profile Service Application and then tried to start the User Profile Synchronisation Service on the server (the big, scary step that strikes fear into practitioners), I hit the sadly common “stuck on starting” error. The ULS logs told me utterly nothing of significance – even when I turned the debug juice to full throttle. The ever-helpful Windows event logs showed me Event ID 3:

ForeFront Identity Manager,
Level: Error

.Net SqlClient Data Provider: System.Data.SqlClient.SqlException: HostId is not registered
at Microsoft.ResourceManagement.Data.Exception.DataAccessExceptionManager.ThrowException(SqlException innerException)
at Microsoft.ResourceManagement.Data.DataAccess.RetrieveWorkflowDataForHostActivator(Int16 hostId, Int16 pingIntervalSecs, Int32 activeHostedWorkflowDefinitionsSequenceNumber, Int16 workflowControlMessagesMaxPerMinute, Int16 requestRecoveryMaxPerMinute, Int16 requestCleanupMaxPerMinute, Boolean runRequestRecoveryScan, Boolean& doPolicyApplicationDispatch, ReadOnlyCollection`1& activeHostedWorkflowDefinitions, ReadOnlyCollection`1& workflowControlMessages, List`1& requestsToRedispatch)
at Microsoft.ResourceManagement.Workflow.Hosting.HostActivator.RetrieveWorkflowDataForHostActivator()
at Microsoft.ResourceManagement.Workflow.Hosting.HostActivator.ActivateHosts(Object source, ElapsedEventArgs e)

The most common cause of this message is the NetBIOS issue I mentioned earlier, but in my case that avenue proved fruitless. I also took Spence’s advice and installed the February 2011 cumulative update for SharePoint 2010, but to no avail. Every time I provisioned the UPS sync service, I received the above persistent error – many, many, many times. 🙁

For what it’s worth, forget googling the above error, because it is a bit of a red herring and the issues you find will likely point you to the wrong places.

In my case, the key to the resolution lay in understanding my previously documented issue with the UPS and self-signed certificate creation. This time, I noticed that the certificates were successfully created before the above error happened. MIISCLIENT showed that no configuration had been written to Forefront Identity Manager at all. Then I remembered that the SharePoint User Profile Service Application talks to Forefront over HTTPS on port 5725. As soon as I remembered that HTTP was the communication mechanism, I had a strong suspicion of where the problem was – I have seen this sort of crap before…

I wondered if some stupid proxy setting was getting in the way. Back in the halcyon days of SharePoint 2003, I had an issue where scheduled SMIGRATE tasks would fail if the account used to run SMIGRATE was configured to use a proxy server. To find out if that was the case here, a quick run of the GPRESULT tool showed that there was a proxy configuration script applied at the domain level for all users. We then logged in as the farm account interactively (given that the farm account needs to be an administrator to provision the UPS anyway, this was not a problem), disabled all proxy configuration via Internet Explorer and tried again.
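(For reference, the check itself is trivial once you are logged on as the account in question – GPRESULT in summary mode shows which group policy objects are being applied to that profile:)

  # Run in a console while logged on interactively as the farm account
  gpresult /r /scope user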

Blammo! The service provisions and we are cooking with gas! It was the bloody proxy server. Reconfigure group policy and all is good.

Conclusion

The moral of the story is this: any time Windows components communicate with each other via HTTP, there is always a chance that some AD-induced dumbass proxy setting might get in the way. If not that, then stateful security apps that inspect HTTP traffic, or even a corrupted cache (as happened in this case). The ULS logs will never tell you much here, because the problem is not SharePoint per se, but the registry configuration enforced by policy.

So, to ensure that you are not affected by this, configure all SharePoint servers to be excluded from proxy access, or configure the SharePoint farm account not to use a proxy server at all (watch for certificate revocation-related slowness if you do this, though).

Finally, I called this post “consequences of complexity” because the root cause of this sort of problem is very tricky to identify. With so many variables in the mix, how the hell can people figure this sort of stuff out?

Seriously Microsoft, you need to adjust your measures of success to include resiliency of the platform!

 

Thanks for reading

Paul Culmsee

www.sevensigma.com.au
