Trials or tribulation? Inside SharePoint 2013 workflows–Part 6

This entry is part 6 of 13 in the series Workflow

Hi and welcome to part 6 of my series of articles aimed at demystifying various aspects of SharePoint 2013 workflows. We have been using a mythical example of a document approval workflow from our mythical multinational called Megacorp Inc. We have been trying to create a workflow that implements the process below…

Snapshot_thumb3

Seems straightforward enough, but in part 3 we were foiled by the use of check in/check out on document libraries, and a completely useless error message didn’t help matters. We eventually worked around that issue, but in part 4 we got stuck on a bigger snag because of our chosen information architecture. The Organisation column we created is a managed metadata column, and it turns out that you cannot use a managed metadata column as a filter for a list (steps 2 and 3 above). In the last article, we took a detour into the world of dictionary variables and a very powerful new workflow action called “Call HTTP Web Service”. We learnt that in situations where a built-in workflow action does not cut it for you, you might be able to use Call HTTP Web Service to do what you need. This sets the scene for our next exciting instalment. Perhaps we can get around this managed metadata issue with one of SharePoint’s many web services? If so, which one do we need to use and why?

In this post and the next few, I am going to show you two ways that we can get around the problem of not being able to filter via managed metadata using the Call HTTP Web Service capability. The first method is a little easier to build than the second, but it has a flaw that hopefully will become self-evident as we proceed. Having said this, I feel it is really important to cover both approaches, because each showcases different features and capabilities of SharePoint Designer 2013 workflows. Therefore, this article and the next two will show the easier but flawed way, and articles 9, 10, 11 and 12 will show what I think is the better way to go.

The workflow looping method…

The gist of the approach we are going to take is to:

  • Get the unique ID of the Organisation for the selected document in the Documents library
  • Using the SharePoint lists REST web service, we will load the Assigned To and Organisation columns from the Process Owners list and store them in a Dictionary variable
  • Using the workflow looping capability, we will step through each item in the dictionary and find the first entry where the unique ID of the Organisation from step 1 matches the Organisation in Process Owners
  • For the matching entry, assign a task to the person specified in the Assigned To column (a rough sketch of this overall logic follows below).
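Before we wire this up with SharePoint Designer actions, here is a rough sketch of the same logic in code. It is purely illustrative and not part of the workflow itself: it assumes Python with the requests library, assumes you are already authenticated to the Megacorp site, and the JSON structure simply mirrors the XML response we will look at shortly.

import requests  # illustrative only; SharePoint authentication is omitted

SITE = "http://megacorp/iso9001"

def find_process_owner(organisation_0):
    # Step 1: pull the term GUID out of the hidden Organisation_0 value,
    # e.g. "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f"
    term_guid = organisation_0.split("|")[1]

    # Step 2: ask the lists REST service for the Assigned To and Organisation of each process owner
    url = SITE + "/_vti_bin/client.svc/web/lists/getbytitle('Process Owners')/Items?$select=AssignedToId,Organisation"
    owners = requests.get(url, headers={"Accept": "application/json;odata=verbose"}).json()

    # Step 3: loop through each entry and find the one whose TermGuid matches
    for item in owners["d"]["results"]:
        if item["Organisation"]["TermGuid"] == term_guid:
            # Step 4: this is the person the approval task would be assigned to
            return item["AssignedToId"]
    return None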

Now to pull this off, we are going to bring together all of the topics that I have covered in this series. I am also going to be a little less verbose with screenshots, because by now some aspects of workflow creation using SharePoint Designer should be getting more familiar. Speaking of more familiar, let’s take a closer look at the lists web service again. In my second REST interlude in part 4, I demonstrated how you could specify the columns that you want to bring back from a web service call, rather than all columns. In the example below, I am showing how you can bring back just the Organisation and Assigned To columns from the Process Owners list (AssignedToId is a REST-specific thing that represents the Assigned To column. More about that in part 8).

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation

Here is the XML for a single process owner entry… Note that we never get to see the name of the Organisation in the XML for the Organisation column (for that matter, we don’t see the name of the person in the Assigned To column either – an issue I will deal with later). Instead, we have the GUID for the Organisation in the <d:TermGuid> section.

<content type="application/xml">
  <m:properties>
    <d:Organisation m:type="SP.Taxonomy.TaxonomyFieldValue">
      <d:Label>14</d:Label>
      <d:TermGuid>e2f8e2e0-9521-4c7c-95a2-f195ccad160f</d:TermGuid>
      <d:WssId m:type="Edm.Int32">14</d:WssId>
    </d:Organisation>
    <d:AssignedToId m:type="Edm.Int32">7</d:AssignedToId>
  </m:properties>
</content>

Now also in part 4, I explained the Organisation_0 hidden column and showed that it stores both the organisation name and the GUID of that organisation. So if Organisation has been set to Megacorp Burgers for a document, the value of Organisation_0 for that document would be:

Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f

The common element between the XML from the Process Owners list and the value of Organisation_0 from the Documents library is the term GUID. Therefore, if we can extract the GUID part of Organisation_0, we can use it to search the Process Owners list and find the entry whose <d:TermGuid> matches. So first up, let’s clean things up, then use some workflow actions to get hold of the GUID from the Organisation_0 column.

Getting the GUID…

Step 1:

Turning our attention back to the Process Owners Approval workflow, let’s delete our existing workflow actions, workflow variables and start afresh. Click on any existing workflow actions and choose Delete Action from the dropdown menu as shown below. To delete variables, click the local variables ribbon icon and remove any listed…

image  image  image

Now you should be looking at a clean workflow.

Step 2:

Add the workflow action Find substring in string. To complete the configuration of this action, click the substring hyperlink and add a pipe symbol “|”. Click the string hyperlink, the fx button and from Current Item, choose Organisation_0 as shown below…

image  image

image

The result of this workflow action is that the position of the pipe symbol in the string will be stored in a variable called index. For example, if you count the number of characters until you get to the pipe symbol in the string “Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”, the answer is 17.

Our next step is to grab all of the characters in the string after the pipe symbol, because that is the GUID we need. To do this, we will use another workflow action called Extract substring from index of string. This action takes a string and an index position, and returns all characters to the right of the index. Thus, with the string “Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”, if we start at position 17 we will end up with “|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”. This is not quite right because we do not want the pipe symbol, so we will first use another workflow action called Do Calculation to add 1 to the index variable.
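If it helps to see what these three actions add up to, here is the equivalent string handling as a minimal code sketch (illustrative only; note that Python counts positions from zero whereas the workflow actions in this walkthrough count from one, so the numbers differ slightly, but the idea is identical):

# the value of the hidden Organisation_0 column for the current document
organisation_0 = "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f"

index = organisation_0.find("|")    # Find substring in string: position of the pipe symbol
calc = index + 1                    # Do Calculation: step one character past the pipe
term_guid = organisation_0[calc:]   # Extract substring from index of string
# term_guid is now "e2f8e2e0-9521-4c7c-95a2-f195ccad160f"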

Step 3:

Add the Do Calculation action, click the first value hyperlink and then the fx button. Change the data source to Workflow Variables and Parameters and choose the variable called index. Then click the second value hyperlink and type in the number 1.

image

image

The net result of this is that we have a variable called calc that stores the position just after the pipe symbol in Organisation_0.

Step 4:

Add the Extract substring from index of string workflow action. Click the string hyperlink, the fx button and from Current Item, choose Organisation_0. Click the “0” hyperlink next to “starting from” and click the fx button. Change the data source to Workflow Variables and Parameters and choose the variable called calc. Finally, click on Variable: substring and choose to Create a new variable… and call it TermGUID as shown below…

image  image

At this point, it might be handy to log the value of TermGUID to the workflow history to make sure that things are working as we expect. We can delete this step later…

Step 5:

Add a log to workflow history action and log the value of TermGUID. The final workflow should look like this…

image

Step 6:

Publish this workflow, confirm there are no errors and then run it against a document in the Documents library. Wohoo! We now have the GUID!

image

Using stages…

Now that we have the GUID, it makes sense to turn this sequence of actions into a workflow stage. Then we can add a new stage for the rest of the workflow and add some error checking logic.

Step 1:

Click the stage header and rename the stage to Obtain Term GUID.

image

Step 2:

Click outside the stage and from the ribbon, click the Stage icon. A new stage will be added to the workflow. Call this stage Get Process Owners.

image

Now let’s create the logic that connects up the stages. We will set it up so that we only move to the Get Process Owners stage if the TermGUID variable has a value. After all, if there is no valid GUID, there is no point continuing the workflow.

Step 3:

In the Obtain Term GUID stage, select the Go to End of Workflow action and delete it. In the ribbon, click the Condition button and choose If any value equals value from the drop down menu. Confirm that the condition section has been added to the Transition to stage section of the workflow stage…

image   image

image

Step 4:

Click the value hyperlink, click the fx button and choose Workflow Variables and Parameters from the Data source dropdown. Find the TermGUID variable in the Field from source dropdown. Click the equals hyperlink and from the dropdown, choose “is not empty”.

image  image  image

Step 5:

Click on the top “Insert go-to actions with conditions” section, and add a Go to a Stage action. From the stage dropdown, choose “Get Process Owners”

image

Step 6:

Click on the bottom “Insert go-to actions with conditions” section, and add a Go to a Stage action. From the stage dropdown, choose “End of Workflow”. The complete workflow should look like the image below:

image

An HTTP interlude…

Our next task is to get all of the process owners into a dictionary variable. Before we do this, I am going to give you a little lesson on how the HTTP protocol works, because we are literally going to be hand crafting our own request. Therefore it is handy to understand the basics.

When your browser makes a request to a website or web service, it does not just say “Gimme this URL”. Often the server will change its behaviour based on the nature of the request. For example, if the requestor is a mobile device, the server will send back different HTML compared to a PC browser. So how can the server tell if a request is made from a mobile device versus a PC? The answer is that when the browser makes a request, it sends additional information in the form of request headers. Request headers are used for all sorts of things, and we are going to need to make use of them. Why? Remember in part 4 I mentioned the JSON data format and that we need to tell SharePoint that any data it sends us has to be in JSON format. Here is another of my dodgy diagrams explaining this by example…

Snapshot

Technically, we have to send the string “Accept: application/json;odata=verbose” in the request header to make this happen. So let’s see how we can put this request together via SharePoint Designer. Just to remind you, the URL of the web service that gets all of the process owners is:

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation
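If you were to make this same call outside of SharePoint Designer, the request would look something like the following minimal sketch (illustrative only; it assumes Python with the requests library and omits authentication to the site):

import requests  # illustrative only; SharePoint authentication is omitted

url = ("http://megacorp/iso9001/_vti_bin/client.svc/web/lists/"
       "getbytitle('Process Owners')/Items?$select=AssignedToId,Organisation")

# The Accept header is the important part: it asks SharePoint to return JSON rather than XML
response = requests.get(url, headers={"Accept": "application/json;odata=verbose"})

print(response.status_code)       # 200 (OK) means the call worked - the responseCode we log later
process_owners = response.json()  # the JSON feed that we will store in a dictionary variable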

Crafting the request…

The first thing we need to do is to create the request header that tells SharePoint to return the data in JSON format. This is done by creating a dictionary variable.

Step 1:

In the Get Process Owners workflow stage, add a Build Dictionary action. Click the this hyperlink to display the Build Dictionary window. Click the Add button and type “Accept” into the Name textbox and application/json;odata=verbose into the value textbox. Click OK twice, then click the Variable: dictionary hyperlink and create a new variable called RequestHeader.

image  image  image

image  image  image

Step 2:

Add a Call HTTP Web Service action and then click to select it as shown below. If you look at the parameters you can set, there is no mention of a request header. To set it, click the Advanced Properties icon in the ribbon. In the Call HTTP Web Service Properties dialog box, click the RequestHeaders parameter and, in the drop-down list to the right of it, choose the RequestHeader variable created in step 1. Click OK. Now the request for JSON format has been set.

image   image

image  image

Step 3:

Click on the “this” hyperlink to the left of “HTTP web service”. This will bring up the HTTP Web Service Details screen. At this point we could paste in the URL above, but we are going to do this a little smarter than that. Click the ellipsis next to the textbox to bring up the String Builder dialog box.

image  image

Click the Add or Change Lookup button and in the Data source dropdown, choose Workflow context. This data source comes built in with any workflow you create and, as you will see, contains some very handy information that we can use in our workflows. From the Field from source dropdown, choose Current Site URL and click OK. What this will do is take whatever site the workflow is run from and bring it back as a string – in this case http://megacorp/iso9001/. This is a good thing when you want to move this workflow to another site, such as from development to production: by using the Current Site URL workflow context, we are not hard-coding the site name into the workflow.

image  image  image

Anyhow, now that we have the site name, let’s complete the rest of the URL. In the String Builder dialog, add “_vti_bin/client.svc/web/lists/getbytitle('Process Owners')/Items?$select=AssignedToId,Organisation” and click OK.

image

Our call HTTP Web service now looks like this:

image

Now we are expecting a JSON data feed as a response to this request, so we need to create another dictionary variable to handle it.

Step 4:

Click the response hyperlink and choose to Create a new variable and call it ProcessOwnersList.

image  image

image

Right! At this point, we have built the Call HTTP Web Service action and we should test things to make sure it is working. If you look closely at the Call HTTP Web Service action, one of the variables that gets created is called responseCode, which is how the HTTP protocol reports whether the request worked or not. If the response code is 200 (OK), then the query worked. So let’s log the response code to the workflow history and run a test.

Step 5:

Add a Log to History List action. Click the message hyperlink and click the fx button. In the Lookup for String dialog box, choose Workflow Variables and Parameters from the Data source dropdown, choose responseCode from the Field from source dropdown and click OK.

image

Step 6:

In the Transition to stage section, add a Go to a Stage action and set the stage as End of Workflow. Click OK and review the workflow. It should look like the screen below.

image

Testing our progress and next steps…

Publish the workflow, run it and check the results in workflow history.  Wohoo! Our HTTP call worked! Note the OK in the workflow history!

image

At this point I will stop with this post, as it is getting rather long and we still have a bit to do. Although we know that the HTTP call worked, we have not looked at the data that came back. In the next post, we will use some more workflow actions to loop through the data returned to find the matching process owner.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com




A lesser known way to fine-tune SharePoint search precision…


Hi all

While I’d like to claim credit for the wisdom in this post, alas I cannot. One of Seven Sigma’s consultants (Daniel Wale) worked this one out and I thought that it was blog-worthy. Before I get into the issue and Daniel’s resolution, let me give you a bit of search engine theory 101, with a concept that I find useful for understanding search optimisation.

Precision vs. recall

Each time a person searches for information, there is an underlying goal or intended outcome. While there has been considerable study of information seeking behaviours in academia and beyond, they boil down to three archetype scenarios.

  1. “I know exactly what I am looking for” – The user has a particular place in mind, either because they visited it in the past or because they assume it exists. This is known as known item seeking, but is also referred to as navigational seeking or refinding.
  2. “I’m not sure what I am looking for but I’ll know it when I find it” – This is known as exploratory seeking and the purpose is to find information assumed to be available. This is characterised by:
    • Looking for more than one answer
    • No expectation of a “right” answer
    • Open ended
    • Not necessarily knowing much about what is being looked for
    • Not being able to articulate what is being looked for
  3. “Gimme gimme gimme!” – A detailed research type search known as exhaustive seeking, leaving no stone unturned in topic exploration. This is characterised by:
    • Performing multiple searches
    • Expressing what is being looked for in many ways

Now among other things, each of these scenarios would require different search results to meet the information seeking need. For example: If you know what you are looking for, then you would likely prefer a small, highly accurate set of search results that has the desired result at the top of the list. Conversely if you are performing an exploratory or exhaustive search, you would likely prefer a greater number of results since any of them are potentially relevant to you.

In information retrieval, the terms precision and recall are used to measure search efficiency. Google’s Tim Bray put it well when he said “recall measures how well a search system finds what you want and precision measures how well it weeds out what you do not want”. Sometimes recall is just what the doctor ordered, whereas other times, precision is preferred.
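To put that a little more formally (these are the standard information retrieval definitions, not anything SharePoint specific):

precision = (relevant results returned) / (total results returned)
recall = (relevant results returned) / (total relevant items that exist in the index)

A high-precision search returns little junk; a high-recall search misses little, usually at the cost of returning more junk.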

The scenario and the issue…

That said, recently Seven Sigma worked on a knowledgebase project for a large customer contact centre. The vast majority of the users of the system are customer centre operators who deal directly with all customer enquiries and have worked there for a long time. Thus most of the search behaviours are in the known item seeking category, as they know the content pretty well – it is just that there is a lot of it. Additionally, picture yourself as one of those operators and then imagine the frustration of a failed or time-consuming search, with an equally frustrated customer on the end of the phone and a growing queue of frustrated callers waiting their turn. In this scenario, search results need to be as precise as possible.

Thus, we invested a lot of time in the search and navigation experience on this project, and that investment paid off as the users were very happy with the new system and particularly happy with the search experience. Additionally, we created a mega menu solution for the navigation that dynamically builds links from knowledgebase article metadata and a managed metadata term set. This was done via the data view web part, XSLT, JavaScript and Marc’s brilliant SPServices. We were very happy with it because there was no server-side code at all, yet it was very easy to administer.

So what was the search-related issue? In a nutshell, we forgot that the search crawler doesn’t differentiate between your page’s content and items in your custom navigation. As a result, we had an issue where searches did not have adequate precision.

To explain the problem, and the resolution, I’ll take a step back and let Daniel continue the story… Take it away Dan…

The knowledgebase that Paul described above contained thousands of articles, and when the search crawler accessed each article page, it also saw the titles of many other articles in the dynamic menu code embedded in the page. As a result, this content also got indexed. When you think about it, the search crawler can’t tell whether content is real page content or a dynamic menu that drops down/slides out when you hover over the menu entry point. The result was that when users searched for any term that appeared in the mega menu, they would get back thousands of results (a match for every page), even when the “actual content” of the page didn’t contain any references to the searched term.

There is a simple solution, however, for controlling what the SharePoint search crawler indexes and what it ignores. SharePoint knows to exclude content that exists inside <div> HTML tags that have the noindex class added to them, e.g.

<div class="menu noindex"> 
  <ul> 
    <li>Article 1</li> 
    <li>Article 2</li> 
  </ul> 
</div>

There is one really important thing to note, however. If your <div class="noindex"> contains a nested <div> tag that doesn’t contain the noindex class, everything inside of this inner <div> tag will be included by the crawler. For example:

<div class="menu noindex"> 
  <ul> 
    <li>Article 1</li> 

      <div class="submenu">
        <ul>
          <li>Article 1.1</li>
          <li>Article 1.2</li>
        </ul>
      </div>

    <li>Article 2</li> 
  </ul> 
</div>

In the code above, the nested <div> that surrounds the submenu items does not contain the noindex class. So the text “Article 1.1” and “Article 1.2” will be crawled, while the “Article 1” and “Article 2” text in the parent <div> will still be excluded.

Obviously the example above is greatly simplified and, like our solution, your menu is possibly making use of a DataViewWebPart with an XSL transform building it out. It is inside your XSL that you’ll need to include the <div> with the noindex class, because the Web Part will generate its own <div> tags that encapsulate your menu. (If you aren’t familiar with the generated code, use the browser Developer Tools and inspect the markup the Web Part inserts; you’ll find at least one <div> element nested inside any <div class="noindex"> you put around your web part thinking you were going to stop the custom menu being crawled.)

When I initially looked around for why our search results were being littered with so many seemingly irrelevant results, I found this method of excluding the custom menu rather easily. I also found a lot of forum posts from people having the same issue but reporting that their use of <div> tags with the noindex class was not working. In some of these posts people had included snippets of their code; each time they had nested <div> tags and were baffled as to why their code wasn’t working. I figured most people were having this problem because they simply didn’t read the detail in the solutions about the nesting, or didn’t understand that the web part will generate its own HTML into their page and quite likely insert a <div> that surrounds the content they are wanting to hide. As any SharePoint developer quickly finds out, a lot of SharePoint knowledge won’t come from the well set out documentation library with lots of code examples that developers get used to in other environments. You need to read blogs (like this one), read forums, talk to colleagues and just build up your own experience until these kinds of gotchas are just known to you. Even the best SharePoint developer can overlook simple things like this, and by figuring them out they get that little bit better each time.

Being a SharePoint developer is really about being the master of self-learning, the master of using a search engine to find the knowledge you need and, most importantly, the master of knowing which information you’re reading is actually going to be helpful and which is going to lead you down the garden path. The MSDN blog post by Mark Arend (http://blogs.msdn.com/b/markarend/archive/2010/06/07/control-search-indexing-crawling-within-a-page-with-noindex.aspx) gives a clear description of the problem and the solution. He also states that it is by design that nested <div> tags are re-evaluated for the noindex class, and mentions that the product team was considering changing this… Did this create the confusion for people, or was it that they read the first part of the solution and didn’t read the note about nested <div> tags? In any case, it’s a vital bit of the solution that a lot of people still seem to overlook.

In case you are wondering, the built-in SharePoint navigation menus already have the correct <div> tags with the noindex class surrounding them, so they aren’t a concern. This problem only exists if you have inserted your own dynamic menu system.

Other Search Provider Considerations

It is more common than you think for sites to use more than just SharePoint Search. The <div class="noindex"> is a SharePoint-specific filter for excluding content within a page, so what if you have a Google Search Appliance crawling your site as well? (Yep… we did in this project.)

You’re in luck: Google documents how to exclude content within a page from their search appliance. There are a few different options, but the equivalent blanket ignore of the contents between the <div class="noindex"> tags would be to encapsulate the section between the following two comments:

<!--googleoff: all-->

and

<!--googleon: all-->

If you want to know more about the GSA googleoff/googleon tags and the various options you have, here is the documentation: http://code.google.com/apis/searchappliance/documentation/46/admin_crawl/Preparing.html#pagepart

Conclusion

(… and Paul returns to the conversation).

I think Dan has highlighted an easy-to-overlook implication of custom designing not only navigational content, but really any type of dynamically generated content on a page. While the addition of extra contextual content can make a page itself more intuitive and relevant, consider the implications for the search experience. Since the contextual content will be crawled along with the actual content, you might end up inadvertently sacrificing precision of search results without realising it.

Hope this helps and thanks for reading (and thanks Dan for writing this up)

 

Paul Culmsee

www.sevensigma.com.au





Troubleshooting SharePoint (People) Search 101


I’ve been nerding it up lately SharePoint-wise, doing the geeky things that geeks like to do, like ADFS and Claims Authentication. So in between trying to get my book fully edited ready for publishing, I might squeeze out the odd technical SharePoint post. Today I had to troubleshoot a broken SharePoint people search for the first time in a while. I thought it was worth explaining the crawl process a little and talking about the most likely ways in which it will break for you, in order of likelihood as I see it. There are other articles on this topic, but none that I found are particularly comprehensive.

Background stuff

If you consider yourself a legendary IT pro or SharePoint god, feel free to skip this bit. If you prefer a more gentle stroll through SharePoint search land, then read on…

When you provision a search service application as part of a SharePoint installation, you are asked for (among other things) a Windows account to use for the search service. Below shows the point in the GUI-based configuration step where this is done. First up we choose to create a search service application, and then we choose the account to use for the “Search Service Account”. By default this is the account that will do the crawling of content sources.

image    image

Now the search service account is described as follows: “.. the Windows Service account for the SharePoint Server Search Service. This setting affects all Search Service Applications in the farm. You can change this account from the Service Accounts page under Security section in Central Administration.”

Reading this suggests that the Windows service (“SharePoint Server Search 14”) would run under this account. The reality is that the SharePoint Server Search 14 service runs as the farm account. You can see the pre- and post-provisioning status below. First up, I show below a freshly installed SharePoint where the SharePoint Server Search 14 service is disabled, with service credentials of “Local Service”.

image

The next set of pictures show the Search Service Application provisioned according to the following configuration:

  • Search service account: SEVENSIGMA\searchservice
  • Search admin web service account: SEVENSIGMA\searchadminws
  • Search query and site settings account: SEVENSIGMA\searchqueryss

You can see this in the screenshots below.

image

imageimage

Once the service has been successfully provisioned, we can clearly see the “Default content access account” is based on the “Search service account” as described in the configuration above (the first of the three accounts).

image

Finally, as you can see below, once provisioned, it is the SharePoint farm account that is running the search windows service.

image

Once you have provisioned the Search Service Application, the default content access account (in my case SEVENSIGMA\searchservice) is granted read access to all web applications via Web Application User Policies, as shown below. This way, no matter how draconian the permissions of site collections are, the crawler account will have the access it needs to crawl the content, as well as the permissions of that content. You can verify this by looking at any web application in Central Administration (except for the Central Administration web application) and choosing “User Policy” from the ribbon. You will see in the policy screen that the “Search Crawler” account has “Full Read” access.

image

image

In case you are wondering why the search service needs to crawl the permissions of content, as well as the content itself, it is because it uses these permissions to trim search results for users who do not have access to content. After all, you don’t want to expose sensitive corporate data via search do you?

There is another, more subtle configuration change performed by the Search Service. Once the evilness known as the User Profile Service has been provisioned, the Search Service Application will grant the Search Service Account a specific permission to the User Profile Service. SharePoint is smart enough to do this whether the User Profile Service Application is installed before or after the Search Service Application. In other words, if you install the Search Service Application first, and the User Profile Service Application afterwards, the permission will be granted regardless.

The specific permission, by the way, is the “Retrieve People Data for Search Crawlers” permission shown below:

image    image

Getting back to the title of this post, this is a critical permission, because without it the Search Server will not be able to talk to the User Profile Service to enumerate user profile information. The effect of this is empty People Search results.

How people search works (a little more advanced)

Right! Now that the cool kids (who skipped the first section) have joined us, let’s take a closer look at SharePoint People Search in particular. This section delves a little deeper, but fear not, I will try and keep things relatively easy to grasp.

Once the Search Service Application has been provisioned, a default content source, called – originally enough – “Local SharePoint Sites”, is created. Any web applications that exist (and any that are created from here on in) will be listed here. An example of a freshly minted SharePoint server with a single web application shows the following configuration in the Search Service Application:

image

Now hopefully http://web makes sense. Clearly this is the URL of the web application on this server. But you might be wondering what sps3://web is. I will bet that you have never visited an sps3:// site using a browser either. For good reason too, as it wouldn’t work.

This is a SharePointy thing – or more specifically, a Search Server thing. That funny protocol part of what looks like a URL refers to a connector. A connector allows Search Server to crawl other data sources that don’t necessarily use HTTP, like some native, binary data source. People can develop their own connectors if they feel so inclined, and a classic example is the Lotus Notes connector that Microsoft supplies with SharePoint. If you configure SharePoint to use its Lotus Notes connector (and by the way, it’s really tricky to do), you would see a URL in the form of:

notes://mylotusnotesbox

Make sense? The protocol part of the URL allows the search server to figure out what connector to use to crawl the content. (For what it’s worth, there are many others out of the box. If you want to see all of the connectors then check the list here.)

But the one we are interested in for this discussion is SPS3:, which accesses SharePoint user profiles and underpins people search functionality. The way this particular connector works is that when the crawler uses the SPS3 connector, it in turn calls a special web service at the host specified. The web service is called spscrawl.asmx and in my example configuration above, it would be http://web/_vti_bin/spscrawl.asmx

The basic breakdown of what happens next is this:

  1. Information about the web site that will be crawled is retrieved (the GetSite method is called, passing in the site from the URL, i.e. the “web” of sps3://web)
  2. Once the site details are validated, the service enumerates all of the user profiles
  3. For each profile, the GetItem method is called to retrieve all of the user profile properties for that user. These are added to the index and tagged with a contentclass of “urn:content-class:SPSPeople” (I will get to this in a moment)

Now admittedly this is the simple version of events. If you really want to be scared (or get to sleep tonight) you can read the actual SPS3 protocol specification PDF.

Right! Now let’s finish this discussion with the notion of contentclass. The SharePoint search crawler tags all crawled content according to its class. The name of this “tag” – or in correct terminology, “managed property” – is contentclass. By default SharePoint has a People Search scope, which essentially limits the search to only returning content tagged with the “People” contentclass.

image

Now to make it easier for you, Dan Attis listed all of the content classes that he knew of back in SharePoint 2007 days. I’ll list a few here, but for the full list visit his site.

  • “STS_Web” – Site
  • “STS_List_850” – Page Library
  • “STS_List_DocumentLibrary” – Document Library
  • “STS_ListItem_DocumentLibrary” – Document Library Items
  • “STS_ListItem_Tasks” – Tasks List Item
  • “STS_ListItem_Contacts” – Contacts List Item
  • “urn:content-class:SPSPeople” – People

(why some properties follow the uniform resource name format I don’t know *sigh* – geeks huh?)

So that was easy Paul! What can go wrong?

So now that we know that although the protocol handler is SPS3, it still ultimately uses HTTP as the underlying communication mechanism and calls a web service, we can start to think of all the ways that it can break on us. Let’s now take a look at common problem areas in order of commonality:

1. The Loopback issue.

This has been done to death elsewhere and most people know it. What people don’t know so well is that the loopback fix was to prevent an extremely nasty security vulnerability known as a replay attack that came out a few years ago. Essentially, if you make an HTTP connection to your server, from that server, using a name that does not match the name of the server, then the request will be blocked with a 401 error. In terms of SharePoint people search, the sps3:// handler is created when you create your first web application. If that web application happens to use a name that doesn’t match the server name, then the HTTP request to the spscrawl.asmx web service will be blocked due to this issue.

As a result your search crawl will not work and you will see an error in the logs along the lines of:

  • Access is denied: Check that the Default Content Access Account has access to the content or add a crawl rule to crawl the content (0x80041205)
  • The server is unavailable and could not be accessed. The server is probably disconnected from the network.   (0x80040d32)
  • ***** Couldn’t retrieve server http://web.sevensigma.com policy, hr = 80041205 – File:d:\office\source\search\search\gather\protocols\sts3\sts3util.cxx Line:548

There are two ways to fix this. The quick way (DisableLoopbackCheck) and the right way (BackConnectionHostNames). Both involve a registry change and a reboot, but one of them leaves you much more open to exploitation. Spence Harbar wrote about the differences between the two some time ago and I recommend you follow his advice.

(As a slightly related side note, I hit an issue with the User Profile Service a while back where it gave an error: “Exception occurred while connecting to WCF endpoint: System.ServiceModel.Security.MessageSecurityException: The HTTP request was forbidden with client authentication scheme ‘Anonymous’. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden”. In this case I needed to disable the loopback check, but I was using the server name with no alternative aliases or fully qualified domain names. I asked Spence about this one and it seems that the DisableLoopBack registry key addresses more than the SMB replay vulnerability.)

2. SSL

If you add a certificate to your site and mark the site as HTTPS (by using SSL), things change. In the example below, I installed a certificate on the site http://web, removed the binding to http (port 80) and then updated SharePoint’s alternate access mappings to make things an HTTPS world.

Note that the reference to SPS3://WEB is unchanged, and that there is also a reference still to HTTP://WEB, as well as an automatically added reference to HTTPS://WEB

image

So if we were to run a crawl now, what do you think will happen? Certainly we know that HTTP://WEB will fail, but what about SPS3://WEB? Let’s run a full crawl and find out, shall we?

Checking the logs, we have the unsurprising error “the item could not be crawled because the crawler could not contact the repository”. So clearly, SPS3 isn’t smart enough to work out that the web service call to spscrawl.asmx needs to be done over SSL.

image

Fortunately, the solution is fairly easy. There is another connector, identical in function to SPS3 except that it is designed to handle secure sites: “SPS3s”. We simply change the configuration to use this connector (and while we are there, remove the reference to HTTP://WEB).

image

Now we retry a full crawl and check for errors… Wohoo – all good!

image

It is also worth noting that there is another SSL-related issue with search. The search crawler is a little fussy with certificates. Most people have visited a secure web site that warns about a problem with the certificate, like the image below:

image

Now when you think about it, a search crawler doesn’t have the luxury of asking a user if the certificate is okay. Instead it errs on the side of security and, by default, will not crawl a site if the certificate is invalid in some way. The crawler is also fussier than a regular browser. For example, it doesn’t overly like wildcard certificates, even if the certificate is trusted and valid (although all modern browsers accept them).

To alleviate this issue, you can make the following changes in the settings of the Search Service Application: Farm Search Administration->Ignore SSL warnings and tick “Ignore SSL certificate name warnings”.

image  image

image

The implication of this change is that the crawler will now accept any old certificate that encrypts website communications.

3. Permissions and Change Legacy

Let’s assume that we made a configuration mistake when we provisioned the Search Service Application. The search service account (which is the default content access account) is incorrect and we need to change it to something else. Let’s see what happens.

In the search service application management screen, click on the default content access account to change credentials. In my example I have changed the account from SEVENSIGMA\searchservice to SEVENSIGMA\svcspsearch

image

Having made this change, let’s review the effect on the Web Application User Policy and User Profile Service Application permissions. Note that the user policy for the old search crawl account remains, but the new account has had an entry automatically created. (Now you know why you end up with multiple accounts with the display name of “Search Crawling Account”.)

image

Now let’s check the User Profile Service Application. Here things are different! The search service account below refers to the *old* account SEVENSIGMA\searchservice, and the new account has not been granted the required “Retrieve People Data for Search Crawlers” permission!

image

 

image

If you traipsed through the ULS logs, you would see this:

Leaving Monitored Scope (Request (GET:https://web/_vti_bin/spscrawl.asmx)). Execution Time=7.2370958438429 c2a3d1fa-9efd-406a-8e44-6c9613231974
mssdmn.exe (0x23E4) 0x2B70 SharePoint Server Search FilterDaemon e4ye High FLTRDMN: Errorinfo is "HttpStatusCode Unauthorized The request failed with HTTP status 401: Unauthorized." [fltrsink.cxx:553] d:\office\source\search\native\mssdmn\fltrsink.cxx
mssearch.exe (0x02E8) 0x3B30 SharePoint Server Search Gatherer cd11 Warning The start address sps3s://web cannot be crawled. Context: Application ‘Search_Service_Application’, Catalog ‘Portal_Content’ Details: Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled. (0x80041205)

To correct this issue, manually grant the crawler account the “Retrieve People Data for Search Crawlers” permission in the User Profile Service. As a reminder, this is done via the Administrators icon in the “Manage Service Applications” ribbon.

image

Once this is done, run a full crawl and verify the result in the logs.

4. Missing root site collection

A less common issue that I once encountered is when the web application being crawled is missing a default site collection. In other words, while there are site collections defined using a managed path, such as http://WEB/SITES/SITE, there is no site collection defined at HTTP://WEB.

The crawler does not like this at all, and you get two different errors depending on whether the SPS or HTTP connector is used.

  • SPS:// – Error in PortalCrawl Web Service (0x80042617)
  • HTTP:// – The item could not be accessed on the remote server because its address has an invalid syntax (0x80041208)

image

The fix for this should be fairly obvious. Go and make a default site collection for the web application and re-run a crawl.

5. Alternative Access Mappings and Contextual Scopes

SharePoint guru (and my squash nemesis) Nick Hadlee recently posted about a problem where there are no search results on contextual search scopes. If you are wondering what they are, Nick explains:

Contextual scopes are a really useful way of performing searches that are restricted to a specific site or list. The “This Site: [Site Name]”, “This List: [List Name]” are the dead giveaways for a contextual scope. What’s better is contextual scopes are auto-magically created and managed by SharePoint for you so you should pretty much just use them in my opinion.

The issue is that when the alternate access mapping (AAM) settings for the default zone on a web application do not match your search content source, the contextual scopes return no results.

I came across this problem a couple of times recently and the fix is really pretty simple – check your alternate access mapping (AAM) settings and make sure the host header that is specified in your default zone is the same url you have used in your search content source. Normally SharePoint kindly creates the entry in the content source whenever you create a web application but if you have changed around any AAM settings and these two things don’t match then your contextual results will be empty. Case Closed!

Thanks Nick

6. Active Directory Policies, Proxies and Stateful Inspection

A particularly insidious way to have problems with Search (and not just people search) is via Active Directory policies. For those of you who don’t know what AD policies are, they basically allow geeks to go on a power trip with users’ desktop settings. Consider the image below. Essentially an administrator can enforce a massive array of settings for all PCs on the network. Such is the extent of what can be controlled that I can’t fit it into a single screenshot. What is listed below is but a small portion of what an anal retentive Nazi administrator has at their disposal (mwahahaha!)

image

Common uses of policies include restricting certain desktop settings to maintain consistency, as well as enforcing Internet Explorer settings, such as the proxy server and security settings like the trusted sites list. One of the common issues encountered with a global policy-defined proxy server in particular is that the search service account will have its profile modified to use the proxy server.

The result of this is that now the proxy sits between the search crawler and the content source to be crawled as shown below:

Crawler —–> Proxy Server —–> Content Source

Now even though the crawler does not use Internet Explorer per se, proxy settings aren’t actually specific to Internet Explorer. Internet Explorer, like the search crawler, uses wininet.dll. Wininet is a module that contains Internet-related functions used by Windows applications, and it is this component that utilises proxy settings.

Sometimes people will troubleshoot this issue by using telnet to connect to the HTTP port (e.g. “telnet web 80”). But telnet does not use the wininet component, so it is actually not a valid method for testing. Telnet will happily report that the web server is listening on port 80 or 443, but it matters not when the crawler tries to access that port via the proxy. Furthermore, even if the crawler and the content source are on the same server, the result is the same. As soon as the crawler attempts to index a content source, the request will be routed to the proxy server. Depending on the vendor and configuration of the proxy server, various things can happen, including:

  • The proxy server cannot handle the NTLM authentication and passes back a 400 error code to the crawler
  • The proxy server has funky stateful inspection which interferes with the allowed HTTP verbs in the communications and interferes with the crawl

For what it’s worth, it is not just proxy settings that can interfere with the HTTP communications between the crawler and the crawled. I have also seen security software get in the way, monitoring HTTP communications and pre-emptively terminating connections or modifying the content of the HTTP request. The effect is that the results passed back to the crawler are not what it expects, and the crawler naturally reports that it could not access the data source with suitably weird error messages.

Now the very thing that makes this scenario hard to troubleshoot is the tell-tale sign for it. That is: nothing will be logged in the ULS logs, nor in the IIS logs for the search service. This is because the errors will be logged in the proxy server or the overly enthusiastic stateful security software.

If you suspect the problem is a proxy server issue, but do not have access to the proxy server to check logs, the best way to troubleshoot this issue is to temporarily grant the search crawler account enough access to log into the server interactively. Open Internet Explorer and manually check the proxy settings. If you confirm a policy-based proxy setting, you might be able to temporarily disable it and retry a crawl (until the next AD policy refresh reapplies the settings). The ideal way to cure this problem is to ask your friendly Active Directory administrator to either:

  • Remove the proxy altogether from the SharePoint server (watch for certificate revocation slowness as a result)
  • Configure an exclusion in the proxy settings for the AD policy so that the content sources for crawling are not proxied
  • Create a new AD policy specifically for the SharePoint box, leaving the default settings to apply to the rest of the domain member computers.

If you suspect the issue might be overly zealous stateful inspection, temporarily disable all security-type software on the server and retry a crawl. Just remember that if you have no logs on the server being crawled, chances are it’s not being crawled and you have to look elsewhere.
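As a quick sanity check before going down either path, you can also compare a direct request against an explicitly proxied one from the SharePoint box. The sketch below is purely illustrative (it assumes Python with the requests library is available, uses a hypothetical proxy address, and omits authentication, so expect a 401 rather than a 200 on the direct call):

import requests  # illustrative diagnostic sketch only; authentication omitted

url = "http://web/_vti_bin/spscrawl.asmx"   # the web service the SPS3 connector calls

direct = requests.get(url)   # does not use wininet, so ignores any policy-defined proxy
proxied = requests.get(url, proxies={"http": "http://proxy.example.com:8080"})  # hypothetical proxy

# If the direct request behaves as expected but the proxied one comes back with a 400
# or mangled content, the proxy is interfering with the crawler's HTTP traffic.
print(direct.status_code, proxied.status_code)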

7. Pre-Windows 2000 Compatibility Access Group

In an earlier post of mine, I hit an issue where search would yield no results for a regular user, but a domain administrator could happily search SP2010 and get results. Another symptom associated with this particular problem is certain recurring errors in the event log – Event IDs 28005 and 4625.

  • ID 28005 shows the message “An exception occurred while enqueueing a message in the target queue. Error: 15404, State: 19. Could not obtain information about Windows NT group/user ‘DOMAIN\someuser’, error code 0x5”.
  • The 4625 error would complain “An account failed to log on. Unknown user name or bad password status 0xc000006d, sub status 0xc0000064” or else “An Error occured during Logon, Status: 0xc000005e, Sub Status: 0x0”

If you turn up the debug logs inside SharePoint Central Administration for the “Query” and “Query Processor” functions of “SharePoint Server Search”, you will get an error: “AuthzInitializeContextFromSid failed with ERROR_ACCESS_DENIED. This error indicates that the account under which this process is executing may not have read access to the tokenGroupsGlobalAndUniversal attribute on the querying user’s Active Directory object. Query results which require non-Claims Windows authorization will not be returned to this querying user.”

image

The fix is to add your search service account to the “Pre-Windows 2000 Compatibility Access” group. The issue is that SharePoint 2010 re-introduced something that was in SP2003 – an API call to a function called AuthzInitializeContextFromSid. Apparently it was not used in SP2007, but it’s back for SP2010. This particular function requires a certain permission in Active Directory, and the “Pre-Windows 2000 Compatibility Access” group happens to have the right required to read the tokenGroupsGlobalAndUniversal Active Directory attribute described in the debug error above.

8. Bloody developers!

Finally, Patrick Lamber blogs about another cause of crawler issues. In his case, someone developed a custom web part that threw an exception when the site was crawled. For whatever reason, this exception did not get thrown when the site was viewed normally via a browser. As a result, no pages or content on the site could be crawled, because all the crawler would see, no matter what it clicked, would be the dreaded “An unexpected error has occurred”. When you think about it, any custom code that takes action based on browser parameters such as locale or language might cause an exception like this – and therefore cause the crawler some grief.

In Patrick’s case there was a second issue as well. His team had developed a custom HTTPModule that did some URL rewriting. As Patrick states: “The indexer seemed to hate our redirections with the Response.Redirect command. I simply removed the automatic redirection on the indexing server. Afterwards, everything worked fine”.

In this case Patrick was using a multi-server farm with a dedicated index server, allowing him to remove the HTTP module for that one server. In smaller deployments you may not have this luxury. So apart from the obvious opportunity to bag programmers :-), this example nicely shows that it is easy for a 3rd party application or custom code to break search. What is important for developers to realise is that client web browsers are not the only thing that loads SharePoint pages.

If you are not aware, the User Agent string identifies the type of client accessing a resource. This is the means by which sites figure out what browser you are using. A quick look at the User Agent sent by SharePoint Server 2010 search reveals that it identifies itself as “Mozilla/4.0 (compatible; MSIE 4.01; Windows NT; MS Search 6.0 Robot)”. At the very least, test any custom user interface code such as web parts against this string, and check the crawl logs when the crawler indexes any custom-developed stuff.
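One cheap way to do that sort of testing is to request your custom pages while pretending to be the crawler. The sketch below is illustrative only (it assumes Python with the requests library, a hypothetical page URL, and omits authentication to the site):

import requests  # illustrative only; SharePoint authentication is omitted

CRAWLER_UA = "Mozilla/4.0 (compatible; MSIE 4.01; Windows NT; MS Search 6.0 Robot)"
page_url = "http://web/sites/site/default.aspx"   # hypothetical page hosting the custom web part

normal = requests.get(page_url)                                          # as a regular browser
as_crawler = requests.get(page_url, headers={"User-Agent": CRAWLER_UA})  # as the search crawler

print(normal.status_code, as_crawler.status_code)
if "An unexpected error has occurred" in as_crawler.text:
    print("This page breaks when the crawler's user agent requests it")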

Conclusion

Well, that’s pretty much my list of gotchas. No doubt there are lots more, but hopefully this slightly more detailed exploration of them might help some people.

 

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.spgovia.com




Why me? Web part errors on new web applications


Oh man, it’s just not my week. After nailing a certificate issue yesterday that killed user profile provisioning, I get an even better one today! I’ve posted it here as a lesson on how not to troubleshoot this issue!

The symptoms:

I created a brand new web application on an SP2010 farm, and irrespective of the site collection I subsequently create, I get the dreaded error “Web Part Error: This page has encountered a critical error. Contact your system administrator if this problem persists”.

Below is a screenshot of a web app using the team site template. Not so good huh?

image

The swearing…

So faced with this broken site, I did what any other self-respecting SharePoint consultant would do. I silently cursed Microsoft for being at the root of all the world’s evils and took a peek into that very verbose and very cryptic place known as the ULS logs. Pretty soon I found messages like:

0x3348 SharePoint Foundation         General                       8sl3 High     DelegateControl: Exception thrown while building custom control ‘Microsoft.SharePoint.SPControlElement’: This page has encountered a critical error. Contact your system administrator if this problem persists. eff89784-003b-43fd-9dde-8377c4191592

0x3348 SharePoint Foundation         Web Parts                     7935 Information http://sp:81/default.aspx – An unexpected error has been encountered in this Web Part.  Error: This page has encountered a critical error. Contact your system administrator if this problem persists.,

Okay, so that is about as helpful as a fart in an elevator, so I turned up the debug juice using that new, pretty debug juicer turner-upper (okay, the diagnostic logging section under Monitoring in Central Administration). I turned on a variety of logs at different times, including:

  • SharePoint Foundation           Configuration                   Verbose
  • SharePoint Foundation           General                         Verbose
  • SharePoint Foundation           Web Parts                       Verbose
  • SharePoint Foundation           Feature Infrastructure          Verbose
  • SharePoint Foundation           Fields                          Verbose
  • SharePoint Foundation           Web Controls                    Verbose
  • SharePoint Server               General                         Verbose
  • SharePoint Server               Setup and Upgrade               Verbose
  • SharePoint Server               Topology                        Verbose

While my logs got very big very quickly, I didn’t get much more detail apart from one gem that, to me, seemed so innocuous amongst all the detail, yet so kind of… fundamental 🙂

0x3348 SharePoint Foundation         Web Parts                     emt7 High     Error: Failure in loading assembly: Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a eff89784-003b-43fd-9dde-8377c4191592

That rather scary log message was then followed up by this one – which proved to be the clue I needed.

0x3348 SharePoint Foundation         Runtime                       6610 Critical Safe mode did not start successfully. This page has encountered a critical error. Contact your system administrator if this problem persists. eff89784-003b-43fd-9dde-8377c4191592

It was about this time that I also checked the event logs (I told you this post was about how not to troubleshoot) and I saw the same entry as above.

Log Name:      Application
Source:        Microsoft-SharePoint Products-SharePoint Foundation
Event ID:      6610
Description:
Safe mode did not start successfully. This page has encountered a critical error. Contact your system administrator if this problem persists.

I read the error message carefully. This problem was certainly persisting and I was the system administrator, so I contacted myself and resolved to search Google for the “Safe mode did not start successfully” error.

The 46 minute mark epiphany

image

If you watch the TV series “House”, you will know that House always gets an epiphany around the 46 minute mark of the show, just in time to work out what the mystery illness is and save the day. Well, this is the 46 minute mark of this post!

I quickly found that others had hit this issue in the past, and that it relates to the process where SharePoint parses web.config to load all of the controls marked as safe. If you have never seen this, it is the section of your SharePoint web application configuration file that looks like this:

image 
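Since that screenshot does not reproduce here, below is a representative sketch of the section in question. It is illustrative only; the assembly versions, public key tokens and namespaces will vary between SharePoint versions and farms.

<SafeControls>
  <!-- Representative entries only: one per assembly/namespace that SharePoint is allowed to render on a page -->
  <SafeControl Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" Namespace="Microsoft.SharePoint.WebPartPages" TypeName="*" Safe="True" />
  <SafeControl Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" Namespace="Microsoft.SharePoint.WebControls" TypeName="*" Safe="True" />
  <!-- ...and many more entries like these... -->
</SafeControls>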

This particular version of the error is commonly seen when people deploy multiple servers in their SharePoint farm, and use a different file path for the INETPUB folder. In my case, this was a single server. So, although I knew I was on the right track, I knew this wasn’t the issue.

My next thought was to run the site in full trust mode, to see if that would make the site work. This is usually a setting that makes me mad when developers ask for it because it tells me they have been slack. I changed the entry

<trust level="WSS_Minimal" originUrl="" />

to

<trust level="Full" originUrl="" />

But to no avail. Whatever was causing this was not affected by code access security.

I reverted back to WSS_Minimal and decided to remove all of the SafeControl entries from the web.config file, as shown below. I knew the site would bleat about it, but was interested if the “Safe Mode” error would go away.

image

The result? My broken site was now less broken. It was still bitching, but now it appeared to be bitching more like what I was expecting.

image

After that, it was a matter of adding the <SafeControl> entries back and retrying the site. It didn’t take long to pinpoint the offending entry.

<SafeControl Assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" Namespace="Microsoft.SharePoint.WebPartPages" TypeName="ContentEditorWebPart" Safe="False" />

As soon as I removed this entry, the site came up fine. I even loaded up the content editor web part without this entry and it worked a treat. So how this spurious entry got there is still a mystery.

The final mystery

My colleague and I checked the web.config file in C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\CONFIG. This is the one that gets munged with other webconfig.* files when a new web application is provisioned.

Sure enough, its modified date was July 29 (just outside the range of the SharePoint and event logs unfortunately). When we compared against a known good file from another SharePoint site, we immediately saw the offending entry.

image

The solution store on this SharePoint server is empty and no 3rd party stuff to my knowledge has been installed here. But clearly this file has been modified. So, we did what any self respecting SharePoint consultant would do…

…we blamed the last guy.

 

Thanks for reading

Paul Culmsee

www.sevensigma.com.au


SharePoint, Debategraph and Copenhagen 2009 – Collaboration on a global scale


Note: For those of you who do not wish to read my usual verbose writing, then skip to the last section where there is a free web part to download and try out.

Unless you are a complete SharePoint nerd and world events don’t interest you while you spend your hours in a darkened room playing with the SP2010 beta, you would no doubt be aware that one of the most significant collaborative events in the world is currently taking place.

The United Nations climate change conference in Copenhagen this month is one of the most important world gatherings of our time. You might wonder why, as a SharePoint centric blog, I am writing about this. The simple answer is that this is a conference in which the world will come together to negotiate and agree on one of the toughest wicked problems of our time: how to tackle international climate change in a coordinated, global way. As I write this, things do not seem to be going so well :-(.

image

Climate change cuts to the heart of the wellbeing expected by every one of us. Whether you live in an affluent country or a developing nation, the stakes are high and the issues at hand are incredibly complex and tightly intertwined. It might all seem far away and out of sight/out of mind, but it is clear that we will all be affected by the outcomes, for better or worse. The spectre of a diminishing window of opportunity to deal with this issue means that an unprecedented scale of international cooperation will be required to produce an outcome that can satisfy all stakeholders against an environmental, economic and social bottom line.

Can it be done? Readers who are practitioners of SharePoint solutions should have an appreciation of how difficult it is for a supposedly “collaborative tool” to actually improve collaboration. Therefore, I want you to imagine the most difficult, dysfunctional project that you have ever encountered and multiply it by a million, gazillion times. If there are ever lessons to be learned about effective collaboration among a large, diverse group on a hugely difficult issue, then surely it is this issue and this event.

Our contribution

My colleagues and I became interested in sense-making and collaboration on wicked problems some time back, and through the craft of Dialogue Mapping, we have had the opportunity to help diverse groups successfully work through some very challenging local issues. I need to make it clear that much of what we do in this area is far beyond SharePoint in terms of project difficulty, and in fact we often deal with non IT projects and problems that have significant social complexity.

Working with people like city planners, organisational psychologists, environmental scientists and community leaders, to name a few, has rubbed off on myself and my colleagues. Through the sense-making process that we practice with these groups, we have started to see a glimpse of the world through their eyes. For me in particular, it has challenged my values and social conscience, and changed the entire trajectory of where I thought my career would go. I feel that the experience has made me a much better practitioner of collaborative tools like SharePoint, and I am a textbook case of the notion that the key to improving in your own discipline is to learn from people outside of it.

We have now become part of a global sense-making community, much like the global SharePoint community in a way. A group of diverse people that come together via common interest. To that end, my colleague at Seven Sigma, Chris Tomich has embarked on a wonderful initiative that I hope you may find of interest. He has enlisted the help of several world renowned sense-makers, such as Jeff Conklin of Cognexus and David Price of Debategraph, and created a site, http://www.copenhagensummitmap.org/, where we will attempt to create a global issue map of the various sessions and talks at the Copenhagen summit. The aim of this exercise is to try and help interested people cut through the fog of issues and understand the points of view of the participants. We are utilising IBIS, the grammar behind dialogue mapping, and the DebateGraph tool for the shared display.

image

How you can help

If you feel that issue mapping is for you, then I encourage you to sign up to Debategraph and help contribute to the Copenhagen debate by mapping the dialogue of the online sessions (which you can view from the site).

Otherwise, Chris has written a simple, free web part specifically for Copenhagen, which can be downloaded from the “Mapping tools” section of the Copenhagen site. The idea is that if you or your organisation wish to keep up with the latest information from the conference, installing this web part onto your site will allow all of your staff to see the Copenhagen debate unfold live via your SharePoint portal. Given that SharePoint is particularly powerful at surfacing data for business intelligence, think of this web part as a means to display global intelligence (or lack thereof, depending on your political view 🙂 ).

Installing is the usual process for a SharePoint solution file: add the solution to central admin, deploy it to your web application of choice and then activate the site collection scoped feature called “Seven Sigma Debategraph Components”. The web part will then be available to add to a page layout or web part page.
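In case you have not deployed a SharePoint solution before, the commands follow the usual stsadm pattern. Treat the lines below as a sketch only; the .wsp file name is my placeholder, so substitute the actual file name from the download, along with your own web application URL.

REM The .wsp file name below is a placeholder; use the name of the file you actually downloaded
stsadm.exe -o addsolution -filename SevenSigmaDebategraph.wsp
stsadm.exe -o execadmsvcjobs
stsadm.exe -o deploysolution -name SevenSigmaDebategraph.wsp -immediate -allowgacdeployment -url http://yourwebapp
stsadm.exe -o execadmsvcjobs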

image

The properties of this web part allow you some fine grained control over how the Debategraph map renders inside SharePoint. The default is to show the Debategraph stream view, which is a twitter style view of the recent updates as shown in the example below.

image

Stream view is not the only view available. Detail view is also very useful for rationale that has supplementary information, as shown in the example below.

image

By the way, you can use this web part to display any Debategraph debate – not just Copenhagen. The Debategraph map to display is also controlled via the web part properties.

For information on how to change the default map, check out this webcast I recorded for the previous version here.

I hope that some of you find this web part of use and look forward to any feedback.

 

Kind regards

 

Paul Culmsee

www.sevensigma.com.au


New book preview – SharePoint 2007 Developers Guide to the Business Data Catalog


I’ve been busy on a number of fronts and some of the fruits of that work will appear soon enough, but I thought that I would pop up to let you know about a forthcoming book written by Brett Lonsdale and Nick Swan on a SharePoint component that has until now, been seriously under-represented in the plethora of SharePoint books out there in the marketplace.

The Business Data Catalog is one of those SharePoint components that is easy enough to understand conceptually, but then will scare the utter crap out of you when you delve into the guts of its XML based complexity.  At least that was my experience the first time I toyed with it in early 2007. Luckily for me, my ass was saved by a tool that had just been released as a public beta called BDC MetaMan. I downloaded this tool and within around 15 minutes I used it to set up a BDC connection to Microsoft’s Systems Management Server v4 to pull software package details into a SharePoint list and felt very proud of myself indeed. 

Fast forward to mid 2009 and BDC MetaMan has come a hell of a long way, as have its creators. Nick and Brett are about as world-authoritative as you can possibly get on the BDC and if you wish to become a Jedi in the dark arts of the BDC “force” then you now have your official bible. This book is absolutely crammed with detail and the expertise of the authors in this feature shines throughout.

The book is split up across 11 chapters and although it is not explicitly stated by the authors, seems to be made of 3 broad parts. Chapter 1 introduces the BDC, how it is architected (web parts, BDC column, BDC Search, and integration with User Profile import and the SDK). Also covered is the range of data sources, an introduction to Application Definition Files (ADF) and how it all integrates into the Shared Service Provider model.

Once the intro chapter is done with, Brett and Nick don’t waste too much time in diving deep. 

Chapters 2 and 3 deal with the structure of BDC Application Definition (ADF) files, and follow up with the complex world of how authentication plays out with the BDC. Chapter 2 delves far deeper into ADF files than I ever wished to tread, but Nick and Brett somehow manage to describe a long, boring XML file in a logical, easy to follow manner, and there was a lot of stuff I learned here that I had simply missed from trawling MSDN articles. Authentication is covered in excellent detail in Chapter 3, which goes way beyond the usual NTLM/Kerberos double-hop stuff. Authentication in the Microsoft world has become very complex these days, and there are various options and trade-offs. This chapter covers all of this and more, brilliant stuff.

After the deep dive into ADF and authentication, we surface a little from the previous two chapters into what I think is really part 2 of this book. That is, several chapters that deal with how you leverage the BDC once you have connected to a line of business application. Chapter 4 introduces the built-in web parts that come with the BDC, shows how they are used and how they can be modified, either using SharePoint Designer or by tweaking XSL styles directly. Chapter 5 explores the BDC column type, how it can be used in the Office document information panel and in SharePoint Designer workflows, as well as its limitations. Chapter 6 explains how to leverage the BDC for allowing SharePoint to crawl your back-end line of business data and present it in search results. In addition to this, chapter 6 has a lot to offer just from the point of view of customising the search experience, whether using the BDC or not. Finally, Chapter 7 examines how the BDC can be utilised to add data into user profiles that is leveraged via audience targeting.

Next we dive back into “real programmer” territory and what I think makes up part 3 of this book. Chapter 8 delves deep into the BDC object model, for those times when the out of the box stuff just won’t quite cut it for you. The example used to demonstrate the object model is a web service that exposes BDC data via several methods. Chapter 9 then covers the creation of a custom web part that is, in effect, an Ajax version of the out of the box “Business Data List” web part that refreshes data every few seconds without requiring a page load. Chapter 10 is particularly interesting because it examines how the BDC is used in conjunction with another oft overlooked suite of technologies known as “Office Business Applications”. The combination of BDC and OBA offers many interesting capabilities, and among them are examples of Excel and Word leveraging the BDC, as well as creating custom task panes, custom ribbons and the like. Finally, chapter 11 deals with using the BDC to write data back to line of business applications and finishes with a great example of using InfoPath to submit data to a line of business application via a web service that calls the BDC. That is hellishly cool in a nerdy developer kind of a way.

Phew! First up, *man* these guys are smart! I have to say this is the hardest SharePoint book that I have reviewed. It is obviously aimed at developers, but it has so much to offer beyond the BDC. The content is very technical at times and obviously low-level. That in itself is not a problem; complex topics are handled really well and everything is extremely logically organised and flows well. The book is simply very, very comprehensive! There is plenty of meat for developers to sink their teeth into and this book will keep you going for a long time.

The preface of the book states that it has been written for an audience of “Microsoft SharePoint 2007 Information Workers and Developers who need to learn how to use, customize and create solutions using the Business Data Catalog”. I would agree with this, but I hope that information workers do not get put off by chapter 2 (and to some extent, chapter 3). This book dives deep straight off the bat and it is actually the middle chapters that offer the sort of insights that information workers will find the most useful.

So, if you think that the BDC deserves more than one single chapter towards the back of a SharePoint book, then this is your answer. As well as making you an expert on the BDC, it will open your eyes to many possibilities beyond it.

Thanks for reading

Paul Culmsee


"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 2


Hi again

This article is the second half of a pair of articles explaining how I integrated real-time performance data with a SharePoint based IT operational portal, designed around the principle of passive compliance with legislative or organisational controls.

In the first post, I introduced the PI product by OSIsoft, and explained how SQL Reporting services is able to generate reports from more than just SQL Server databases. I demonstrated how I created a report server report from performance data stored in the PI historian via an OLE DB provider for PI, and I also demonstrated how I was able to create a report that accepted a parameter, so that the output of the report could be customised.

I also showed how SharePoint provides a facility to enter parameter data when using the report viewer web part.

We will now conclude this article by explaining a little about my passively compliant IT portal, and how I was able to enhance it with seamless integration with the real-time performance data stored in the PI historian.

Just to remind you, here is my conceptual diagram in "acoustic Visio" format

The IT portal

This is the really ultra brief explanation of the thinking that went into my IT portal

I spent a lot of time thinking about how critical IT information could be stored in SharePoint to achieve the goals of quick and easy access to information, make tasks like change/configuration management more transparent and efficient, as well as capture knowledge and documentation. I was influenced considerably by ISO17799 as it was called back then, especially in the area of asset management. I liked the use of the term "IT Assets" in ISO17799 and the strong emphasis on ownership and custodianship.

ISO defined an asset as "any tangible or intangible thing that has value to an organization". It maintained that "…to achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner. Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets."

That idea of delegation is that an owner of an asset can delegate the day-to-day management of that asset to a custodian, but the owner still bears ultimate responsibility.

So I developed a portal around this idea, but soon was hit by some constraints due to the broad ISO definition of an asset. Since assets have interdependencies, geeks have a tendency to over-complicate things and produce a messy web of interdependencies. After some trial and error, as well as some soul searching, I was able to come up with a 3 tier model that worked.

I changed the use of the word "asset", and split it into three broad asset types.

  • Devices (eg Server, SAN, Switch, Router, etc)
  • IT Services (eg Messaging, Databases, IP Network, etc)
  • Information Assets (eg Intranet, Timesheets, etc)
image

The main thing to note about this model is the difference between an IT Service and an Information Asset. The distinction is in the area of ownership. In the case of an "Information Asset", the ownership of that asset is not IT. IT are a service provider, and by definition the IT view of the world is different to the rest of the organisation. An "IT Service", on the other hand, is always owned by IT, and it is the IT services that underpin information assets.

So there is a hierarchical relationship there. You can’t have an information asset without an IT service providing it. Accountabilities are clear also: IT own the service, but are not responsible for the information asset itself – that’s for other areas of the organisation. (An information asset can also depend on other information assets, as well as many IT services.)

While this may sound so obvious that it’s not worth writing, my experience is that IT departments often view information assets and the services providing those assets as one and the same thing.

Devices and Services

So, as an IT department, we provide a variety of services to the organisation. We provide them with an IP network, potentially a voice over IP system, a database subsystem, a backup and recovery service, etc.

It is fairly obvious that each IT service consists of a combination of IT devices (and often other IT services). An IP network is an obvious and basic example: the devices that underpin the "IP Network" service are routers, switches and wireless access points.

For devices we need to store information like

  • Serial Number
  • Warranty Details
  • Physical Location
  • Vendor information
  • Passwords
  • Device Type
  • IP Address
  • Change/Configuration Management history
  • IT Services that depend on this device (there is usually more than 1)

For services, we need to store information like

  • Service Owner
  • Service Custodian
  • Service Level Agreement (uptime guarantees, etc)
  • Change/Configuration Management history
  • IT Devices that underpin this service (there is usually more than 1)
  • Dependency relationships with other IT services
  • Information Assets that depend on this IT service

Keen eyed ITIL practitioners will realise that all I am describing here is a SharePoint based CMDB. I have a site template, content types, lists, event handlers and workflows that allow the above information to be managed in SharePoint. Below are three snippets showing sections of the portal, drilling down into the device view by location (click to expand), before showing the actual information about the server "DM01"

image image

image

Now the above screen is the one that I am interested in. You may also notice that the page above is a system generated page, based on the list called "IT Devices". I want to add real-time performance data to this screen, so that as well as being able to see asset information about a device, I also want to see its recent performance.

Modifying a system page

I talked about making modifications to system pages in detail in part 3 of my branding using JavaScript series. Essentially, a system page is an automatically generated ASPX page that SharePoint creates. Think about what happens each time you add a column to a list or library: the NewForm.aspx, EditForm.aspx and DispForm.aspx are modified, as they have to be rebuilt to display the new or modified column.

SharePoint makes it a little tricky to edit these pages on account of custom modifications running the risk of breaking things. But as I described in the branding series, using the ToolPaneView hack does the job for us in a safe manner.
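(For those who skipped that series: the hack is just a query string trick. From memory, it looks something like the line below, where the server and list names are placeholders. It forces the form page into edit mode with the tool pane open, so you can add web parts to a page that normally offers no edit option.)

http://yourserver/Lists/IT Devices/DispForm.aspx?ID=1&PageView=Shared&ToolPaneView=2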

So using this hack, I was able to add a report viewer web part to the Dispform.aspx of the "IT devices" list as shown below.

image image

imageimage

Finally, we have our report viewer web part, linked to our report that accesses PI historian data. As you can see below, the report that I created is actually expecting two parameters to be supplied. These parameters will be used to retrieve specific performance data and turn it into a chart.

image

Web Part Connection Magic

Now as it stands, the report is pretty much useless to us in the sense that we have to enter parameters manually to get it to present the information that we want. But on the same page as this report is a bunch of interesting information about a particular device, such as its name, IP address, location and description. Wouldn’t it be great if we could somehow pass the device name (or some other device information) to the report web part automatically?

That way, each time you opened up a device entry, the report would retrieve performance information for the device currently being viewed. That would be very, very cool.

Fortunately for us, it can easily be done. The report services web part, like many other web parts, is connectable. This means that it can accept information from other web parts, which in turn means it is possible to have the parameters automatically passed to the report!

Wohoo!

So here is how I am going to do this. I am going to add two new columns to my device list. Each column will be a parameter passed to the report. This way, I can tailor the report being generated on a device by device basis. For example, for a SAN device I might want to report on disk I/O, but for a server I might want CPU. If I store the parameter as a column, the report will be able to retrieve whatever performance data I need.

Below is the device list with the two additional columns added. The columns are called TAGPARAM1 and TAGPARAM2. The next screen below shows the values I have entered for each column against the device DM01. These values will be passed to the report server report and used to find matching performance data.

image image
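To make the role of those two columns a little more concrete, here is a purely illustrative sketch of the kind of parameterised query the report might run against the PI OLE DB provider. The piarchive..picomp2 table and the PI time syntax are assumptions on my part for illustration; the actual query was built back in part 1 of this article.

-- Illustrative only: chart archive values based on the two report parameters.
-- The two ? placeholders map to TagParam1 (the PI tag to chart, eg a CPU or disk I/O tag)
-- and TagParam2 (how far back to go, eg '*-1h').
SELECT tag, time, value
FROM piarchive..picomp2
WHERE tag = ?
AND time >= ?
ORDER BY time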

So the next question becomes, how do I now transparently pass these two parameters to the report? We now have the report and the parameters on the same page, but no obvious means to pass the value of TagParam1 and TagParam2 to the report viewer web part.

The answer, my friends, is to use a filter web part!

Using the toolpane view hack, we once again edit the view item page for the Device List. We now need to add two additional web parts (because we have two parameters). Below is the web part to add.

image

The result should be a screen looking like the figure below

image

Filter web parts are not visible when a page is rendered in the browser. They are instead used to pass data between other web parts. There are various filter web parts that work in different ways. The Page Field filter is capable of passing the value of any column to another web part.

Confused? Check out how I use this web part below…

The screen above shows that the two Page Field filter web parts are not configured. They are prompting you to open the tool pane and configure them. Below is the configuration pane for the Page Field filter. Can you see how it has enumerated all of the columns for the "IT Devices" list? In the second and third screens we have chosen TagParam1 for the first page filter and TagParam2 for the second page filter web part.

image image image

Now take a look at the page in edit mode. The page filters now change their display to say that they are not connected. All we have done so far is tell the web parts which columns to grab the parameter values from.

image

Almost Home – Connecting the filters

So now we need to connect each Page Field filter web part to the report viewer web part. This will have the effect of passing the values of TagParam1 and TagParam2 to the report viewer web part. Since these values change from device to device, the report will display unique data for each device.

To connect each page filter web part, click the edit dropdown for that page filter. From the list of choices, choose "Connections", and it will expand out to the choice of "Send Filter Values To". If you click on this, you will be prompted to send the filter values to the report viewer web part on the page. Since in my example the report viewer web part requires two parameters, you will be asked to choose which of the two parameters to send the value to.

image image

Repeat this step for both page filter web parts and something amazing happens: we see a performance report on the devices page!! The filter has passed the values of TagParam1 and TagParam2 to the report and it has retrieved the matching data!

image

Let’s now save this page and view it in all of its glory! Sweet eh!

image 

Conclusion (and Touchups)

So let’s step back and look at what we have achieved. We can visit our IT operations portal, open a device entry and immediately view real-time performance statistics for that device. Since I am using a PI historian, the performance data could have been collected via SNMP, netflow, ping, WMI, Performance Monitor counters, a script or many, many other methods. But we do not need to worry about that; we just ask PI for the data that we want and display it using Reporting Services.

Because the parameters are stored as additional metadata with each device, you have complete control over the data being presented back to SharePoint. You might decide that servers should always return CPU stats, but a storage area network return disk I/O stats. It is all controllable just by the values you enter into the columns being used as report parameters.

The only additional thing that I did was to use my CleverWorkArounds Hide Field Web Part, to subsequently hide the TagParam1 and TagParam2 fields from display, so that when IT staff are looking at the integrated asset and performance data, the ‘behind the scenes’ glue is hidden from them.

So looking at this from an IT portal/compliance point of view, we now have an integrated platform where we can:

  • View device asset information (serial number, purchase date, warranty, physical location)
  • View IT Service information (owners, custodians and SLA’s)
  • View Information Asset information (owners, custodians and SLA’s)
  • Understand the relationships between devices, services and information assets
  • Access standards, procedures and work instructions pertaining to devices, services and information assets
  • Manage change and configuration management for devices, services and information assets
  • Quickly and easily view detailed, real time performance statistics of devices

All in all, not a bad afternoon’s work really! And not one line of code!

As I said way back at the start of the first article, this started out as a quick idea for a demo and it seems to have a heck of a lot of potential. Of course, I used PI, but there is no reason why you can’t use similar techniques in your own IT portals to integrate your operational and performance data into the one place.

I hope that you enjoyed this article and I look forward to feedback.

<Blatant Plug>Want an IT Portal built in passive compliance? Then let’s talk!</Blatant Plug>

cheers

Paul Culmsee

OSIsoft addendum

Now someone at OSIsoft will, at some point, read this article and wonder why I didn’t write about RTWebparts. Essentially, PI has some web parts that can be used to display historian data in SharePoint. There were two reasons why I did not mention them.

  1. To use RTWebparts you have to install a lot of PI components onto your web front end servers. Nothing wrong with that, but with Reporting Services, those components only need to go onto the report server. For my circumstances and what I had to demonstrate, this was sufficient.
  2. This post was actually not about OSISoft or PI per se. It was used to demonstrate how it is possible to use SharePoint to integrate performance and operational information into one integrated location. In the event that you have PI in your enterprise and want to leverage it with SharePoint, I suggest you contact me about it because we do happen to be very good at it 🙂

 


Why do SharePoint Projects Fail – Part 6


Hi again and welcome to part 6 of my series on the factors of why SharePoint projects fail. Joel Oleson’s write-up a while back gave me 5 minutes of fame, but like any contestant on Big Brother, I’ve had my time in the limelight, been voted out of the house (as in Joel’s front page) and I’m back to being an ordinary citizen again.

If you have followed events thus far, I covered off some wicked problem theory before delving into the bigger ticket items that contribute to SharePoint project failure. In the last post, we pointed our virtual microscope at the infrastructure aspects that can cause a SharePoint project to go off the rails.

Now we turn our magnifying glass onto application development issues and, therefore, application developers. Ah, what fun you can have with application developer stereotyping, eh! A strange breed indeed they are. As a group they have made a significant contribution to the bitter and twisted individual that I am today.

The CleverWorkarounds tequila shot rating is back!

Seven tequila shots for a project manager in denial 🙂

One tequila shot for the rest of us!

Continue reading


Free MOSS Web Part – Hide Controls via JavaScript


Note: version 0.2 posted with minor bugfix 15th March 08!

Note 2: Only works with MOSS 2007, sorry, as you WSS guys do not have audience targeting 🙁

This is my small contribution to the SharePoint world. It is a web part that, once added to a web part page, allows you to customise the display by adding JavaScript to selectively hide controls on the page. Ever needed to hide a field from display/edit for a certain audience? Well, here is a way to do it without requiring SharePoint Designer and having to break a page from its site definition (unghosting).
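To give you a feel for what the web part automates, here is a minimal sketch of the general technique. This is not the web part’s actual code and the function name is mine; it simply finds a field’s input control by its title attribute and hides the enclosing table row.

// Illustrative only, not the web part's actual code.
// Hides the form row for a field, matched by the title attribute of its
// input control (which is the field's display name on MOSS 2007 form pages).
function hideFieldRow(fieldTitle) {
  var inputs = document.getElementsByTagName("input");
  for (var i = 0; i < inputs.length; i++) {
    if (inputs[i].title == fieldTitle) {
      var row = inputs[i].parentNode;
      // walk up the DOM to the <tr> that holds both the label and the control
      while (row && row.tagName != "TR") {
        row = row.parentNode;
      }
      if (row) row.style.display = "none";
    }
  }
}
hideFieldRow("Title");

The web part wraps this sort of logic up with audience targeting, so you can hide fields for some groups of users and not others.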

Before and after shots below (look ma – no top button!)

image  image

To fully understand what is being done here, I suggest you read my series of articles on the use of JavaScript in SharePoint. Part 3 in particular will show you how to safely add this web part to pages with editing disabled (NewForm.aspx, EditForm.aspx and DispForm.aspx)

The full series can be found here: Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6.

Kudos to Jeremy Thake for feedback and some code contribution. Despite being seriously metrosexual, he is otherwise very cool :-P.

Now two important warnings:

Warning 1: This is an alpha quality release and I may never touch it again 🙂 So you very likely *will* break it. If there is enough interest, I am happy to pop it on codeplex

Warning 2: This web part should NOT be considered as a security measure and thus used in any security sensitive scenario (such as an extranet or WCM site). JavaScript by its very nature can be trivially interfered with and thus other methods (server side) should be employed in these scenarios to prevent interference at the browser.

You can download by reading the disclaimer and clicking the button below..

THIS CODE IS PROVIDED UNDER THIS LICENSE ON AN “AS IS” BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED CODE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED CODE IS WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED CODE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER

Use at your own risk!

To install perform the following commands

  1. stsadm.exe -o addsolution -filename CleverWorkAroundsHideFields.wsp
  2. stsadm.exe -o execadmsvcjobs
  3. stsadm.exe -o deploysolution -name CleverWorkAroundsHideFields.wsp -immediate -allowgacdeployment -allcontenturls
  4. stsadm.exe -o execadmsvcjobs

To remove/reinstall perform the following commands

  1. stsadm.exe -o retractsolution -name CleverWorkAroundsHideFields.wsp -immediate -allcontenturls
  2. stsadm.exe -o execadmsvcjobs
  3. stsadm.exe -o deletesolution -name CleverWorkAroundsHideFields.wsp
  4. stsadm.exe -o execadmsvcjobs
  5. stsadm.exe -o addsolution -filename CleverWorkAroundsHideFields.wsp
  6. stsadm.exe -o execadmsvcjobs
  7. stsadm.exe -o deploysolution -name CleverWorkAroundsHideFields.wsp -immediate -allowgacdeployment -allcontenturls
  8. stsadm.exe -o execadmsvcjobs

More SharePoint Branding – Customisation using JavaScript – Part 6


God help me, I’m up to part 6 of a series about a technology I dislike and I’m still going. For those of you who have just joined us, you might want to go back to the very beginning of this series, where I used JavaScript to improve the SharePoint user experience. Since then, I’ve been trying to pick a path through the thorny maze of what you could term ‘sustainable customisation’.

By that, I mean something that hopefully will not cause you grief and heartache the next time a service pack is applied!

So no mood for jokes this time – I want to get this over with so let’s get straight to it and finish this thing!

So where are we at?

  • Part 1 looked at how we can use JavaScript to deal with the issue of hiding form elements from the user in lists and document libraries.
  • Part 2 examined some of the issues with the part 1 JavaScript hacks and wrapped it into a web part using the content editor web part.
  • Part 3 then examined the various issues of adding this new web part to certain SharePoint pages (NewForm.aspx, EditForm.aspx and DispForm.aspx). I also covered using SharePoint Audience targeting to make the hiding/unhiding of form elements personalised to particular groups of users.
  • Part 4 started to address a couple of remaining usability issues, and introduced ‘proper’ web-part development using Visual Studio and STSDEV. I created a project to perform the same functionality as in part 3, but without requiring the user to have any JavaScript knowledge or experience.
  • Part 5 then used STSDEV to create a solution package that allowed easy debugging, deployment and updating of the web part developed in part 4.

So what could we possibly have left to cover? Basically this article will revisit the web part code and make some functionality improvements and then I will cover off some remaining quirks/issues that you should be aware of.

[Quick Navigation: Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6]

Continue reading
