
Book Review – Essential SharePoint 2007

One SharePoint book on my bookshelf is "Essential SharePoint 2007 – Delivering High Impact Collaboration", by Scott Jamison, Mauro Cardarelli and Susan Hanley.

Time moves fast in the SharePoint world. Having been involved with MOSS2007 since around August 2006, it is amazing just how far things have come. Here we are in August 2008 and I simply cannot keep up! We have a staggering myriad of blogs, books, magazines, products, training and everything in between, all competing for the hearts and minds of a confused and frustrated user base, all around this powerful yet maddening product known as SharePoint.

I’m not too worried about the pace of new developments happening at an ever increasing rate, because if I struggle to keep up, what hope do my clients have? 

I’ve read a few SharePoint books over the last couple of years (particularly in the early days when, like everyone else, I was trying to make sense of it all). I found that most of them offer a background to the product and then walk through its extensive feature set, but are weak on practical guidance.

Continue reading “Book Review – Essential SharePoint 2007”



"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 2

Hi again

This article is the second half of a pair of articles explaining how I integrated real-time performance data with a SharePoint-based IT operations portal, designed around the principle of passive compliance with legislative or organisational controls.

In the first post, I introduced the PI product by OSIsoft, and explained how SQL Reporting Services can generate reports from more than just SQL Server databases. I demonstrated how I created a report server report from performance data stored in the PI historian via an OLE DB provider for PI, and how I created a report that accepts a parameter, so that its output can be customised.

I also showed how SharePoint provides a facility to enter parameter data when using the report viewer web part.

We will now conclude this article by explaining a little about my passively compliant IT portal, and how I was able to enhance it with seamless integration of the real-time performance data stored in the PI historian.

Just to remind you, here is my conceptual diagram in "acoustic Visio" format:

The IT portal

Here is the really ultra-brief explanation of the thinking that went into my IT portal.

I spent a lot of time thinking about how critical IT information could be stored in SharePoint to achieve quick and easy access to information, make tasks like change/configuration management more transparent and efficient, and capture knowledge and documentation. I was influenced considerably by ISO17799 (as it was called back then), especially in the area of asset management. I liked the use of the term "IT Assets" in ISO17799 and the strong emphasis on ownership and custodianship.

ISO defined an asset as "any tangible or intangible thing that has value to an organization". It stated the objective "…to achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner. Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets."

The idea of delegation is that the owner of an asset can delegate the day-to-day management of that asset to a custodian, but the owner still bears ultimate responsibility.

So I developed a portal around this idea, but was soon hit by some constraints due to the broad ISO definition of an asset. Since assets have interdependencies, geeks have a tendency to over-complicate things and produce a messy web of interdependencies. After some trial and error, as well as some soul searching, I was able to come up with a three-tier model that worked.

I changed the use of the word "asset", and split it into three broad asset types.

  • Devices (eg Server, SAN, Switch, Router, etc)
  • IT Services (eg Messaging, Databases, IP Network, etc)
  • Information Assets (eg Intranet, Timesheets, etc)
image

The main thing to note about this model is the difference between an IT Service and an Information Asset. The distinction lies in ownership. In the case of an "Information Asset", the owner of that asset is not IT. IT are a service provider, and by definition the IT view of the world is different to the rest of the organisation. An "IT Service", on the other hand, is always owned by IT, and it is the IT services that underpin information assets.

So there is a hierarchical relationship there. You can’t have an information asset without an IT service providing it. Accountabilities are clear too: IT own the service, but are not responsible for the information asset itself – that’s for other areas of the organisation. (An information asset can also depend on other information assets, as well as many IT services.)

While this may sound so obvious that it’s not worth writing, my experience is that IT departments often view information assets and the services providing those assets as one and the same thing.

Devices and Services

So, as an IT department, we provide a variety of services to the organisation. We provide them with an IP network, potentially a voice over IP system, a database subsystem, a backup and recovery service, etc.

It is fairly obvious that each IT service consists of a combination of IT devices (and often other IT services). An IP network is an obvious and basic example. The devices that underpin the "IP Network" service are routers, switches and wireless access points.

For devices we need to store information like

  • Serial Number
  • Warranty Details
  • Physical Location
  • Vendor information
  • Passwords
  • Device Type
  • IP Address
  • Change/Configuration Management history
  • IT Services that depend on this device (there is usually more than 1)

For services, we need to store information like

  • Service Owner
  • Service Custodian
  • Service Level Agreement (uptime guarantees, etc)
  • Change/Configuration Management history
  • IT Devices that underpin this service (there is usually more than 1)
  • Dependency relationships with other IT services
  • Information Assets that depend on this IT service
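Purely as an illustrative sketch (the portal itself stores all of this in SharePoint lists, content types and lookup columns, not database tables, and every name below is hypothetical), the three asset types and their many-to-many dependencies could be modelled like so:

```sql
-- Hypothetical sketch only: the real portal uses SharePoint lists,
-- not a relational database.
CREATE TABLE Devices    (DeviceID  INT PRIMARY KEY, Name VARCHAR(50), SerialNumber VARCHAR(50), Location VARCHAR(50));
CREATE TABLE Services   (ServiceID INT PRIMARY KEY, Name VARCHAR(50), Owner VARCHAR(50), Custodian VARCHAR(50));
CREATE TABLE InfoAssets (AssetID   INT PRIMARY KEY, Name VARCHAR(50), Owner VARCHAR(50));

-- A service is underpinned by many devices; a device usually supports many services
CREATE TABLE ServiceDevices (ServiceID INT, DeviceID INT);
-- An information asset depends on many services (and sometimes on other assets)
CREATE TABLE AssetServices  (AssetID INT, ServiceID INT);
CREATE TABLE AssetAssets    (AssetID INT, DependsOnAssetID INT);
```

The point the sketch makes explicit is the hierarchy: information assets sit on top of services, services sit on top of devices, and ownership differs at each level.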

Keen-eyed ITIL practitioners will realise that all I am describing here is a SharePoint-based CMDB. I have a site template, content types, lists, event handlers and workflows that allow the above information to be managed in SharePoint. Below are three screenshots showing sections of the portal, drilling down into the device view by location (click to expand), before showing the actual information about the server "DM01".

image image

image

Now the above screen is the one that I am interested in. You may also notice that the page above is a system generated page, based on the list called "IT Devices". I want to add real-time performance data to this screen, so that as well as being able to see asset information about a device, I can also see its recent performance.

Modifying a system page

I talked about making modifications to system pages in detail in part 3 of my branding using Javascript series. Essentially, a system page is an automatically generated ASPX page that SharePoint creates. Think about what happens each time you add a column to a list or library. The NewForm.aspx, EditForm.aspx and DispForm.aspx pages are modified, as they have to be rebuilt to display the new or modified column.

SharePoint makes it a little tricky to edit these pages on account of custom modifications running the risk of breaking things. But as I described in the branding series, using the ToolPaneView hack does the job for us in a safe manner.

So using this hack, I was able to add a report viewer web part to the Dispform.aspx of the "IT devices" list as shown below.

image image

imageimage

Finally, we have our report viewer web part, linked to our report that accesses PI historian data. As you can see below, the report that I created is actually expecting two parameters to be supplied. These parameters will be used to retrieve specific performance data and turn it into a chart.

image

Web Part Connection Magic

Now as it stands, the report is pretty much useless to us, in the sense that we have to enter its parameters manually to get it to present the information that we want. But on the same page as this report is a bunch of interesting information about a particular device, such as its name, IP address, location and description. Wouldn’t it be great if we could somehow pass the device name (or some other device information) to the report web part automatically?

That way, each time you opened up a device entry, the report would retrieve performance information for the device currently being viewed. That would be very, very cool.

Fortunately for us, it can easily be done. The report services web part, like many other web parts, is connectable. This means that it can accept information from other web parts, and therefore that the parameters can be passed to the report automatically!

Wohoo!

So here is how I am going to do this. I am going to add two new columns to my device list. Each column will hold a parameter passed to the report. This way, I can tailor the report being generated on a device-by-device basis. For example, for a SAN device I might want to report on disk I/O, but for a server I might want CPU. If I store the parameter as a column, the report will be able to retrieve whatever performance data I need.
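To make the idea concrete, here is a hedged sketch of how a stored column value ends up shaping the report (the tag value 'CPU' is just an example): the report's query contains a parameter placeholder, and whatever string sits in the device's parameter column is substituted in at run time.

```sql
-- The report's parameterised query (from the first post) filters tags like this:
--   WHERE (tag LIKE '%' + ? + '%')
-- If the first parameter column for server DM01 contains 'CPU',
-- the query effectively becomes:
SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%CPU%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM')
```

Store 'DISKIO' against a SAN instead, and the very same report charts disk throughput for that device.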

Below shows the device list with the two additional columns added. The columns are called TAGPARAM1 and TAGPARAM2. The next screen below shows the values I have entered for each column against the device DM01. These values will be passed to the report server report and used to find matching performance data.

image image

So the next question becomes, how do I now transparently pass these two parameters to the report? We now have the report and the parameters on the same page, but no obvious means to pass the value of TagParam1 and TagParam2 to the report viewer web part.

The answer my friends, is to use a filter web part!

Using the toolpane view hack, we once again edit the view item page for the Device List. We now need to add two additional web parts (because we have two parameters). Below is the web part to add.

image

The result should be a screen looking like the figure below

image

Filter web parts are not visible when a page is rendered in the browser. They are instead used to pass data between other web parts. There are various filter web parts that work in different ways. The Page Field filter is capable of passing the value of any column to another web part.

Confused? Check out how I use this web part below…

The screen above shows that the two Page Field filter web parts are not configured. They are prompting you to open the tool pane and configure them. Below is the configuration pane for the page field filter. Can you see how it has enumerated all of the columns for the "IT device" list? In the second and third screens we have chosen TagParam1 for the first page filter web part and TagParam2 for the second.

image image image

Now take a look at the page in edit mode. The page filters now change their display to say that they are not connected. All we have done so far is tell the web parts which columns to grab the parameter values from.

image

Almost Home – Connecting the filters

So now we need to connect each Page Field filter web part to the report viewer web part. This will have the effect of passing the values of TagParam1 and TagParam2 to the report viewer web part. Since these values change from device to device, the report will display unique data for each device.

To connect each page filter web part, you click the edit dropdown for that web part. From the list of choices, choose "Connections", and it will expand out to the choice of "Send Filter Values To". If you click on this, you will be prompted to send the filter values to the report viewer web part on the page. Since in my example the report viewer web part requires two parameters, you will be asked to choose which of the two parameters to send the value to.

image image

Repeat this step for both page filter web parts and something amazing happens: we see a performance report on the devices page!! The filter has passed the values of TagParam1 and TagParam2 to the report, and it has retrieved the matching data!

image

Let’s now save this page and view it in all of its glory! Sweet eh!

image 

Conclusion (and Touchups)

So let’s step back and look at what we have achieved. We can visit our IT Operations portal, open the devices list and immediately view real-time performance statistics for that device. Since I am using a PI historian, the performance data could have been collected via SNMP, netflow, ping, WMI, Performance Monitor counters, a script or many, many methods. But we do not need to worry about that, we just ask PI for the data that we want and display it using reporting services.

Because the parameters are stored as additional metadata with each device, you have complete control over the data being presented back to SharePoint. You might decide that servers should always return CPU stats, but a storage area network return disk I/O stats. It is all controllable just by the values you enter into the columns being used as report parameters.

The only additional thing that I did was to use my CleverWorkArounds Hide Field Web Part, to subsequently hide the TagParam1 and TagParam2 fields from display, so that when IT staff are looking at the integrated asset and performance data, the ‘behind the scenes’ glue is hidden from them.

So looking at this from an IT portal/compliance point of view, we now have an integrated platform where we can:

  • View device asset information (serial number, purchase date, warranty, physical location)
  • View IT Service information (owners, custodians and SLA’s)
  • View Information Asset information (owners, custodians and SLA’s)
  • Understand the relationships between devices, services and information assets
  • Access standards, procedures and work instructions pertaining to devices, services and information assets
  • Manage change and configuration management for devices, services and information assets
  • Quickly and easily view detailed, real time performance statistics of devices

All in all, not a bad afternoon’s work really! And not one line of code!

As I said way back at the start of the first article, this started out as a quick idea for a demo, and it seems to have a heck of a lot of potential. Of course, I used PI, but there is no reason why you can’t use similar techniques in your own IT portals to integrate your operational and performance data into the one place.

I hope that you enjoyed this article and I look forward to feedback.

<Blatant Plug>Want an IT Portal built in passive compliance? Then let’s talk!</Blatant Plug>

cheers

Paul Culmsee


OSISoft addendum

Now someone at OSISoft at some point will read this article and wonder why I didn’t write about RTWebparts. Essentially PI has some web parts that can be used to display historian data in SharePoint. There were two reasons why I did not mention them.

  1. To use RTWebparts you have to install a lot of PI components onto your web front end servers. Nothing wrong with that, but with Report Services, those components only need to go onto the report server. For my circumstances and what I had to demonstrate, this was sufficient.
  2. This post was actually not about OSISoft or PI per se. It was used to demonstrate how it is possible to use SharePoint to integrate performance and operational information into one integrated location. In the event that you have PI in your enterprise and want to leverage it with SharePoint, I suggest you contact me about it because we do happen to be very good at it 🙂

 



"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 1

Hi

This is one of those nerdy posts in the category of "look at me! look at me! look at what I did, isn’t it cool?". Normally application developers write posts like this, demonstrating some cool bit of code they are really proud of. Being only a part-time developer and more of a security/governance/compliance kind of guy, my "ain’t it cool" post is a little different.

So if you are a non-developer and you are already tempted to skip this one, please reconsider. If you are a CIO, IT manager, infrastructure manager, or are simply into ITIL, COBiT or compliance around IT operations in general, this post may have something for you. It offers something for knowledge managers too. Additionally, it gives you a teensy tiny glimpse into my own personal manifesto of how you can integrate different types of data to achieve the sort of IT operational excellence that is a step above where you may be now.

Additionally, if you are a Cisco nerd or an infrastructure person who has experience with monitoring, you will also find something potentially useful here.

In this post, I am going to show you how I leveraged three key technologies, along with a dash of best practice methodology to create an IT Portal that I think is cool.

The Premise – "Passive Compliance"

In my career I have filled various IT roles and drunk the kool-aid of most of the vendors, technologies, methodologies and practices. Nowadays I am a product of all of these influences, leaving me slightly bitter and twisted, somewhat of a security nerd, but at the same time fairly pragmatic and always mindful of working to business goals and strategy.

One of the major influences on my current view of the world was a role working for OSI Software from 2000-2004, via a former subsidiary company called WiredCity. OSISoft develop products in the process control space, and their core product is a data historian called PI. At WiredCity, we took this product out of the process control market and right into the IT enterprise, and OSISoft now market this product as "IT Monitor". I’ll talk about PI/IT Monitor in detail in a moment, but humour me and just accept my word that it is a hellishly fast number-crunching database for storing lots of juicy performance data.

In addition, I like to read all the various best practice frameworks and methodologies, and I write about them a fair bit. As a result of this interest, I have a SharePoint IT portal template that I have developed over the last couple of years, designed around the guiding principle of passive compliance. That is, by utilising the portal for various IT operational tasks, structured in a certain way, you implicitly address some COBiT/ISO27001 controls as well as leverage ITIL principles. I designed it in such a way that an auditor could take a look at it and it would demonstrate the assurance around IT controls for operational system support. It also had the added benefit of being a powerful addition to a disaster recovery strategy. (In the second article, to be published soon, I will describe my IT portal in more detail.)

Finally, I have SQL Reporting Services integrated with SharePoint, used to present enterprise data from various databases into the IT Portal – also as part of the principle of passive compliance via business intelligence.

Recently, I was called in to help conduct a demonstration of the aforementioned PI software, so I decided to add PI functionality to my existing "passive compliance" IT portal, integrating asset and control data (like change/configuration management) with real-time performance data. All in all I was very pleased with the outcome, as it was done in a day with pretty impressive effect. Utilising various features of all three of the above applications, plus a few other components, I pretty much wrote no code at all.

Below I have built a conceptual diagram of the solution. Unfortunately I don’t have Visio installed, but I found a great freeware alternative 😉

Image0003

I know, there is a lot to take in here (click to enlarge), but if you look in the center of the diagram, you will see a mock up of a SharePoint display. All of the other components drawn around it combine to produce that display. I’ll now talk about the main combination, PI and SQL Reporting Services.

A slice of PI

Okay so let’s explain PI because I think most people have a handle on SharePoint :-).

To the right is the Terminator looking at data from a PI historian showing power flow in California. So this product is not a lightweight at all. Its heritage lies in this sort of critical industrial monitoring.

Just to get the disclaimers out of the way, I do not work for OSISoft anymore, nor are they even aware of this post. Just so hard-core geeks don’t flame me and call me a weenie, let me just say that I love RRDTool and SmokePing and prefer Zabbix over Nagios. Does that make me cool enough to comment on this topic now? 🙂

Like RRDTool, PI is a data historian, designed and optimised for time-series data.

"Data historian? Is that like a database of some kind?", you may ask. The answer is yes, but it’s not a relational database like SQL Server or Oracle. Instead, it is a "real-time, time series" data store. The English translation of that definition is that PI is extremely efficient at storing time-based performance data.

"So what, you can store that in SQL Server, mySQL or Oracle", I hear you say. Yes, you most certainly can. But PI was designed from the ground up for this kind of data, whereas relational databases were not. As a result, PI is blisteringly fast and efficient. Pulling, say, three months of data collected at 15-second intervals takes literally seconds, with no loss of fidelity.

As an example, let’s say you needed to monitor CPU usage of a critical server. PI could collect this data periodically, save it into the historian for later view/review/reporting or analysis. Getting data into the historian can be performed a number of ways. OSISoft have written ‘interfaces’ to allow collection of data from sources such as SNMP, WMI, TCP-Response, Windows Performance Monitor counters, Netflow and many others.

The main point is that once the data is inside the historian, it really doesn’t matter whether the data was collected via SNMP, Performance Monitor, a custom script, etc. All historian data can now be viewed, compared, analysed and reported via a variety of tools in a consistent fashion.

SQL Reporting Services

For those of you not aware, Reporting Services has been part of SQL Server since SQL 2000, and allows for fairly easy generation of reports out of SQL databases. More recently, Microsoft updated SQL Server 2005 with tight SharePoint integration. Now, when creating a report server report, it is "published" to SharePoint in a similar manner to the publishing of InfoPath forms.

There are two ways to create reports, but I am only going to discuss the Visual Studio method. Using Visual Studio, you are able to design a tailored report consisting of tables and charts. An example of a SQL Reporting Services report in Visual Studio is below (from MSDN).

 

The interesting thing about SQL Reporting services is that it can pull data from data sources other than SQL Server databases. Data sources include Oracle, Web Services, ODBC and OLE-DB. Depending on your data source, reports can be parameterised (did I just make up a new word? 🙂 ). This is particularly important to SharePoint as you will soon see. It essentially means that you can feed your report values that customise the output of that report. In doing so, reports can be written and published once, yet be flexible in the sort of data that is returned.

Below is a basic example:

Here is a basic SQL statement that retrieves three fields from a data table called "picomp2": "Tag", "Time" and "Value". This example selects values only where "Time" is between 12pm and 1pm on July 28th, and where "tag" contains the string "MYSERVER".

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%MYSERVER%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM')

Now what if we wanted to make the value for TAG flexible? So instead of "MYSERVER", use the string "DISK" or "PROCESSOR". Fortunately for most data sources, SQL Reporting Services allows you to pass parameters into the SQL. Thus, consider this modified version of the above SQL statement.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

Look carefully at the WHERE clause in the above statement. Instead of specifying '%MYSERVER%', I have used '%' + ? + '%'. The question mark has special meaning: it means that you will be prompted to specify the string to be substituted into the SQL on the fly. Below I illustrate the sequence using three screenshots. The first screenshot shows the above SQL inside a Visual Studio report project. Clicking the exclamation mark will execute this SQL.

image

Immediately we get asked to fill out the parameter as shown below. (I have added the string "DISK")

image

Clicking OK, the SQL will now be executed against the data source, with the matching results returned as shown below. Note that all data returned contains the word "disk" in the name. (I have scrubbed identifiable information to protect the innocent.)

image

Reporting Services and SharePoint Integration

Now we get to the important bit. As mentioned earlier, SharePoint and SQL Reporting Services are now tightly integrated. I am not going to explain this integration in detail, but what I am going to show you is how a parameterised query like the example above is handled in SharePoint.

In short, if you want to display a Reporting Services report in SharePoint, you use a web part called the "SQL Server Reporting Services Report Viewer"

image

After dropping this webpart onto a SharePoint page, you pick the report to display, and if it happens to be a parameterised report, you see a screen that looks something like the following.

image

Notice anything interesting? The web part recognises that the report requires a parameter and asks you to enter it. As you will see in the second article, this is very useful indeed! But first, let’s get Reporting Services talking to the PI historian.

Fun with OLEDB

So, I have described (albeit extremely briefly) enough information about PI and Reporting services. I mentioned earlier that PI is not a relational database, but a time series database. This didn’t stop OSISoft from writing an OLEDB provider 🙂

Thus it is possible to get SQL Reporting Services to query PI using SQL syntax. In fact, the SQL example that I showed in the previous section was actually querying the PI historian.

To get reporting services to be able to talk to PI, I need to create a report server Data Source as shown below. When selecting the data source type, I choose OLE DB from the list. The subsequent screen allows you to pick the specific OLEDB provider for PI from the list.

image image

Now I won’t go into the complete details of completing the configuration of the PI OLE DB provider, as my point here is to demonstrate the core principle of using OLE DB to allow SQL Reporting Services to query a non-relational data store.

Once the data source had been configured and tested (see the test button in the above screenshot), I was able to then create my SQL query and then design a report layout. Here is the sample SQL again.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

As I previously explained, this SQL statement contains a parameter, which is passed to the report when it is run, thereby providing the ability to generate a dynamic report.

Using Visual Studio I created a new report and added a chart from the toolbox. Once again the purpose of this post is not to teach how to create a report layout, but below is a screenshot to illustrate the report layout being designed. You will see that I have previewed my design and it has displayed a textbox (top left) allowing the parameter to be entered for the report to be run. The report has pulled the relevant data from the PI historian and rendered it in a nice chart that I created.

image

Conclusion

Right! I think that’s about enough for now. To sum up this first post, we talked a little about my IT portal and the principle of "passive compliance". We examined OSISoft’s PI software and how it can be used to monitor your enterprise infrastructure. We then took a dive into SQL Reporting services and I illustrated how we can access PI historian data via OLE DB.

In the second and final post, I will introduce my IT Portal template in a brief overview, and will then demonstrate how I was able to integrate PI data into my IT portal to combine IT asset data with real-time performance metrics with no code 🙂

I hope that you found this post useful. Look out for the second half soon, where this will all come together nicely.

cheers

Paul

 



Latest article on SharePoint Magazine posted…

The second article in the tribute to the humble leave form series has just been posted at SharePoint Magazine.

I eagerly await the "cease and desist" letter from Matt Groening’s legal team 🙂

Enjoy!

image



Got my MCT (blatant plug alert)

A really quick note:

As of last month I am now a Microsoft Certified Trainer, which, combined with being a certified MOSS07 technology specialist, hopefully means that my SharePoint bedside manner should be enough to charm and dazzle even your most difficult users, whether senior management or the most scary technical geek.

<blatant plug>

My training style reflects my writing style, so if you like any of the material presented on this blog and feel that it would be useful delivered in person or tailored for your organisation or circumstances, then please do not hesitate to get into contact. 🙂

</blatant plug>

thanks

Paul Culmsee



Darth Sidious reads the same books as me

image Now, readers would know that I really lay on the pop culture references pretty thick. I find it works well and makes ordinary, sometimes mundane, topics much more interesting and easy to explain. I’ve used Britney Spears, Ikea, Kung Fu, Death metal, Dr Phil and countless others.

But I have never used Star Wars references in my posts and probably never will. Why? I would like you to take some time to have a good read of this blog. I am having trouble finding the words to convey how brilliant it is.

http://sithsigma.wordpress.com/

My own company is a play on words on the much hyped/maligned Six Sigma methodologies, but this is so much more clever!

On this blog, both Darth Vader and Darth Sidious offer advice on strategy, project management and general business leadership and management topics. There is some absolutely brilliant content there too, all set against the backdrop of what it takes to manage an evil empire. When you think about it, a Death Star is a pretty serious undertaking, and to build it on time and on budget takes some pretty impressive management talent. So, whether you are an Empire kind of guy or prefer being rebel scum, you have to concede that Vader and Sidious know how to manage a team. Sure, they made some mistakes (certainly their disaster recovery and risk management strategies were flawed), but then most organisations have a misfocussed attitude to security.

Aside from laughing hysterically when reading their material, I am certain that both Sith lords read the same strategy and management books that I do.

Here are some classic quotes…

In this essay on how performance metrics impact employee behaviour, Darth Sidious cites a recent example:

…if your compensation system is based on rewarding people for speed, but product or service quality is severely lacking for some reason – even though you’ve mandated quality, it doesn’t make a difference what you mandate if what you’re measuring doesn’t support that goal (or even worse, is opposite to it).

That may seem like an obvious example, but it’s more common than you think. For the Clone Wars we ordered 100M Clones, and we wanted them ready in time to trick Obi Wan, so the Clone factory sacrificed on quality and what we ended up getting was 35% of the Clones being totally useless.

Now *that* is a real-world example that I can relate to! Here is another gem from Sidious, explaining how the Empire maintains a skills inventory to help them understand a team’s strengths and weaknesses.

Say you’re in a team that specializes in using the Force to electrocute captured rebels, or run a Network Engineering team. When it comes to hiring your instinct is to focus on the obvious and primary skill of the team.

Need to fill in an electrocutioner position? Then you’re probably looking for someone who’s learned how to channel the powers of the Force into electricity. Need to fill in a Network engineer position? You probably are looking for a hardcore networking/router/firewall guy.

Probably my favourite article is the Sith version of my "Project Fail" series, where Darth Sidious offers advice on strategy, vision and goals. He breaks it down into:

  • Vision
  • Corporate objectives – eg "Increase delivery time of star destroyers by 10% over the next 12 months"
  • Nested Objectives
  • Alignment of Projects
  • Executive/Sith Lord Sponsorship – "if an executive sponsor, Sith Lord, finds out you spent a large amount of galactic credits and it didn’t pay off, now you’re in deep water with no one to support you. A Sith Lord is liable to feed you to a Panna Monster in such a case"
  • ROI

I could go on and on, but I could never do the site justice. The thing I really like about whoever authors this site is that they have managed to find the perfect balance between entertainment value and really insightful, clever messages behind the humour. It is a goal I have been trying to attain in my own writing, but I have to take my hat off to Sith Sigma for nailing it perfectly.

If they would have me, I’d write on that site about collaboration under the pseudonym of Jar-Jar Binks 😉

Go take a look now. Tell ’em Jar-Jar sent you 😉.

Paul



Thinking SharePoint Part 4 – Lessons from Kung Fu Panda

Article originally published for EndUserSharePoint.com reproduced here.

image

Greetings, my cleverworkarounds kung-fu students. Paul here again to talk once more about Zen and the art of SharePoint. Now I don’t want to appear all arrogant and pretentious, but for this post you can all call me "sifu" :-). I don’t deserve the title in the slightest but since I am writing it you are all forced to live in my fantasy world for a while :-).

I have previously written extensively on SharePoint project failure. In some ways that particular series is just as much about "thinking SharePoint" as this one, but I really do not want to rehash the content there. At the same time, I must confess I was trying to think of a way to round off this particular series of articles with a nice logical conclusion, and was lost for a while. But after watching Kung Fu Panda, I realised exactly how to end it. So this post is the last in this series – for now, anyway.

The thing about using pop culture references as I tend to do is that there is always a risk that some readers may not have seen the movie or heard the album that I refer to. So, if you haven’t seen Kung Fu Panda yet, I want you to visit this website and watch the trailer. http://www.kungfupanda.com/. Having done that, I now want you to read this article, and picture my voice as one of those old kung fu dudes with the long wispy beards offering riddle-like advice that makes no sense. If you can’t picture that, use Yoda instead.

Continue reading “Thinking SharePoint Part 4 – Lessons from Kung Fu Panda”



IT and the Corporate Immune Mechanism – the "Mother Hen" reflex

Recently, I came across the blog of Dux Raymond, a Project Manager, forthcoming author and trainer who looks at SharePoint from a project management perspective. Being rather interested in that area myself, I read his "Empowered by SharePoint" post.

He wrote about the theme of user empowerment that SharePoint provides, and included this quote:

Typically, if a project manager wanted a collaborative platform (other than email) to facilitate sharing of project documents, schedule, contacts, and status updates, he or she would need the IT/IS department’s intervention and assistance to set it up. In addition to this, IT/IS would need to define the appropriate access privileges to limit who has access to these project information. Now, realistically, do you think IT/IS will get it done ASAP?

Dux went on to describe the features of SharePoint that can be used to empower users to be more efficient and productive.

The problem with this ideal (which I wholeheartedly agree with, by the way) is the corporate immune mechanism, which likes to get in the way of any ideal that disagrees with its own. I wrote about the corporate immune mechanism in my kung-fu themed "You’re not ready" post (check out the cool YouTube clips of Jackie Chan, Jet Li and Tony Jaa 🙂 ). In that post, I defined the corporate immune mechanism as "the living embodiment of human nature’s resistance to change".

Now you might rightly ask me, "What the hell are you talking about? Why would users resist being empowered?"

The answer is that they don’t! In fact, the users are not the problem. Sorry to disappoint you nerds reading this post. The corporate immune mechanism, in many cases, is the very department likely to be pushing the business down the SharePoint path in the first place…

(cue the suspense music from the movie Jaws)

It is the IT Department!!! Nooooooo! How can these sweet, innocent looking people below be perpetrators of the corporate immune mechanism?

image

Shocked? Appalled at my statement? I can hear the indignant comments now…

  • "What! Us? We are the ones who understand technology" (Luddite IT Manager)
  • "What pills are you taking? – We are the innovators" (VB6 Developer)
  • "Use linux ‘cos Microsoft suck" (Technical Geek)
  • "Shut up and reboot" (Helpdesk Guy)
  • "No, you can’t do that because it’s insecure and I said so" (Security Guy)
  • "Users simply cannot be trusted" (System Administrator downloading mp3s on emule)

20 Years of users

image

Like everything else in this world, IT departments are a product of the experience of their team members and the culture of the organisation. Dealing with users is sometimes not fun. IT pretty much sees the user population as a bunch of rebellious, yet naive teenagers. Leave them alone and they will soon run amok and someone will get hurt. If it happens often enough, the teenagers are grounded and not allowed out of the house, except in controlled circumstances.

So, imagine dealing with rebellious teenagers for 20 years. Is it any wonder IT people are a little messed-up? 🙂

If the information worker revolution and the empowerment that comes with it mean that IT administrators are out of the loop, then many IT administrators will push back to ensure that they stay in the loop. They can’t help this, because it has become a control reflex that I call the "Mother Hen" reflex.

The mother hen reflex should be understood and not ridiculed, as it is often the users’ past actions that have created it. But once ingrained, the reflex can start to stifle productivity in many different ways. For example, an employee not being able to operate at full efficiency because they are waiting two days for a helpdesk request to be actioned is simply not smart business.

Worse still, a vicious circle emerges. Frustrated with the lack of response, users take matters into their own hands to improve their efficiency. But this simply plays into the hands of the mother hen reflex, and for IT it reinforces exactly why such controls are needed in the first place. You just can’t trust those dog-gone users! More controls required!

So, if you think SharePoint is going to empower you, then you should either ensure that the business takes ownership of it, with a sponsor senior enough to take on IT, or you had better start getting really friendly with your IT administrators. Many IT departments would positively have a heart attack at the thought of allowing end users to, say, modify the look and feel of a SharePoint site, use SharePoint Designer to create a workflow, modify permissions on items in a library, create a sub-site, create lists, modify lists with additional columns, and the like.

I think it is a fully justifiable point of view when IT is looked at in isolation from the rest of the gears and pulleys that make up the organisational "machine". In IT’s mind, they are doing the right thing, yet they may very well be doing the organisation and SharePoint a disservice. They are scared that users are going to screw it all up and that they will be left picking up the pieces. But does the cost of IT’s compensating measures justify the real risk in terms of dollars and cents?

A recent example of the mother hen reflex going completely off the rails, where inefficiency turned into real-life risk, occurred in San Francisco with the complete lockout of a fibre optic network by one rogue Cisco administrator: http://www.infoworld.com/archives/emailPrint.jsp?R=printThis&A=/article/08/07/18/30FE-sf-network-lockout_1.html. In the administrator’s mind he was being security-conscious, yet ultimately he was the biggest security risk of all.

Closer to SharePoint home, there was a thread on a SharePoint mailing list a month or so ago about managing SharePoint security groups, and the general consensus was that the groups should be set up and managed in Active Directory and then added into SharePoint. I am of the opinion that it is not as clear-cut as this. IT generally controls Active Directory, so group membership changes would remain subject to the perpetual two-day turnaround that is typical of enterprise helpdesk SLAs.

But consider this. If, for example, a project team has a project administrator who is trusted to manage access to all paper records, HR and payroll systems, then I have no problem delegating to them the rights to manage a project sub-site, or to add and remove visitors to the site, without having to wait two days for the request to get through IT’s helpdesk system. SharePoint provides the necessary audit trails, versioning and recycle bin recovery, and the accountability can be pushed out to the project team. A win-win in my book.

Out of rehab

I’ll admit, a few years back I had a scary mother hen reflex myself. But later I realised that while I did a lot of security policy and compliance work, I was accountable for the IT service, not for the information assets provided by that service. If I kept the service running, backed up, and provided assurance as to its recoverability, then accountability for the information lay elsewhere.

Sadly, SharePoint does not make any of this easy. Complexity breeds confusion, and it is exceptionally difficult to know *everything*. Fear of the unknown breeds the mother hen reflex too. So, for that reason I am completely sympathetic to why IT departments tend to be very controlling, yet I recognise that this stance is too inward-facing and can actually be damaging to organisational productivity and innovation.

Unfortunately, I don’t have any answers on where the line should be drawn either. That’s a function of many factors. What I can tell you, however, is that empowerment is not always guaranteed.

 

Thanks for reading

Paul



Thinking SharePoint Part 3 – A tale of two clients

My third post on "Thinking SharePoint" for www.endusersharepoint.com reproduced here.

Hi all

[Quick reference: Part 1 and Part 2]

If you have followed the first two articles in this series, you will know I have been attempting to talk about SharePoint "head-space". In other words, SharePoint success is much more a people issue than a technical or architectural one. As a result, it can be a little difficult to write about!

As a MOSS2007 product specialist and architect, I have slowly developed a kind of spider sense that allows me to pick out likely problematic implementations fairly early in the process. This spider sense is most definitely not along the lines of "oh these guys know nothing about application development/security/collaboration/[insert word here]". Often there are excellent, really knowledgeable staff on hand with exemplary credentials. It is instead a feeling that I previously described as "organisational maturity". Want a better explanation than that? How about something like the "oh they are *so* not ready for what they are getting themselves into" factor.

In Part 2, I touched on the potent combination of differing personality types, the different stages of learning, and all of the new SharePoint features and options at your disposal. If you did not read Part 2, then I strongly suggest you do so before continuing with this article, because I am going to revisit the learning-stages material here.

Another way to describe the "unconsciously incompetent" stage of learning (one that sounds much less insulting 😉 ) can be summed up as "you don’t know what you don’t know". The "Ikea guy" example in the last post is a perfect illustration of low organisational maturity. In that example, the poor unloved Ikea guy turns up at a house to install an Ikea modular storage solution and has to satisfy the conflicting requirements of a family so dysfunctional that the Simpsons seem pretty tame by comparison. It is clear from an outside perspective that they have called in the "Ikea guy" way too early in the piece, and in fact he is completely the wrong guy to call anyway! Wrong guy? Who should we be calling then?

image    image

Who you really need is someone like Carson and the boys from "Queer Eye for the Straight Guy" – either them or Dr Phil. They might come on a bit strong at first, but they work by winning your trust, building your respect, and slowly but surely giving you the confidence to change your old, bad habits. Before long the dysfunctional family has turned a corner, to the amazement of everyone around. As an added bonus, you had a lot of fun along the way, and your dress sense has improved as your stress levels have dropped 🙂

You still need the Ikea guy, but at least now the family no longer argues so much over which drawer your socks should be stored in!

Of course, the ultimate SharePoint consultant is this mythical person who can deal with your emotional issues *and* install the system! 🙂

image

Workshops, workshops!

Where possible, I always undertake SharePoint engagements in an advisory capacity before any time and cost estimates are made. Why? Because invariably, most clients start from a position of unconscious incompetence – not just in terms of the product itself, but in terms of a shared understanding of the problem with their colleagues and co-participants. Anyone who has been to a Microsoft-sponsored SharePoint seminar and thought that SharePoint is the answer to their prayers is definitely in the first stage of their learning. In fact, when a client wants to skip the advisory stage and get straight to "just tell me how much it costs", my spider senses tingle…

Thus, I run *plenty* of workshops. I don’t believe that "IT integrators" who specialise in, for example, Exchange, Cisco networking and firewall-type security are particularly well suited to performing SharePoint implementations. Equally, I’m not convinced that a "web design house" is particularly suited either. Yes, there is a big technical/architectural component (the Ikea guys), but most of the work is in the facilitation, dialogue and requirements-gathering stages of the process (the Carson/Dr Phil guys). In other words, getting people from the "unconsciously incompetent" stage to the "consciously incompetent" stage of learning.

I tend to keep the workshops to no more than two hours in a single day, with a break after an hour for everyone to recharge their brain cells with a jolt of caffeine :-). I also keep the workshops to 4 people or less and split into multiple workshops if there are more than 4 participants.

My goals in these workshops are threefold:

  • Teach some product basics, answer common questions and set the scene
  • Get participants thinking and talking about what SharePoint means to them, and what they want to get out of it
  • Assess the personality/competency/maturity level among participants ("unconsciously incompetent" versus "consciously incompetent")

Achieving these goals usually takes two to three workshops, and it is unwise to pack all of them into one workshop anyway. The first workshop is all about the product basics (goal 1 above). I do not go into massive detail – just enough so that participants are not flying completely blind in their understanding of the product. Signs of success for this workshop are a shared realisation among participants of the huge potential of the product in certain areas, and an appreciation of the fact that there are a lot of organisational issues that will affect success – and thus that it is much more than just whacking in the CD and running SETUP.EXE.

Politics, politics!

I prefer to wait a day or two before the second workshop, as it gives participants a chance to take in the content of the first. When we meet for the second time, I do a quick recap of workshop 1, and then we start talking through requirements, issues, constraints and risks. One sure sign of the organisational maturity among participants is how long this workshop takes. This is partly a factor of the scope of the perceived "problem" to be solved, but more importantly, this is often the first time the participants have actually *talked through* a problem together (aside from previously all agreeing that the status quo is sub-optimal). It is very easy for these workshops to go over time, or to finish unresolved.

Additionally, in this sort of workshop you can fairly quickly assess the political dynamic of the group (goal 3). Participants always have different agendas or beliefs about what needs to be done to solve organisational problems. Often participants have locked horns with each other way before SharePoint came on the scene, and it doesn’t take long to see where the dynamics lie. Understanding this dynamic allows you to tailor your facilitation and teaching approach and build trust and respect among all of the participants.

This can be a frustrating stage among participants, especially while a shared understanding is still being developed. But right here is the root of project failure – not just SharePoint.

What I will do now is tell you briefly about two different client engagements that I was involved in some time back. Both of these engagements happened at around the same time, and each client happened to be in the same vertical market, although they had nothing to do with each other. Both were ultimately successful projects in terms of delivery, but one was much more successful in terms of laying a foundation for future projects. If any aspect of these two tales resonates with you, please send a comment through at the end of this article.

Client 1

The first client came via a tender that had a fixed time constraint (spider-sense goes gangbusters at this point). However, the client had attended one of my seminars where we talked about how to approach a SharePoint project (the subject matter being a more distilled version of my various posts, such as this one). Thus, I had the chance to sit down and have a long chat with the client in an informal environment, and I felt that the theme of our seminar had resonated with them and that they had a solid appreciation of why we approach SharePoint projects the way we do.

Despite the very tight time frame, I was able to conduct a couple of workshops in two groups, and we were able to agree on deliverables that were realistic and achievable. However, the workshops were an interesting experience. There were too many people, the group dynamic was clearly political in nature, and there was open, sometimes vigorous debate about specifics of the deliverables. The skill levels varied, as did the agendas. This client also outsourced IT support, so they were somewhat light on the ground in terms of infrastructure and application development skills. Thus, we made a conscious decision to stick to SharePoint Designer and go out-of-the-box for this initial engagement, as we felt that the time frame and process maturity constraints meant that we would be better served using tools that were easy to modify. We had made this clear (or so we thought) to the client when we responded to the tender.

By the end of the project, two deliverables had expectation mismatches in their functionality. A couple of the sticking points actually came down to how SharePoint is architected as a product, and the miscommunication stemmed from a lack of understanding of how the product works in those particular regards. To change the behaviour would have required disproportionate custom development work that would very likely have become redundant fairly quickly once users started using the product. Additionally, custom application development for SharePoint adds to the governance burden, and they were not yet ‘ready’ to make that leap.

Now, it is important to note here that the client was not really at fault. You can hardly blame someone when they "don’t know what they don’t know". The failure was on my part, in that I did not do enough to ensure that we had a shared understanding and full awareness of the constraints of our approach. At the time I thought that we had achieved this milestone, but looking back, it is clear that, principally due to the very tight time frames we had to operate under, we under-invested in this part of the project.

Fortunately, in this case we were able to agree on workarounds that got the project over the line on time, made logical sense, and did not have any major impact on either party. But the major lesson learnt from this project is that we never actually guided the client properly to the "consciously incompetent" stage of their learning.

Client 2

Client 2 was an interesting case. They attended the same seminar as Client 1, principally because a competing integrator had sold them the idea of using SharePoint for their Internet site. Upon seeing the high cost of the licensing, the competing integrator then showed them "all of the other great features" that they would get from investing in SharePoint, and overloaded them with recycled Microsoft "6 pillar" marketing material. The client liked all of these new features, of course, and thus broadened the scope of the deliverables to help justify the cost.

Result? Even more confusion (and at this point they still had not actually *used* SharePoint). So they attended my seminar for some direction and I was subsequently engaged in an advisory capacity to help them make sense of it all.

I used the workshop approach described above, and the first workshop was heavy going, because it took a while to sort out all of the marketing fluff from reality. They had been the victim of one too many PowerPoint slide decks and thus jumped around from topic to topic. I answered as best I could, occasionally having to resort to geek-speak, but mostly I was able to keep things pitched at the right sort of level. The client then had to reconcile the reality of SharePoint against the overly broad scope that they had created for themselves. It was frustrating for them, and I really felt for them. I even went as far as to offer to discuss things over a beer. (It’s amazing how much more progress you can make over beer 😉 )

A week went by, and the client called me back in. During that week they had spent a lot of time soul-searching and debating where, when and how SharePoint should be tackled. In the end, they re-examined the corporate strategy – a high-level, three-year plan that had been signed off by the organisation previously. They then examined the IT department’s three-year plan, which had been developed to support the organisational strategy.

They came to the conclusion that their external web site was not the place to start, for various reasons. Instead, they identified a much smaller-scope project that slotted in perfectly with the IT department’s strategy and, more importantly, the organisational strategy. They asked me for feedback, and I was very enthusiastic about how well they had done, considering the hard-slog workshops of the week before.

At this point, the transition from "unconsciously incompetent" to "consciously incompetent" was well underway.

I met with them a few days later. Since the entire team was behind the agreed project, they had talked to the organisational stakeholders and had mapped out and documented the process before I arrived. They were also eager to learn SharePoint, and since the scope of this project was not large and the shared understanding was high, we were able to use it as a training exercise to learn various concepts, from farm administration, libraries, lists and columns, to SharePoint Designer workflow and InfoPath Forms Services. When we hit obstacles, we were collectively able to find creative yet simple ways to get around them.

That smaller-scope project was successfully delivered, and the benefits were significant. The team came away with a much better understanding of the product and its constraints and limitations. It was now much easier to plan for and tackle the more significant project of web content management (the Internet site), as there was a much greater level of shared understanding between participants. They felt more confident in their knowledge of the product, and confident that they would be able to manage the expectations of the organisation.

All in all, it was a great engagement and I came away with a huge respect for that client. More importantly, the relationship continues to this day.

Conclusion

I wonder if the story of the first client is a familiar one to readers, either as an end-user or as someone involved in the project itself?

The second client certainly had frustrations at the start. When they realised that SharePoint was not the panacea they were looking for, they stopped, took stock and co-operatively reassessed the situation. Unconstrained by a fixed deadline, they realised that more thought was required. That action potentially saved them a lot of stress and heartache, which they would have experienced had they ploughed ahead with an ambitious project. It has now given them a great foundation to build upon for the future.

Thanks for reading

Paul Culmsee



Using google to find potentially misconfigured SharePoint sites

Those in the security community who have ever performed vulnerability assessment or penetration testing will know of the Google Hacking Database. Google is actually a very handy tool for finding potentially vulnerable sites, due to the fact that it will crawl anything it finds. Therefore, if you have misconfigured an externally facing web-based application, at some point the crawler will come along and your misconfiguration will end up in Google’s giant cache.

Extending this risk to SharePoint is not much of a stretch. For example, type the following into a Google search…

"view all site content" "sign in" "people and groups"

What do you see?

Scary, huh?
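As an aside, the same dork can be turned into a direct search URL programmatically. Here is a minimal Python sketch; the helper variables and URL construction are my own illustration, not something from the original query, but the three quoted phrases are exactly those above:

```python
from urllib.parse import quote_plus

# The exact phrases from the dork above; the quotes force Google
# to match each phrase verbatim rather than as loose keywords.
phrases = ["view all site content", "sign in", "people and groups"]
dork = " ".join(f'"{p}"' for p in phrases)

# Assemble a search URL with the dork percent-encoded for the query string.
url = "https://www.google.com/search?q=" + quote_plus(dork)
print(url)
```

Pasting the printed URL into a browser runs the same search as typing the quoted phrases by hand.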

Now to be fair, I have to make some points here.

  • Many of these sites are legitimately meant to be accessible to the public
  • I am not disclosing a SharePoint vulnerability, or any issue with the security of the product – which is why this is not posted to, say, Bugtraq, and why I am not making a big deal of it beyond this post.
  • SharePoint is secure by default in the sense that privileged operations are protected by granular access permissions and anonymous access must be explicitly granted.
  • It is extremely unlikely that you will be able to change anything, as this is read-only anonymous access explicitly granted by the SharePoint administrator. Areas of the site not marked anonymous (e.g. site settings) should be safe from modification.
  • If there is an error here, then it is human error around configuration of the product.

But as a "bad guy", when you decide to target an organisation, you go through a phase of gathering as much information as possible. Some of these sites expose domain names, user account names and other personal details, and such information can be used to gather still more. For example, knowing a person’s name, I could set up a fake email address, MySpace or Facebook account in that person’s name and target their colleagues using social engineering techniques. Using anonymity tools like Tor in combination with, say, wget, you could sponge all of the data and documents on such sites for offline analysis.

Documents left inside public document libraries expose internal system names, acronyms and details that paint a fuller picture of the internal organisational set-up. Such information can be used for bogus telephone surveys for the purpose of obtaining more information, targeted Trojans disguised as patches, etc. On occasion, particularly sensitive information can be found within these publicly accessible lists and libraries. (Consider the risk if a data connection library or client list was contained in a site like this).

Additionally, when I see domain names, it gives me a pretty good idea of the topology of the SharePoint infrastructure. Why? Well, if I see domain names, then people are signing in using their domain accounts. Therefore, the SharePoint server has to be part of Active Directory and is very likely residing on the internal network, published to the Internet via ISA or some other reverse proxy or port-forwarding technique.

All in all, it should be a wake-up call to SharePoint administrators about the risks of information disclosure when setting up public-facing SharePoint sites.

Google does not forgive (or forget).

Thanks

Paul Culmsee



