Why I’ve been quiet…


As you may have noticed, this blog has been a bit of a dead zone lately. There are several very good reasons for this – one being that a lot of my creative energy has been going into co-writing a book – and I thought it was time to come clean on it.

So first up, just because I get asked this all the time, the book is definitely *not* “A humble tribute to the leave form – The Book”! In fact, it’s not about SharePoint per se, but rather the deeper dark arts of team collaboration in the face of really complex or novel problems.

It was late 2006 when my own career journey took an interesting trajectory, as I started getting into sensemaking and acquiring the skills necessary to help groups deal with really complex, wicked problems. My original intent was to reduce the chances of SharePoint project failure, but having learned these skills, I now find myself performing facilitation, goal alignment and sensemaking in areas miles away from IT. In the process I have been involved with projects of such complexity and uniqueness that they make IT look pretty easy by comparison. The other fringe benefit is being able to sit in a room and listen to the wisdom of top experts in their chosen disciplines as they work together.

Through this work and the professional and personal learning that came with it, I now have some really good case studies that use unique (and I mean unique) approaches to tackling complex problems. I have a keen desire to showcase these and explain why our approaches worked.

My leanings towards sensemaking and strategic issues will be apparent to regular readers of CleverWorkarounds, so it is no secret that this blog is not really much of a technical SharePoint blog these days. The articles on branding, ROI and capacity planning were written in 2007, just before the mega explosion of interest in SharePoint. This time around, there are legions of excellent bloggers doing a tremendous job of giving readers a leg-up onto this new beast known as SharePoint 2010.

[Mock-up of the book cover]

So back to the book. Our tentative title is “Beyond Best Practices” and it’s an ambitious project, co-authored with Kailash Awati – the man behind the brilliant Eight to Late blog. I have been a fan of Kailash’s work for a long time, and am always impressed by the depth of research and effort that he puts into his writing. Kailash is a scarily smart guy with two PhDs under his belt, and to this day I do not think I have ever mentioned a paper or author to him that he hasn’t read already. In fact, usually he has read it, checked out the citations, and then tells me to go and read three more books!

Kailash writes with the sort of rigour that I aspire to but will never achieve, so when the opportunity to work with him on a book came up, I knew that I absolutely had to do it and that it would be a significant undertaking indeed.

The mock-up above is an attempt to convey where we are going with this book. See the guy on the right? Is he scratching his head in confusion, saluting, or both? (Note: this is our mock-up and the real thing may look nothing like it.)

This book dives into the seedy underbelly of organisational problem solving, and does so in a way that no other book has thus far attempted. We examine why the very notion of “best practices” often makes no sense, and why such practices have such a high propensity to go wrong. We challenge some mainstream ideas by shining a light on some obscure, but highly topical and interesting, research that some may consider radical or heretical. To counter the somewhat dry nature of some of this research (the topics are really interesting, but the style in which academics write can put insomniacs to sleep), we give it a bit of the CleverWorkarounds treatment, writing in a conversational style that loses none of the rigour but won’t have you nodding off on page 2. If you liked my posts where I used odd metaphors like boy bands to explain SharePoint site collections, the Simpsons to explain InfoPath, or death metal to explain records versus collaborative document management, then you should enjoy our journey through the world of cognitive science, memetics, scientific management and Willy Wonka (yup – Willy Wonka!).

Rather than just bleat about the problems with best practices, we will also tell you what you can do to address them. We back up this advice with a series of practical case studies, each of which illustrates the techniques used to address the inadequacies of best practices in dealing with wicked problems. In the end, we hope to arm our readers with a bunch of tools and approaches that actually work when dealing with complex issues. Some of these case studies are unique in the world and I am very proud of them.

At this point, this is not just an idea with an outline and a catchy title. We have been at this for about six months, and the results thus far (some 60–70,000 words) have been very, very exciting. Initially, we really had no idea whether the combination of our writing styles would work – whether we could blend Kailash’s depth and skill with my low-brow humour and quest for cheap laughs (I am just as likely to use a fart joke if it helps me get a key point across)…

… But signs so far are good so stay tuned 🙂

Thanks for reading

 

Paul Culmsee

www.sevensigma.com.au


Complexity bites: When SharePoint = Risk


I think as you age, you become more and more like your parents. Not so long ago I went out paintballing with some friends and we all agreed that the 16-18 year olds who also happened to be there were all obnoxious little twerps who needed a good kick in the rear. At the same time, we also agreed that we were just as obnoxious back when we were that age. Your perspective changes as you learn and your experience grows, but you don’t forget where you came from.

I now find myself saying stuff to my kids that my parents said to me; I think today’s music is crap; I have taken a liking to drinking quality scotch. Essentially, all I need now to complete the metamorphosis into my father is for all my hair to fall out!

So when I write an article whining that IT has a credibility issue and has gone backwards in its ability to cope with various challenges, I fear that I have now officially become my parents. I’ll sound like the grandpa who always tells you that life was so much simpler back in the 1940s.

Consequences of complexity…

Before I go and dump on IT as a discipline, how about we dump on finance as a discipline, just so you can be assured that my cynicism extends far beyond nerds.

I previously wrote about how the Sarbanes–Oxley legislation was designed to – yet ultimately failed to – provide assurance to investors and regulators that public companies had adequate controls over their financial risk. As I write this, we are in the midst of a once-in-a-generation-or-two credit crisis, in which some seven hundred billion dollars ($700,000,000,000) of US taxpayers’ money will be used to take ownership of crap assets (foreclosed or unpaid mortgages).

Part of the problem behind the credit crisis was the use of "collateralized debt obligations" (CDOs). A CDO is a fancy, yet complex, way of taking a bunch of mortgages and turning them into an "asset" that someone else with spare cash can invest in. If you are wondering why the hell someone would invest in such a thing, consider the people with home loans, supposedly happily paying interest on those mortgages. It is that interest that finds its way to the holder (investor) of the CDO. Pool a thousand home loans, for example, and in theory you have a steady stream of interest payments that can be sold off in slices. So a CDO is supposedly an income stream.

Now if that explanation makes your eyes glaze over then I have bad news for you: that’s supposed to be the easy part. The reality is that CDOs are extremely complex things. They can be backed by residential property, commercial property, something called mortgage-backed securities, corporate loans – essentially anything that someone is paying interest on can find its way into a CDO that someone else buys into, to get the income stream from the interest paid.

To provide "assurance" that these CDOs are "safe", ratings agencies give them a mark that investors rely upon when making their investment. So a "AAA" CDO is supposed to have been given the tick of approval by experts in this style of debt-instrument finance.

Here’s the rub about rating agencies. Below is a news article from earlier in the year with some great quotes:

http://www.nytimes.com/2008/03/23/business/23how.html?pagewanted=print

Credit rating agencies, paid by banks to grade some of the new products, slapped high ratings on many of them, despite having only a loose familiarity with the quality of the assets behind these instruments.

Even the people running Wall Street firms didn’t really understand what they were buying and selling, says Byron Wien, a 40-year veteran of the stock market who is now the chief investment strategist of Pequot Capital, a hedge fund. “These are ordinary folks who know a spreadsheet, but they are not steeped in the sophistication of these kind of models,” Mr. Wien says. “You put a lot of equations in front of them with little Greek letters on their sides, and they won’t know what they’re looking at.”

Mr. Blinder, the former Fed vice chairman, holds a doctorate in economics from M.I.T. but says he has only a “modest understanding” of complex derivatives. “I know the basic understanding of how they work,” he said, “but if you presented me with one and asked me to put a market value on it, I’d be guessing.”

What do we see here? How many people really *understand* what’s going on underneath the complexity?

Of course, we now know that many of the mortgages backing these CDOs were made to people with poor credit histories, or with a high risk of not being able to pay the loans back. Jack up the interest rate or the cost of living and people default on their mortgages. When that happens en masse, we have a glut of houses for sale, forcing down prices, lowering the value of the underlying assets and eliminating the "income stream" that CDO investors relied upon – making the CDOs pretty much worthless.

My point is that the complexity of the CDOs was such that even a guy with a doctorate in economics had only a ‘modest understanding’ of them. Holy crap! If he doesn’t understand it then who the hell does?

Thus, the current financial crisis is a great case study in the relationship between complexity and risk.

Consequences of complexity (IT version)…

One thing about doing what I do is that you spend a lot of time on-site. You get to see the IT infrastructure and development at many levels. More importantly, you spend a lot of time talking to IT staff and organisation stakeholders with a very wide range of skills and experience. Most important of all, you get to see organisational maturity at work, first-hand.

My conclusion? IT is completely f$%#ed up across all disciplines, and many organisations will have their own mini equivalent of the US$700 billion haemorrhage. Not only that, it is far worse today than it previously was – and getting worse! IT staff are struggling with ever-accelerating complexity, and the "disconnect" between IT and the business is growing too. To many businesses, the IT department has a credibility problem, but to IT the feeling is completely mutual 🙂

You can find a nice thread about this topic on Slashdot. My personal favourite quote from that thread is this one:

Let me just say, after 26 years in this business, of hearing this every year, the systems just keep getting more complex and harder to maintain, rather than less and easier.

Windows NT was supposed to make it so anyone who could use Windows could manage a server.

How many MILLION MSCEs do we have in the world now?

Storage systems with Petabytes of data are complex things. Cloud computing is a complex thing. Supercomputing clusters are complex things. World-spanning networks are complex things.

No offense intended, but the only people who think things are getting easier are people who don’t know how they work in the first place

Also there is this…

There are more software tools, programming languages, databases, report writers, operating systems, networking protocols, etc than ever before. And all these tools have a lot more features than they used to. It’s getting increasingly harder to know "some" of them well. Gone are the days when just knowing DOS, UNIX, MVS, VMS, and OS/400 would basically give you knowledge of 90% of the hardware running. Or knowing just Assembly/C/Cobol/C++ would allow you to read and maintain most of the source code being used. So I would argue that the need for IT staff is going to continue to increase.

I think the "disconnect" between IT and Business has a lot more to do with the fact that business "knows" they depend on IT, but they are frustrated that IT can’t seem to deliver what they want when they want it. On the other side, IT has to deal with more and more tools and IT staff has to learn more and more skills. And to increase frustration in IT, business users frequently don’t deliver clear requirements or they "change" their mind in the middle of projects….

So it seems that I am not alone 🙂

I mentioned previously that more often than not, SQL Server is poorly maintained – I see it all the time. Yet today I was speaking to a colleague who is a storage (SAN) and VMware virtualisation god. I asked him what the average VMware setup was like and his answer was similar to my SQL Server and SharePoint experience. In his experience, most of them were sub-optimally configured, poorly maintained, poorly documented and he could not provide any assurance as to the stability of the platform.

These sorts of quality assurance issues are rampant in application development too, and I most definitely see the same thing in the security realm.

As the above quote states, "it’s increasingly harder to know *some* of them well". These days I am working with specialists who live and breathe their particular discipline (such as storage, virtualisation, security or comms). Over time those disciplines grow more complex and sub-disciplines appear.

Pity, then, the poor developer/sysadmin/IT manager who is trying to keep a handle on all of this while providing a decent service to their organisation!

Okay, so what? IT has always been complex – I sound like a Gartner cliche. What’s this got to do with SharePoint?

Consequences of SharePoint complexity…

SharePoint, for a number of reasons, is one of those products that has a way of laying bare any gaps in an organisation’s overall maturity around technology and strategy.

Why?

Because it is so freakin’ complex! That complexity transcends IT disciplines and goes right to the heart of organisational/people issues as well.

It’s bad enough getting nerds to agree on something, let alone organisation-wide stakeholders!

Put simply, if you do a half-arsed job of putting SharePoint in, you will be punished in so many ways! The odds are against you before you even start, because it only takes a mistake in one part of the complex layers of hardware, systems, training, methodology, information architecture and governance to devalue the whole project.

When I first started out, I was helping organisations get SharePoint installed. Lately, however, I am visiting a lot of sites where SharePoint has already been installed, but has not been a success. There are various reasons; I have cited them in detail in the project failure series, so I won’t rehash all that here. (I’d suggest reading parts three, four and five in particular.)

I have firmly come to the conclusion that much of SharePoint is more art than science and, what’s more, the organisation has to be ready to come with you. Due to differing learning styles and poor communication of strategy, that readiness is actually pretty rare. Unfortunately, IT are not the people best suited to "getting the organisation ready for SharePoint."

If that wasn’t enough, then there is this question. If IT already struggle to manage the underlying infrastructure and systems that underpin SharePoint, then how can you have any assurance that IT will have a "governance epiphany" and start doing things the right way?

This translates to risk, people! I will be writing all about risk, in a similar style to the CFO Return on Investment series, very soon. I am very interested in methods to quantify the risk brought about by the complexity of SharePoint and the IT services it relies on. I see a massive parallel with the complexity factor in the current financial crisis, and I think a lot can be learned from it. SOX was supposed to provide assurance and yet did nothing to prevent the current crisis; it therefore represents a great example of mis-focused governance, where a lot of effort can be put in for no tangible gain.

A quick test of "assurance"…

Governance is like learning to play the guitar: it takes practice, it does not give up its secrets easily and, despite good intent, you will be crap at it for a while. It is easy to talk about, but putting it into practice is another thing.

Just remember this: the whole point of the exercise is to provide *assurance* to stakeholders. When you set any rule, policy, procedure, standard (or similar), ask yourself: does this give me the assurance I need to confidently vouch for the service I am providing? Just because you may be adopting ITIL principles does *not* mean that you are providing the right sort of assurance.

I’ll leave you with a somewhat biased, yet relatively easy litmus test that you can use to test your current level of assurance.

It might be simplistic, but if you are currently scared to apply a service pack to SharePoint, then you might have an assurance issue. 🙂

 

Thanks for reading

 

Paul Culmsee

www.sevensigma.com.au


"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 2


Hi again

This article is the second half of a pair explaining how I integrated real-time performance data with a SharePoint-based IT operational portal, designed around the principle of passive compliance with legislative or organisational controls.

In the first post, I introduced the PI product by OSIsoft and explained how SQL Reporting Services is able to generate reports from more than just SQL Server databases. I demonstrated how I created a report server report from performance data stored in the PI historian via an OLE DB provider for PI, and how I was able to create a report that accepted a parameter, so that the output of the report could be customised.

I also showed how SharePoint provides a facility to enter parameter data when using the report viewer web part.

I will now conclude by explaining a little about my passively compliant IT portal, and how I was able to enhance it with seamless access to the real-time performance data stored in the PI historian.

Just to remind you, here is my conceptual diagram in "acoustic Visio" format

The IT portal

Here is the really ultra-brief explanation of the thinking that went into my IT portal.

I spent a lot of time thinking about how critical IT information could be stored in SharePoint to provide quick and easy access to information, make tasks like change/configuration management more transparent and efficient, and capture knowledge and documentation. I was influenced considerably by ISO17799, as it was called back then, especially in the area of asset management. I liked the use of the term "IT Assets" in ISO17799 and the strong emphasis on ownership and custodianship.

ISO defined an asset as "any tangible or intangible thing that has value to an organization". It maintained that the aim is "…to achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner. Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets."

The idea of delegation is that the owner of an asset can delegate its day-to-day management to a custodian, but the owner still bears ultimate responsibility.

So I developed a portal around this idea, but soon hit some constraints due to the broad ISO definition of an asset. Since assets have interdependencies, geeks have a tendency to over-complicate things and produce a messy web of relationships. After some trial and error, as well as some soul searching, I was able to come up with a three-tier model that worked.

I changed the use of the word "asset" and split it into three broad asset types:

  • Devices (eg Server, SAN, Switch, Router, etc)
  • IT Services (eg Messaging, Databases, IP Network, etc)
  • Information Assets (eg Intranet, Timesheets, etc)
[Diagram: the three-tier asset model]

The main thing to note about this model is the difference between an IT Service and an Information Asset. The distinction is ownership. In the case of an Information Asset, the owner is not IT: IT is a service provider, and by definition the IT view of the world is different to that of the rest of the organisation. An IT Service, on the other hand, is always owned by IT, and it is IT services that underpin information assets.

So there is a hierarchical relationship there. You can’t have an information asset without an IT service providing it. Accountabilities are clear too: IT owns the service, but is not responsible for the information asset itself – that is for other areas of the organisation. (An information asset can also depend on other information assets, as well as on many IT services.) The sketch after the two lists below makes these relationships concrete.

While this may sound so obvious that it’s not worth writing, my experience is that IT departments often view information assets and the services providing those assets as one and the same thing.

Devices and Services

So, as an IT department, we provide a variety of services to the organisation. We provide them with an IP network, potentially a voice over IP system, a database subsystem, a backup and recovery service, etc.

It is fairly obvious that each IT service consists of a combination of IT devices (and often other IT services). An IP network is a basic example: the devices that underpin the "IP Network" service are routers, switches and wireless access points.

For devices we need to store information like

  • Serial Number
  • Warranty Details
  • Physical Location
  • Vendor information
  • Passwords
  • Device Type
  • IP Address
  • Change/Configuration Management history
  • IT Services that depend on this device (there is usually more than 1)

For services, we need to store information like

  • Service Owner
  • Service Custodian
  • Service Level Agreement (uptime guarantees, etc)
  • Change/Configuration Management history
  • IT Devices that underpin this service (there is usually more than 1)
  • Dependency relationships with other IT services
  • Information Assets that depend on this IT service
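Purely to make these relationships concrete, here is a hypothetical relational sketch of the three-tier model. The real portal stores all of this in SharePoint lists, content types and lookup columns – this is just to show the shape of the data, and every name in it is illustrative.

-- Hypothetical sketch only: the three-tier model expressed as tables.
-- The actual implementation uses SharePoint lists, not a database schema.
CREATE TABLE Devices (
    DeviceID     INT PRIMARY KEY,
    DeviceName   VARCHAR(50),    -- eg 'DM01'
    SerialNumber VARCHAR(50),
    Location     VARCHAR(100),
    IPAddress    VARCHAR(15)
);

CREATE TABLE ITServices (
    ServiceID    INT PRIMARY KEY,
    ServiceName  VARCHAR(50),    -- eg 'IP Network'; always owned by IT
    ServiceOwner VARCHAR(50),
    Custodian    VARCHAR(50)
);

CREATE TABLE InformationAssets (
    AssetID      INT PRIMARY KEY,
    AssetName    VARCHAR(50),    -- eg 'Intranet'; owned outside IT
    AssetOwner   VARCHAR(50)
);

-- A service is underpinned by many devices; a device usually supports many services.
CREATE TABLE ServiceDevices (
    ServiceID INT REFERENCES ITServices(ServiceID),
    DeviceID  INT REFERENCES Devices(DeviceID),
    PRIMARY KEY (ServiceID, DeviceID)
);

-- An information asset depends on many IT services (dependencies between
-- information assets, and between services, are omitted for brevity).
CREATE TABLE AssetServices (
    AssetID   INT REFERENCES InformationAssets(AssetID),
    ServiceID INT REFERENCES ITServices(ServiceID),
    PRIMARY KEY (AssetID, ServiceID)
);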

Keen-eyed ITIL practitioners will realise that all I am describing here is a SharePoint-based CMDB. I have a site template, content types, lists, event handlers and workflows that allow the above information to be managed in SharePoint. Below are three snippets showing sections of the portal, drilling down into the device view by location (click to expand), before showing the actual information about the server "DM01".

[Screenshots: drilling down into the device view by location, then the device entry for DM01]

Now the above screen is the one that I am interested in. You may also notice that it is a system-generated page, based on the list called "IT Devices". I want to add real-time performance data to this screen, so that as well as seeing asset information about a device, I can also see its recent performance.

Modifying a system page

I talked about making modifications to system pages in detail in part 3 of my branding using JavaScript series. Essentially, a system page is an automatically generated ASPX page that SharePoint creates. Think about what happens each time you add a column to a list or library: the NewForm.aspx, EditForm.aspx and DispForm.aspx pages are modified, as they have to be rebuilt to display the new or modified column.

SharePoint makes it a little tricky to edit these pages, because custom modifications run the risk of breaking things. But as I described in the branding series, the ToolPaneView hack (appending ToolPaneView=2 to the page’s URL to coax it into edit mode) does the job for us in a safe manner.

So using this hack, I was able to add a report viewer web part to the DispForm.aspx of the "IT Devices" list, as shown below.

[Screenshots: using the ToolPaneView hack to add the report viewer web part to DispForm.aspx]

Finally, we have our report viewer web part, linked to our report that accesses PI historian data. As you can see below, the report that I created expects two parameters to be supplied. These parameters will be used to retrieve specific performance data and turn it into a chart.

[Screenshot: the report viewer web part prompting for its two parameters]

Web Part Connection Magic

Now as it stands, the report is pretty much useless to us, in the sense that we have to enter parameters manually to get it to present the information we want. But on the same page as this report is a bunch of interesting information about a particular device, such as its name, IP address, location and description. Wouldn’t it be great if we could somehow pass the device name (or some other device information) to the report web part automatically?

That way, each time you opened up a device entry, the report would retrieve performance information for the device currently being viewed. That would be very, very cool.

Fortunately for us, it can easily be done. The report services web part, like many other web parts, is connectable, which means it can accept information from other web parts. It is therefore possible to have the parameters automatically passed to the report!

Wohoo!

So here is how I am going to do this. I am going to add two new columns to my device list; each column will hold a parameter to pass to the report. This way, I can tailor the report being generated on a device-by-device basis. For example, for a SAN device I might want to report on disk I/O, but for a server I might want CPU. If I store the parameter as a column, the report will be able to retrieve whatever performance data I need.

Below is the device list with the two additional columns added. The columns are called TAGPARAM1 and TAGPARAM2. The next screen shows the values I have entered for each column against the device DM01. These values will be passed to the report server report and used to find matching performance data.

[Screenshots: the device list with the TAGPARAM1 and TAGPARAM2 columns, and the values entered for DM01]
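To give a feel for how those two values might drive the report, here is a sketch of a two-parameter variant of the query from part 1. It assumes both parameters filter the PI tag name (one plausible way to combine them – say, a host name and a counter name); the real report’s SQL may combine its parameters differently.

-- Sketch only: both placeholders filter the tag name, so a tag must match
-- TAGPARAM1 (eg 'DM01') and TAGPARAM2 (eg 'CPU') to be charted.
SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%')   -- receives TagParam1 via the first page filter
AND       (tag LIKE '%' + ? + '%')   -- receives TagParam2 via the second page filter
AND       ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM')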

So the next question becomes: how do I transparently pass these two parameters to the report? We now have the report and the parameters on the same page, but no obvious means of passing the values of TagParam1 and TagParam2 to the report viewer web part.

The answer my friends, is to use a filter web part!

Using the ToolPaneView hack, we once again edit the view item page for the device list. We now need to add two additional web parts (because we have two parameters). Below is the web part to add.

[Screenshot: the Page Field filter web part in the web part gallery]

The result should be a screen looking like the figure below

[Screenshot: the page with the two Page Field filter web parts added]

Filter web parts are not visible when a page is rendered in the browser. They are instead used to pass data between other web parts. There are various filter web parts that work in different ways. The Page Field filter is capable of passing the value of any column to another web part.

Confused? Check out how I use this web part below…

The screen above shows that the two Page Field filter web parts are not configured; they prompt you to open the tool pane and configure them. Below is the configuration pane for the page field filter. Can you see how it has enumerated all of the columns of the "IT Devices" list? In the second and third screens we have chosen TagParam1 for the first page filter web part and TagParam2 for the second.

[Screenshots: the page field filter configuration pane, with TagParam1 and TagParam2 chosen]

Now take a look at the page in edit mode. The page filters now change their display to say that they are not connected. All we have done so far is tell the web parts which columns to grab the parameter values from.

[Screenshot: the page filters showing as not connected]

Almost Home – Connecting the filters

So now we need to connect each Page Field filter web part to the report viewer web part. This has the effect of passing the values of TagParam1 and TagParam2 to the report viewer web part. Since these values change from device to device, the report will display unique data for each device.

To connect each page filter web part, you click the edit dropdown for the filter and choose "Connections", which expands out to "Send Filter Values To". If you click on this, you will be prompted to send the filter values to the report viewer web part on the page. Since in my example the report viewer web part requires two parameters, you will be asked to choose which of the two parameters to send the value to.

[Screenshots: connecting each page filter to the report viewer web part]

Repeat this step for both page filter web parts and something amazing happens: we see a performance report on the devices page!! The filters have passed the values of TagParam1 and TagParam2 to the report, and it has retrieved the matching data!

[Screenshot: the performance report appearing on the device page]

Let’s now save this page and view it in all of its glory! Sweet eh!

[Screenshot: the finished page]

Conclusion (and Touchups)

So let’s step back and look at what we have achieved. We can visit our IT operations portal, open a device entry and immediately view real-time performance statistics for that device. Since I am using a PI historian, the performance data could have been collected via SNMP, netflow, ping, WMI, Performance Monitor counters, a script or many other methods. But we do not need to worry about that: we just ask PI for the data that we want and display it using Reporting Services.

Because the parameters are stored as additional metadata with each device, you have complete control over the data being presented back to SharePoint. You might decide that servers should always return CPU stats, while a storage area network returns disk I/O stats. It is all controllable just by the values you enter into the columns being used as report parameters.

The only additional thing that I did was to use my CleverWorkarounds Hide Field web part to hide the TagParam1 and TagParam2 fields from display, so that when IT staff are looking at the integrated asset and performance data, the ‘behind the scenes’ glue is hidden from them.

So looking at this from an IT portal/compliance point of view, we now have an integrated platform where we can:

  • View device asset information (serial number, purchase date, warranty, physical location)
  • View IT Service information (owners, custodians and SLAs)
  • View Information Asset information (owners, custodians and SLAs)
  • Understand the relationships between devices, services and information assets
  • Access standards, procedures and work instructions pertaining to devices, services and information assets
  • Manage change and configuration management for devices, services and information assets
  • Quickly and easily view detailed, real time performance statistics of devices

All in all, not a bad afternoon’s work really! And not one line of code!

As I said way back at the start of the first article, this started out as a quick idea for a demo and it seems to have a heck of a lot of potential. Of course, I used PI, but there is no reason why you can’t use similar techniques in your own IT portals to integrate your operational and performance data into the one place.

I hope that you enjoyed this article and I look forward to feedback.

<Blatant Plug>Want an IT Portal built in passive compliance? Then let’s talk!</Blatant Plug>

cheers

Paul Culmsee


OSISoft addendum

Now someone at OSISoft will read this article at some point and wonder why I didn’t write about RtWebParts. Essentially, PI has some web parts that can be used to display historian data in SharePoint. There were two reasons why I did not mention them.

  1. To use RtWebParts you have to install a number of PI components onto your web front-end servers. Nothing wrong with that, but with Reporting Services those components only need to go onto the report server. For my circumstances and what I had to demonstrate, this was sufficient.
  2. This post was actually not about OSISoft or PI per se. It was to demonstrate how SharePoint can be used to integrate performance and operational information into one location. In the event that you have PI in your enterprise and want to leverage it with SharePoint, I suggest you contact me about it because we do happen to be very good at it 🙂

 


"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 1


Hi

This is one of those nerdy posts in the category of "look at me! look at me! look at what I did, isn’t it cool?". Normally application developers write posts like this, demonstrating some cool bit of code they are really proud of. Being only a part-time developer and more of a security/governance/compliance kind of guy, my "ain’t it cool" post is a little different.

So if you are a non developer and you are already tempted to skip this one, please reconsider. If you are a CIO, IT manager, Infrastructure manager or are simply into ITIL, COBiT or compliance around IT operations in general, this post may have something for you. It offers something for knowledge managers too. Additionally it gives you a teensy tiny glimpse into my own personal manifesto of how you can integrate different types of data to achieve the sort of IT operational excellence that is a step above where you may be now.

And if you are a Cisco nerd or an infrastructure person with monitoring experience, you will also find something potentially useful here.

In this post, I am going to show you how I leveraged three key technologies, along with a dash of best practice methodology to create an IT Portal that I think is cool.

The Premise – "Passive Compliance"

In my career I have filled various IT roles and drunk the kool-aid of most of the vendors, technologies, methodologies and practices. Nowadays I am a product of all of these influences, leaving me slightly bitter and twisted, somewhat of a security nerd, but at the same time fairly pragmatic and always mindful of working to business goals and strategy.

One of the major influences on my current view of the world was a role working for OSI Software from 2000 to 2004, via a former subsidiary company called WiredCity. OSISoft develops products in the process control space, and its core product is a data historian called PI. At WiredCity, we took this product out of the process control market and right into the IT enterprise, and OSISoft now markets it as "IT Monitor". I’ll talk about PI/IT Monitor in detail in a moment, but humour me and just accept my word that it is a hellishly fast number-crunching database for storing lots of juicy performance data.

In addition, I like to read all the various best practice frameworks and methodologies, and I write about them a fair bit. As a result of this interest, I have a SharePoint IT portal template that I have developed over the last couple of years, designed around the guiding principle of passive compliance. That is, by utilising the portal for various IT operational tasks, structured in a certain way, you implicitly address some COBiT/ISO27001 controls as well as leverage ITIL principles. I designed it in such a way that an auditor could take a look at it and it would demonstrate assurance around IT controls for operational system support. It also has the added benefit of being a powerful addition to a disaster recovery strategy. (In the second article, to be published soon, I will describe my IT portal in more detail.)

Finally, I have SQL Reporting Services integrated with SharePoint, used to present enterprise data from various databases into the IT Portal – also as part of the principle of passive compliance via business intelligence.

Recently, I was called in to help conduct a demonstration of the aforementioned PI software, so I decided to add PI functionality to my existing "passive compliance" IT portal, integrating asset and control data (like change/configuration management) with real-time performance data. All in all I was very pleased with the outcome: it was done in a day with pretty impressive effect, utilising various features of all three of the above applications plus a few other components, and I pretty much wrote no code at all.

Below I have built a conceptual diagram of the solution. Unfortunately I don’t have Visio installed, but I found a great freeware alternative 😉

[Conceptual diagram of the solution]

I know there is a lot to take in here (click to enlarge), but if you look in the centre of the diagram, you will see a mock-up of a SharePoint display. All of the other components drawn around it combine to produce that display. I’ll now talk about the main combination: PI and SQL Reporting Services.

A slice of PI

Okay so let’s explain PI because I think most people have a handle on SharePoint :-).

To the right is the Terminator looking at data from a PI historian showing power flow in California. So this product is not a lightweight at all: its heritage lies in this sort of critical industrial monitoring.

Just to get the disclaimers out of the way, I do not work for OSISoft anymore nor are they even aware of this post. Just so hard-core geeks don’t flame me and call me a weenie, let me just say that I love RRDTool and SmokePing and prefer Zabbix over Nagios. Does that make me cool enough to make comment on this topic now? 🙂  

Like RRDTool, PI is a data historian, designed and optimised for time-series data.

"Data historian? Is that like a database of some kind?", you may ask. The answer is yes, but its not a relational database like SQL Server or Oracle. Instead, it is a "real-time, time series" data store. The English translation of that definition, is that PI is extremely efficient at storing time based performance data.

"So what, you can store that in SQL Server, mySQL or Oracle", I hear you say. Yes you most certainly can. But PI was designed from the ground up for this kind of data, whereas relational databases are not. As a result, PI is blisteringly fast and efficient. Pulling say, 3 months of data that was collected at 15 second intervals would literally take seconds to do, with no loss of fidelity, even going back months.

As an example, let’s say you needed to monitor the CPU usage of a critical server. PI could collect this data periodically and save it into the historian for later viewing, review, reporting or analysis. Getting data into the historian can be performed in a number of ways: OSISoft have written ‘interfaces’ to allow collection of data from sources such as SNMP, WMI, TCP-Response, Windows Performance Monitor counters, Netflow and many others.
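To give a flavour of what working with that data looks like, here is an illustrative query using the PI OLE DB provider (which I introduce properly later in this post) to pull three months of those CPU samples. The tag name is hypothetical – your tag naming scheme will differ.

-- Illustrative only: three months of 15-second CPU samples for one server,
-- queried through the PI OLE DB provider's picomp2 archive table.
SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%MYSERVER.CPU%')
AND       ("time" >= '4/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 12:00:00 PM')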

The main point is that once the data is inside the historian, it really doesn’t matter whether the data was collected via SNMP, Performance Monitor, a custom script, etc. All historian data can now be viewed, compared, analysed and reported via a variety of tools in a consistent fashion.

SQL Reporting Services

For those of you not aware, Reporting Services has been part of SQL Server since SQL 2000 and allows for fairly easy generation of reports out of SQL databases. More recently, Microsoft updated SQL Server 2005 with tight integration with SharePoint. Now when creating a report server report, it is "published" to SharePoint in a similar manner to the publishing of InfoPath forms.

Reports can be created in two ways, but I am only going to discuss the Visual Studio method. Using Visual Studio, you are able to design a tailored report consisting of tables and charts. An example of a SQL Reporting Services report in Visual Studio is below (from MSDN).

[Screenshot: example report design in Visual Studio, from MSDN]

The interesting thing about SQL Reporting Services is that it can pull data from sources other than SQL Server databases, including Oracle, Web Services, ODBC and OLE DB. Depending on your data source, reports can be parameterised (did I just make up a new word? 🙂 ). This is particularly important to SharePoint, as you will soon see. It essentially means that you can feed values to your report that customise its output. In doing so, reports can be written and published once, yet be flexible in the sort of data that is returned.

Below is a basic example:

Here is a basic SQL statement that retrieves three fields from a data table called "picomp2". Those fields are "tag", "time" and "value". This example selects values only where "time" is between 12pm and 1pm on July 28th, and where "tag" contains the string "MYSERVER".

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%MYSERVER%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM')

Now what if we wanted to make the value for the tag flexible – so that instead of "MYSERVER", we could use the string "DISK" or "PROCESSOR"? Fortunately, for most data sources, SQL Reporting Services allows you to pass parameters into the SQL. Thus, consider this modified version of the above SQL statement.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

Look carefully at the WHERE clause in the above statement. Instead of specifying %MYSERVER%, I have modified it to %?%. The question mark has special meaning: it is a parameter placeholder, so you will be prompted to specify the string to be added to the SQL on the fly. Below I illustrate the sequence using three screenshots. The first shows the above SQL inside a Visual Studio report project; clicking the exclamation mark will execute this SQL.

[Screenshot: the parameterised SQL inside a Visual Studio report project]

Immediately we get asked to fill out the parameter as shown below. (I have added the string "DISK")

[Screenshot: the parameter prompt, with the string "DISK" entered]

Click OK, and the SQL is executed against the data source, with the matching results returned as shown below. Note that all data returned contains the word "disk" in the name. (I have scrubbed identifiable information to protect the innocent.)

[Screenshot: the query results, all containing "disk" in the tag name]

Reporting Services and SharePoint Integration

Now we get to the important bit. As mentioned earlier, SharePoint and SQL Reporting Services are now tightly integrated. I am not going to explain this integration in detail, but what I am going to show you is how a parameterised query like the example above is handled in SharePoint.

In short, if you want to display a Reporting Services report in SharePoint, you use a web part called the "SQL Server Reporting Services Report Viewer".

[Screenshot: the SQL Server Reporting Services Report Viewer web part]

After dropping this web part onto a SharePoint page, you pick the report to display, and if it happens to be a parameterised report, you see a screen that looks something like the following.

[Screenshot: the report viewer web part prompting for the report parameter]

Notice anything interesting? The web part recognises that the report requires a parameter and asks you to enter it. As you will see in the second article, this is very useful indeed! But first, let’s get Reporting Services talking to the PI historian.

Fun with OLEDB

So, I have described (albeit extremely briefly) enough about PI and Reporting Services. I mentioned earlier that PI is not a relational database, but a time series database. This didn’t stop OSISoft from writing an OLE DB provider 🙂

Thus it is possible to get SQL Reporting Services to query PI using SQL syntax. In fact, the SQL example that I showed in the previous section was actually querying the PI historian.

To get Reporting Services talking to PI, I need to create a report server data source, as shown below. When selecting the data source type, I choose OLE DB from the list; the subsequent screen allows me to pick the specific OLE DB provider for PI.

[Screenshots: creating the OLE DB data source and picking the PI provider]

Now I won’t go into the complete details of configuring the PI OLE DB provider, as my point here is to demonstrate the core principle of using OLE DB to allow SQL Reporting Services to query a non-relational data store.

Once the data source had been configured and tested (see the test button in the above screenshot), I was able to create my SQL query and design a report layout. Here is the sample SQL again.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

As I previously explained, this SQL statement contains a parameter, which is passed to the report when it is run, thereby providing the ability to generate a dynamic report.

Using Visual Studio, I created a new report and added a chart from the toolbox. Once again, the purpose of this post is not to teach report layout, but below is a screenshot illustrating the report being designed. You will see that I have previewed my design and it has displayed a textbox (top left) allowing the parameter to be entered for the report to run. The report has pulled the relevant data from the PI historian and rendered it as a chart.

[Screenshot: the report layout in Visual Studio, previewed with the parameter textbox]

Conclusion

Right! I think that’s about enough for now. To sum up this first post: we talked a little about my IT portal and the principle of "passive compliance"; we examined OSISoft’s PI software and how it can be used to monitor your enterprise infrastructure; and we took a dive into SQL Reporting Services, where I illustrated how we can access PI historian data via OLE DB.

In the second and final post, I will give a brief overview of my IT portal template, and will then demonstrate how I integrated PI data into it, combining IT asset data with real-time performance metrics – with no code 🙂

I hope that you found this post useful. Look out for the second half soon, where this will all come together nicely.

cheers

Paul

 


Why do SharePoint Projects Fail? – Part 8

Send to Kindle

Hi

Well, here we are at part 8 in a series of posts dedicated to the topic of SharePoint project failure. After seven posts, you would think that we had exhausted the various factors that can negatively influence time, budget and SharePoint deliverables? Alas, no! My urge to peel back this onion continues unabated, and thus I present one final post in this series.

Now, if I had my time again, I would definitely re-order these posts, because this topic area is back in the realm of project management. But not to worry – I’ll probably do a ‘reloaded’ version of this series at some point, or turn it into an ebook that is more detailed and more coherently written, along with contributions from friends.

On the remote chance that you are hitting this article first up, it is actually the last of a long series written over the last couple of months (well, the last for now anyway). We started the series with an examination of the pioneering work of Horst Rittel in the 1970s, and subsequently examined some of the notable historical references to wicked problems in IT. From there, we turned our attention to SharePoint specifically and why it, as a product, can be a wicked problem. We looked at the product from several viewpoints: project managers, IT infrastructure architects and application developers.

In the last article, we once again drifted away from SharePoint directly and looked at the contribution of senior management and project sponsors. Almost by definition, at the senior management level it makes no sense to be product-specific, since at this level it is always about business strategy.

In this post, I’d like to examine how best practice frameworks, the intent of which is to reduce the risk of project failure, can actually have the opposite effect. We will look at why this is the case in some detail.

CleverWorkarounds tequila shot rating…

 [4 tequila shots] For readers with a passing interest in best practice frameworks and project management.

[3 tequila shots] For nerds who swear they will never leave the "tech stuff."



Selling MOSS (The moral of the story)


I hope that you had a bit of fun with my first “choose your own adventure” story. (Do yourself a favour and read that first!)

Writing that one was great fun. Did you suddenly think of the names of current and former colleagues as you read it? 🙂

Anyway, now it is time for you to sit on my virtual knee and listen to the moral of that story because, believe it or not, I actually had a really important point to get across.



SharePoint Branding Part 7 -The ‘governance’ of it all..


Well, here we are! After delving into the dark arts where everybody but metrosexual web designers fears to tread (HTML and CSS), we then ventured into the areas that metrosexual web designers truly fear (packaging, deployment and even some C# code!). Finally, we get to the area in which everybody is interested – until it happens to get in their way! (Ooh, I am a cynical old sod tonight.)

That is Governance!



IT Governance Standards: COBIT, ISO17799/27001, ITIL and PMBOK – Part 4


The content of this post is essentially material I compiled for training sessions that I ran last year. It was originally PowerPoint, but I hope that this blog version is useful. Some of the legislative content is probably now out of date, and it was written for an Australian audience – the moral of the story is to do your own research.

For part 1 of this presentation, view this post, part 2 this post and part 3 here.

ISO17799/27001

  • Started life as BS7799-1 and 2
  • BS7799-1 became ISO17799; BS7799-2 recently became ISO27001
  • There is an Australian version "AS/NZS ISO/IEC 17799:2006"
  • Internationally recognized standards for best practice in information security management
  • High level and broad in scope
  • Not a technical standard
  • Not product or technology driven

ISO 17799 is a direct descendant of part of the British Standards Institution (BSI) Information Security Management standard BS 7799. The BSI (www.bsi-global.com) has long been proactive in the evolving arena of information security.

The BS 7799 standard consists of:

  • Part 1: Information Technology-Code of practice for information security management
  • Part 2: Information security management systems-Specification with guidance for use.

BS7799 was revised several times, and by 2000 information security had become headline news and a concern to computer users worldwide. Demand grew for an internationally recognized information security standard under the aegis of an internationally recognized body, such as the ISO. This demand led to the "fast tracking" of BS 7799 Part 1 by the BSI, culminating in its first release by ISO as ISO/IEC 17799:2000 in December 2000.

In 2005, adoption of BS 7799 Part 2 for ISO standardization was completed, and it now forms ISO27001.

ISO17799 vs. ISO27001

  • ISO17799 is a code of practice – like COBIT it deals with ‘what’, not ‘how’.
  • ISO27001 is the ‘specification’ for an Information Security Management System (ISMS). It is the means to measure, monitor and control security management from a top-down perspective. It essentially explains how to apply ISO17799, and it is this part that can currently be certified against.

Unlike COBIT, ISO17799 does not include any maturity model sections for evaluation. (Incidentally, nor does ISO9000.)

ISO 17799 is a code of practice for information security. It details hundreds of specific controls which may be applied to secure information and related assets. It comprises 115 pages organized over 15 major sections.

ISO 27001 is a specification for an Information Security Management System, sometimes abbreviated to ISMS. It is the foundation for third-party audit and certification. It is quite small because it doesn’t actually list the controls; it is more a methodology for auditing and measuring. It comprises 34 pages over 8 major sections.

ISO17799 is:

  • an implementation guide based on suggestions.
  • used as a means to evaluate and build a sound and comprehensive information security program.
  • a list of controls an organization "should" consider.

ISO27001 is:

  • an auditing guide based on requirements.
  • used as a means to audit an organization’s information security management system.
  • a list of controls an organization "shall" address.

ISO17799 Domains

  • Security Policy
  • Security Organization
  • Asset Management
  • Personnel Security
  • Physical and Environmental Security
  • Communications and Operations Management
  • Access Control
  • System Development and Maintenance
  • Business Continuity Management
  • Risk Assessment and Treatment
  • Compliance

ISO17799 divides security up into the domains listed above.

Within each domain, information security control objectives (if you recall, that is the same terminology as COBIT) are specified, and a range of controls are outlined that are generally regarded as best practice means of achieving those objectives.

For each of the controls, implementation guidance is provided.

Specific controls are not mandated, since each organization is expected to undertake a structured information security risk assessment process to determine its requirements before selecting controls that are appropriate to its particular circumstances (the introduction section outlines a risk assessment process).

ISO/IEC 17799 is expected to be renamed ISO/IEC 27002 in 2007. The ISO/IEC 27000 series has been reserved for information security matters with a handful of related standards such as ISO/IEC 27001 having already been released and others such as ISO/IEC 27004 – Information Security Management Metrics and Measurement – currently in draft.

We will examine the Asset Management domain as an example of ISO17799.

ISO17799 in relation to ITIL

  • ISO 17799 only addresses the selection and management of information security controls.
  • It is not interested in underlying implementation details. For example:
    • ISO 17799 is not interested that you have the latest and greatest logging and analysis products.
    • ISO 17799 is not interested in HOW you log.
    • Product selection is usually an operational efficiency issue (i.e. ITIL)
  • ISO 17799 is interested in:
    • WHAT you log (requirements).
    • WHY you log what you do (risk mitigation).
    • WHEN you log (tasks and schedules, window of vulnerability).
    • WHO is assigned log analysis duty (roles and responsibilities).
  • Satisfying these ISO 17799 interests produces defensible specifications and configurations that may ultimately influence product selection and deployment. (feeds into ITIL)

ISO17799 Example: Asset Management

  • 7.1 Responsibility for assets:
    • Objective: To achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner.
    • Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets.
  • 7.2 Information Classification
  • Implementation guidance is offered for all controls

Domain: Asset Management

7 Asset management

Control 7.1 Responsibility for assets

Control Objective: To achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner. Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets.

7.1.1 Inventory of assets

Control

All assets should be clearly identified and an inventory of all important assets drawn up and maintained.

Implementation guidance

An organization should identify all assets and document the importance of these assets. The asset inventory should include all information necessary in order to recover from a disaster, including type of asset, format, location, backup information, license information, and a business value. The inventory should not duplicate other inventories unnecessarily, but it should be ensured that the content is aligned. In addition, ownership (see 7.1.2) and information classification (see 7.2) should be agreed and documented for each of the assets. Based on the importance of the asset, its business value and its security classification, levels of protection commensurate with the importance of the assets should be identified.
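
If you were to capture the fields that guidance asks for, a single inventory entry might look something like the sketch below. The structure and the values are mine, invented for illustration; the standard does not prescribe a format.

    # One asset inventory entry carrying the fields 7.1.1 calls for: type,
    # format, location, backup information, licensing and business value,
    # plus ownership (7.1.2) and classification (7.2). Values are invented.
    asset_inventory = [
        {
            "name": "HR payroll database",
            "type": "information",
            "format": "SQL Server database",
            "location": "Perth data centre, cluster PRD-02",
            "backup": "nightly full, 30-day retention, weekly offsite copy",
            "licence": "SQL Server Enterprise, 2 processors",
            "business_value": "high",
            "owner": "HR Manager",
            "classification": "confidential",
        },
    ]

    for asset in asset_inventory:
        print(f"{asset['name']}: owner={asset['owner']}, value={asset['business_value']}")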

Other information

There are many types of assets, including:

  • information: databases and data files, contracts and agreements, system documentation, research information, user manuals, training material, operational or support procedures, business continuity plans, fallback arrangements, audit trails, and archived information;
  • software assets: application software, system software, development tools, and utilities;
  • physical assets: computer equipment, communications equipment, removable media, and other equipment;
  • services: computing and communications services, general utilities, e.g. heating, lighting, power, and air-conditioning;
  • people, and their qualifications, skills, and experience;
  • intangibles, such as reputation and image of the organization.

Inventories of assets help to ensure that effective asset protection takes place, and may also be required for other business purposes, such as health and safety, insurance or financial (asset management) reasons. The process of compiling an inventory of assets is an important prerequisite of risk management.

7.1.2 Ownership of assets

Control

All information and assets associated with information processing facilities should be owned by a designated part of the organization.

Implementation guidance

The asset owner should be responsible for:

  • ensuring that information and assets associated with information processing facilities are appropriately classified;
  • defining and periodically reviewing access restrictions and classifications, taking into account applicable access control policies.

Ownership may be allocated to:

  • a business process;
  • a defined set of activities;
  • an application; or
  • a defined set of data.

Other information

Routine tasks may be delegated, e.g. to a custodian looking after the asset on a daily basis, but the responsibility remains with the owner.

In complex information systems it may be useful to designate groups of assets, which act together to provide a particular function as ‘services’. In this case the service owner is responsible for the delivery of the service, including the functioning of the assets, which provide it.
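
Continuing the toy inventory sketched earlier, grouping assets into a 'service' with one accountable owner might look like this (the service and asset names are, again, invented):

    # Designating a group of assets as a 'service' with a single accountable
    # owner, as suggested for complex systems. Names are invented.
    payroll_service = {
        "service": "payroll",
        "service_owner": "HR Manager",  # accountable for end-to-end delivery
        "assets": [
            "HR payroll database",
            "payroll application server",
            "payslip print queue",
        ],
    }

    print(f"{payroll_service['service']} is owned by {payroll_service['service_owner']}")
    for asset in payroll_service["assets"]:
        print(f"  includes: {asset}")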

7.1.3 Acceptable use of assets

Control

Rules for the acceptable use of information and assets associated with information processing facilities should be identified, documented, and implemented.

Implementation guidance

All employees, contractors and third party users should follow rules for the acceptable use of information and assets associated with information processing facilities, including:

  • rules for electronic mail and Internet usage (see 10.8);
  • guidelines for the use of mobile devices, especially for use outside the premises of the organization (see 11.7.1).

Specific rules or guidance should be provided by the relevant management. Employees, contractors and third party users using or having access to the organization’s assets should be aware of the limits that exist for their use of the organization’s information, of assets associated with information processing facilities, and of resources. They should be responsible for their use of any information processing resources, and for any such use carried out under their responsibility.

Note: the term ‘owner’, as used in 7.1.2 above, identifies an individual or entity that has approved management responsibility for controlling the production, development, maintenance, use and security of the assets. The term ‘owner’ does not mean that the person actually has any property rights to the asset.

ISO17799 Example: Asset Management

  • 7.1 Responsibility for assets:
  • 7.2 Information Classification:
    • Objective: To ensure that information receives an appropriate level of protection.
    • Information should be classified to indicate the need, priorities, and expected degree of protection when handling the information.
    • Information has varying degrees of sensitivity and criticality. Some items may require an additional level of protection or special handling. An information classification scheme should be used to define an appropriate set of protection levels and communicate the need for special handling measures.
  • Implementation guidance is offered for all controls.

7.2 Information classification

Objective: To ensure that information receives an appropriate level of protection.

Information should be classified to indicate the need, priorities, and expected degree of protection when handling the information.

Information has varying degrees of sensitivity and criticality. Some items may require an additional level of protection or special handling. An information classification scheme should be used to define an appropriate set of protection levels and communicate the need for special handling measures.

7.2.1 Classification guidelines

Control

Information should be classified in terms of its value, legal requirements, sensitivity, and criticality to the organization.

Implementation guidance

Classifications and associated protective controls for information should take account of business needs for sharing or restricting information and the business impacts associated with such needs. Classification guidelines should include conventions for initial classification and reclassification over time, in accordance with some predetermined access control policy (see 11.1.1). It should be the responsibility of the asset owner (see 7.1.2) to define the classification of an asset, periodically review it, and ensure it is kept up to date and at the appropriate level. The classification should take account of the aggregation effect mentioned in 10.7.2. Consideration should be given to the number of classification categories and the benefits to be gained from their use. Overly complex schemes may become cumbersome and uneconomic to use, or may prove impractical. Care should be taken in interpreting classification labels on documents from other organizations, which may have different definitions for the same or similarly named labels.

Other Information

The level of protection can be assessed by analyzing confidentiality, integrity and availability and any other requirements for the information considered.

Information often ceases to be sensitive or critical after a certain period of time, for example, when the information has been made public. These aspects should be taken into account, as over-classification can lead to the implementation of unnecessary controls resulting in additional expense.

Considering documents with similar security requirements together when assigning classification levels might help to simplify the classification task.

In general, the classification given to information is a shorthand way of determining how this information is to be handled and protected.
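
That "shorthand" is easy to picture as a simple lookup: a small set of levels, each mapped to its handling rules. The levels and rules below are invented examples; as 7.2.1 notes, the standard deliberately leaves the scheme to each organization.

    # Classification as shorthand for handling: each level maps to its
    # handling rules. Levels and rules are invented; ISO17799 does not
    # prescribe a scheme.
    handling_rules = {
        "public":       {"storage": "anywhere",          "transmission": "plain",     "disposal": "recycle"},
        "internal":     {"storage": "corporate systems", "transmission": "plain",     "disposal": "shred"},
        "confidential": {"storage": "access-controlled", "transmission": "encrypted", "disposal": "shred"},
    }

    def handling_for(classification):
        """Return the handling rules implied by a classification label."""
        return handling_rules[classification.lower()]

    print(handling_for("Confidential"))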

7.2.2 Information labeling and handling

Control

An appropriate set of procedures for information labeling and handling should be developed and implemented in accordance with the classification scheme adopted by the organization.

Implementation guidance

Procedures for information labeling need to cover information assets in physical and electronic formats.

Output from systems containing information that is classified as being sensitive or critical should carry an appropriate classification label (in the output). The labeling should reflect the classification according to the rules established in 7.2.1. Items for consideration include printed reports, screen displays, recorded media (e.g. tapes, disks, CDs), electronic messages, and file transfers. For each classification level, handling procedures including the secure processing, storage, transmission, declassification, and destruction should be defined. This should also include the procedures for chain of custody and logging of any security relevant event. Agreements with other organizations that include information sharing should include procedures to identify the classification of that information and to interpret the classification labels from other organizations.

Other Information

Labeling and secure handling of classified information is a key requirement for information sharing arrangements. Physical labels are a common form of labeling. However, some information assets, such as documents in electronic form, cannot be physically labeled and electronic means of labeling need to be used. For example, notification labeling may appear on the screen or display. Where labeling is not feasible, other means of designating the classification of information may be applied, e.g. via procedures or meta-data.
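
A trivial sketch of that metadata option: stamp the classification into an electronic document's properties so that handling procedures can key off it. The property name "x-classification" and the document structure are my own assumptions, not anything the standard mandates.

    # Electronic labeling via metadata: attach the classification to a
    # document record so handling can key off it. The "x-classification"
    # property name is an invented convention, not mandated by ISO17799.
    def label(document, classification):
        """Attach a classification label to a document's metadata."""
        document.setdefault("metadata", {})["x-classification"] = classification
        return document

    report = label({"title": "Q3 payroll summary"}, "confidential")
    print(report["metadata"]["x-classification"])  # -> confidential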

Benefits of Best Practices

  • Avoiding re-inventing wheels
  • Reducing dependency on technology experts
  • Increasing the potential to utilize less-experienced staff if properly trained
  • Making it easier to leverage external assistance
  • Overcoming vertical silos and nonconforming behavior
  • Reducing risks and errors
  • Improving quality
  • Improving the ability to manage and monitor
  • Increasing standardization leading to cost reduction
  • Improving trust and confidence from management and partners
  • Creating respect from regulators and other external reviewers
  • Safeguarding and proving value
  • Strengthening supplier/customer relations, and making contractual obligations easier to monitor and enforce

There are also some obvious, but pragmatic, rules that management ought to follow:

  • Treat the implementation initiative as a project activity with a series of phases rather than a ‘one-off’ step.
  • Remember that implementation involves cultural change as well as new processes. Therefore, a key success factor is the enablement and motivation of these changes.
  • Make sure there is a clear understanding of the objectives.
  • Manage expectations. In most enterprises, achieving successful oversight of IT takes time and is a continuous improvement process.
  • Focus first on where it is easiest to make changes and deliver improvements, and build from there one step at a time.
  • Obtain top management buy-in and ownership. This needs to be based on the principle of managing the IT investment well.
  • Avoid the initiative becoming perceived as a purely bureaucratic exercise.
  • Avoid the unfocused checklist approach.

Free Information: Aligning COBIT, ITIL and ISO 17799 for Business Benefit: http://www.itgovernance.co.uk/files/Aligning%20ITIL,%20CobiT,%2017799.pdf

  • IT best practices need to be aligned to business requirements and integrated with one another and with internal procedures.
  • COBIT can be used at the highest level, providing an overall control framework based on an IT process model that should generically suit every organization.
  • Specific practices and standards such as ITIL and ISO 17799 cover discrete areas and can be mapped to the COBIT framework, thus providing a hierarchy of guidance materials.

