A Very Potter Audit – A Best Practices Parable


Once upon a time there lived a rather round wizard named Hocklart who worked at the FogWorts school of witchcraft and wizardry. Hocklart was a very proud wizard, perhaps the proudest in all of FogWorts. His pride did not stem from being a great wizard or a great teacher; in reality, he was neither of those. In fact Hocklart was never much good at wizardry itself, but he knew a lot of people who were – and therein lay the reason for his pride. For what Hocklart lacked in magic ability, he more than made up for with his attention to detail, love of process and determination to rise to the top. From the day he arrived at FogWorts as one apprentice amongst many, he was the first to realise that the influential wizards liked to unwind on Friday nights with a cold ale at the Three and a Half Broomsticks Inn. Hocklart sacrificed many Friday nights at that pub, shouting rounds of frothy brew to thirsty senior wizards, befriending them all, listening to their stories and building up peerless knowledge of FogWorts organisational politics and juicy gossip.

This organisational knowledge brought just enough influence for Hocklart to climb the corporate ladder ahead of his more magically adept colleagues and presently he was very proud. As far as Hocklart was concerned, he had the most important job in all of FogWorts – Manager of the Department Responsible for the Integrity of Potions (or DRIP for short).

You see, in schools of witchcraft and wizardry, wizards and witches concoct all sorts of potions for all sorts of magical purposes. Potions of course require various ingredients in just the right amounts, often prepared in just the right way. Some of these ingredients are highly dangerous and need to be handled with utmost care, while others might be harmless by themselves, but dangerous when mixed with something else or prepared incorrectly. Obviously one has to be careful in such a situation, because a mix-up could be potentially life-threatening or, at the very least, turn you into some sort of rodent or small reptile.

The real reason Hocklart was so proud was his DRIP track record. You see, over the last six years, Hocklart had ensured that FogWorts met all its statutory regulatory requirements as per the International Spell-casters Standards Organisation (ISSO). This included the “ISSO 9000 and a half” series of standards for quality management as well as the “ISSO 14000 and a sprinkle more” series for Environmental Management and Occupational Health and Safety. (Like all schools of witchcraft and wizardry, FogWorts needed to maintain these standards to keep its license to operate current and in good standing.)

When Hocklart became manager of DRIP, he signed himself up for a week-long training course to understand the family of ISSO standards in great detail. Enlightened by this training, he now appreciated the sort of things the ISSO auditors would likely audit FogWorts on. Accordingly, he engaged expensive consultants from an expensive consultancy to develop detailed management plans in accordance with wizardry best practices. To deliver this to the detail that Hocklart required, the consultants conjured a small army of business analysts, enterprise architects, system administrators, coordinators, admin assistants, documenters, quality engineers and asset managers who documented all relevant processes that were considered critical to safety and quality for potions.

Meticulous records were kept of all activities and these were sequestered in a secure filing room which was, among other things, guaranteed to be spell-proof. Hocklart was particularly fond of this secure filing room, with its rows and rows of neatly labelled, colour coded files that lovingly held Material Safety Data Sheets (MSDS) for each potion ingredient. These sheets provided wizards with the procedures for handling or working with the ingredients in a safe manner, including information of interest to wizards such as fulmination point, spell potency, extra-magical strength and reversal spells, as well as routine data such as boiling point, toxicity, health effects, first aid, reactivity, storage, disposal, protective equipment and spill-handling procedures. All potion ingredients themselves were stored in the laboratory in jars with colour coded lids that represented the level of hazard and spell-potency. Ever the perfectionist, Hocklart ensured that all jars had the labels perfectly aligned, facing the front. The system truly was a thing of beauty and greatly admired by all and sundry, including past ISSO auditors, who were mesmerised by what they saw (especially the colour coded filing system and the symmetry of the labels of the jars).

And so it came to pass that for six years Hocklart, backed up by his various consultants and sub-contractors, saw off every ISSO auditor who ever came to audit things. All of them left FogWorts mightily impressed, telling awestruck tales of Hocklart’s quality of documentation, attention to detail and beautiful presentation. This made Hocklart feel good inside. He was a good wizard…nay, a great one: no one in the wizard-world had emerged from an ISSO audit unscathed more than twice in a row…

In the seventh year of his term as FogWorts head of DRIP, Hocklart’s seventh audit approached. Although eager to impress the new auditor (as he had all the previous auditors), Hocklart did not want to appear overly prepared, so he tried to look as nonchalant as possible by casually reviewing a draft memo he was working on as the hour approached. Only you and I, and of course Hocklart himself, knew that in the weeks prior, Hocklart had been at his meticulous best in his preparation. He had reviewed all of the processes and documentation and made sure it was all up to date and watertight. There was no way fault could be found.

Presently, there was a rap on the open door, and in walked the auditor.

“Potter – Chris Potter,” the gentleman introduced himself. “Hocklart I presume?”

Hocklart had never met Potter before so as they shook hands he sized up his opponent. The first thing he noticed was that Potter wasn’t carrying anything – no bag, notebook and not even a copy of the ISSO standards. “Have you been doing this sort of work long?” he enquired.

“Long enough,” came the reply. “Let’s go for a walk…”

“Sure,” replied Hocklart. “Where would you like to go?”

For what seemed like an uncomfortably long time to Hocklart, Potter was silent. Then he replied, “Let’s go and have a look at the lab.”

Ha! Nice try, thought Hocklart as he led the auditor to the potion laboratory. Yesterday I had the lab professionally cleaned with a high potency Kleenit spell and we did a stocktake of the ingredients the week before.

Potter cast his sharp eyes around as they walked (as is common with auditors), but remained silent. Soon enough they arrived at a gleaming, most immaculate lab, with nothing out of place. Without a word, Potter surveyed the scene and walked to the shelves of jars that held the ingredients, complete with colour coded lids and perfectly aligned labels. He picked up one of the red labelled jars that contained Wobberworm mucus – a substance that, while not fatal, was known to cause damage if not handled with care. Holding the jar, he turned to Hocklart.

“You have a Materials Safety Data Sheet for this?”

Hocklart grinned. “Absolutely… would you like to see it?”

Potter did not answer. Instead he continued to examine the jar. After another uncomfortable silence, Potter looked up and announced, “I’ve just got this in my eye.” His eyes fixed on Hocklart.

Hocklart looked at Potter in confusion. The Wobberworm mucus was certainly not in Potter’s eyes because the jar had not been opened.

“What?” he asked hesitantly.

Potter, eyes unwaveringly locked on Hocklart, remained silent. The silence seemed an eternity to Hocklart. Potter glanced at his watch, then, holding up the jar in his hand, repeated more slowly, “I’ve just got this in my eye.”

Hocklart’s heart rate began to rise. What is this guy playing at? he asked himself. Potter, meanwhile, looked at his watch again, looked back at Hocklart and sighed. “It’s been a minute now and my eye’s really starting to hurt. I risk permanent eye damage here… What should you be doing?”

A trickle of sweat rolled down Hocklart’s brow. He had not anticipated this at all.

Potter waited, sighed again and grated, “Where is the Materials Safety Data Sheet with the treatment procedure?”

A cog finally shifted in Hocklart’s mind as he realised what Potter was doing. While he was mightily annoyed that Potter had caught him off guard (he would have to deal with that later), right now he had to play Potter’s game and win.

“We have a secure room with all of that information,” he replied proudly. “I can’t have any of the other wizards messing with my great filing system, it’s my system…”

“Well,” Potter grated, “let’s get in there. My eye isn’t getting any better standing here.”

Hocklart gestured to a side door. “They are in there.” But as he said it his heart skipped a beat as a sense of dread came over him.

“It’s… “ he stammered then cleared his throat. “It’s locked.”

Potter looked straight into Hocklart with a stare that seemed to pierce his very soul. “Now I’m in agony,” he stated. “Where is the key?”

“I keep it in my office…” he replied.

“Well,” Potter said, “I now have permanent scarring on my eye and have lost partial sight. You better get it pronto…”

Hocklart continued to stare at Potter for a moment in disbelief, before turning and running out of the room as fast as his legs could carry his rotund body.

It is common knowledge that wizards are not renowned athletes, and Hocklart was no exception. Nevertheless, he hurtled down corridors, up stairs and through open plan cubicles as if chased by a soulsucker. He steamed into his office, red faced and panting. Sweat poured from his brow as he flung a picture from the wall, revealing a safe. With shaking hands, he entered the combination, getting it wrong twice before managing to open the safe door. He grabbed the key, turned and made for the lab as if his life depended on it.

Potter was standing exactly where Hocklart had left him, and said nothing as Hocklart surged into the room and straight to the door. He unlocked the door and burst into the secure room. Recalling the jar had a red lid, Hocklart made a beeline for the shelf of files with red labels, grabbed the one labelled with the letter W for Wobberworm and started to flick through it. To his dismay, there was no sign of a material safety data sheet for Wobberworm mucus.

“It’s…it’s not…it’s not here,” he stuttered weakly.

“Perhaps it was filed under “M” for mucus?” Potter offered.

“Yes that must be it”, cried Hocklart (who at this stage was ready to grasp at anything). He grabbed the file labelled M and flicked through each page. Sadly, once again there was no sign of Mucus or Wobberworm.

“Well,” said Potter looking at his watch again. “I’m now permanently blind in one eye… let’s see if we can save the other one eh? Perhaps there is a mismatch between the jar colour and the file?”

Under normal circumstances, Hocklart would snort in derision at such a suggestion, but with the clock ticking and one eye left to save, it seemed feasible.

“Dammit”, he exclaimed, “Someone must have mixed up the labels.” After all, while Wobberworm mucus was damaging, it was certainly not fatal and therefore did not warrant a red cap on the jar. This is why I can’t trust anyone with my system! he thought, as he grabbed two orange files (one labelled W for Wobberworm and one labelled M for mucus) and opened them side by side so he could scan them at the same time.

Eureka! On the fifth page of the file labelled M, he found the sheet for Wobberworm mucus. Elated, he showed the sheet to Potter, breathing a big sigh of relief. He had saved the other eye after all.

Potter took the sheet and studied it. “It has all the necessary information, is up to date – and the formatting is really nice, I must say.” He handed the sheet back to Hocklart. “But your system is broken.”

Hocklart was still panting from his sprint to his office and back and as you can imagine, was absolutely infuriated at this. How dare this so-called auditor call his system broken. It had been audited for six years until now, and Potter had pulled a nasty trick on him.

“My system is not broken”, he spat vehemently. “The information was there, it was current and properly maintained. I just forgot my key that’s all. Do you even know how much effort it takes to maintain this system to this level of quality?”

A brief wave of exasperation flickered across Potter’s face.

“You still don’t get it…” he countered. “What was my intent when I told you I spilt Wobberworm mucus in my eye?”

To damn well screw me over, thought Hocklart, before icily replying “I don’t care what your intent was, but it was grossly unfair what you did. You were just out to get me because we have passed ISSO audits for the last six years.”

“No,” replied Potter. “My intent was to see whether you have confused the system with the intent of the system.”

Potter gestured around the room to the files. “This is all great eye candy,” he said. “You have dotted the i’s and crossed the t’s. In fact this is probably the most comprehensive system of documentation I have ever seen. But the entire purpose of this system is to keep people safe, and I just demonstrated that it has failed.”

Hocklart was incredulous. “How can I demonstrate the system works when you deliberately entrapped me?”, he spat in rage.

Potter sighed. “No wizard can predict when he will have an accident, you know,” he countered. “Then it wouldn’t be an accident, would it? For you, this is all about the system and not about the outcome the system enables. It is all about keeping the paperwork up to date and putting it in the files… I’m sorry Hocklart, but you have lost sight of the fact that the system is there to keep people safe. Your organisation is at significant risk and you are blind to that risk. You think you have mitigated it when in fact you have made it worse. For all the time, effort and cost, you have not met the intent of the ISSO standards.”

Hocklart’s left eye started to twitch as he struggled to stop himself from throwing red jars at Potter. “Get out of my sight,” he raged. “I will be reporting your misconduct to my and your superiors this afternoon. I don’t know how you can claim to be an auditor when you were clearly out to entrap me. I will not stand for it and I will see you disciplined for this!”

Potter did not answer. He turned from Hocklart, put the jar of Wobberworm mucus back on the shelf where he had found it and turned to leave.

“For pete’s sake”, Hocklart grated, “the least you could do is face the label to the front like the other jars!”

=========================

 

I wrote this parable after being told the real life version of an audit by a friend of mine… This is very much based on a true story. My Harry Potter obsessed daughter also helped me with some of the finer details. Thanks to Kailash and Mrs Cleverworkarounds for fine tuning…


Paul Culmsee


Why I’ve been quiet…


As you may have noticed, this blog has been a bit of a dead zone lately. There are several very good reasons for this – one being that a lot of my creative energy has been going into co-writing a book – and I thought it was time to come clean on it.

So first up, just because I get asked this all the time, the book is definitely *not* “A humble tribute to the leave form – The Book”! In fact, it’s not about SharePoint per se, but rather the deeper dark arts of team collaboration in the face of really complex or novel problems.

It was late 2006 when my own career journey took an interesting trajectory, as I started getting into sensemaking and acquiring the skills necessary to help groups deal with really complex, wicked problems. My original intent was to reduce the chances of SharePoint project failure, but in learning these skills I now find myself performing facilitation, goal alignment and sensemaking in areas miles away from IT. In the process I have been involved with projects of considerable complexity and uniqueness that make IT look pretty easy by comparison. The other fringe benefit is being able to sit in a room and listen to the wisdom of some top experts in their chosen disciplines as they work together.

Through this work and the professional and personal learning that came with it, I now have some really good case studies that use unique (and I mean unique) approaches to tackling complex problems. I have a keen desire to showcase these and explain why our approaches worked.

My leanings towards sensemaking and strategic issues would be apparent to regular readers of CleverWorkarounds. It is therefore no secret that this blog is not really much of a technical SharePoint blog these days. The articles on branding, ROI, and capacity planning were written in 2007, just before the mega explosion of interest in SharePoint. This time around, there are legions of excellent bloggers who are doing a tremendous job on giving readers a leg-up onto this new beast known as SharePoint 2010.

(mock-up of the book cover)

So back to the book. Our tentative title is “Beyond Best Practices” and it’s an ambitious project, co-authored with Kailash Awati – the man behind the brilliant eight to late blog. I had long been a fan of Kailash’s work, and was always impressed at the depth of research and effort that he put into his writing. Kailash is a scarily smart guy with two PhDs under his belt and to this day, I do not think I have ever mentioned a paper or author to him that he hasn’t read already. In fact, usually he has read it, checked out the citations and tells me to go and read three more books!

Kailash writes with the sort of rigour that I aspire to and will never achieve, thus when the opportunity of working with him on a book came up, I knew that I absolutely had to do it and that it would be a significant undertaking indeed.

To the left is a mock-up picture to try and convey where we are going with this book. See the guy on the right? Is he scratching his head in confusion, saluting or both? (note, this is our mockup and the real thing may look nothing like this)

This book dives into the seedy underbelly of organisational problem solving, and does so in a way that no other book has thus far attempted. We examine why the very notion of “best practices” often makes no sense, and why best practices have such a high propensity to go wrong. We challenge some mainstream ideas by shining light on some obscure, but highly topical and interesting research that some may consider radical or heretical. To counter the somewhat dry nature of some of this research (the topics are really interesting but the style in which academics write can put insomniacs to sleep), we give it a bit of the cleverworkarounds style treatment and are writing in a conversational style that loses none of the rigour, but won’t have you nodding off on page 2. If you liked my posts where I use odd metaphors like boy bands to explain SharePoint site collections, the Simpsons to explain InfoPath or death metal to explain records versus collaborative document management, then you should enjoy our journey through the world of cognitive science, memetics, scientific management and Willy Wonka (yup – Willy Wonka!).

Rather than just bleat about what the problems with best-practices are, we will also tell you what you can do to address these issues. We back up this advice by presenting a series of practical case studies, each of which illustrates the techniques used to address the inadequacies of best practices in dealing with wicked problems. In the end, we hope to arm our readers with a bunch of tools and approaches that actually work when dealing with complex issues. Some of these case studies are world unique and I am very proud of them.

Now at this point in the writing, this is not just an idea with an outline and a catchy title. We have been at this for about six months, and the results thus far (some 60-70,000 words) have been very, very exciting. Initially, we really had no idea whether the combination of our writing styles would work – whether we could take the degree of depth and skill of Kailash with my low-brow humour and my quest for cheap laughs (I am just as likely to use a fart joke if it helps me get a key point across)…

… But signs so far are good so stay tuned 🙂

Thanks for reading

 

Paul Culmsee

www.sevensigma.com.au


"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 2


Hi again

This article is the second half of a pair of articles explaining how I integrated real-time performance data with a SharePoint-based IT operational portal, designed around the principle of passive compliance with legislative or organisational controls.

In the first post, I introduced the PI product by OSIsoft, and explained how SQL Reporting services is able to generate reports from more than just SQL Server databases. I demonstrated how I created a report server report from performance data stored in the PI historian via an OLE DB provider for PI, and I also demonstrated how I was able to create a report that accepted a parameter, so that the output of the report could be customised.

I also showed how SharePoint provides a facility to enter parameter data when using the report viewer web part.

We will now conclude by explaining a little about my passively compliant IT portal, and how I enhanced it with seamless integration of the real-time performance data stored in the PI historian.

Just to remind you, here is my conceptual diagram in "acoustic Visio" format

The IT portal

This is the ultra-brief explanation of the thinking that went into my IT portal.

I spent a lot of time thinking about how critical IT information could be stored in SharePoint to provide quick and easy access to information, make tasks like change/configuration management more transparent and efficient, and capture knowledge and documentation. I was influenced considerably by ISO17799 as it was called back then, especially in the area of asset management. I liked the use of the term "IT Assets" in ISO17799 and the strong emphasis on ownership and custodianship.

ISO defined asset as "any tangible or intangible thing that has value to an organization". It maintained that "…to achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner. Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets."

The idea of delegation is that an owner of an asset can delegate the day-to-day management of that asset to a custodian, but the owner still bears ultimate responsibility.

So I developed a portal around this idea, but soon hit some constraints due to the broad ISO definition of an asset. Since assets have interdependencies, geeks have a tendency to over-complicate things and produce a messy web of interdependencies. After some trial and error, as well as some soul searching, I was able to come up with a 3 tier model that worked.

I changed the use of the word "asset", and split it into three broad asset types.

  • Devices (eg Server, SAN, Switch, Router, etc)
  • IT Services (eg Messaging, Databases, IP Network, etc)
  • Information Assets (eg Intranet, Timesheets, etc)
(diagram: the three-tier asset model)

The main thing to note about this model is the difference between an IT Service and an Information Asset. The distinction is in the area of ownership. In the case of an "Information Asset", the owner of that asset is not IT. IT are a service provider, and by definition the IT view of the world is different to the rest of the organisation. An "IT Service", on the other hand, is always owned by IT, and it is the IT services that underpin information assets.

So there is a hierarchical relationship there. You can’t have an information asset without an IT service providing it. Accountabilities are clear also. IT own the service, but are not responsible for the information asset itself – that’s for other areas of the organisation. (An Information Asset can also depend on other information assets, as well as on many IT services.)

While this may sound so obvious that it’s not worth writing, my experience is that IT departments often view information assets and the services providing those assets as one and the same thing.
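If it helps to see the three-tier model in something more concrete than prose, here is a minimal sketch in Python. The class names, instance names and `owner` values are mine, purely for illustration; the point is the ownership split the model enforces.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str                 # eg a server, SAN or switch

@dataclass
class ITService:
    name: str
    owner: str = "IT"         # an IT Service is always owned by IT
    devices: list = field(default_factory=list)     # devices underpinning it
    depends_on: list = field(default_factory=list)  # other IT Services it needs

@dataclass
class InformationAsset:
    name: str
    owner: str                # never IT -- a business owner must be named
    services: list = field(default_factory=list)    # IT Services it relies on

# You can't have an information asset without an IT service providing it
switch = Device("core-switch-01")
ip_network = ITService("IP Network", devices=[switch])
databases = ITService("Databases", depends_on=[ip_network])
intranet = InformationAsset("Intranet", owner="Corporate Services",
                            services=[databases, ip_network])
```

Note that `ITService` defaults its owner to IT, while `InformationAsset` forces you to name a business owner: the accountability split described above is baked into the structure rather than left to convention.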

Devices and Services

So, as an IT department, we provide a variety of services to the organisation. We provide them with an IP network, potentially a voice over IP system, a database subsystem, a backup and recovery service, etc.

It is fairly obvious that each IT service consists of a combination of IT devices (and often other IT services). An IP network is an obvious and basic example. The devices that underpin the "IP Network" service are routers, switches and wireless access points.

For devices we need to store information like

  • Serial Number
  • Warranty Details
  • Physical Location
  • Vendor information
  • Passwords
  • Device Type
  • IP Address
  • Change/Configuration Management history
  • IT Services that depend on this device (there is usually more than 1)

For services, we need to store information like

  • Service Owner
  • Service Custodian
  • Service Level Agreement (uptime guarantees, etc)
  • Change/Configuration Management history
  • IT Devices that underpin this service (there is usually more than 1)
  • Dependency relationships with other IT services
  • Information Assets that depend on this IT service
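In list terms, each of those bullet points becomes a column, and a half-populated item is then easy to detect. A throwaway sketch (the column names are my own shorthand, not an actual list schema):

```python
# Shorthand for the columns listed above -- not real SharePoint internal names
DEVICE_FIELDS = {
    "SerialNumber", "Warranty", "PhysicalLocation", "Vendor", "Passwords",
    "DeviceType", "IPAddress", "ChangeHistory", "DependentServices",
}
SERVICE_FIELDS = {
    "ServiceOwner", "ServiceCustodian", "SLA", "ChangeHistory",
    "UnderpinningDevices", "ServiceDependencies", "DependentInformationAssets",
}

def missing_fields(item: dict, required: set) -> set:
    """Return the required columns that an item has not populated yet."""
    return required - {k for k, v in item.items() if v}

# A partially filled-in device item (values invented for illustration)
dm01 = {"SerialNumber": "SN-4471", "DeviceType": "Server", "IPAddress": "10.1.1.20"}
```

Running `missing_fields(dm01, DEVICE_FIELDS)` would flag the warranty, location and other columns still to be filled in, which is exactly the sort of completeness check a CMDB needs.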

Keen eyed ITIL practitioners will realise that all I am describing here is a SharePoint based CMDB. I have a site template, content types, lists, event handlers and workflows that allow the above information to be managed in SharePoint. Below are three snippets showing sections of the portal, drilling down into the device view by location, before showing the actual information about the server "DM01".

(screenshots: drilling down through the device view by location, then the details page for server DM01)

Now the above screen is the one that I am interested in. You may also notice that the page above is a system generated page, based on the list called "IT Devices". I want to add real-time performance data to this screen, so that as well as being able to see asset information about a device, I also want to see its recent performance.

Modifying a system page

I talked about making modifications to system pages in detail in part 3 of my branding using Javascript series. Essentially, a system page is an automatically generated ASPX page that SharePoint creates. Think about what happens each time you add a column to a list or library. The NewForm.aspx, EditForm.aspx and DispForm.aspx are modified, as they have to be rebuilt to display the new or modified column.

SharePoint makes it a little tricky to edit these pages on account of custom modifications running the risk of breaking things. But as I described in the branding series, using the ToolPaneView hack does the job for us in a safe manner.
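For those who don't want to dig up that series, the hack boils down to a query string tweak: append `ToolPaneView=2` to the form's URL and SharePoint renders the page with the web part tool pane open, so extra web parts can be added. A trivial helper illustrates the URL manipulation (the portal URLs are invented; see the branding series for the full caveats):

```python
def toolpane_url(form_url: str) -> str:
    """Append ToolPaneView=2, which opens a SharePoint system page with the
    web part tool pane visible so additional web parts can be added."""
    separator = "&" if "?" in form_url else "?"
    return form_url + separator + "ToolPaneView=2"

# eg toolpane_url("http://portal/Lists/Devices/DispForm.aspx?ID=12")
```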

So using this hack, I was able to add a report viewer web part to the DispForm.aspx of the "IT Devices" list, as shown below.

(screenshots: adding the report viewer web part to DispForm.aspx via the ToolPaneView hack)

Finally, we have our report viewer web part, linked to the report that accesses PI historian data. As you can see below, the report I created expects two parameters to be supplied. These parameters will be used to retrieve specific performance data and turn it into a chart.

(screenshot: the report viewer web part prompting for its two parameters)

Web Part Connection Magic

Now as it stands, the report is pretty much useless to us, in the sense that we have to enter parameters manually to get it to present the information that we want. But on the same page as this report is a bunch of interesting information about a particular device, such as its name, IP address, location and description. Wouldn’t it be great if we could somehow pass the device name (or some other device information) to the report web part automatically?

That way, each time you opened up a device entry, the report would retrieve performance information for the device currently being viewed. That would be very, very cool.

Fortunately for us, it can be easily done. The report services web part, like many other web parts, is connectable. This means it can accept information from other web parts, which in turn means the parameters can be passed to the report automatically!

Wohoo!

So here is how I am going to do this. I am going to add two new columns to my device list. Each column will be the parameter passed to the report. This way, I can tailor the report being generated on a device by device basis. For example, for a SAN device I might want to report on disk I/O, but a server I might want CPU. If I store the parameter as a column, the report will be able to retrieve whatever performance data I need.

Below shows the device list with the two additional columns added. The columns are called TAGPARAM1 and TAGPARAM2. The next screen shows the values I have entered for each column against the device DM01. These values will be passed to the report server report and used to find matching performance data.

(screenshots: the device list with the TAGPARAM1 and TAGPARAM2 columns, and the values entered for DM01)

So the next question becomes: how do I now transparently pass these two parameters to the report? We now have the report and the parameters on the same page, but no obvious means to pass the value of TagParam1 and TagParam2 to the report viewer web part.

The answer my friends, is to use a filter web part!

Using the toolpane view hack, we once again edit the view item page for the Device List. We now need to add two additional web parts (because we have two parameters). Below is the web part to add.

(screenshot: the Page Field filter web part being added)

The result should be a screen looking like the figure below

(screenshot: the page with the two filter web parts added)

Filter web parts are not visible when a page is rendered in the browser. They are instead used to pass data between other web parts. There are various filter web parts that work in different ways. The Page Field filter is capable of passing the value of any column to another web part.

Confused? Check out how I use this web part below…

The screen above shows that the two Page Field filter web parts are not configured. They are prompting you to open the tool pane and configure them. Below is the configuration pane for the page field filter. Can you see how it has enumerated all of the columns for the "IT Devices" list? In the second and third screens we have chosen TagParam1 for the first page filter web part and TagParam2 for the second.

(screenshots: the configuration pane enumerating the list columns, then TagParam1 and TagParam2 being selected)

Now take a look at the page in edit mode. The page filters now report that they are not connected. All we have done so far is tell the web parts which columns to grab the parameter values from.

(screenshot: the page in edit mode, with the filters showing as not connected)

Almost Home – Connecting the filters

So now we need to connect each Page Field filter web part to the report viewer web part. This will have the effect of passing to the report viewer web part, the value of TagParam1 and TagParam2. Since these values change from device to device, the report will display unique data for each device.
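Stripped of the UI, each filter is doing nothing more than a per-item column lookup: read one column off the item being viewed and hand its value to a report parameter. In sketch form (the column values and PI tag names are invented; real ones depend on your historian):

```python
def send_filter_values(item: dict) -> dict:
    """Mimic the two Page Field filters: each reads one column from the
    item currently being viewed and passes its value to the matching
    report parameter on the report viewer web part."""
    return {"TagParam1": item["TAGPARAM1"], "TagParam2": item["TAGPARAM2"]}

# The item for server DM01, as the view page might see it
dm01_item = {
    "Title": "DM01",
    "TAGPARAM1": "DM01.CPU.Total",      # invented PI tag name
    "TAGPARAM2": "DM01.Mem.Available",  # invented PI tag name
}
report_params = send_filter_values(dm01_item)
```

Because the lookup runs against whichever item is open, a SAN item carrying disk I/O tags and a server item carrying CPU tags will each drive the same report to show different data.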

To connect each page filter web part, you click the edit dropdown for each page filter. From the list of choices, choose "Connections", and it will expand out to the choice of "Send Filter Values To". If you click on this, you will be prompted to send the filter values to the report viewer web part on the page. Since in my example the report viewer web part requires two parameters, you will be asked to choose which of the two parameters to send the value to.

image image

Repeat this step for both page filter web parts and something amazing happens: we see a performance report on the devices page!! The filter has passed the values of TagParam1 and TagParam2 to the report and it has retrieved the matching data!

image

Let’s now save this page and view it in all of its glory! Sweet eh!

image 

Conclusion (and Touchups)

So let’s step back and look at what we have achieved. We can visit our IT Operations portal, open a device in the devices list and immediately view real-time performance statistics for that device. Since I am using a PI historian, the performance data could have been collected via SNMP, NetFlow, ping, WMI, Performance Monitor counters, a script or many other methods. But we do not need to worry about that; we just ask PI for the data that we want and display it using Reporting Services.

Because the parameters are stored as additional metadata with each device, you have complete control over the data being presented back to SharePoint. You might decide that servers should always return CPU stats, but a storage area network return disk I/O stats. It is all controllable just by the values you enter into the columns being used as report parameters.
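To make the ‘glue’ concrete, here is a rough sketch (in Python, purely illustrative; the device names, columns and tag masks are invented) of the data flow that the Page Field filters and the report viewer web part implement between them. Each device row supplies its own parameter values, so each device page drives its own report query.

```python
# Purely illustrative: the metadata-driven parameter flow that the
# SharePoint web parts implement for us. All names below are invented.
devices = [
    {"Name": "FILESERVER1", "TagParam1": "FILESERVER1.PROCESSOR", "TagParam2": "FILESERVER1.MEMORY"},
    {"Name": "SAN1", "TagParam1": "SAN1.DISK.READS", "TagParam2": "SAN1.DISK.WRITES"},
]

def report_sql(tag_mask1, tag_mask2):
    # Stand-in for the parameterised report: the SQL the report viewer
    # web part would run once the two filter values are connected.
    return (
        "SELECT tag, time, value FROM picomp2 "
        f"WHERE tag LIKE '%{tag_mask1}%' OR tag LIKE '%{tag_mask2}%'"
    )

# Each device's columns feed its own report, so a server can return CPU
# stats while a SAN returns disk I/O stats, with no change to the report.
for device in devices:
    print(device["Name"], "->", report_sql(device["TagParam1"], device["TagParam2"]))
```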

The only additional thing that I did was to use my CleverWorkArounds Hide Field Web Part, to subsequently hide the TagParam1 and TagParam2 fields from display, so that when IT staff are looking at the integrated asset and performance data, the ‘behind the scenes’ glue is hidden from them.

So looking at this from an IT portal/compliance point of view, we now have an integrated platform where we can:

  • View device asset information (serial number, purchase date, warranty, physical location)
  • View IT Service information (owners, custodians and SLA’s)
  • View Information Asset information (owners, custodians and SLA’s)
  • Understand the relationships between devices, services and information assets
  • Access standards, procedures and work instructions pertaining to devices, services and information assets
  • Manage change and configuration management for devices, services and information assets
  • Quickly and easily view detailed, real time performance statistics of devices

All in all, not a bad afternoon’s work really! And not one line of code!

As I said way back at the start of the first article, this started out as a quick idea for a demo and it seems to have a heck of a lot of potential. Of course, I used PI, but there is no reason why you can’t use similar techniques in your own IT portals to integrate your operational and performance data into the one place.

I hope that you enjoyed this article and I look forward to feedback.

<Blatant Plug>Want an IT Portal built in passive compliance? Then let’s talk!</Blatant Plug>

cheers

Paul Culmsee


OSISoft addendum

Someone at OSISoft will at some point read this article and wonder why I didn’t write about RTWebparts. Essentially, PI has some web parts that can be used to display historian data in SharePoint. There were two reasons why I did not mention them.

  1. To use RTWebparts you have to install a lot of PI components onto your web front end servers. Nothing wrong with that, but with Reporting Services, those components only need to go onto the report server. For my circumstances and what I had to demonstrate, this was sufficient.
  2. This post was actually not about OSISoft or PI per se. It was used to demonstrate how it is possible to use SharePoint to integrate performance and operational information into one integrated location. In the event that you have PI in your enterprise and want to leverage it with SharePoint, I suggest you contact me about it because we do happen to be very good at it 🙂

 


"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 1


Hi

This is one of those nerdy posts in the category of "look at me! look at me! look at what I did, isn’t it cool?". Normally application developers write posts like this, demonstrating some cool bit of code they are really proud of. Being only a part-time developer and more of a security/governance/compliance kind of guy, my "ain’t it cool" post is a little different.

So if you are a non-developer and are already tempted to skip this one, please reconsider. If you are a CIO, IT manager, Infrastructure manager or are simply into ITIL, COBiT or compliance around IT operations in general, this post may have something for you. It offers something for knowledge managers too. Additionally it gives you a teensy tiny glimpse into my own personal manifesto of how you can integrate different types of data to achieve the sort of IT operational excellence that is a step above where you may be now.

Additionally, if you are a Cisco nerd or an infrastructure person who has experience with monitoring, you will also find something potentially useful here.

In this post, I am going to show you how I leveraged three key technologies, along with a dash of best practice methodology to create an IT Portal that I think is cool.

The Premise – "Passive Compliance"

In my career I have filled various IT roles and drunk the kool-aid of most of the vendors, technologies, methodologies and practices. Nowadays I am a product of all of these influences, leaving me slightly bitter and twisted, somewhat of a security nerd, but at the same time fairly pragmatic and always mindful of working to business goals and strategy.

One of the major influences on my current view of the world was a role working for OSI Software from 2000-2004, via a former subsidiary company called WiredCity. OSISoft develop products in the process control space, and their core product is a data historian called PI. At WiredCity, we took this product out of the process control market and right into the IT enterprise, and OSISoft now market this product as "IT Monitor". I’ll talk about PI/IT Monitor in detail in a moment, but humour me and just accept my word that it is a hellishly fast number crunching database for storing lots of juicy performance data.

In addition, I like to read all the various best practice frameworks and methodologies, and I write about them a fair bit. As a result of this interest, I have a SharePoint IT portal template that I have developed over the last couple of years, designed around the guiding principle of passive compliance. That is, by utilising the portal for various IT operational tasks, structured in a certain way, you implicitly address some COBiT/ISO27001 controls as well as leverage ITIL principles. I designed it in such a way that an auditor could take a look at it and it would demonstrate the assurance around IT controls for operational system support. It also had the added benefit of being a powerful addition to disaster recovery strategy. (In the second article, to be published soon, I will describe my IT portal in more detail.)

Finally, I have SQL Reporting Services integrated with SharePoint, used to present enterprise data from various databases into the IT Portal – also as part of the principle of passive compliance via business intelligence.

Recently, I was called in to help conduct a demonstration of the aforementioned PI software, so I decided to add PI functionality to my existing "passive compliance" IT portal to integrate asset and control data (like change/configuration management) along with real-time performance data. All in all I was very pleased with the outcome, as it was done in a day with pretty impressive effect. I was able to do this utilising various features of all three of the above applications with a few other components, and pretty much wrote no code at all.

Below I have built a conceptual diagram of the solution. Unfortunately I don’t have Visio installed, but I found a great freeware alternative 😉

Image0003

I know, there is a lot to take in here (click to enlarge), but if you look in the center of the diagram, you will see a mock up of a SharePoint display. All of the other components drawn around it combine to produce that display. I’ll now talk about the main combination, PI and SQL Reporting Services.

A slice of PI

Okay so let’s explain PI because I think most people have a handle on SharePoint :-).

To the right is the Terminator looking at data from a PI historian showing power flow in California. So this product is not a lightweight at all. Its heritage lies in this sort of critical industrial monitoring.

Just to get the disclaimers out of the way, I do not work for OSISoft anymore nor are they even aware of this post. Just so hard-core geeks don’t flame me and call me a weenie, let me just say that I love RRDTool and SmokePing and prefer Zabbix over Nagios. Does that make me cool enough to make comment on this topic now? 🙂  

Like RRDTool, PI is a data historian, designed and optimised for time-series data.

"Data historian? Is that like a database of some kind?", you may ask. The answer is yes, but it’s not a relational database like SQL Server or Oracle. Instead, it is a "real-time, time series" data store. The English translation of that definition is that PI is extremely efficient at storing time-based performance data.

"So what, you can store that in SQL Server, MySQL or Oracle", I hear you say. Yes, you most certainly can. But PI was designed from the ground up for this kind of data, whereas relational databases were not. As a result, PI is blisteringly fast and efficient. Pulling, say, 3 months of data collected at 15-second intervals takes literally seconds, with no loss of fidelity.
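To put "3 months of data at 15-second intervals" in perspective, here is the back-of-the-envelope arithmetic for a single monitored metric:

```python
# Back-of-the-envelope: how many samples "3 months at 15-second
# intervals" actually is, for one monitored metric.
SAMPLE_INTERVAL_SECONDS = 15
samples_per_day = (24 * 60 * 60) // SAMPLE_INTERVAL_SECONDS
samples_per_quarter = samples_per_day * 90  # roughly 90 days in 3 months
print(samples_per_day)      # 5760 samples per metric per day
print(samples_per_quarter)  # 518400 rows a 3-month query must return
```

Over half a million rows per metric, and a historian is typically tracking many metrics at once, which is why purpose-built time-series storage pays off.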

As an example, let’s say you needed to monitor CPU usage of a critical server. PI could collect this data periodically, save it into the historian for later view/review/reporting or analysis. Getting data into the historian can be performed a number of ways. OSISoft have written ‘interfaces’ to allow collection of data from sources such as SNMP, WMI, TCP-Response, Windows Performance Monitor counters, Netflow and many others.

The main point is that once the data is inside the historian, it really doesn’t matter whether the data was collected via SNMP, Performance Monitor, a custom script, etc. All historian data can now be viewed, compared, analysed and reported via a variety of tools in a consistent fashion.

SQL Reporting Services

For those of you not aware, Reporting Services has been part of SQL Server since SQL 2000 and allows for fairly easy generation of reports out of SQL databases. More recently, Microsoft updated SQL Server 2005 with tight integration with SharePoint. Now when creating a report server report, it is "published" to SharePoint in a similar manner to the publishing of InfoPath forms.

Creating reports can be performed in two ways, but I am only going to discuss the Visual Studio method. Using Visual Studio, you are able to design a tailored report consisting of tables and charts. An example of a SQL Reporting Services report in Visual Studio is below (from MSDN).

 

The interesting thing about SQL Reporting Services is that it can pull data from data sources other than SQL Server databases. Data sources include Oracle, Web Services, ODBC and OLE DB. Depending on your data source, reports can be parameterised (did I just make up a new word? 🙂 ). This is particularly important to SharePoint, as you will soon see. It essentially means that you can feed your report values that customise the output of that report. In doing so, reports can be written and published once, yet be flexible in the sort of data that is returned.

Below is a basic example:

Here is a basic SQL statement that retrieves three fields from a data table called "picomp2": "tag", "time" and "value". This example selects values only where "time" is between 12pm and 1pm on July 28th and where "tag" contains the string "MYSERVER".

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%MYSERVER%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM')

Now what if we wanted to make the value for TAG flexible? So instead of "MYSERVER", use the string "DISK" or "PROCESSOR". Fortunately for most data sources, SQL Reporting Services allows you to pass parameters into the SQL. Thus, consider this modified version of the above SQL statement.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

Look carefully at the WHERE clause in the above statement. Instead of specifying '%MYSERVER%', I have modified it to '%' + ? + '%'. The question mark has special meaning: it means you will be prompted to supply the string to be substituted into the SQL on the fly. Below I illustrate the sequence using three screenshots. The first screenshot shows the above SQL inside a Visual Studio report project. Clicking the exclamation mark will execute this SQL.

image

Immediately we get asked to fill out the parameter as shown below. (I have added the string "DISK")

image

Click OK, and the SQL will be executed against the data source, with the matching results returned as shown below. Note that all data returned contains the word "disk" in the name. (I have scrubbed identifiable information to protect the innocent.)

image
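If you are curious what that question-mark parameter looks like outside of Visual Studio, here is a minimal sketch using Python’s DB-API, with an in-memory SQLite table standing in for the PI OLE DB provider (the table layout mimics picomp2, but the tag names and values are made up). The ? placeholder behaves the same way: you supply the value when the query runs, rather than hard-coding it into the SQL.

```python
import sqlite3

# In-memory SQLite table standing in for PI's "picomp2" table.
# The tag names and values below are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE picomp2 (tag TEXT, time TEXT, value REAL)")
conn.executemany(
    "INSERT INTO picomp2 VALUES (?, ?, ?)",
    [
        ("MYSERVER.DISK.PercentFree", "2008-07-28 12:00:15", 42.5),
        ("MYSERVER.PROCESSOR.Total",  "2008-07-28 12:00:15", 17.0),
        ("OTHERBOX.DISK.PercentFree", "2008-07-28 12:00:15", 80.1),
    ],
)

# The '?' placeholder is filled in at execution time, the same idea as
# the report parameter above. (SQLite concatenates strings with ||,
# where the OLE DB example uses +.)
param = "DISK"
rows = conn.execute(
    "SELECT tag, time, value FROM picomp2 WHERE tag LIKE '%' || ? || '%'",
    (param,),
).fetchall()
print(rows)  # only the tags containing "DISK" come back
```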

Reporting Services and SharePoint Integration

Now we get to the important bit. As mentioned earlier, SharePoint and SQL Reporting Services are now tightly integrated. I am not going to explain this integration in detail, but what I am going to show you is how a parameterised query like the example above is handled in SharePoint.

In short, if you want to display a Reporting Services report in SharePoint, you use a web part called the "SQL Server Reporting Services Report Viewer".

image

After dropping this webpart onto a SharePoint page, you pick the report to display, and if it happens to be a parameterised report, you see a screen that looks something like the following.

image

Notice anything interesting? The web part recognises that the report requires a parameter and asks you to enter it. As you will see in the second article, this is very useful indeed! But first let’s get Reporting Services talking to the PI historian.

Fun with OLEDB

So, I have described (albeit extremely briefly) enough about PI and Reporting Services. I mentioned earlier that PI is not a relational database, but a time series database. This didn’t stop OSISoft from writing an OLEDB provider 🙂

Thus it is possible to get SQL reporting services to query PI using SQL syntax. In fact the SQL example that I showed in the previous section was actually querying the PI historian.

To get Reporting Services talking to PI, I need to create a report server Data Source as shown below. When selecting the data source type, I choose OLE DB from the list. The subsequent screen allows you to pick the specific OLE DB provider for PI from the list.

image image

Now I won’t go into the complete details of completing the configuration of the PI OLE DB provider, as my point here is to demonstrate the core principle of using OLE DB to allow SQL Reporting Services to query a non-relational data store.

Once the data source had been configured and tested (see the test button in the above screenshot), I was able to then create my SQL query and then design a report layout. Here is the sample SQL again.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

As I previously explained, this SQL statement contains a parameter, which is passed to the report when it is run, thereby providing the ability to generate a dynamic report.

Using Visual Studio I created a new report and added a chart from the toolbox. Once again the purpose of this post is not to teach how to create a report layout, but below is a screenshot to illustrate the report layout being designed. You will see that I have previewed my design and it has displayed a textbox (top left) allowing the parameter to be entered for the report to be run. The report has pulled the relevant data from the PI historian and rendered it in a nice chart that I created.

image

Conclusion

Right! I think that’s about enough for now. To sum up this first post, we talked a little about my IT portal and the principle of "passive compliance". We examined OSISoft’s PI software and how it can be used to monitor your enterprise infrastructure. We then took a dive into SQL Reporting Services and I illustrated how we can access PI historian data via OLE DB.

In the second and final post, I will introduce my IT Portal template in a brief overview, and will then demonstrate how I was able to integrate PI data into my IT portal to combine IT asset data with real-time performance metrics with no code 🙂

I hope that you found this post useful. Look out for the second half soon, where this will all come together nicely.

cheers

Paul

 


Why do SharePoint Projects Fail? – Part 8


Hi

Well, here we are at part 8 in a series of posts dedicated to the topic of SharePoint project failure. Surely, after seven posts, you would think we had exhausted the various factors that can have a negative influence on time, budget and SharePoint deliverables? Alas, no! My urge to peel back this onion continues unabated, and thus I present one final post in this series.

Now if I had my time again, I would definitely re-order these posts, because this topic area is back in the realm of project management. But not to worry. I’ll probably do a ‘reloaded’ version of this series at some point in the future or make an ebook that is more detailed and more coherently written, along with contributions from friends.

In the remote chance that you are hitting this article first up, it is actually the last of a long series written over the last couple of months (well, last for now anyway). We started this series with an examination of the pioneering work by Horst Rittel in the 1970s and subsequently examined some of the notable historical references to wicked problems in IT. From there, we turned our attention to SharePoint specifically and why it, as a product, can be a wicked problem. We looked at the product from various viewpoints: project managers, IT infrastructure architects and application developers.

In the last article, we once again drifted away from SharePoint directly and looked at the contribution of senior management and project sponsors. Almost by definition, when looking at the senior management level, it makes no sense to be product specific, since at this level it is always about business strategy.

In this post, I’d like to examine when best practice frameworks, whose intent is to reduce the risk of project failure, actually have the opposite effect. We will look at why this is the case in some detail.

CleverWorkarounds tequila shot rating..

 image image image image  For readers with a passing interest in best practice frameworks and project management.

imageimageimage For nerds who swear they will never leave the "tech stuff."

Continue reading


Selling MOSS (The moral of the story)


I hope that you had a bit of fun with my first “choose your own adventure” story. (Do yourself a favour and read that first!)

Writing that one was great fun. Did you suddenly think of the names of current and former colleagues as you read it? 🙂

Anyway, now it is time for you to sit on my virtual knee and listen to the moral of that story because believe it or not, I actually had a really important point to get across.

Continue reading


SharePoint Branding Part 7 -The ‘governance’ of it all..


Well, here we are! After delving into dark arts where everybody but metrosexual web designers fear to tread (HTML and CSS), we then delved into the areas that metrosexual web designers truly fear to tread (packaging, deployment and even some c# code!). Finally, we get to the area where everybody is interested until it happens to get in their way! (Ooh, I am a cynical old sod tonight).

That is Governance!

Continue reading


IT Governance Standards: COBIT, ISO17799/27001, ITIL and PMBOK – Part 4


The content of this blog is essentially material I compiled for training sessions that I ran last year. It was originally PowerPoint, but I hope that this blog version is useful. Some of the legislative stuff is probably now out of date, and it was for an Australian audience – moral of the story is to do your own research.

For part 1 of this presentation, view this post, part 2 this post and part 3 here.

ISO17799/27001

  • Started life as BS7799-1 and 2
  • BS7799-1 became ISO17799; BS7799-2 recently became ISO27001
  • There is an Australian version "AS/NZS ISO/IEC 17799:2006"
  • Internationally recognized standard for best practice in information security management
  • High level and broad in scope
  • Not a technical standard
  • Not product or technology driven

ISO 17799 is a direct descendant of part of the British Standards Institution (BSI) Information Security Management standard BS 7799. The BSI (www.bsi-global.com) has long been proactive in the evolving arena of Information Security.

The BS 7799 standard consists of:

  • Part 1: Information Technology-Code of practice for information security management
  • Part 2: Information security management systems-Specification with guidance for use.

BS7799 was revised several times, and by 2000 information security had become headline news and a concern to computer users worldwide. Demand grew for an internationally recognized information security standard under the aegis of an internationally recognized body, such as the ISO. This demand led to the "fast tracking" of BS 7799 Part 1 by the BSI, culminating in its first release by ISO as ISO/IEC 17799:2000 in December 2000.

In 2005, adoption of BS 7799 Part 2 for ISO standardization was completed and it now forms ISO27001.

ISO17799 vs. ISO27001

  • ISO17799 is a code of practice – like COBIT it deals with ‘what’, not ‘how’.
  • ISO27001 is the ‘specification’ for an Information Security Management System (ISMS). It is the means to measure, monitor and control security management from a top-down perspective. It essentially explains how to apply ISO17799, and it is this part against which an organisation can currently be certified

Unlike COBIT, ISO17799 does not include any maturity model sections for evaluation. (Incidentally, nor does ISO9000.)

ISO 17799 is a code of practice for information security. It details hundreds of specific controls which may be applied to secure information and related assets. It comprises 115 pages organized over 15 major sections.

ISO 27001 is a specification for an Information Security Management System, sometimes abbreviated to ISMS. It is the foundation for third party audit and certification. It is quite small because it doesn’t actually list the controls, but more a methodology for auditing and measuring. It comprises 34 pages over 8 major sections.

ISO17799 is:

  • an implementation guide based on suggestions.
  • used as a means to evaluate and build a sound and comprehensive information security program.
  • a list of controls an organization "should" consider.

ISO27001 is:

  • an auditing guide based on requirements.
  • used as a means to audit an organization’s information security management system.
  • a list of controls an organization "shall" address.

ISO17799 Domains

  • Security Policy
  • Security Organization
  • Asset Management
  • Personnel Security
  • Physical and Environmental Security
  • Communications and Operations Management
  • Access Control
  • System Development and Maintenance
  • Information Security Incident Management
  • Business Continuity Management
  • Risk Assessment and Treatment
  • Compliance

ISO17799 divides up security into 12 domains.

Within each domain, information security control objectives (if you recall, that is the same terminology as COBIT) are specified, and a range of controls are outlined that are generally regarded as best practice means of achieving those objectives.

For each of the controls, implementation guidance is provided.

Specific controls are not mandated, since each organization is expected to undertake a structured information security risk assessment process to determine its requirements before selecting controls that are appropriate to its particular circumstances (the introduction section outlines a risk assessment process).

ISO/IEC 17799 is expected to be renamed ISO/IEC 27002 in 2007. The ISO/IEC 27000 series has been reserved for information security matters with a handful of related standards such as ISO/IEC 27001 having already been released and others such as ISO/IEC 27004 – Information Security Management Metrics and Measurement – currently in draft.

We will examine the Asset Management domain as an example of ISO17799.

ISO17799 in relation to ITIL

  • ISO 17799 only addresses the selection and management of information security controls.
  • It is not interested in underlying implementation details. For example:
    • ISO 17799 is not interested that you have the latest and greatest logging and analysis products.
    • ISO 17799 is not interested in HOW you log.
    • Product selection is usually an operational efficiency issue (i.e. ITIL)
  • ISO 17799 is interested in:
    • WHAT you log (requirements).
    • WHY you log what you do (risk mitigation).
    • WHEN you log (tasks and schedules, window of vulnerability).
    • WHO is assigned log analysis duty (roles and responsibilities).
  • Satisfying these ISO 17799 interests produces defensible specifications and configurations that may ultimately influence product selection and deployment. (feeds into ITIL)

ISO17799 Example: Asset Management

  • 7.1 Responsibility for assets:
    • Objective:
    • To achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner.
    • Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets.
  • 7.2 Information Classification
  • Implementation guidance is offered for all controls

Domain: Asset Management

7 Asset management

Control 7.1 Responsibility for assets

Control Objective: To achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner. Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets.

7.1.1 Inventory of assets

Control

All assets should be clearly identified and an inventory of all important assets drawn up and maintained.

Implementation guidance

An organization should identify all assets and document the importance of these assets. The asset inventory should include all information necessary in order to recover from a disaster, including type of asset, format, location, backup information, license information, and a business value. The inventory should not duplicate other inventories unnecessarily, but it should be ensured that the content is aligned. In addition, ownership (see 7.1.2) and information classification (see 7.2) should be agreed and documented for each of the assets. Based on the importance of the asset, its business value and its security classification, levels of protection commensurate with the importance of the assets should be identified.

Other information

There are many types of assets, including:

  • information: databases and data files, contracts and agreements, system documentation, research information, user manuals, training material, operational or support procedures, business continuity plans, fallback arrangements, audit trails, and archived information;
  • software assets: application software, system software, development tools, and utilities;
  • physical assets: computer equipment, communications equipment, removable media, and other equipment;
  • services: computing and communications services, general utilities, e.g. heating, lighting, power, and air-conditioning;
  • people, and their qualifications, skills, and experience;
  • intangibles, such as reputation and image of the organization.

Inventories of assets help to ensure that effective asset protection takes place, and may also be required for other business purposes, such as health and safety, insurance or financial (asset management) reasons. The process of compiling an inventory of assets is an important prerequisite of risk management.

7.1.2 Ownership of assets

Control

All information and assets associated with information processing facilities should be owned by a designated part of the organization.

Implementation guidance

The asset owner should be responsible for:

  • ensuring that information and assets associated with information processing facilities are appropriately classified;
  • defining and periodically reviewing access restrictions and classifications, taking into account applicable access control policies.

Ownership may be allocated to:

  • a business process;
  • a defined set of activities;
  • an application; or
  • a defined set of data.

Other information

Routine tasks may be delegated, e.g. to a custodian looking after the asset on a daily basis, but the responsibility remains with the owner.

In complex information systems it may be useful to designate groups of assets, which act together to provide a particular function as ‘services’. In this case the service owner is responsible for the delivery of the service, including the functioning of the assets, which provide it.

7.1.3 Acceptable use of assets

Control

Rules for the acceptable use of information and assets associated with information processing facilities should be identified, documented, and implemented.

Implementation guidance

All employees, contractors and third party users should follow rules for the acceptable use of information and assets associated with information processing facilities, including:

  • rules for electronic mail and Internet usage (see 10.8);
  • guidelines for the use of mobile devices, especially for the use outside the premises of the organization (see 11.7.1);

Specific rules or guidance should be provided by the relevant management. Employees, contractors and third party users using or having access to the organization’s assets should be aware of the limits existing for their use of organization’s information and assets associated with information processing facilities, and resources. They should be responsible for their use of any information processing resources, and of any such use carried out under their responsibility.

The term ‘owner’ identifies an individual or entity that has approved management responsibility for controlling the production, development, maintenance, use and security of the assets. The term ‘owner’ does not mean that the person actually has any property rights to the asset.

ISO17799 Example: Asset Management

  • 7.1 Responsibility for assets
  • 7.2 Information classification
  • Implementation guidance is offered for all controls

7.2 Information classification

Objective: To ensure that information receives an appropriate level of protection.

Information should be classified to indicate the need, priorities, and expected degree of protection when handling the information.

Information has varying degrees of sensitivity and criticality. Some items may require an additional level of protection or special handling. An information classification scheme should be used to define an appropriate set of protection levels and communicate the need for special handling measures.

7.2.1 Classification guidelines

Control

Information should be classified in terms of its value, legal requirements, sensitivity, and criticality to the organization.

Implementation guidance

Classifications and associated protective controls for information should take account of business needs for sharing or restricting information and the business impacts associated with such needs. Classification guidelines should include conventions for initial classification and for reclassification over time, in accordance with a predetermined access control policy (see 11.1.1).

It should be the responsibility of the asset owner (see 7.1.2) to define the classification of an asset, periodically review it, and ensure it is kept up to date and at the appropriate level. The classification should take account of the aggregation effect mentioned in 10.7.2.

Consideration should be given to the number of classification categories and the benefits to be gained from their use. Overly complex schemes may become cumbersome and uneconomic to use, or prove impractical. Care should be taken in interpreting classification labels on documents from other organizations, which may have different definitions for the same or similarly named labels.

Other Information

The level of protection can be assessed by analyzing confidentiality, integrity and availability and any other requirements for the information considered.

Information often ceases to be sensitive or critical after a certain period of time, for example, when the information has been made public. These aspects should be taken into account, as over-classification can lead to the implementation of unnecessary controls resulting in additional expense.

Considering documents with similar security requirements together when assigning classification levels might help to simplify the classification task.

In general, the classification given to information is a shorthand way of determining how this information is to be handled and protected.
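The guidance in 7.2.1 combines three ideas: a small set of classification levels, owner-only reclassification, and periodic review because sensitivity decays over time. A minimal Python sketch, assuming an illustrative four-level scheme (the level names and the `365`-day review interval are my own choices, not prescribed by the standard):

```python
from datetime import date, timedelta
from enum import IntEnum

class Level(IntEnum):
    # Illustrative scheme; the standard deliberately does not fix level names,
    # and warns that overly complex schemes become cumbersome.
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    SECRET = 3

class ClassifiedAsset:
    def __init__(self, name, owner, level, review_after_days=365):
        self.name = name
        self.owner = owner  # the owner defines and reviews the classification (7.1.2)
        self.level = level
        self.classified_on = date.today()
        self.review_after_days = review_after_days

    def needs_review(self, today=None):
        # Information often ceases to be sensitive after a period of time,
        # so classifications should be revisited periodically.
        today = today or date.today()
        return (today - self.classified_on).days >= self.review_after_days

    def reclassify(self, new_level, actor):
        # Only the asset owner may change the classification level.
        if actor != self.owner:
            raise PermissionError("only the asset owner may reclassify")
        self.level = new_level
```

For example, once a quarterly report has been made public, its owner would call `reclassify(Level.PUBLIC, actor=owner)` rather than leave unnecessary (and costly) controls in place.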

7.2.2 Information labeling and handling

Control

An appropriate set of procedures for information labeling and handling should be developed and implemented in accordance with the classification scheme adopted by the organization.

Implementation guidance

Procedures for information labeling need to cover information assets in physical and electronic formats.

Output from systems containing information that is classified as being sensitive or critical should carry an appropriate classification label (in the output). The labeling should reflect the classification according to the rules established in 7.2.1. Items for consideration include printed reports, screen displays, recorded media (e.g. tapes, disks, CDs), electronic messages, and file transfers.

For each classification level, handling procedures covering secure processing, storage, transmission, declassification, and destruction should be defined. This should also include procedures for chain of custody and logging of any security-relevant event.

Agreements with other organizations that include information sharing should include procedures to identify the classification of that information and to interpret the classification labels from other organizations.

Other Information

Labeling and secure handling of classified information is a key requirement for information sharing arrangements. Physical labels are a common form of labeling. However, some information assets, such as documents in electronic form, cannot be physically labeled and electronic means of labeling need to be used. For example, notification labeling may appear on the screen or display. Where labeling is not feasible, other means of designating the classification of information may be applied, e.g. via procedures or meta-data.
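Where physical labels are infeasible, the section above suggests labeling via metadata. A minimal Python sketch of that idea (the field names, the `origin` convention, and the handling rules are illustrative assumptions, not part of the standard): the label travels with the content, records which organization assigned it so it can be interpreted correctly, and unknown labels default to the most restrictive handling.

```python
def label(payload: bytes, classification: str, origin: str) -> dict:
    """Attach an electronic classification label as metadata
    accompanying the content (cf. 7.2.2, 'via procedures or meta-data')."""
    return {
        "classification": classification,
        "origin": origin,  # needed to interpret labels from other organizations
        "content": payload.decode("utf-8"),
    }

def handling_rule(labelled: dict) -> str:
    """Map a label to a handling procedure for this (hypothetical) scheme."""
    rules = {
        "PUBLIC": "no special handling",
        "CONFIDENTIAL": "encrypt in transit and at rest; log access",
    }
    # Labels from other organizations may not match our scheme, so an
    # unrecognized label is treated conservatively rather than ignored.
    return rules.get(labelled["classification"],
                     "treat as highest sensitivity until clarified")
```

The conservative default in `handling_rule` reflects the warning in 7.2.1 that other organizations may use the same or similarly named labels with different meanings.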

Benefits of Best Practices

  • Avoiding re-inventing wheels
  • Reducing dependency on technology experts
  • Increasing the potential to utilize less-experienced staff if properly trained
  • Making it easier to leverage external assistance
  • Overcoming vertical silos and nonconforming behavior
  • Reducing risks and errors
  • Improving quality
  • Improving the ability to manage and monitor
  • Increasing standardization leading to cost reduction
  • Improving trust and confidence from management and partners
  • Creating respect from regulators and other external reviewers
  • Safeguarding and proving value
  • Strengthening supplier/customer relations and making contractual obligations easier to monitor and enforce

There are also some obvious, but pragmatic, rules that management ought to follow:

  • Treat the implementation initiative as a project activity with a series of phases rather than a ‘one-off’ step.
  • Remember that implementation involves cultural change as well as new processes. A key success factor is therefore enabling and motivating these changes.
  • Make sure there is a clear understanding of the objectives.
  • Manage expectations. In most enterprises, achieving successful oversight of IT takes time and is a continuous improvement process.
  • Focus first on where it is easiest to make changes and deliver improvements and build from there one step at a time.
  • Obtain top management buy-in and ownership, grounded in the principle of getting the best value from the IT investment.
  • Avoid the initiative becoming perceived as a purely bureaucratic exercise.
  • Avoid the unfocused checklist approach.

Free Information: Aligning COBIT, ITIL and ISO 17799 for Business Benefit: http://www.itgovernance.co.uk/files/Aligning%20ITIL,%20CobiT,%2017799.pdf

  • IT best practices need to be aligned to business requirements and integrated with one another and with internal procedures.
  • COBIT can be used at the highest level, providing an overall control framework based on an IT process model that should generically suit every organization.
  • Specific practices and standards such as ITIL and ISO 17799 cover discrete areas and can be mapped to the COBIT framework, thus providing a hierarchy of guidance materials.
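The hierarchy described above — COBIT processes at the top, with ITIL and ISO 17799 supplying the detailed practices — can be pictured as a lookup table. A hypothetical excerpt in Python (the specific mappings shown are commonly cited COBIT 4.x correspondences, included here for illustration only; a real mapping exercise would use the full alignment paper linked above):

```python
# Hypothetical excerpt of a COBIT-to-detailed-guidance mapping table.
guidance_map = {
    "DS5 Ensure systems security": [
        "ISO 17799 §7 Asset management",
        "ISO 17799 §11 Access control",
    ],
    "DS8 Manage service desk and incidents": ["ITIL Incident Management"],
    "DS10 Manage problems": ["ITIL Problem Management"],
}

def detailed_guidance(cobit_process: str) -> list:
    """Return the detailed standards/practices mapped to a COBIT process,
    or an empty list if no mapping has been recorded yet."""
    return guidance_map.get(cobit_process, [])
```

The design point is the direction of the lookup: the organization's control framework (COBIT) is the index, and ITIL/ISO 17799 material hangs off it, rather than each standard being adopted in a silo.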

IT Governance Standards: COBIT, ISO17799/27001, ITIL and PMBOK – Part 3

The content of this blog is essentially material I compiled for training sessions that I ran last year. It was originally PowerPoint, but I hope that this blog version is useful. Some of the legislative stuff is probably now out of date, and it was for an Australian audience – moral of the story is to do your own research.

For part 1 of this presentation, view this post and part 2 this post.



IT Governance Standards: COBIT, ISO17799/27001, ITIL and PMBOK – Part 2

The content of this blog is essentially material I compiled for training sessions that I ran last year. It was originally PowerPoint, but I hope that this blog version is useful. Some of the legislative stuff is probably now out of date, and it was for an Australian audience – moral of the story is to do your own research.

For part 1 of this presentation, view this post.

