Demystifying SharePoint Performance Management Part 8 – More on SQL and SQLIO


Well here we are at part 8 of my series on making SharePoint performance management that little bit easier to understand. What is interesting about this series is its timing. If by some minute chance the marketing tsunami has passed you by, the SharePoint 2013 public beta was released at the time I write this. Much is being made of its stated requirement of 24GB of RAM for a “Single server with a built-in database or single server that uses SQL Server”. While the reality is that requirements depend on which components you are working with, this series of articles should be just as useful for SharePoint 2013 as for any other version.

Now, if you have been following events thus far, we have been spending some time examining disk performance, as that is a very common area where a sub optimal configuration can result in a poor experience for users. In part 6, we looked at the relationship between the performance metrics of disk latency, IOPS and MBPS. We also touched on the IO characteristics (nerd speak for the manner in which something reads and writes to disk) of SQL Server and some SharePoint components. In the last post, we examined the Windows performance counters that one would use to quickly monitor latency and IOPS in particular. We then finished off by taking a toe dip into the coolness of the SQLIO utility, which is a great tool for stress testing your storage infrastructure by pulverising it with different IO read and write patterns.

In this post, we will take SQLIO to the next step and I will show you how you can run a comprehensive disk infrastructure stress test. Luckily for both of us, others have done the hard work and we can reap the benefits of their expertise and insights. First up however, I would like to kick things off by spending a little time showing you the relationship between SQLIO results and performance monitor counters. This helps to reinforce what the reported numbers mean.

Performance Monitor and SQLIO

In the previous post when we used Windows Performance Monitor, we plotted IOPS and latency by watching the counters as they occurred in real-time. While this is nice for a quick analysis, nothing is actually stored for later analysis. Fortunately, performance monitor has the capability to run a trace and collect a much larger data set for more detailed analysis later. So first up, let’s use performance monitor to collect disk performance data while we run a SQLIO stress test. After the test has been run, we will then review the trace data and validate it against the results that SQLIO reports.

So go ahead and start up performance monitor (and consult part 7 of this series if you are unsure of how to do this). Looking at the top left of Performance Monitor, you should see several options listed under “Performance”. Click on “Data Collector Sets” and look for a sub menu called “User Defined”. Now right click on “User Defined” and choose “New –> Data Collector Set” as shown below:

image

This will start a wizard that will ask us to define what performance counters we are interested in and how often to sample performance. I have pasted screenshots of the sequence below (click to enlarge any particular one). First up we need to give a name to this collection of counters and as you can see below, I called mine “Disk IO Experiments”. Once we have given it a name, we have to choose the type of performance data we want to collect. Tick the “Performance counter” button and ensure the others are left unticked.

image  image

Next we need to pick the specific counters we need. We will use the same counters that we used in part 7, plus two additional ones. To remind you of part 7, the counters we looked at were:

  • Avg. Disk sec/Read – (measures latency by looking at how long, in seconds, a read of data from the disk took)
  • Avg. Disk sec/Write – (measures latency by looking at how long, in seconds, a write of data to the disk took)
  • Disk Reads/sec – (measures IOPS by looking at the rate of read operations on the disk per second)
  • Disk Writes/sec – (measures IOPS by looking at the rate of write operations on the disk per second)

In addition to these counters, we will also add two more to the collector set

  • Avg. Disk Bytes/Read – (measures the size of each read request by reporting the number of bytes it used)
  • Avg. Disk Bytes/Write – (measures the size of each write request by reporting the number of bytes it used)

We will use these counters to see if the size of the IO request that SQLIO uses is reported correctly.

Depending on your configuration, choose the PhysicalDisk or LogicalDisk performance object (consult part 7 for the difference between PhysicalDisk and LogicalDisk). You will then find the performance counters I listed above. Before you do anything else, make sure that you pick the right disk or partition from the “Instances of selected object” section. We need to specifically pick the disk or partition that SQLIO is going to stress test. Now select each of the aforementioned six performance counters and click the “Add” button. Finally, make sure that you set the sample interval to 1 second as shown below. This is really important because it makes it easy to compare to SQLIO, which reports on a per second basis.

image  image

At this point you do not need to configure anything else, so click the “Finish” button, rather than the “next” button, and the collector should now be ready to go. It will not start by default, but since there is no fun in that, let’s collect some data. Right click on your shiny new data collector set and choose “Start”.

image  image
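As an aside, if you would rather script this than click through the wizard, the logman.exe utility built into Windows can create and control an equivalent data collector set from the command line. The commands below are a sketch only (the collector name, the F: instance and the output path are my own assumptions, so adjust them to suit your environment):

logman create counter "Disk IO Experiments" -si 00:00:01 -o "C:\PerfLogs\DiskIOExperiments" -c "\LogicalDisk(F:)\Avg. Disk sec/Read" "\LogicalDisk(F:)\Avg. Disk sec/Write" "\LogicalDisk(F:)\Disk Reads/sec" "\LogicalDisk(F:)\Disk Writes/sec" "\LogicalDisk(F:)\Avg. Disk Bytes/Read" "\LogicalDisk(F:)\Avg. Disk Bytes/Write"

logman start "Disk IO Experiments"

logman stop "Disk IO Experiments"

Either way, the end result is the same counter log that we are about to fill with data.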

Once started, performance monitor is collecting the values of the six counters each and every second. Now let’s run a SQLIO command to give it something to measure. In this example, I am going to run SQLIO with random 8KB writes. But to make it interesting, I will use two threads and simulate 8 outstanding IO requests in the queue. If you recall my grocery check-out metaphor of part 6, this is like having 8 people with full shopping carts waiting in line for a single check-out operator. Since the guy at the back of the line has to wait for the seven people in front of him to be processed, he has to wait longer. So with eight outstanding IO requests, latency should increase as each IO request will be sitting in a queue behind the seven other requests.

By the way, if none of that made sense, then you did not read part 6 and part 7. I urge you to read them before continuing here, because I am assuming prior knowledge of SQLIO, disk latency characteristics and the big trolley theory.

Here is the SQLIO command and below is the result…

SQLIO -kW -b8 -frandom -s120 -t2 -o8 -BH -LS F:\testfile.dat

image

Now take a note of the results reported. IOPS was 526, MBs/sec was 4.11 and as expected, the average latency was much larger than the SQLIO tests we ran in part 7. In this case, latency was 29 milliseconds.
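As a quick sanity check, those numbers hang together: 526 IO operations per second multiplied by 8KB per operation is roughly 4,200KB per second, or about 4.1MB per second, which matches the MBs/sec figure that SQLIO reported.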

Let’s now compare this to what performance monitor captured. First up, return to Performance Monitor, and stop your data collector set by right clicking on it and choosing “Stop”. Now if you cast your eye to the top left navigation pane, you should see an option called “Reports” listed under “Performance”. Click on “Reports” and look for a sub menu called “User Defined”. Expand “User Defined” and hey presto! Your data collector set should be listed…

image  image

Expand the data collector set and you will find a report for the data you just collected. The naming convention is the server name and the date of the collection. Click on this and you will then see the performance data for that collection in the right pane. At the bottom you can see the six performance counters we chose and just by looking at the graph, you can clearly see when SQLIO started and stopped.

image

Now we have to do one additional step to make sure that we are comparing apples with apples. Performance monitor will calculate its averages based on the total time displayed. As you can see above, I did not run SQLIO straight away, but the performance counters were collected each second nonetheless. Therefore we have a heap of zero values that will bring the averages down and mislead us. Fear not though, it is fairly easy (although not completely obvious) to zoom into the time we are interested in. If you look closely, just below the performance graph where the time is reported, there is a sliding scale. If you click and drag the left and right boxes, you can highlight the specific time range you are interested in. This will be shown in the performance graph too, so using this tool, we can get more specific about the time we are interested in. Then in the toolbar above the graph, you will see a zoom button. Click it and watch the magic…

image  image

As you can see below, now we are looking at the performance data for the period when SQLIO was run. (It should be noted that Windows performance monitor isn’t particularly granular here. I had to fiddle with the sliding scale a couple of times to accurately set the exact times when SQLIO was started and then stopped.)

image
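As an aside, if fiddling with the sliding scale gets tedious, the relog.exe utility that ships with Windows can clip a counter log to a specific time window and convert it to CSV for analysis elsewhere. The file name and times below are purely illustrative, so substitute your own:

relog "Disk IO Experiments.blg" -f CSV -o sqlio_run.csv -b "7/28/2012 12:00:00" -e "7/28/2012 12:02:00"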

Now let’s look at the results reported by performance monitor. The screenshot above is looking at the number of disk writes per second. Let’s zoom into the figures for the time period and examine the average result over the sample period. To save you squinting, I have pasted it below and called out the counter in question. Performance monitor has reported the average “Disk Writes/Sec” as 525.423. This is entirely consistent with SQLIO’s reported IOPS of 526.

image

Latency (reported in seconds via the counter Avg. Disk sec/Write) is also fairly consistent with SQLIO. The figure from performance monitor was 0.03 seconds (30 milliseconds). SQLIO reported 29 milliseconds.

image

What about IO size? Well, that’s what Avg. Disk Bytes/Write is for… let’s take a look, shall we? Yep: 8192 bytes (i.e. 8KB), which is exactly the size specified in the SQLIO parameters.

image

SQL IO characteristics revisited (and an awesome script)

Now that we understand what SQLIO is telling us via examining windows performance monitor counters, I’d like to return to the topic of SQL IO patterns. Back at the end of part 6, I spent some time talking about SQL and SharePoint IO characteristics. As a quick recap, I mentioned SQL reads and writes to databases via 8KB pages. Now based on me telling you that, you might assume that if you had to open a large document from SharePoint (say 1MB  or 1024KB), SQL would make 128 IO requests of 8KB each.

While that would be a reasonable assumption, it’s also wrong. You see, I also mentioned that SQL Server has a read-ahead algorithm, which means SQL will try and proactively retrieve data pages that are going to be used in the immediate future. As a result, even though a single page is only 8KB, it is not unusual to see SQL read data from disk in much larger chunks if it thinks the next few 8KB pages are likely to be asked for anyway. Now as an aside, if you are running SQL Enterprise edition, the possible read-ahead range is from 1 to 128 pages (other editions of SQL max out at 32 pages). Assuming SQL Enterprise edition, this translates to between 8KB and 1024KB for a single IO operation. Think about this for a second… based on the 1MB document example I used in the previous paragraph, it is technically possible that this could be serviced with a single IO request by an enterprise edition of SQL Server.

Okay, so essentially SQL has varying IO characteristics when it comes to reading from and writing to databases. But there is still more to it, because there is a myriad of SQL IO operations that we did not even consider in part 6. As an example, we have not spoken about the IO characteristics of how SQL writes to transaction logs (which is sequential as opposed to random IO, and does not use 8KB pages at all). Another little known fact with transaction logs is that SQL has to wait for them to be “flushed to stable media” before the data page itself can be flushed to stable media. This is known as Write Ahead Logging and is used for data integrity purposes. What it means though is that if logging suffers a lot of latency, the rest of SQL Server can potentially suffer as well (and if it was not obvious before, this is yet another good reason why people recommend putting SQL data and log files on different disks).
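To make that concrete, if you wanted to approximate the transaction log pattern rather than the 8KB random data file pattern we have used so far, you could point SQLIO at the drive holding your log files and request larger sequential writes. The command below is just an illustrative sketch using the switches covered in part 7 (the 64KB request size and the L:\testfile.dat location are my assumptions, not a recommendation):

sqlio -kW -b64 -fsequential -s120 -t2 -o8 -BH -LS L:\testfile.dat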

Now I am not going to delve deep into SQL IO patterns any more than this, because we are now getting into serious nerdy territory. However what I will say is this: by understanding the characteristics of these IO patterns, we have the opportunity to change the parameters we pass to SQLIO and more accurately reflect real-world SQL characteristics in our testing. Luckily for all of us, others have already done the hard work in this area. First up, Bob Duffy created a table that summarises SQL Server IO patterns based on the type of operations being performed. Even better than that… Niels Grove-Rasmussen wrote a completely brilliant post, where not only did he list the IO patterns that SQL is likely to exhibit, he wrote a PowerShell script that then runs 5 minute SQLIO simulations for each and every one of them!

I have not pasted the script here, but you will find it in Niels’ article. What I will say though is that aside from the obvious 8KB random reads and writes that we have concentrated on thus far, Niels listed several other common SQL IO patterns that his SQLIO script tests:

  • 1 KB sequential writes to the log file (small log writes)
  • 64 KB sequential writes to the log file (bulk log writes)
  • 8 KB random reads to the log file (rollbacks)
  • 64 KB sequential writes to the data files (checkpoints, reindex, bulk inserts)
  • 64 KB sequential reads to the data files (read-ahead, reindex, checkdb)
  • 128 KB sequential reads to the data files (read-ahead, reindex, checkdb)
  • 128 KB sequential writes to the data files (bulk inserts, reindex)
  • 256 KB sequential reads to the data files (read-ahead, reindex)
  • 1024 KB sequential reads to the data files (enterprise edition read-ahead)
  • 1 MB sequential reads to the data files (backups)

The script actually handles more combinations than those listed above because it also tests for differing numbers of threads (-t) and outstanding requests (-o). All in all, over 570 combinations of IO patterns are tested. Be warned here… given that each test takes 5 minutes to run by default, with a 60 second wait time in between each test, be prepared to give this script at least 2 days to let it run its course!

The script itself is dead simple to run. Just open a PowerShell window and save Niels’ script to the SQLIO installation folder. From there, change to that directory and issue the command:

./SQLIO_Batch.ps1

Then come back in 3 days! Seriously though, depending on your requirements, you can modify the parameters of the script to reduce the number of scenarios by editing the first 7 lines of code, which are quite self explanatory (a quick back-of-envelope check on the total run time follows the listing):

$Drive = @('G', 'H', 'I', 'J')
$IO_Kind = @('W', 'R') # Write before read so that there is something to read.
$Threads = @(2, 4, 8)
#$Threads = @(2, 4, 8, 16, 32, 64)
$Seconds = 5*60 # Five minutes
$Factor = @('random', 'sequential')
$Outstanding = @(1, 2, 4, 8, 16, 32, 64, 128)
$BlockSize = @(1, 8, 64, 128, 256, 1024)
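As a rough sanity check on the run time with the values above: for a single drive, the script works through 2 IO kinds × 3 thread counts × 2 access patterns × 8 queue depths × 6 block sizes, which is 576 tests. At five minutes per test plus the 60 second pause between tests, that works out to roughly 2.4 days per drive, which is where the “at least 2 days” warning comes from.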

Now if this wasn’t cool enough, Niels has also written a second script that parses the output from all of the SQLIO tests. This can produce a CSV file that allows you to perform further analysis in Excel. To run this script, we need to know the name of the output file from the first script. By default the filename is SQLIO_Result.<date>.txt. For example:

./SQLIO-Parse.ps1 -ResultFileName 'SQLIO_Result.2010-12-24.txt'

By default the parse script outputs to the screen, but modifying it to write to CSV file is really easy. All one has to do is comment out the second last line of code and uncomment the last one as shown below:

#$Sqlio | Format-Table -Property Kind,Threads,Seconds,Drive,Stripe,Outstanding,Size,IOs,MBs,Latency_min,Latency_avg,Latency_max -AutoSize

$Sqlio | Export-Csv SQLIO_Parse.csv
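One optional tweak that is not part of Niels’ script: Export-Csv in Windows PowerShell writes a type information header as the first line of the file, which Excel then treats as data. Adding the -NoTypeInformation switch keeps the CSV clean:

$Sqlio | Export-Csv SQLIO_Parse.csv -NoTypeInformation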

Below is an example of the report in Excel. Neat eh?

image

Conclusion and coming up next…

By now, you should be a SQLIO guru and have a much better idea of the sort of IO patterns that SQL Server exhibits beyond just reading from and writing to databases. We have covered the IO patterns of transaction logs, as well as examined a terrific PowerShell script that not only runs all of the IO scenarios that you need, but also parses the output to produce a CSV file for deeper analysis. In short, you now have the tools you need to run a pretty good disk infrastructure stress test and start some interesting conversations with your storage gurus.

However, at this point I feel there are some pieces missing from this disk puzzle:

  1. We have not yet brought the discussion back to lead and lag indicators. So while we know how to hammer disk infrastructure, how can we be more proactive and specify minimum conditions of satisfaction for our disk infrastructure?
  2. Microsoft’s treatment of disk performance (and in particular IOPS and latency) in their performance documentation is inconsistent and, in my opinion, confuses more than it clarifies. So in the next post, we are going to look at these two issues. In doing so, we are going to leave SQLIO and Performance Monitor behind and examine two other utilities, including one that is lesser known but highly powerful.

Until then, thanks for reading

Paul Culmsee

www.hereticsguidebooks.com


Demystifying SharePoint Performance Management Part 7 – Getting at Latency, IOPS and MBPS


Hi all, and welcome to part 7 of this series on SharePoint performance planning. This is the point of the series where I realise that I have much more to write about than I intended. Last time this happened I never got around to finishing the series (*blush* … a certain tribute to a humble leave form ). Like that series, I now have no idea how many posts I will end up doing, but I will keep soldiering on nonetheless.

Recapping the last two posts of this series in particular, we have been looking at the relationship between the performance measures of Disk latency, Disk I/O per second (IOPS) and Disk Megabytes transferred per second (MBPS). We spent most of part 6 looking at the relationship between these three performance metrics by specifically focusing on how the size of an IO request affects things. If you recall, a couple of key points were made:

  • In general, the larger the IO request being made, the more latency there will be, resulting in less IOPS but increased MBPS.
  • Latency is significantly affected by whether an IO request is sequential or random. To demonstrate this, I used a tool called SQLIO to simulate disk IO and generate some performance stats, which showed both IOPS and MBPS improving by some 750% for sequential IO when compared to random IO.

We finished the post by examining the way SQL Server performs IO requests and which SharePoint components are IOPS heavy. In short, SQL Server uses a range of request sizes for database reads and writes between 8KB and 1024KB. The reason for the range (for reads anyhow) is the read-ahead algorithm (gory detail here), in which SQL attempts to proactively retrieve data pages that are going to be used in the immediate future. A read-ahead may result in a much larger I/O request being made than a single 8KB page, but much better performance because, in effect, SQL is pulling more data from each I/O operation.

In this episode (and the next one)…

Our focus in this post and the next one is similar to part 3, in that we are now going to do some real work and some of it will involve the command line. Therefore also like part 3, if you are one of those project manager types who utilise the wussy “I’m business, not technical” excuse, I want you to persist and try this stuff out. Given that I wrote this series with you in mind, put that damn iPad down, get out your laptop and reload this article! You can try all of the steps below out on your PC while you are reading this.

Now for the tech types reading this, on account of my intention to “demystify” SharePoint performance, I will be more verbose than what you guys need. But consider it this way – I am doing you guys a favour, because next time your PM or BA’s eyes start to glaze over when you start explaining performance and capacity planning to them, you can point them to this series and tell them that there is no excuse.

This article is going to cover two areas. First up let’s look at what we can do with Windows inbuilt Performance Monitor tool in terms of monitoring Latency and IOPS in particular. Next we will look at a popular tool for stress testing disk infrastructure that gives us visibility into MBPS.

The basics: Performance Monitor 101

Just in case you have never done it before, type PERFMON into the Start menu or at a command line on any Windows box (by the way, I am assuming Windows 7 or Windows 2008 Server here).

image

If you did that, then you are looking at the classic tool used to understand how a PC or server is performing. Looking at the top left of the resultant window, you should see several options listed under “Performance”. Click on “Performance Monitor” and watch the magic. Congratulations… you now know how to measure CPU as that is the default performance counter displayed.

image

Performance monitor can easily be used to take a look at disk IOPS and latency. Right click on the graph and from the menu choose Add Counters… This will provide you with a long list of “performance objects” (a fancy word for a logical grouping of performance counters)

image

From the list of performance objects, scroll up and find “LogicalDisk”. Move your cursor to the arrow to the right of the LogicalDisk counters and click on it. You should see a list of disk related performance counters appear as shown below.

image   image

Note: You could have chosen the performance object called PhysicalDisk instead of LogicalDisk. The difference between them is that PhysicalDisk counters consider each hard drive as a whole, not the way it is partitioned, whereas the LogicalDisk performance object monitors the logical partitions of a disk. As a general rule (for non techy types reading this), go with LogicalDisk.

Right then… now all of the possible performance counters for LogicalDisk are selected, but for now we are only interested in latency and IOPS, which are represented by four counters:

Latency:

  • Avg. Disk sec/Read – Measures the average time, in seconds, of a read of data from the disk (therefore 5ms will be shown as 0.005)
  • Avg. Disk sec/Write – Measures the average time, in seconds, of a write of data to the disk

MS Technet Note: Numbers also vary across different storage configurations (SAN cache size/utilization can impact this greatly)

IOPS:

  • Disk Reads/sec – The rate of read operations on the disk per second
  • Disk Writes/sec – The rate of write operations on the disk per second

MS Technet Note: This number varies based on the size of I/Os issued. Practical limit of 100-140/sec per disk spindle; however, consult with your hardware vendor for a more accurate estimate.

Go ahead and select these four counters (use the Ctrl key and click each one to select more than one counter). Now we have to choose which disk or partition that we want to monitor. Below where you chose the performance counters, you will see a label with the suitably unclear title of “Instances of selected object” (I have highlighted it below). From here, choose the hard drive or partition you are interested in. Finally, click the “Add” button at the very bottom and you should see your selected counters listed in the “Added counters” window.

image   image

Click the OK button and you should now see these counters doing their thing. Each performance counter you added is listed below the graph showing the performance data collected in real time. The display shows a time period of 100 seconds and is refreshed each second. Also, a neat feature that some people don’t know about is to click on one of the counters and then hold down Ctrl and type the letter “H”. This is the shortcut key for highlighting the selected counter, and the currently selected counter should now be black. Additionally, you should be able to use the up and down arrow keys to cycle through the counters and highlight each one.

image

At this point, try copying some files or open some applications and watch the effect. You should see a spike in disk related activity reflected in the IOPS and latency counters above. There you go business analysts, you officially have monitored disk performance! Wasn’t so hard was it?
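As an aside for those who prefer a console, the same four counters can be sampled with PowerShell’s Get-Counter cmdlet. The one-liner below is a sketch only (it assumes the E: drive, so substitute your own instance), taking ten samples at one second intervals:

Get-Counter -Counter "\LogicalDisk(E:)\Avg. Disk sec/Read","\LogicalDisk(E:)\Avg. Disk sec/Write","\LogicalDisk(E:)\Disk Reads/sec","\LogicalDisk(E:)\Disk Writes/sec" -SampleInterval 1 -MaxSamples 10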

Now that we are monitoring some interesting counters, how about we really give the disk something to chew on!

Upping the ante with SQLIO

SQLIO is an old tool nowadays, but still highly relevant and extremely useful. Despite being named SQLIO, it actually has very little to do with SQL Server! It was provided by Microsoft to help determine the I/O capacity that a server can handle. SQLIO allows you to test a combination of I/O sizes for read/write operations, both sequentially and randomly. Thus, it is useful for stress testing the disk infrastructure for any IO intensive application. Now be warned… you can absolutely smash your disk infrastructure with this tool, so don’t go running this in production without some sort of official clearance. Furthermore, if you want to use SQLIO to test your SAN, be sure to consider the other servers and applications that might be using it. There is potential to adversely affect them.

You can download SQLIO from Microsoft here. It will run on any recent Windows OS, so you can try it on your own PC (now you know why I told you to put your iPad away earlier). Installing SQLIO is very simple: just run SQLIO.MSI and it will install by default into the C:\Program Files (x86)\SQLIO folder.

Note: If you want a great tutorial on installing and using SQLIO, look no further than MCM Brent Ozar’s 2009 article entitled SQLIO Tutorial: How to Test Disk Performance.

SQLIO works by reading from and writing to one or more test files, so the first thing we need to do with SQLIO is to set up a configuration file that specifies the location and size of these test files. The configuration file is called PARAM.TXT and is found in the installation folder. Each line of the configuration file represents a test file, its size and a couple of other parameters. The options on each line of the param.txt file are as follows:

  • <Path to test file> Full path and name of the test file to be used.
  • <Number of threads (per test file)>
  • <Mask > Set to 0x0
  • <Size of test file in MB> Ideally, this should be large enough so that the test file will be larger than any cache resident on the SAN (or RAID controller).

Of these 4 parameters, only the first one (the location of the file) and the last one (the size of the file) matter for now. Below is a sample param.txt that tests a 20GB file on the E:\ drive.

image
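In case the screenshot is hard to make out, the file itself is just plain text with one line per test file. Based on the SQLIO output shown later in this post (one thread, mask 0x0 and a 20000MB file), the line would look something like this:

e:\testfile.dat 1 0x0 20000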

The next step is to run a quick SQLIO test using sequential writes to create the test file. We are going to use the command line to do this (although someone has written a GUI for the tool). So open a command prompt, change to the installation directory for SQLIO and type the commands below (we will save a detailed explanation of the parameters for later).

sqlio -kW -s10 -fsequential -o8 -b8 -LS -Fparam.txt

timeout /T 10

This command will create the file and run a 10 second test. The output will look something like what I have pasted below:

sqlio v1.5.SG

using system counter for latency timings, 2241035 counts per second

parameter file used: param.txt

     file e:\testfile.dat with 1 thread (0) using mask 0x0 (0)

1 thread writing for 10 secs to file e:\testfile.dat

     using 8KB sequential IOs

     enabling multiple I/Os per thread with 8 outstanding

size of file e:\testfile.dat needs to be: 20971520000 bytes

current file size:      104857600 bytes

need to expand by:      20866662400 bytes

expanding e:\testfile.dat …

SQLIO will stop here for a while, while your PC chugs away creating the 20GB test file. Once completed, it will run our quick 10 second test, but you can ignore the rest of the output because this test is of no consequence.

Running a real test

The previous command was just the entrée. We are not interested in the resulting data because the point of the exercise was to create the test file. Now it is time for the main course. Try this command. It will spend 2 minutes running random 8KB IO writes against the 20GB test file.

sqlio -kW -b8 -frandom -s120 -BH -LS -Fparam.txt

Below is the output that summarises the configuration specified by the above command:

sqlio v1.5.SG

using system counter for latency timings, 2241035 counts per second

1 thread writing for 120 secs to file e:\TestFile.dat

using 8KB random IOs

buffering set to use hardware disk cache (but not file cache)

using current size: 20000 MB for file: e:\TestFile.dat

initialization done

For the next two minutes SQLIO will chug away, hammering the disk with writes. Once the test has been performed, SQLIO will report its findings. You will see IOPS, MBPS and a report of average/max/min latency. On top of this, a histogram showing the distribution of latency is provided. This histogram gives context to the “average latency” figure because it shows the shape of the latency that occurred throughout the test. I have graphed the distribution in Excel below the SQLIO results:

CUMULATIVE DATA:

throughput metrics:

IOs/sec:   225.80

MBs/sec:     1.76

latency metrics:

Min_Latency(ms): 0

Avg_Latency(ms): 3

Max_Latency(ms): 111

histogram:

ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+

%:  4  6  6 31 23 15  5  3  2  1  1  1  1  0  0  0  0  0  0  0  0  0  0  0  0

image
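To read that histogram: the bottom row is the percentage of IO requests that completed in each latency bucket, so 31% of the writes in this test completed in around 3ms, 23% in around 4ms and 15% in around 5ms, with a small tail stretching out towards the 111ms maximum. That shape is what gives the 3ms average its context.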

Running the numbers…

Now, before we get into a more detailed test, let’s examine some of the SQLIO parameters (a combined example follows the list):

  • -k specifies whether to perform a read or write test (–kW for write and –kR for read)
  • -s specifies how long to run the test for. In the example above it ran for 2 minutes (120 seconds)
  • -f specifies whether to run a random or sequential IO operation (-frandom)
  • -b specifies the size of the IO operations (in the example above 8KB)
  • -t specifies the number of threads to use. A multi-cpu server should be able to utilise more threads than you have processors. If your storage can handle it, we can increase the number of threads and see what latency arises as a result.
  • -o specifies the number of outstanding requests. This simulates a sudden spike in load and gives an indication of how fast IO requests are being serviced. If you keep adding outstanding requests, latency will start to increase as the number of IO requests outstrips the disks ability to service them.
  • -LS means to capture the disk latency information. If you do not specify this you will not get any latency results
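Just to show how these switches combine before we move on, here is a hedged example that was not run as part of this post. It would perform a two minute sequential read test with 64KB requests, two threads and eight outstanding IOs against the test file we created earlier:

sqlio -kR -b64 -fsequential -s120 -t2 -o8 -BH -LS e:\testfile.dat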

Okay, so how about we see what difference a queue of IO requests makes? Below is a SQLIO command with the addition of the -o parameter. Let’s see what a queue of 4 outstanding requests does and compare the histogram output…

sqlio -kW -b8 -frandom -s120 -o4 -BH -LS -Fparam.txt

And the result? Much more latency than our first example above but no real increase in IOPS or MBPS. Clearly we are already at the limit of what my laptop can handle (I stripped the hyperbole and pasted the counters only).

IOs/sec:   221.73

MBs/sec:     1.73

Min_Latency(ms): 0

Avg_Latency(ms): 17

Max_Latency(ms): 187

 

image

Now since I only changed one parameter and such a difference was seen, most people will use SQLIO with a batch file to test different parameter combinations. For example, if you were to paste the commands below into a batch file, you would be running write tests using 16KB, 64KB and 128KB sizes.

sqlio -kW -b16 -frandom -s120 -BH -LS -Fparam.txt

sqlio -kW -b64 -frandom -s120 -BH -LS -Fparam.txt

sqlio -kW -b128 -frandom -s120 -BH -LS -Fparam.txt

For what it’s worth, here are the results for each of the above tests (including the 8KB one we started with), showing the relationship of IOPS, MBPS and latency. As predicted by our exploration of the relationship between request size, IOPS and MBPS in part 6, latency was smallest with the 8KB option.

8KB write:    IOs/sec: 225.80    MBs/sec: 1.76     Avg_Latency(ms): 3
16KB write:   IOs/sec: 220.39    MBs/sec: 3.44     Avg_Latency(ms): 4
64KB write:   IOs/sec: 192.85    MBs/sec: 12.05    Avg_Latency(ms): 4
128KB write:  IOs/sec: 176.30    MBs/sec: 22.02    Avg_Latency(ms): 5

image

Now one quick note: if you want to play with the -t parameter and add more threads, you will have to reference the test file directly and not refer to the parameters file. This is because one of the settings in the param.txt file is the number of threads for each file. No matter what you put in at the command line, it will be overwritten by what is specified in param.txt. Thus the command below would only run a single thread despite 8 threads being specified via the -t parameter.

sqlio -kW -b64 -frandom -s120 -t8 -o1 -BH -LS -Fparam.txt

sqlio v1.5.SG

using system counter for latency timings, 2241035 counts per second

parameter file used: param.txt

file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)

 

To get around this issue, drop the –F parameter and refer to the test file directly as shown below:

sqlio -kW -b64 -frandom -s120 -t8 -o1 -BH -LS e:\testfile.dat

sqlio v1.5.SG

using system counter for latency timings, 2241035 counts per second

8 threads writing for 120 secs to file e:\testfile.dat

 

Conclusion (and coming up next)…

Phew! Okay, so apart from possibly whetting your appetite for smashing disk infrastructure, you might have also come to the realisation that there are many parameters to test in various combinations. In this entire article, I have assumed random writes to the disk, but what about sequential writes? For that matter, what about reads? What about multiple threads and more outstanding requests? What about longer tests or different sized test files?

These are all important questions and I will answer them… in the next post or two. This one is getting a little too long and I have plenty more to cover in this area.

So have a play with the parameters of SQLIO on your own hardware, and in the next post we will continue looking at SQLIO, plus some great work people have done to make your life much easier when using it. I want to also return to PERFMON to show you the relationship between the PhysicalDisk and LogicalDisk counters and what SQLIO reports. Then we will examine two other tools, including one that is lesser known but a very powerful way to measure disk performance. (That one will redeem me with the tech guys who will have no doubt found this article to be too light on. 🙂 )

Subsequent to that, we hark way back to part 1 and return to a lead indicator point of view of disk IO performance, and look at how you can nail the ass off your SAN vendor to ensure they do all the due diligence necessary for your disk infrastructure to perform well.

Until then, thanks for reading

Paul Culmsee


www.sevensignma.com.au


"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 2


Hi again

This article is the second half of a pair of articles explaining how I integrated real-time performance data with a SharePoint based IT operational portal, designed around the principle of passive compliance with legislative or organisational controls.

In the first post, I introduced the PI product by OSIsoft, and explained how SQL Reporting services is able to generate reports from more than just SQL Server databases. I demonstrated how I created a report server report from performance data stored in the PI historian via an OLE DB provider for PI, and I also demonstrated how I was able to create a report that accepted a parameter, so that the output of the report could be customised.

I also showed how SharePoint provides a facility to enter parameter data when using the report viewer web part.

We will now conclude this article by explaining a little about my passively compliant IT portal, and how I was able to enhance it with seamless integration with the real-time performance data stored in the PI historian.

Just to remind you, here is my conceptual diagram in "acoustic Visio" format

The IT portal

This is the really ultra brief explanation of the thinking that went into my IT portal

I spent a lot of time thinking about how critical IT information could be stored in SharePoint to achieve the goals of quick and easy access to information, make tasks like change/configuration management more transparent and efficient, as well as capture knowledge and documentation. I was influenced considerably by ISO17799 as it was called back then, especially in the area of asset management. I liked the use of the term "IT Assets" in ISO17799 and the strong emphasis on ownership and custodianship.

ISO defined asset as "any tangible or intangible thing that has value to an organization". It maintained that "…to achieve and maintain appropriate protection of organizational assets. All assets should be accounted for and have a nominated owner. Owners should be identified for all assets and the responsibility for the maintenance of appropriate controls should be assigned. The implementation of specific controls may be delegated by the owner as appropriate but the owner remains responsible for the proper protection of the assets."

That idea of delegation is that an owner of an asset can delegate the day-to-day management of that asset to a custodian, but the owner still bears ultimate responsibility.

So I developed a portal around this idea, but soon was hit by some constraints due to the broad ISO definition of an asset. Since assets have interdependencies, geeks have a tendency to over-complicate things and produce a messy web of interdependencies. After some trial and error, as well as some soul searching, I was able to come up with a 3 tier model that worked.

I changed the use of the word "asset", and split it into three broad asset types.

  • Devices (eg Server, SAN, Switch, Router, etc)
  • IT Services (eg Messaging, Databases, IP Network, etc)
  • Information Assets (eg Intranet, Timesheets, etc)
image

The main thing to note about this model is the difference between an IT Service and an Information Asset. The distinction is in the area of ownership. In the case of an "Information Asset", the owner of that asset is not IT. IT are a service provider, and by definition the IT view of the world is different to the rest of the organisation. An "IT Service", on the other hand, is always owned by IT, and it is the IT services that underpin information assets.

So there is a hierarchical relationship there. You can’t have an information asset without an IT service providing it. Accountabilities are clear also. IT own the service, but are not responsible for the information asset itself – that’s for other areas of the organisation. (An Information Asset can also depend on other information assets as well as many IT services.)

While this may sound so obvious that it’s not worth writing, my experience is that IT departments often view information assets and the services providing those assets as one and the same thing.

Devices and Services

So, as an IT department, we provide a variety of services to the organisation. We provide them with an IP network, potentially a voice over IP system, a database subsystem, a backup and recovery service, etc.

It is fairly obvious that each IT service consists of a combination of IT devices (and often other IT services). An IP network is an obvious and basic example: the devices that underpin the "IP Network" service are routers, switches and wireless access points.

For devices we need to store information like

  • Serial Number
  • Warranty Details
  • Physical Location
  • Vendor information
  • Passwords
  • Device Type
  • IP Address
  • Change/Configuration Management history
  • IT Services that depend on this device (there is usually more than 1)

For services, we need to store information like

  • Service Owner
  • Service Custodian
  • Service Level Agreement (uptime guarantees, etc)
  • Change/Configuration Management history
  • IT Devices that underpin this service (there is usually more than 1)
  • Dependency relationships with other IT services
  • Information Assets that depend on this IT service

Keen eyed ITIL practitioners will realise that all I am describing here is a SharePoint based CMDB. I have a site template, content types, lists, event handlers and workflows that allow the above information to be managed in SharePoint. Below are three snippets showing sections of the portal, drilling down into the device view by location (click to expand), before showing the actual information about the server "DM01".

image image

image

Now the above screen is the one that I am interested in. You may also notice that the page above is a system generated page, based on the list called "IT Devices". I want to add real-time performance data to this screen, so that as well as being able to see asset information about a device, I can also see its recent performance.

Modifying a system page

I talked about making modifications to system pages in detail in part 3 of my branding using Javascript series. Essentially, a system page is an automatically generated ASPX page that SharePoint creates. Think about what happens each time you add a column to a list or library: the NewForm.aspx, EditForm.aspx and DispForm.aspx are modified, as they have to be rebuilt to display the new or modified column.

SharePoint makes it a little tricky to edit these pages on account of custom modifications running the risk of breaking things. But as I described in the branding series, using the ToolPaneView hack does the job for us in a safe manner.

So using this hack, I was able to add a report viewer web part to the Dispform.aspx of the "IT devices" list as shown below.

image image

imageimage

Finally, we have our report viewer webpart, linked to our report that accesses PI historian data. As you can see below, the report that I created actually is expecting two parameters to be supplied. These parameters will be used to retrieve specific performance data and turn it into a chart.

image

Web Part Connection Magic

Now as it stands, the report is pretty much useless to us in the sense that we have to enter parameters manually to get it to present the information that we want. But on the same page as this report is a bunch of interesting information about a particular device, such as its name, IP address, location and description. Wouldn’t it be great if we could somehow pass the device name (or some other device information) to the report web part automatically?

That way, each time you opened up a device entry, the report would retrieve performance information for the device currently being viewed. That would be very, very cool.

Fortunately for us, it can be easily done. The report services web part, like many other web parts, is connectable. This means that it can accept information from other web parts, which makes it possible to have the parameters automatically passed to the report!

Wohoo!

So here is how I am going to do this. I am going to add two new columns to my device list. Each column will be the parameter passed to the report. This way, I can tailor the report being generated on a device by device basis. For example, for a SAN device I might want to report on disk I/O, but a server I might want CPU. If I store the parameter as a column, the report will be able to retrieve whatever performance data I need.
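To make the idea concrete, the report’s underlying query (which is not shown in this post) would consume those two values in the same parameterised style covered in part 1. Purely as an assumed sketch of the shape, and not the actual report definition, it might look something like this:

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND (tag LIKE '%' + ? + '%')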

Below is the device list with the two additional columns added. The columns are called TAGPARAM1 and TAGPARAM2. The next screen below shows the values I have entered for each column against the device DM01. These values will be passed to the report server report and used to find matching performance data.

image image

So the next question becomes, how do I now transparently pass these two parameters to the report? We now have the report and the parameters on the same page, but no obvious means to pass the value of TagParam1 and TagParam2 to the report viewer web part.

The answer my friends, is to use a filter web part!

Using the toolpane view hack, we once again edit the view item page for the Device List. We now need to add two additional web parts (because we have two parameters). Below is the web part to add.

image

The result should be a screen looking like the figure below

image

Filter web parts are not visible when a page is rendered in the browser. They are instead used to pass data between other web parts. There are various filter web parts that work in different ways. The Page Field filter is capable of passing the value of any column to another web part.

Confused? Check out how I use this web part below…

The screen above shows that the two Page Field filters web parts are not configured. They are prompting you to open the tool pane and configure them. Below is the configuration pane for the page field filter. Can you see how it has enumerated all of the columns for the "IT device" list? In the second and third screen we have chosen TagParam1 for the first page filter and TagParam2 for the second page filter web part.

image image image

Now take a look at the page in edit mode. The page filters now change their display to say that they are not connected. All we have done so far is tell the web parts which columns to grab the parameter values from.

image

Almost Home – Connecting the filters

So now we need to connect each Page Field filter web part to the report viewer web part. This will have the effect of passing to the report viewer web part, the value of TagParam1 and TagParam2. Since these values change from device to device, the report will display unique data for each device.

To connect each page filter web part, you click the edit dropdown for each page filter. From the list of choices, choose "Connections", and it will expand out to the choice of "Send Filter Values To". If you click on this, you will be prompted to send the filter values to the report viewer web part on the page. Since in my example the report viewer web part requires two parameters, you will be asked to choose which of the two parameters to send the value to.

image image

Repeat this step for both page filter web parts and something amazing happens: we see a performance report on the devices page!! The filters have passed the values of TagParam1 and TagParam2 to the report and it has retrieved the matching data!

image

Let’s now save this page and view it in all of its glory! Sweet eh!

image 

Conclusion (and Touchups)

So let’s step back and look at what we have achieved. We can visit our IT Operations portal, open the devices list and immediately view real-time performance statistics for that device. Since I am using a PI historian, the performance data could have been collected via SNMP, netflow, ping, WMI, Performance Monitor counters, a script or many, many methods. But we do not need to worry about that, we just ask PI for the data that we want and display it using reporting services.

Because the parameters are stored as additional metadata with each device, you have complete control over the data being presented back to SharePoint. You might decide that servers should always return CPU stats, but a storage area network return disk I/O stats. It is all controllable just by the values you enter into the columns being used as report parameters.

The only additional thing that I did was to use my CleverWorkArounds Hide Field Web Part, to subsequently hide the TagParam1 and TagParam2 fields from display, so that when IT staff are looking at the integrated asset and performance data, the ‘behind the scenes’ glue is hidden from them.

So looking at this from a IT portal/compliance point of view, we now have an integrated platform where we can:

  • View device asset information (serial number, purchase date, warranty, physical location)
  • View IT Service information (owners, custodians and SLA’s)
  • View Information Asset information (owners, custodians and SLA’s)
  • Understand the relationships between devices, services and information assets
  • Access standards, procedures and work instructions pertaining to devices, services and information assets
  • Manage change and configuration management for devices, services and information assets
  • Quickly and easily view detailed, real time performance statistics of devices

All in all, not a bad afternoon’s work really! And not one line of code!

As I said way back at the start of the first article, this started out as a quick idea for a demo and it seems to have a heck of a lot of potential. Of course, I used PI, but there is no reason why you can’t use similar techniques in your own IT portals to integrate your operational and performance data into the one place.

I hope that you enjoyed this article and I look forward to feedback.

<Blatant Plug>Want an IT Portal built in passive compliance? Then let’s talk!</Blatant Plug>

cheers

Paul Culmsee

 

 

 

 

OSISoft addendum

Now someone at OSISoft at some point will read this article and wonder why I didn’t write about RTWebparts. Essentially PI has some web parts that can be used to display historian data in SharePoint. There were two reasons why I did not mention them.

  1. To use RTWebparts you have to install a lot of PI components onto your web front end servers. Nothing wrong with that, but with Report Services, those components only need to go onto the report server. For my circumstances and what I had to demonstrate, this was sufficient.
  2. This post was actually not about OSISoft or PI per se. It was used to demonstrate how it is possible to use SharePoint to integrate performance and operational information into one integrated location. In the event that you have PI in your enterprise and want to leverage it with SharePoint, I suggest you contact me about it because we do happen to be very good at it 🙂

 


"Ain’t it cool?" – Integrating SharePoint and real-time performance data – Part 1


Hi

This is one of those nerdy posts in the category of "look at me! look at me! look at what I did, isn’t it cool?". Normally application developers write posts like this, demonstrating some cool bit of code they are really proud of. Being only a part-time developer and more of a security/governance/compliance kind of guy, my "aint it cool" post is a little different.

So if you are a non developer and you are already tempted to skip this one, please reconsider. If you are a CIO, IT manager, Infrastructure manager or are simply into ITIL, COBiT or compliance around IT operations in general, this post may have something for you. It offers something for knowledge managers too. Additionally it gives you a teensy tiny glimpse into my own personal manifesto of how you can integrate different types of data to achieve the sort of IT operational excellence that is a step above where you may be now.

Additionally, if you are a Cisco nerd or an infrastructure person who has experience with monitoring, you will also find something potentially useful here.

In this post, I am going to show you how I leveraged three key technologies, along with a dash of best practice methodology to create an IT Portal that I think is cool.

The Premise – "Passive Compliance"

In my career I have filled various IT roles and drunk the kool-aid of most of the vendors, technologies, methodologies and practices. Nowadays I am a product of all of these influences, leaving me slightly bitter and twisted, somewhat of a security nerd, but at the same time fairly pragmatic and always mindful of working to business goals and strategy.

One of the major influences on my current view of the world was a role working for OSI Software from 2000-2004, via a former subsidiary company called WiredCity. OSISoft develop products in the process control space, and their core product is a data historian called PI. At WiredCity, we took this product out of the process control market and right into the IT enterprise, and OSISoft now market this product as "IT Monitor". I’ll talk about PI/IT Monitor in detail in a moment, but humour me and just accept my word that it is a hellishly fast number crunching database for storing lots of juicy performance data.

In addition, I like to read all the various best practice frameworks and methodologies, and I write about them a fair bit. As a result of this interest, I have a SharePoint IT portal template that I have developed over the last couple of years, designed around the guiding principle of passive compliance. That is, by utilising the portal for various IT operational tasks, structured in a certain way, you implicitly address some COBiT/ISO27001 controls as well as leverage ITIL principles. I designed it in such a way that an auditor could take a look at it and it would demonstrate the assurance around IT controls for operational system support. It also had the added benefit of being a powerful addition to a disaster recovery strategy. (In the second article, to be published soon, I will describe my IT portal in more detail.)

Finally, I have SQL Reporting Services integrated with SharePoint, used to present enterprise data from various databases into the IT Portal – also as part of the principle of passive compliance via business intelligence.

Recently, I was called in to help conduct a demonstration of the aforementioned PI software, so I decided to add PI functionality to my existing "passive compliance" IT portal to integrate asset and control data (like change/configuration management) along with real-time performance data. All in all I was very pleased with the outcome as it was done in a day with pretty impressive effect. I was able to do this with minimal coding, utilising various features of all three of the above applications with a few other components and pretty much wrote no code at all.

Below I have built a conceptual diagram of the solution. Unfortunately I don’t have Visio installed, but I found a great freeware alternative 😉

Image0003

I know, there is a lot to take in here (click to enlarge), but if you look in the center of the diagram, you will see a mock up of a SharePoint display. All of the other components drawn around it combine to produce that display. I’ll now talk about the main combination, PI and SQL Reporting Services.

A slice of PI

Okay so let’s explain PI because I think most people have a handle on SharePoint :-).

To the right is the Terminator looking at data from a PI historian showing power flow in California. So this product is not a lightweight at all. Its heritage lies in this sort of critical industrial monitoring.

Just to get the disclaimers out of the way, I do not work for OSISoft anymore nor are they even aware of this post. Just so hard-core geeks don’t flame me and call me a weenie, let me just say that I love RRDTool and SmokePing and prefer Zabbix over Nagios. Does that make me cool enough to make comment on this topic now? 🙂  

Like RRDTool, PI is a data historian, designed and optimised for time-series data.

"Data historian? Is that like a database of some kind?", you may ask. The answer is yes, but its not a relational database like SQL Server or Oracle. Instead, it is a "real-time, time series" data store. The English translation of that definition, is that PI is extremely efficient at storing time based performance data.

"So what, you can store that in SQL Server, mySQL or Oracle", I hear you say. Yes you most certainly can. But PI was designed from the ground up for this kind of data, whereas relational databases are not. As a result, PI is blisteringly fast and efficient. Pulling say, 3 months of data that was collected at 15 second intervals would literally take seconds to do, with no loss of fidelity, even going back months.

As an example, let's say you needed to monitor CPU usage of a critical server. PI can collect this data periodically and save it into the historian for later viewing, reporting or analysis. Getting data into the historian can be done in a number of ways. OSISoft has written "interfaces" that allow collection of data from sources such as SNMP, WMI, TCP-Response, Windows Performance Monitor counters, NetFlow and many others.

The main point is that once the data is inside the historian, it really doesn't matter whether it was collected via SNMP, Performance Monitor, a custom script or anything else. All historian data can now be viewed, compared, analysed and reported on via a variety of tools in a consistent fashion.
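To make that concrete: once, say, CPU counter data is sitting in the historian, you can retrieve it with a simple SQL query via the PI OLE DB provider that I cover later in this post. The sketch below queries the historian's picomp2 point data table (the same table used in the examples further down); the "CPU" filter string and time window are just hypothetical placeholders.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%CPU%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM')

Whether those CPU values arrived via the Performance Monitor interface or an SNMP interface makes no difference to the query.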

SQL Reporting Services

For those of you not aware, Reporting Services has been available since SQL Server 2000 (originally as an add-on) and allows for fairly easy generation of reports from SQL databases. More recently, with SQL Server 2005 Service Pack 2, Microsoft added tight integration with SharePoint. Now, when you create a report server report, it is "published" to SharePoint in a similar manner to publishing InfoPath forms.

Reports can be created in two ways, but I am only going to discuss the Visual Studio method. Using Visual Studio, you are able to design a tailored report consisting of tables and charts. An example of a SQL Reporting Services report in Visual Studio is below (from MSDN).

 

The interesting thing about SQL Reporting Services is that it can pull data from data sources other than SQL Server databases, including Oracle, web services, ODBC and OLE DB. Depending on your data source, reports can be parameterised (did I just make up a new word? 🙂 ). This is particularly important to SharePoint, as you will soon see. It essentially means that you can feed values into your report that customise its output. In doing so, reports can be written and published once, yet be flexible in the sort of data that is returned.

Below is a basic example:

Here is a basic SQL statement that retrieves three fields from a data table called "picomp2". Those fields are "tag", "time" and "value". This example selects values only where "time" is between 12pm and 1pm on July 28th and where "tag" contains the string "MYSERVER".

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%MYSERVER%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM')

Now, what if we wanted to make the value for tag flexible, so that instead of "MYSERVER" we could use the string "DISK" or "PROCESSOR"? Fortunately, for most data sources SQL Reporting Services allows you to pass parameters into the SQL. Thus, consider this modified version of the above SQL statement.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

Look carefully at the WHERE clause in the above statement. Instead of specifying the literal %MYSERVER%, I have replaced MYSERVER with a ? parameter (concatenated between the % wildcards). The question mark has special meaning: it is a placeholder whose value is supplied when the query runs. Below I illustrate the sequence using three screenshots. The first screenshot shows the above SQL inside a Visual Studio report project. Clicking the exclamation mark will execute this SQL.

image

Immediately we get asked to fill out the parameter as shown below. (I have added the string "DISK")

image

Click OK, and the SQL is executed against the data source, with the matching results returned as shown below. Note that all data returned contains the word "disk" in the name. (I have scrubbed identifiable information to protect the innocent.)

image

Reporting Services and SharePoint Integration

Now we get to the important bit. As mentioned earlier, SharePoint and SQL Reporting Services are now tightly integrated. I am not going to explain this integration in detail, but what I am going to show you is how a parameterised query like the example above is handled in SharePoint.

In short, if you want to display a Reporting Services report in SharePoint, you use a web part called the "SQL Server Reporting Services Report Viewer".

image

After dropping this web part onto a SharePoint page, you pick the report to display, and if it happens to be a parameterised report, you see a screen that looks something like the following.

image

Notice anything interesting? The web part recognises that the report requires a parameter and asks you to enter it. As you will see in the second article, this is very useful indeed! But first, let's get Reporting Services talking to the PI historian.

Fun with OLEDB

So, I have described (albeit extremely briefly) enough about PI and Reporting Services. I mentioned earlier that PI is not a relational database, but a time-series database. This didn't stop OSISoft from writing an OLE DB provider 🙂

Thus it is possible to get SQL Reporting Services to query PI using SQL syntax. In fact, the SQL example that I showed in the previous section was actually querying the PI historian.

To get Reporting Services talking to PI, I need to create a report server data source as shown below. When selecting the data source type, I choose OLE DB from the list. The subsequent screen allows you to pick the specific OLE DB provider for PI.

image image

Now, I won't go into the full details of configuring the PI OLE DB provider, as my point here is to demonstrate the core principle of using OLE DB to allow SQL Reporting Services to query a non-relational data store.
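For reference, the connection string that sits behind such a data source ends up looking something like the line below. This is a sketch from memory rather than gospel; treat the provider name and any authentication settings as assumptions and verify them against the PI OLE DB documentation for your version (the server name is obviously made up).

Provider=PIOLEDB;Data Source=MYPISERVER

How you authenticate to the PI server (PI trusts versus an explicit PI user) depends on how your PI administrator has set things up, so I have left that out of the sketch.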

Once the data source had been configured and tested (see the test button in the above screenshot), I was able to create my SQL query and then design a report layout. Here is the sample SQL again.

SELECT    "tag", "time", "value"
FROM      picomp2
WHERE     (tag LIKE '%' + ? + '%') AND ("time" >= '7/28/2008 12:00:00 PM') AND ("time" <= '7/28/2008 1:00:00 PM') 

As I previously explained, this SQL statement contains a parameter, which is passed to the report when it is run, thereby providing the ability to generate a dynamic report.

Using Visual Studio, I created a new report and added a chart from the toolbox. Once again, the purpose of this post is not to teach report layout design, but below is a screenshot of the layout being designed. You will see that I have previewed my design and it displays a textbox (top left) allowing the parameter to be entered before the report is run. The report has pulled the relevant data from the PI historian and rendered it in a nice chart.

image

Conclusion

Right! I think that's about enough for now. To sum up this first post, we talked a little about my IT portal and the principle of "passive compliance". We examined OSISoft's PI software and how it can be used to monitor your enterprise infrastructure. We then took a dive into SQL Reporting Services, and I illustrated how we can access PI historian data via OLE DB.

In the second and final post, I will give a brief overview of my IT Portal template, and will then demonstrate how I was able to integrate PI data into my IT portal, combining IT asset data with real-time performance metrics, with no code 🙂

I hope that you found this post useful. Look out for the second half soon, where this will all come together nicely.

cheers

Paul

 

