 

Good advice hidden in the Infrastructure Update

I guess the entire SharePoint world is now aware of the post-SP1 "Infrastructure Update" put out by Microsoft recently. Probably the best thing about it is that the flaky content deployment feature has had some serious work done on it. (My advice has always been to use it with extreme caution or avoid it altogether, but now I will have to reassess.)

Anyway, that is not what I am writing about. Being a former IT Security consultant, I have always installed SharePoint in a "least privilege" configuration. I use different service accounts for search, farm, SSP, reporting services and web applications. The farm account in particular, is one that needs to be very carefully managed given its control over the heart of SharePoint – the configuration database.

Specifically, the farm account should never have any significant rights on any SharePoint farm server over and above what it minimally needs (which is some SQL Server rights and some DCOM permissions). For what it’s worth, I do not use the Network Service account either, as it is actually more risky than a low-privileged domain or local user account.

Developers on the other hand tend to run their boxes as full admin. I have no problem with this, so long as there is a QA or test server that is running least privilege.

However, as always with tightening the screws, there are some potential side effects in doing this. There are occasions when the farm account actually needs to perform a task that requires higher privileges, over and above what it has by default. Upgrading SharePoint from a standard to enterprise license is one such example, and it seems that performing the infrastructure update may be another.

If you take the time to read the installation instructions for the infrastructure update, there is this gem of advice:

To ensure that you have the correct permissions to install the software update and run the SharePoint Products and Technologies Configuration Wizard, we recommend that you add the account for the SharePoint Central Administration v3 application pool identity to the Administrators group on each of the local Web servers and application servers and then log on by using that account

You may wonder why this even matters. After all, it is exceedingly likely that you logged into the server as an administrator or domain administrator anyway. Surely you have all the permissions that you need, right?

The answer is that not all update tasks are run by your logged-in account. Although you start the installation using your account, at a certain point during the install, many tasks are performed by the SharePoint Central Administration application. Thus, even though you have administrative permissions, SharePoint (running as the farm account) will not.

So the advice above essentially says: temporarily add the SharePoint farm account to the local Administrators group, then log into the server as that account and perform the action as instructed. That way, the account that starts the installation is the same account that runs SharePoint, and we know we are using the correct account with the correct server privileges.

Once the installation has been completed, you can log out of the farm account and then revoke its administrator access, and we are back to the original locked down configuration.
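For the record, the add/revoke steps look something like this (assuming a hypothetical farm account of CONTOSO\spfarm — substitute your own), run from an elevated command prompt on each web and application server:

```shell
:: Temporarily grant the farm account local admin rights
net localgroup Administrators CONTOSO\spfarm /add

:: ...log on as CONTOSO\spfarm, install the update and run the
:: SharePoint Products and Technologies Configuration Wizard...

:: Then revoke the rights to return to the locked-down configuration
net localgroup Administrators CONTOSO\spfarm /delete
```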

So keep this in mind when performing farm tasks that are likely to require some privileges at the operating system level. It is a good habit to get into (provided you remember to lock down permissions afterwards πŸ™‚ ).

Cheers

Paul



Why do SharePoint Projects Fail – Part 6

Hi again and welcome to part 6 of my series on the factors of why SharePoint projects fail. Joel Oleson’s write-up a while back gave me 5 minutes of fame, but like any contestant on Big Brother, I’ve had my time in the limelight, been voted out of the house (as in Joel’s front page) and I’m back to being an ordinary citizen again.

If you have followed events thus far, I covered off some wicked problem theory before delving into the bigger ticket items that contribute to SharePoint project failure. In the last post, we pointed our virtual microscope at the infrastructure aspects that can cause a SharePoint project to go off the rails.

Now we turn our magnifying glass onto application development issues and therefore application developers. Ah, what fun you can have with application developer stereotyping, eh! A strange breed indeed they are. As a group they have had a significant contribution to the bitter and twisted individual that I am today.

The CleverWorkarounds tequila shot rating is back!

Seven tequila shots for a project manager in denial πŸ™‚

One tequila shot for the rest of us!

Continue reading “Why do SharePoint Projects Fail – Part 6”



Why do SharePoint Projects Fail – Part 5

Hi again and welcome to this seemingly endless series of posts on the topic of SharePoint projects gone bad.

We spent a couple of posts looking at problem projects in general before focusing specifically on SharePoint. If you have followed the series closely, you will observe that I haven’t talked much about the technical aspects of the product yet. If you were expecting me to pick apart annoying aspects of the architecture then unfortunately, you will be disappointed, because I really don’t believe that it is a big factor in why SharePoint projects fail. Besides which, 90% of SharePoint blogs are about technical/development content anyway.

So where am I going with part 5, then, you ask? I am indeed delving into technical aspects, but once again it is all about the people involved.

So now it’s time to take a few cheap shots at the geeks. (After all, they are sensitive souls and we don’t want them to feel left out, do we?) For the purposes of this post, infrastructure people, tech support and system administrators can be lumped into the same ‘geek bucket’.

Geeks can also cop it like project managers do when projects take on wicked tendencies. They will implement the agreed requirements, but the stakeholders feel that the end result isn’t what they wanted. In the ensuing fallout that happens when the project sponsor realises that, say, half a million bucks has been blown with little to show for it, blame is inevitably directed their way, whether justified or not.

Continue reading “Why do SharePoint Projects Fail – Part 5”



Clever, very clever (but bad all the same)

Tags: Governance, Risk, Security @ 10:27 am

I just read on Bugtraq about Cesar’s presentation on “Token Kidnapping”. All versions of Windows are affected.

While I don’t use this blog for IT security stuff, I do strongly recommend you read both Microsoft’s advisory and especially Cesar’s presentation from his company’s web site. This vulnerability has all the characteristics of mass-exploit fodder, and SharePoint sites may have some susceptibility. Additionally, when ‘patched’, it may be one of those fixes that breaks compatibility with things.

So, here’s the gist of the issue. SharePoint web applications run in the context of whatever user account was chosen when the web application was created. When SharePoint accesses the SQL configuration database, it does so using that service account. After SharePoint authenticates a user, it can take on that user’s identity through impersonation when it needs to access a resource or check the user’s permissions to it (i.e. not as the web application account).

Sometimes you impersonate an account that has more privileged access than you do. Obviously that is a security risk, so you need specific permission to do so. There are four impersonation levels, each of which indicates the degree to which one user can impersonate another: SecurityAnonymous, SecurityIdentification, SecurityImpersonation and SecurityDelegation. A limited user needs SE_IMPERSONATE_PRIVILEGE enabled in order to impersonate the context of an administrator account.

Back to Cesar. Essentially, he has discovered a clever technique for gaining privileged access to a server via a combination of factors. If two Windows services run under the same user account, one can access the threads of the other. What if service A has SeImpersonatePrivilege? Through Cesar’s technique, service B can now gain that privilege too.

Ouch!

So if SharePoint is running as the NETWORK SERVICE account (very common in WSS scenarios), it is possible for it to access the threads of, say, the RpcSs service, which has SeImpersonatePrivilege. Therefore, SharePoint running as NETWORK SERVICE may potentially be a vector for privilege escalation on the server.

Nasty…



"Guru of governance?"

My old mate from employers yonder, Mike Stringfellow, is one of those developers who always liked to take a step back and think through the bigger-picture implications of various design/coding decisions. It’s just a pity that at that particular company we worked at, he was in the minority. (Not helped by some executive-level management with unbelievably wacky views of reality.)

So I became one of those bitter and twisted anal infrastructure guys who always seemed to default to "No", when asked a question. (Usually "no" was the right answer nonetheless). I was told by another colleague that some of the (web) developers were scared of me… but Mike at least understood why πŸ™‚

He recently blogged the idea of using features to modify the web.config file of a web application. After all, the SharePoint SDK offers the WebConfigModifications property of the SPWebApplication class. He suggested that, as a governance nazi, I might have issues with this idea. As it turns out I do, but only for you developers that have been slack-arsed and not done your homework! πŸ™‚

Continue reading “"Guru of governance?"”



SharePoint External Storage API – Crushing My Dream

When I read, several months back, that Microsoft was going to supply an API for external storage in MOSS/WSS, I sprang from my desk, danced around the room and babbled incoherently as if I’d been touched by Benny Hinn.

Okay, well maybe I didn’t quite do that. But what I did do was forward the KB article to a colleague whose company is the leading reseller for EMC Documentum in my town. We’d previously had one of those conversations over a few beers where we questioned the wisdom of SharePoint’s potentially unwise, yet unsurprising use of SQL Server as the storage engine.

…so what’s wrong with SQL?

I am going to briefly dump on SQL here for a minute. But first, let me tell you, I actually like SQL Server! Always have. I hated other Office Server products like Exchange until the 2003 version and SharePoint until the 2007 version. But on the whole, I found SQL to be pretty good. So hopefully that will stop the SQL fanboys from flaming me!

Those readers who appreciate capacity planning issues will appreciate the challenges SQL-based storage brings to the table. Additionally, those who have used Enterprise Information Management products like Documentum or Hummingbird (now OpenText) would nod as if Microsoft had finally realised the error of its ways with this new API.

All of the SharePoint goodies like version control, full text indexing and records management come at a price: disk consumption and performance drain. Microsoft says to plan for 1.5 times your previous growth in disk usage. In my own real-world results it is more like 2.5 times previous growth. Disk I/O also increases markedly.

“So what? Disk is cheap”, you reply. Perhaps so, but the disk itself was never the major cost. Given that this is a SQL database we are talking about, a backup of a 100 gigabyte SQL database could take hours, and a restore possibly longer. A file-level differential backup would grab the entire 100 gig, as it is generally one giant database file! So the whole idea of differential backups during the week and full backups on weekends suddenly has to be re-examined. Imagine a disk partition gets corrupted, rendering the data useless. On a file server, this might mean 20% of shared files are unavailable while a restore takes place. In a SQL world, you have likely toasted the whole thing and a full restore is required. Organisations often overlook the fact that their existing backup infrastructure may not be scalable enough to deal with SQL databases of this size.
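To put some rough numbers on this (the growth multipliers are the ones quoted above; the database size and backup throughput are assumptions for illustration only):

```python
# Back-of-the-envelope sizing: projected disk growth and backup window.
def projected_growth_gb(current_annual_growth_gb, multiplier):
    """Annual disk growth once the content moves into SharePoint/SQL."""
    return current_annual_growth_gb * multiplier

def backup_hours(db_size_gb, throughput_mb_per_s):
    """Time to stream a monolithic content database to backup media."""
    return (db_size_gb * 1024) / throughput_mb_per_s / 3600

print(projected_growth_gb(40, 1.5))   # Microsoft guidance: 60.0 GB/year
print(projected_growth_gb(40, 2.5))   # my real-world figure: 100.0 GB/year

# A 100 GB content database streamed at a sustained 10 MB/s:
print(round(backup_hours(100, 10), 1))  # roughly 2.8 hours per backup
```

The point is not the exact figures but the shape of the problem: every backup is a full-size backup, every night.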

“100 gig?”, you scoff, “don’t be ridiculous”. Sorry, but I have news for you. At an application level, there is a scalability issue in that the lowest logical SharePoint object that can have its own database is a site collection, not an individual site. For many reasons, it is usually better to use a single site collection where possible. But if one particular SharePoint site has a library with a lot of files, then the entire content database for the site collection is penalised.

Now the above reasons may be the big ticket items that vendors use to sell SANs and complex backup/storage solutions, but that’s not the real issue.

The real issue (drumroll…)

It may come as a complete shock to you, but documents are not all created equal. No! Really? πŸ™‚ If they were, those crazy cardigan wearing, file-plan obsessed document controllers and records managers wouldn’t exist. But as it happens, different content is handled and treated completely differently, based on its characteristics.

Case in point: Kentucky Fried Chicken would have some interesting governance around the recipe for its 11 herbs and spices, as would the object of Steve Ballmer’s chair throwing with their search engine page-ranking algorithms.

I picked out those two obvious examples to show an extreme in documents with high intrinsic value to an organisation. The reality is much more mundane. For example, you may be required by law to store all financial records for seven years. In this day and age, invoices can be received electronically, via fax or email. Once processed by accounts payable, invoices largely have little real value.

By using SQL Server, Microsoft is in effect assigning an identical infrastructure cost to every document. Since all documents of a site collection reside inside a single SQL content database, you have limited flexibility to shift lower value documents to lower cost storage infrastructure.

How do the big boys do it then?

Documentum, as an example, stores the content itself in traditional file shares, and then stores the name and location of each document (plus any additional metadata) in the SQL database. Those of you who have only seen SharePoint may think this is a crazy idea that introduces much more complex disaster recovery issues. But the reality is the opposite.

Consider the sorts of things you can do with this set-up. You can have many file shares on many servers or SANs. Documentum, for example, would happily allow an administrator to automatically move all documents not accessed in three months to older, slower file storage. It would move the files and then update the file location in SQL, so the move is hidden from users and they don’t even know it has happened. Conversely, documents on older, slower storage that have been accessed recently can be moved back to the faster storage automatically.
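The model is easy to sketch. The toy code below (a generic illustration, not Documentum’s actual schema or API) keeps metadata in a database and the bytes on disk; demoting a document to slower storage touches only the location pointer, so retrieval is unaffected:

```python
# Metadata-in-SQL, content-on-disk storage model with transparent tiering.
import os
import shutil
import sqlite3
import tempfile

root = tempfile.mkdtemp()
fast_tier = os.path.join(root, "fast")   # e.g. expensive SAN storage
slow_tier = os.path.join(root, "slow")   # e.g. cheap near-line storage
os.makedirs(fast_tier)
os.makedirs(slow_tier)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (name TEXT PRIMARY KEY, location TEXT)")

def store(name, content):
    # New documents land on the fast tier; SQL records where they live.
    path = os.path.join(fast_tier, name)
    with open(path, "w") as f:
        f.write(content)
    db.execute("INSERT INTO docs VALUES (?, ?)", (name, path))

def demote(name):
    # Move the bytes to slower storage and update only the pointer.
    (old,) = db.execute("SELECT location FROM docs WHERE name=?", (name,)).fetchone()
    new = os.path.join(slow_tier, os.path.basename(old))
    shutil.move(old, new)
    db.execute("UPDATE docs SET location=? WHERE name=?", (new, name))

def fetch(name):
    # Callers always resolve via the database, so moves are invisible.
    (path,) = db.execute("SELECT location FROM docs WHERE name=?", (name,)).fetchone()
    with open(path) as f:
        return f.read()

store("invoice.txt", "net 30 days")
demote("invoice.txt")            # silently shuffled to the slow tier
print(fetch("invoice.txt"))      # prints "net 30 days", same as before
```

Archiving compliance content to read-only media is the same trick: the location column just ends up referring to a DVD or tape identifier instead of a file path.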

It also facilitates geographically dispersed organisations having a central SQL repository for the document metadata, while each remote site has a local file store so that retrieval works at LAN speeds for most documents. This is a much simpler geographically dispersed scenario than SharePoint can manage right now.

Restores from backup are quite simple. If a file server corrupts, it only affects the documents stored on that file server. Individual file restores are easy to perform, and you don’t have to do a major 100 gig database restore for the sake of a few files.

Furthermore, documents that have a compliance requirement, but do not need to be immediately available, can easily be archived off to read-only media, thus reducing disk space consumption. The metadata detail of the file can still be retrieved from the SQL database, but location information in the SQL database can now refer to a DVD or tape number.

For this reason, it is clear that SharePoint’s architecture has some cost and scalability limitations when it comes to disk usage and management, largely due to SQL Databases and the limitation of Site Collections for content databases.

So how can we move less valuable documents onto less expensive disk hardware? Multiple databases? Possibly, but that requires multiple site collections and this complicates your architecture significantly. (Doing that is the Active Directory equivalent of using separate forests for each of your departments).

Note to SharePoint fanboys: I am well aware that you can ‘sort of’ do some of this stuff via farm design, site design and 3rd party tools. But until you have seen a high-end enterprise content management system, there is no contest.

So you might wonder why SharePoint is all the rage then – even for organisations that already have high end ECM systems? Well the short answer is other ECM vendors’ GUIs suck balls and users like SharePoint’s front end better. (And I am not going to provide a long answer πŸ™‚ )

Utopia then?

As I said at the start of this post, I was very happy to hear about Microsoft’s external storage API. In my mind’s eye, I envisaged a system where you create two types of document libraries: ‘standard’ document libraries that use SQL as the store, and ‘enhanced’ document libraries that look and feel identical to a regular document library but store the data outside of SQL. Each ‘enhanced’ document library would be able to point to various different file stores, configured from within the properties of the document library itself.

Utopia my butt!

Then a few weeks back some more detail emerged in SDK documentation, and my dream was shattered. This really smells like a “just get version 1 out there and we will fix it properly in version 2” release. I know all software companies partake in this sales technique, but it is Microsoft we are talking about here. Therefore it is my god-given right… no… my god-given privilege to whine about it as much as I see fit.

Essentially this new feature defines an external BLOB store (EBS). The EBS runs parallel to the SQL Server content database, which stores the site’s structured data. You will note that this is pretty much the Documentum method.

In SharePoint, you must implement a COM interface (called the EBS Provider) to keep these two stores in sync. The COM interface recognizes file Save and Open commands and invokes redirection calls to the EBS. The EBS Provider also ensures that the SQL Server content database contains metadata references to their associated BLOB streams in the external BLOB store.

You install and configure the EBS Provider on each Web front end server in your farm. In its current version, external BLOB storage is supported only at the scope of the farm (SPFarm).

Your point being?

If you haven’t realised why that last sentence is so significant, it is this: since the EBS provider is supported only at farm scope, every document library on every site in every site collection in your farm now saves its data via the EBS provider.

So there is utterly nil granularity with this approach. It’s an all or nothing deal. (There goes my utopian dream of two different types of document libraries.) All of the documents in your farm are going to be stored via this EBS provider!

But it gets worse!

The external BLOB storage feature in the present release will not remain syntactically consistent with external BLOB storage technology to be released with the next full-version release of Microsoft Office and Windows SharePoint Services. Such compatibility was not a design goal, so you cannot assume that your implementation using the present version will be compatible with future versions of Microsoft Office or Windows SharePoint Services.

So basically, if you invest time and resources into implementing an EBS provider, you’ll pretty much have to rewrite it all for the next version. (At least you find this out up front.)

No utility is available for moving BLOB data from the content database into the external BLOB store. Therefore, when you install and enable the EBS Provider for the first time, you must manually move existing BLOBs that are currently stored in the content database to your external BLOB store.

Okay, that makes sense. It is annoying, but I can forgive it. Basically, if you implement an EBS provider and enable it, your choices for migrating your existing content into it are a backup and restore/overwrite operation, or simply waiting it out and allowing the natural process of file updates to do the job for you.

When using an external BLOB store with the EBS Provider, you must re-engineer your backup and restore procedures, as well as your provisions for disaster recovery, because some backup and restore functions in Windows SharePoint Services operate on the content database but not on the external BLOB store. You must handle the external BLOB store separately.

I would have preferred Microsoft to flesh this statement out, as it will potentially cause much grief for people who are not aware of it. It implies that STSADM isn’t going to give you the sort of full-fidelity backup that you expect. Yeouch! I feel I might get a few late-night call-outs on that one!

Ah, but wait a minute there, sunshine, is that any different to now? STSADM backup and restore is not exactly rock solid now!

Any error conditions, resource drag, or system latency that is introduced by using the EBS Provider, or in the external BLOB store itself, are reflected in the performance of the SharePoint site generally.

Yeah whatever, this is code word for Microsoft’s tech support way of getting out of helping you. “I’m sorry sir, but call your EBS vendor. Thank you come again!”

Conclusion

I can’t say I am surprised at this version 1 implementation, but I am disappointed. If only the granularity extended to a site collection or better still an individual site, I could forgo the requirement to extend it to individual document libraries or content types.

So it will be interesting to see if this API gets any real uptake and if it does, who would actually use it!

later

Paul



SharePoint for Cisco FanBoys (final housekeeping) – Part 6

Hey there!

Sorry it has taken me a while to get back to the Cisco articles. The “choose your own adventure” post took longer than I thought it would, and I also got sidetracked blogging about annoying programming issues with XML and JavaScript.

This is the last of the Cisco fanboy series of articles, and really it’s more a tidying up of loose ends. To call this last article a Cisco fanboy article is a bit of a stretch actually, since we are now moving to a broader level of governance and accountability. It is therefore not really about Cisco, so I’ll start a new, more appropriately titled series that continues from where this article leaves off.

I started the series with a seemingly innocuous scenario (Cisco TFTP backups), demonstrating how SharePoint can be leveraged as an okay point solution. I then tried to slowly expand the scope to the broader issues of complex infrastructure management, while sticking to a Cisco/IP network oriented theme, in an attempt to get technical thinkers (like Cisco guys) to think beyond nuts and bolts. This also demonstrated how thinking past ‘the point solution’ can bring more substantive benefits.

Continue reading “SharePoint for Cisco FanBoys (final housekeeping) – Part 6”



SharePoint for Cisco Fanboys (and more developers) – Part 4

Welcome to part 4 of my series on demonstrating SharePoint’s usefulness for storing Cisco configuration backups. What a hard slog it’s been! The last article (part 3) of this series focused on how to modify an open source C# TFTP server to upload files into a SharePoint document library using the SharePoint SDK.

Here is the quick recap on what we have covered so far

  • Part 1 illustrated how it is possible to use SNMP to tell a Cisco IOS device to dump its configuration to TFTP. We talked about the version control feature of SharePoint and why it makes sense to TFTP your configs to a SharePoint document library. We covered the WEBDAV network provider supplied with XP and Win2003 and finished off with a basic example using the TFTP server TFTPD32.
  • Part 2 went into more detail about the issues you can face when using the WEBDAV network provider. It also went into more detail on 3 TFTP server products (WinAgents TFTP, SolarWinds TFTP Server and TFTPD32) and why TFTPD32 ended up being the best choice for this purpose.
  • Part 3 then looked at a wonderful open source TFTP server written in C# called TFTPUtil. We modified the source code of this TFTP server to use the SharePoint SDK and upload files to a SharePoint document library.

Now both the WEBDAV and SDK methods had some issues. The WEBDAV method was obviously easy to set up because pretty much any application (theoretically anyway) can be made to work with it. I, however, had reliability issues with this method as part 2 detailed. The SDK method was much more reliable, but had its own problems. Many people would be uncomfortable with having to perform custom modification of an existing open source TFTP server, but more importantly, there are security implications with this method too.

So we have one remaining method that we can explore. This method is still based around modifications to the TFTPUtil source code but instead of using the SharePoint SDK, we instead use the SharePoint web services.

Continue reading “SharePoint for Cisco Fanboys (and more developers) – Part 4”



SharePoint for Cisco Fanboys (and developers) -Part 3

As I write this series, it is getting less and less about Cisco and more and more about SharePoint. This article is definitely developer centric, but since Cisco guys tend to be interested in the guts of the detail, I decided to keep going :-).

If you read my articles I tend to take the piss out of IT role stereotypes just to make it more entertaining reading. Sales guys and IT Managers tend to cop it the most, but I also like to have a dig at the expense of the nerds too. Cisco nerds on the whole are a great bunch, but I have to say, the scariest nerd I have ever met drank Cisco kool-aid in jumbo size!

If you have gotten to this article after reading the first two and you are scoffing at my audacity to suggest you TFTP your configs into SharePoint, chances are most people think you’re scary! If you are hitting this series of articles for the first time, go back and read part 1 and part 2 before being scary!

Seriously now, I thought that this would be a 2-part set of articles, but I got all bogged down in the methods of getting files into SharePoint. The WEBDAV based method described in the previous article is easy to do, but ultimately is not the recommended method. So now we will look at the ‘proper’ ways to do it and see if they are worth the effort. They work okay, but are more complex, and I’m not convinced that the governance issues are necessarily worth it for many readers.

Degree of difficulty for this article is varied.

CleverWorkArounds Coffee requirement rating (for an application developer): one coffee

CleverWorkArounds Coffee requirement rating (for a non developer): four coffees

Continue reading “SharePoint for Cisco Fanboys (and developers) -Part 3”



SharePoint for Cisco Fanboys (darn WEBDAV) – Part 2

Here I am back again, illustrating some of the interesting possibilities that SharePoint offers for Cisco people.

To recap my last post, I showed you a little perl script I wrote to get an IOS router or switch to dump its current configuration to a TFTP server. I then used one of several freeware TFTP servers to show how you can have a TFTP server save the captured file into a version enabled document library.

I then hit a snag in relation to using a Windows Service to do this task. In this article we will delve into this issue in more detail. In addition, I ended up delving much deeper than I intended. So, like my branding series, this is going to turn into a multi-part series too, covering some application development, configuration, security and governance issues. How many parts it will end up being is anybody’s guess!

This is a technically oriented series of articles for the most part, so for those of you who like the governance and finance stuff, you may not get too much out of this one. Although this article (part 2) focuses on my issues and observations with the Windows WEBDAV client, if you are one of those people who get ‘special’ feelings when they see pretty blue Cisco boxes, then you may find some useful content here. πŸ™‚

SharePoint developers and architects may also find this of interest.

Continue reading “SharePoint for Cisco Fanboys (darn WEBDAV) – Part 2”



