Teddies, Fetishes and the Management Consulting Scam


jediteddy.jpg

What if I told you that the key to becoming a successful management consultant was to become a Teddy Bear?

What if I also told you that it involves fetishes? You might be re-checking the URL to make sure you are on the right site!

Fear not, this article is definitely not “50 Shades of Management Consulting Grey”. Nor is it about donning a cuddly animal suit as a mascot for a football team. To borrow from the much loved children’s TV show “Playschool,” there’s definitely a bear in there, but not the one you might be thinking!

You see, for many people, modern corporate life is now at a point where the pace of change is accelerating, unrelenting and fatiguing. In my home state of Western Australia, businesses are reeling from unprecedented levels of disruption and uncertainty, be it the end of the commodity boom, the impact of global competition or disruptive, technology-enabled innovation. It is now difficult to think of any industry that has not had the ground shift beneath it in some way — except, perhaps, Management Consulting.

Management Consulting thrives in an environment of fear, ambiguity and doubt, principally because its business model is based on the presumption that it can make all of that go away. It’s lucrative too — ambiguity is such a powerful force that executives will part with copious amounts of cash in attempts to escape it…

read the full article at medium.com


The ASS Scale. The best 2*2 management model ever!


So today I was inspired to come out of blogging hibernation because I saw possibly the worst dodgy 2*2 management matrix ever. The piece below was originally going to be part of my next book with Kailash, since we spend some time in it on why models like this are so popular. Unfortunately this piece never made it, but Craig Brown told me I had to release it or he would. Thus, I feel it is now appropriate to unveil the greatest dodgy 2*2 management model ever! Without further ado I present to you the ASS Scale…

Does your team kick ass?

Want to improve team performance? Do you want your teams to be more agile, resilient, flexible, strategic, emergent, dynamic and follow orders without question?

The Agile Synergy Scale (ASS)™ is a cutting-edge team diagnostic tool that provides a typology of team states. This gives CEOs and other people who control the budget a sure-fire way to bring out the best in your people, help them reach their full potential and Kick Ass!

The Agile Synergy Scale draws on several beers worth of research into all the latest literature from Wikipedia and Social Media, such as Big Data Analytics, Neuroscience, Holocracy, Transdisciplinary Intelligence, Innovation Ideation, Neurolinguistic Complexity Theory, Tasseography, Graphology, Craniosacral Therapy and 3D Printing. It explores the relationship between people, motivation and intelligence and unlocks an entirely new way of thinking about all forms of organisational awesomeness.

The framework consists of four domains – or “ASS cheeks”. There is a fifth domain, but we will get to that in a moment. These domains are illustrated in the diagram below.

assscale

The X axis represents team ability from low to high – and incorporates all of the sheer talent and expert knowledge necessary to probe for outstanding achievement in team and organisational excellence. The Y axis represents team desire – the lube of synergy that is the difference between accommodating maximum motivation and constricted performance.

Let’s examine each ass-cheek in more detail and see where you and your team sit.

High Desire, High Skills: Kick Ass!

You and your team are as awesome as the Avengers. Perfectly balanced between brain, brawn and beauty, there is no challenge too tough for you and a Nobel prize in the category of legendaryness is a foregone conclusion.

High Desire, Low Skills: Kiss Ass

You and your team so want to be awesome, you all read the clickbait pearls of wisdom on your LinkedIn feed and therefore “talk the talk” with the best of them, but when the rubber hits the road and pressure is on, there is nothing under the hood. A dangerous sub-variety of kiss-asses are scary-asses (those who think they are kick-asses but are blind to their skill deficiencies.)

Low Desire, High Skills: Slack-ass (or “Can’t be assed”)

You all know your stuff as well as anybody, but nevertheless, you all withhold your discretionary effort (loafing). This is likely because the psychological needs of your team and individual members are not being met – either that or you are all whiny bitches.

Low Desire, Low Skills: Suck-ass

This quadrant has two sub-types. Rational suck-asses and stupid suck-asses. Rational suck-asses have the self-awareness to know they suck-ass and remedial action can be undertaken. Stupid suck-asses unfortunately have their head so far up their asses that they have little awareness of how much they suck-ass.

The toxic hole of chaos

There is a fifth domain (in the middle of the diagram): The toxic hole of chaos, which is the state of not knowing what sort of ASS cheek your team aligns with. It is extremely important you avoid this area in the long term as prolonged exposure can stifle and suffocate your team.

How to measure your ASS

We measure your team’s ASS by administering a Rate of Extrinsic Collaboration and Team Agile Leadership Exam. This psychometric instrument can be administered by one of our certified Agile Synergy Scale PROfessional Business Excellence Reviewers. Our ASS PROBERS have gone through an extensive vetting process via a comprehensive multi-choice exam, and can administer a RECTAL exam with minimum discomfort.

So what are you waiting for? Sign your team up for a RECTAL exam today and measure your ASS.

 

Paul Culmsee

www.hereticsguidebooks.com


Glyma is now open source!


Hi all

If you are not aware, my colleagues and I have spent a large chunk of the last few years developing a software tool for SharePoint called Glyma (pronounced “Glimmer”). Glyma is a very powerful knowledge management solution for SharePoint 2010/2013 that deals with knowledge that is highly valuable, yet difficult to capture in writing – all that hard-earned knowledge that tends to walk out the door in organisations.

Glyma was born from Seven Sigma’s Dialogue Mapping skills and it represents a lot of what we do as an organisation, and the culmination of many years of experience in the world of complex problem facilitation. We have been using Glyma as a consultancy value-add for some time, and our clients have gained a lot of benefit from it. Clients have also deployed it in their environments for reasons such as capture of knowledge, lessons learnt, strategic planning, corporate governance as well as business analysis, critical thinking and other knowledge visualisation/knowledge exchange scenarios.

image

I am very pleased to let people know that we have now decided to release Glyma under an open source license (Apache 2). This means you are free to download the source and use it in any manner you see fit.

You can download the source code from Chris Tomich’s GitHub site or you can contact me or Chris for the binaries. The install/user and admin manuals can be found on the Glyma web site, which also has a really nice help system, tutorial videos and advice on how to build good Glyma maps.

This is not just some sample code we have uploaded. This is a fully featured, well-architected and robust product with some really nice SharePoint integration. In particular for my colleague, Chris Tomich, this represents a massive achievement as a developer/product architect. He has created a highly flexible graph database with some real innovation behind it. Technically, Glyma is a hypergraph database that sits on SQL/SharePoint. Very few databases of this type exist outside of academia/maths nerds and very few people could pull off what he has done.

image

For those of you that use or have tried Compendium software, Glyma extends the ideas of Compendium (and can import Compendium maps), while bringing them into the world of enterprise information management via SharePoint.

Below I have embedded a video to give you an idea of what Glyma is capable of. More videos exist on YouTube as well as the Glyma site, so be sure to dig deeper.

 

I look forward to hearing how organisations make use of it. Of course, feel free to contact me for training/mentoring and any other value-add services 🙂

 

Regards

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 3)

This entry is part 3 of 3 in the series Apps Model

Hiya

This is the third post in a series of articles aimed at helping strategic or business-focused users understand the SharePoint 2013 and Office365 “apps model”, and what it means for the future of SharePoint. In part 1 of this series, I outlined the opportunities and challenges that Microsoft are currently trying to address. They were:

  • Changing perceptions to cloud technologies and increased adoption; which enables…
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange in the form of Google Apps; as well as…
  • An increasing number of smaller cloud-based “point solution” players who chip away at SharePoint features with cheaper and easier to use offerings; while suffering from…
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace; while dealing with…
  • Customers unable to handle the ever increasing complexity of SharePoint, leading to delaying upgrades for years

These reasons prompted Microsoft to take a strongly cloud-driven strategy, and they have really been transforming their business to deliver it. They have transitioned from a software provider to an application hosting provider, and in terms of SharePoint, the “apps” model is now the future of customisation.

Now “apps” is a multifaceted topic and the word has unfortunately been overused. So in part two we started to unpack the apps model by channelling the kids’ TV show Playschool to show the idea of SharePoint customisations being hosted on separate servers, yet presenting a seamless experience for users.

If you have not read parts 1 and 2, I seriously suggest you do. To whet your appetite, here are some pretty diagrams to highlight what you missed out on!

image image  image

The main point I made was that custom SharePoint components run on separate, non-SharePoint servers and are embedded into SharePoint via iframes. These remote apps then communicate securely with SharePoint (e.g. to read or write data from lists) via web services.
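To give a feel for what that communication actually looks like, here is a minimal sketch (in TypeScript) of a remote app reading items from a SharePoint list over the standard REST endpoints. The list name, its fields and the token handling are illustrative assumptions on my part rather than anything from a specific app, and acquiring the OAuth access token is a whole topic of its own that I am glossing over here.

```typescript
// A minimal sketch: a remote app reading items from a SharePoint list via REST.
// The 'Tasks' list, its fields and the token handling are illustrative assumptions.

interface TaskItem {
  Title: string;
  Status: string;
}

async function readTasks(hostWebUrl: string, accessToken: string): Promise<TaskItem[]> {
  // _api/web/lists/getbytitle(...)/items is the standard SharePoint 2013 REST endpoint
  const endpoint =
    `${hostWebUrl}/_api/web/lists/getbytitle('Tasks')/items?$select=Title,Status`;

  const response = await fetch(endpoint, {
    headers: {
      Accept: "application/json;odata=verbose",  // ask for the verbose OData format
      Authorization: `Bearer ${accessToken}`,    // token obtained elsewhere (assumed)
    },
  });

  if (!response.ok) {
    throw new Error(`SharePoint returned ${response.status}`);
  }

  // With odata=verbose, the items come back under d.results
  const payload = await response.json();
  return payload.d.results as TaskItem[];
}
```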

I concluded part 2 by showing the benefits of the apps model from Microsoft’s perspective. Among other things, this model of developing custom SharePoint solutions can be supported on Office365 and on-premises. For on-premises customers, SharePoint servers remain pristine and free of the muck and clutter of 3rd party code, making service packs and cumulative updates much less complex and costly. It also enables Microsoft to offer an app store, where 3rd party vendors can maintain cloud based services that can be embedded and consumed by on-premises and online SharePoint installs. If you go back to the 5 key strategic threats I started this post with, it addresses each one nicely.

So Microsoft’s intent is good, and there is nothing wrong with good intentions… or is there?

image

Digging deeper

So where does one start with unpacking the apps model? Let’s make this a little less of a dry read by channelling Big Bang Theory to find out. First up, Penny wants to know where app data is stored, given that the app runs on a different server to SharePoint as shown below. (If you think about it, from the app’s perspective, SharePoint is the remote server.)

image

Sheldon’s answer? (Of course Sheldon invented the apps model, right?)…

image

Er… come again? Let’s see if Leonard can give a clearer explanation…

image

Leonard’s explanation is a little better. Ultimately the app developer has the choice over where app data is stored. For example, let’s say someone writes a survey app for SharePoint and it renders on the home page via an iframe. When users fill in this survey, the results could be stored on the server that hosts the app and not SharePoint at all (which in SharePoint terms is referred to as the remote web). Alternatively, the app could store the survey results in a list on the SharePoint site from which the app is being invoked (this is called the host web). Yet another alternative is something called the app web, which I will return to in a moment.

First, let’s look at the pros and cons of the first two options.

Option A: Store App Data in SharePoint Lists

If the app developer chooses this approach, the app reads/writes to SharePoint lists in the site where the app is deployed (henceforth called the host web). In this approach, when a site administrator chooses to add an app to the site, the app has to specify the level of access it requires for the site, and it asks the site admin to authorise this access. In the image below, you can see that this Kodak app is requesting the ability to edit or delete items in document libraries and lists on this site, as well as access user profile information. If the site administrator clicks “Trust it”, the app now has the access it needs.

image

The pro to this approach is that all app data resides in regular old SharePoint, where it is searchable and can take advantage of all of the goodies that lists and libraries give you like versioning, information management policies and workflow. Additionally, multiple apps can access these lists, so this allows for the development of componentised solutions that work with a single authoritative data source.
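To make option A a little more concrete, below is a rough sketch of what the survey example from earlier might do when it writes a response into a list on the host web via REST. The list name, the field and the token handling are illustrative assumptions, not something prescribed by SharePoint; the general shape (fetch a request digest, look up the list’s item entity type, then POST the item) is the part that matters.

```typescript
// A rough sketch of option A: a survey app writing a response into a host web list.
// 'Survey Responses', the Title field and the token handling are illustrative assumptions.

async function saveSurveyResponse(
  hostWebUrl: string,
  accessToken: string,
  answer: string
): Promise<void> {
  const authHeaders = {
    Accept: "application/json;odata=verbose",
    Authorization: `Bearer ${accessToken}`,
  };

  // 1. Writes normally carry a request digest, obtained from the contextinfo endpoint.
  const digestResponse = await fetch(`${hostWebUrl}/_api/contextinfo`, {
    method: "POST",
    headers: authHeaders,
  });
  const digest = (await digestResponse.json()).d.GetContextWebInformation.FormDigestValue;

  // 2. Each list exposes the entity type name that new items must declare.
  const listUrl = `${hostWebUrl}/_api/web/lists/getbytitle('Survey Responses')`;
  const listInfo = await (
    await fetch(`${listUrl}?$select=ListItemEntityTypeFullName`, { headers: authHeaders })
  ).json();

  // 3. Add the item. If the site admin did not grant write access when installing
  //    the app, this is the call that fails.
  await fetch(`${listUrl}/items`, {
    method: "POST",
    headers: {
      ...authHeaders,
      "Content-Type": "application/json;odata=verbose",
      "X-RequestDigest": digest,
    },
    body: JSON.stringify({
      __metadata: { type: listInfo.d.ListItemEntityTypeFullName },
      Title: answer,
    }),
  });
}
```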

The potential cons (or implications to be aware of) to the approach are:

  • SharePoint lists and libraries are not always appropriate data stores for some types of data. Most people are well aware by now that a SharePoint list is most definitely not a relational database and it has performance issues when misused (among other things). SharePoint also has in-built thresholds that kick in when lists get big (list queries that generate a result set of 5000 or more items will fail by default). Microsoft state in their SharePoint 2013 and SharePoint Online solution packs documentation: “If your business needs require you to work with large data sets and query result sets, this approach won’t work”.
  • If the app has been deployed to many sites and site collections (and uses lists on each), then things can get painful if a new version of the app requires a new or modified set of columns on the list in the host web.
  • If you delete the app from the host web, the list data remains on the site as the lists will likely not be deleted. Sometimes this is a good thing, as the data might be important or used in other ways, but if the app developer is storing configuration data here, it would leave orphaned data in the site.

Also think about what happens when you have an on-premises SharePoint server but the app is hosted by a 3rd party outside your firewall. How is the remote app even able to get to your SharePoint box in the first place? To enable this to happen, you are likely going to need to talk to your network/security people because you are going to need some funky firewall/reverse proxy infrastructure to allow that to happen. Additionally, some organisations might be uncomfortable that an app from a 3rd party on the interweb can have the ability to read and write data inside an internal SharePoint server anyway.

So what alternatives are there here?

Option B: Store data in the remote app

The other option for the app developer is to store the app data on the same server the app is running from (called the remote web). In this approach, no data is stored in SharePoint at all. The pro for this option is that it alleviates two of the issues from option A above: developers can use any data storage system they want (e.g. SQL Server, a GIS system or a graph database) and you do not have any of those pesky firewall issues with the app connecting back to your SharePoint server. The app renders in its iframe and does its thing.

Unfortunately this also has some cons.

  • There are potential data sovereignty issues. Where is the remote app hosted? Are you sure you trust the 3rd party app provider with your data? Consider that a 3rd party might host an app for many organisations. Do they have adequate precautions for keeping your data isolated and secure (ie Is your stuff stored in a separate database to everyone else?) Are they adequately backing up that data consistent with your internal standards? If you uninstall the app, is the data also uninstalled at the app provider end?
  • This data is very likely not searchable or easily usable by SharePoint for other purposes as it is not necessarily directly accessible by SharePoint.
  • Chances are that the app will have to talk to SharePoint in some way, so you really don’t get out of dealing with your network/security people to make it all work if the remote web is outside your firewall.
  • Also consider data integrity. If, say, you needed to restore SharePoint because of a data loss or data corruption issue, what does this mean for the data stored on the remote app? Will things get out of sync?

These (and other questions) bring us to the next option that is a bit of a mind-bending middle ground.

Option C: Store data in the App Web

Now here is where things start to mess with your head quite a bit. Most people can understand the idea behind options A and B, but what the heck is this thing known as an app web? The short answer: it is a special SharePoint sub-site that is used for certain apps. It usually gets created when a site administrator adds an app to a site and, importantly, removed when a site administrator removes the app. If you consider the diagrams below, we can see our mythical SharePoint 2013 homepage with 3 apps on it, all running on separate servers as explained in part 2. If we assume each of these apps has deployed an app web on our SharePoint site, there are now three SharePoint sub-sites created underneath it as shown below. These are the app webs.

image

Now app webs are no ordinary subsites in the way you might know them now. For a start, if you tried to access them from your SharePoint host web, you would not be able to at all. Through some trickery, SharePoint puts these sites on a completely different URL than the site where the apps were installed. For example, if you had a site called http://myintranet/sites/web1 and you added a survey app, a subsite exists but it would absolutely not be http://myintranet/sites/web1/survey or anything like that. It would instead look something like:

http://app-afb952fd75de4a.apps.mycompany/sites/web1/surveyapp/

Now being a business audience reading this, trust me that there are good reasons for this apparent weirdness related to security which I will speak to in the next post.

But if this makes sense so far, then in some ways, one can argue that an app web is a weird cross between option A and B in that it is a subsite on your SharePoint farm, yet SharePoint treats it as if it is a remote data store. This means the app web is a special isolated storage area for app developers to put stuff like data, configuration, CSS, JavaScript and whatever other functionality their app needs to do its thing.
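As an aside for the more technically inclined, the way an app page usually tells the host web and the app web apart is via query string parameters that SharePoint appends when it launches the app (assuming the app asked for the standard tokens). The snippet below is just a small sketch of reading them; the example URLs in the comments reuse the hypothetical addresses from above.

```typescript
// A small sketch: reading the host web and app web URLs that SharePoint
// appends to an app's start page as query string parameters.

function getSharePointUrls(pageUrl: string): { hostWeb?: string; appWeb?: string } {
  const params = new URL(pageUrl).searchParams;
  return {
    // e.g. http://myintranet/sites/web1
    hostWeb: params.get("SPHostUrl") ?? undefined,
    // e.g. http://app-afb952fd75de4a.apps.mycompany/sites/web1/surveyapp
    appWeb: params.get("SPAppWebUrl") ?? undefined,
  };
}

// Example: work out which web the app should read from or write to.
const { hostWeb, appWeb } = getSharePointUrls(window.location.href);
console.log(`Host web: ${hostWeb}, app web: ${appWeb}`);
```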

This approach has some advantages:

  • The app web is technically a SharePoint site, so app developers can create whatever structure they need (think lists, libraries and columns) to store their stuff like images, CSS, JavaScript and other goodies. This allows for much more flexible data structures than writing to the odd list in a host web (Option A above).
  • The app web lives on your SharePoint server, so it means some app components (and data) can be stored here instead of on a remote web on a server far away from you. When you back up your SharePoint content databases, the app web is backed up like everything else (less data integrity risk than option B).
  • The app web facilitates clean install/uninstall of an app, since the app web is removed if an app is removed. In other words, no more orphaned data lying around

But if you think all of those points through, you might see several important implications:

  • If the app developer decided to store critical app data in the app web, when the app is uninstalled that data is lost (or put better, developers have to write special uninstall code to copy the data somewhere else which means yet more code)
  • Just like option A, SharePoint lists and libraries are not always appropriate data stores for some types of data (remember the list query threshold I mentioned in option A? It applies here too).
  • Apps cannot share app webs between them. In other words, apps cannot reach in and access data from other app webs. Therefore you can’t easily use the information stored in app webs with other apps and applications. In fact, if you want to do this you pretty much have to choose option A: store the shared data the apps need in the host web and have both apps access that data.
  • You may end up with many, many app webs. If you take my example above, there are 3 app webs to handle 3 apps. What if this was a project site template and your organisation has hundreds of projects? That means you have thousands of app webs, all with potentially interesting data that is trapped in mini information silos.
  • SharePoint search cannot index the app webs
  • A critical but often overlooked one is that developers can’t update the library/list metadata schema in an app web (think columns, content types, libraries, etc) without updating and redeploying the app. As you will see in a future post, this gets real ugly real fast!
  • SharePoint App Webs are created with special templates which block SharePoint Designer (that’s probably a good thing given the purpose of an app web)

Conclusion

So if you are a non-developer reading this post, consider that none of the above options is, on its own, likely to give you a solution. For each option, there seem to be just as many flaws as advantages. The reality is that many apps will use at least two, or even all three options. Things like images, CSS and JavaScript might be loaded from the app web, some critical reusable SharePoint content from the host web, and the remote web used for some heavy-duty data manipulation.

If you think that through, that means as SharePoint administrators and governance teams, you will likely end up dealing with all of the cons of each of the options. Imagine asking a developer to conceptually draw an app that uses each of the options and consider how many “moving parts” there are to it all. Then when you add the fact that most organisations still have a legacy of full trust solutions to deal with, you can start to see how complex this will be to manage.

Now this really is just the tip of the iceberg. In the next post I am going to talk a little about how all this stuff is wired up from an authentication and security standpoint. I am also going to focus on the application lifecycle management implications of this model. If you think about the picture I have painted here with all of the potential moving parts, how do you think an upgrade to an app would fare?

thanks for reading

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 2)

This entry is part 2 of 3 in the series Apps Model

Hiya

This is the second post in a series of articles aimed at demystifying the SharePoint “apps” model for the strategic or business-focused user. In case you are not aware, Microsoft have gotten a serious case of “app fever” in recent times, introducing the terminology not only into SharePoint, but Office as well. While there are very good reasons for this happening, Microsoft have used the “app” terminology in multiple ways, making their message rather confusing. As a result, Microsoft have not communicated their intent particularly well and customers often fail to understand why they make the changes that they do.

Things are definitely getting better, but I nevertheless see a lot of confusion around the topic. So in part 1 of this series, I explained the reasons Microsoft have adopted the strategy that they have done. To recap, they are trying to respond to five major disruptive forces that challenge their market position:

  • Changing perceptions to cloud technologies and increased adoption; which enables…
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange in the form of Google Apps; as well as…
  • An increasing number of smaller cloud-based “point solution” players who chip away at SharePoint features with cheaper and easier to use offerings; while suffering from…
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace; while dealing with…
  • Customers unable to handle the ever increasing complexity of SharePoint, leading to delaying upgrades for years

Microsoft’s answer to this has been to go all-in with cloud, as this is the only way to beat the cloud providers at their own game, while reducing the complexity burden on their customers. This of course is in the form of Office365, OneDrive and an ever-increasing set of cloud-oriented tools like Delve and Project Online.

But in SharePoint land, this has turned traditional development upside down. More than a decade of customisation “best practices” are no longer best – in fact they are no longer usable in many circumstances. The main reason is that the most common method of customisation normally applied to SharePoint (full-trust server-side code) is not permitted in the cloud. Microsoft couldn’t risk untrusted, 3rd party custom code on their servers. What happens if one client’s dodgy code affects everybody else sharing the service? This would threaten performance, uptime and Microsoft’s ability to upgrade their service over time.

So things had to change. Microsoft’s small army of product architects commandeered a whiteboard and started architecting innovative solutions to deal with these challenges, and the apps model is the result. So let’s examine some core bits of the apps model by channelling a much loved children’s TV show.

There is a bear in there…

Now at this point I have to warn the developers or tech people reading this post. I am going to give a simplified version of the apps model intended for a decision-making audience. I will omit many details I don’t deem necessary to make my key points. You have been warned…

image

Any parent of small children in most countries might be familiar with Playschool – a show for toddlers that has been around for eons. It is well known for its theme song starting with the line “There is a bear in there…and a chair as well”. When trying to come up with a suitable way to explain the SharePoint apps model, using Playschool as a metaphor turned out to work brilliantly. You see in each episode of Playschool, there was a segment where viewers were taken through the “magic window” to faraway lands. In the show, the presenter would pick one of the three windows and we would zoom into it, resulting in a transition to another segment. In our case, we have to pick the square window for two reasons. Firstly, a good many apps are in effect windows to somewhere else. Secondly, and much more importantly, it perfectly matches the new Microsoft corporate logo. Perfect metaphor or what eh? 🙂

Like the Playschool magic window, browsers have a similar capability to enable you to visit strange and magical lands… Not only is there a bear in there and a chair as well, but there are plenty of other things like YouTube videos and Yammer discussions. I have drawn this conceptually below. Note the black window in the SharePoint team site on the left, which can be filled with YouTube or Yammer.

image

You have no doubt visited web sites that have embedded content like YouTube videos or SlideShare slides in them (this blog site has lots of embedded ads that make me no money!). Essentially, it is possible for browsers to include content from different sites together into a single “page” experience. Users see it all as one page, even though content can come from all over the place. This is really useful, because it means you can leverage the capability of other sites to enhance the functionality of your own sites.

This, my friends, is one of the core tenets of the current SharePoint 2013 apps model. Instead of running on the SharePoint server, many apps now run separately from SharePoint, embedded in SharePoint pages so that they look like they are part of SharePoint. In the example mock-up below, we have a SharePoint team site. In it, we have a remote web site that displays some pretty dashboard data. By loading that remote content into our magical square window, it now appears to be part of SharePoint.

image       image

Going back to Microsoft’s core pain points, this helps things a lot. For a start, it means no custom code has to be installed onto the SharePoint server. Instead, SharePoint simply embeds the external content on the page. In the leftmost image above, you can see the SharePoint server (labelled as “Your server”), rendering a page with a placeholder in it. It then retrieves content from a remote server (labelled “my server”) and displays it in the placeholder to render the complete page (the rightmost image above).

So what feat of Microsoft innovation and general awesomeness enabled this to happen?

Everybody meet “Mr IFrame”.

Inline Frames (IFrames) are windows cut into your webpage that allow your visitor to view content on another site without reloading the entire page. The concept was first implemented in Microsoft Internet Explorer way back in 1997. Yep – you heard right… 1997. So IFrames are not a new concept at all – in fact it’s positively ancient when you count time in internet years. For this reason, when developers find this out, their reaction is usually something like this…

image

But there is more than meets the eye…

Now if the apps model was just IFrames alone, then you might wonder what the big deal is with apps. In fact, IFrames have been used this way in SharePoint for years via the Page Viewer Web Part. For years, companies with SharePoint deployments have embedded stuff like Twitter, YouTube or Facebook widgets via iframes.
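For the curious, the underlying mechanics really are that simple. The sketch below is a bare-bones illustration of the “magic window” idea: dropping remote content into a page with an iframe created from script. The container id and URL are just placeholders I made up for the example.

```typescript
// A bare-bones illustration of embedding remote content in a page via an iframe.
// The container id and the app URL are placeholders, not real endpoints.

function embedRemoteApp(containerId: string, appUrl: string): void {
  const frame = document.createElement("iframe");
  frame.src = appUrl;               // content served by "my server"
  frame.width = "100%";
  frame.height = "400";
  frame.style.border = "none";      // make it look like part of the page

  // The page itself comes from "your server"; the frame's content does not.
  document.getElementById(containerId)?.appendChild(frame);
}

embedRemoteApp("dashboard-placeholder", "https://apps.example.com/dashboard");
```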

So of course, there is more to it…

Let’s revisit the “your server” and “my server” diagram used above and consider a question: what if these remote applications displayed inside an iframe could interact with SharePoint? In other words, what if the remote application running on my remote server were able to connect to your SharePoint server and read/write data? In the diagram below I have illustrated the idea. The top half of the diagram represents a SharePoint server that could be on-premises or an Office365 tenant. On the left is a Products list that is somewhere inside this SharePoint server. At the bottom is my application running on my server that creates a pretty dashboard. What if my remote application queried the SharePoint Products list to create the dashboard? Now we have an application that, while not running inside SharePoint, can nevertheless utilise live data from SharePoint to create a seamless experience for users.

image
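If you want a rough idea of what that interaction might look like in code, here is a hedged sketch of the dashboard scenario: the remote app reads the Products list over SharePoint’s REST interface and boils it down to the numbers a dashboard needs. The list and field names are made up for the example, and I am assuming the app has already obtained an access token through the apps model’s authorisation machinery.

```typescript
// A hedged sketch of the dashboard idea: the remote app (on "my server") reads
// the Products list from SharePoint and aggregates it for a dashboard.
// The list name, field names and token handling are illustrative assumptions.

interface Product {
  Title: string;
  UnitsSold: number;
}

async function buildDashboardData(sharePointUrl: string, accessToken: string) {
  const response = await fetch(
    `${sharePointUrl}/_api/web/lists/getbytitle('Products')/items?$select=Title,UnitsSold`,
    {
      headers: {
        Accept: "application/json;odata=verbose",
        Authorization: `Bearer ${accessToken}`,
      },
    }
  );

  const products: Product[] = (await response.json()).d.results;

  // None of this code runs on the SharePoint server itself; the remote app can
  // chart or present these numbers however it likes.
  return {
    productCount: products.length,
    totalUnitsSold: products.reduce((sum, p) => sum + p.UnitsSold, 0),
  };
}
```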

If we now add 3 iframes to a page, the implications should start to become clearer. We can build hybrid solutions leveraging the best of what SharePoint can do, alongside the best of what other platforms can do. To the user, these are still SharePoint sites, but the reality is that we are now viewing a page that has been delivered by various different platforms. Each can interact with SharePoint data in different ways to deliver a seamless experience. Because these remote apps are not SharePoint at all, developers can write any application they want to, using the platform and tools of their choice. But to the user it is still a SharePoint page… neat huh? I’m sure the Microsoft product team thought that this was a brilliant conceptual masterpiece when they dreamt it up.

image

A beautiful model…

I don’t know if you have ever watched developers come up with APIs, but there tends to be a lot of excitement around a whiteboard as they revel in the glory of their elegant solution designs. So let’s quickly re-examine the benefits of this remotely hosted app approach from Microsoft’s perspective and see how we are going so far…

image

First and foremost, we now have a SharePoint customisation approach that can be fully supported in Office365. Microsoft don’t have to put code on their online servers, yet can still support extensibility. Now they are much more evenly matched with Google, while at the same time reducing the tech support costs of SharePoint because they have isolated 3rd party code from SharePoint. If any problems are encountered with a remote app, SharePoint will keep humming along and Microsoft can now legitimately tell clients “no, really, it is not SharePoint causing your issue – go see your friendly neighbourhood app developer”.

More importantly, apps can also be used in on-premises SharePoint deployments, meaning both Microsoft and their customers now have pristine SharePoint servers free of the muck and clutter of 3rd party code. Therefore service packs and cumulative updates should no longer strike fear into admins. Microsoft also now nails Google’s ass because Google has no real concept of on-premises in the way Microsoft does. Thus, when hybrid scenarios come up in conversation, Microsoft has a much stronger story to tell.

But there is a more important implication than all of that. Microsoft can now do the app store thing. Vendors can maintain cloud based services that can be embedded and consumed by on-premises and online SharePoint installs. This means 3rd parties can tap into the customer ecosystem with a captive marketplace and customers can browse the store to examine what options are out there to extend SharePoint functionality. In theory, this should enable hundreds of vendors to do some slight modifications to their existing web based applications and incorporate them into the SharePoint ecosystem.

image

But reality is not what’s on the whiteboard…

At this point, I hope I have painted a pretty good picture of the advantages offered by this new paradigm, and you can probably appreciate the Microsoft nerds completely falling in love with this conceptual model of future SharePoint customisations. The Microsoft strategy dudes probably loved it too because it elegantly dealt with all of the challenges they were seeing. Unfortunately though, as with most conceptual models, reality is a very different beast from the convenient fiction of models.

So in the next post, we are going to dig a little deeper. For example, how can a remote app even have permissions to talk to SharePoint in the first place? Do you really want code running in some untrusted 3rd party server to be fiddling with data in your SharePoint lists and libraries? How does that even work anyway in an on-premises scenario when a cloud hosted app has to access data behind your firewall?

Fear not though – the Microsoft guys thought of this (and more) when they were drawing their apps model concept on the big whiteboard. So in the next post, we are going to look at what it takes to bring this conceptual masterpiece into reality.

 

Thanks for reading

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 1)

This entry is part 1 of 3 in the series Apps Model

Hiya

I’ve been meaning to write about the topic of the apps model of SharePoint 2013 for a while now, because it is a topic I am both fascinated and slightly repulsed by. While lots of really excellent material is out there on the apps model (not to mention a few good rants by the usual suspects as well), it is understandably written by developers tasked with making sense of it all, so they can put the model into practice delivering solutions for their clients or organisations.

I spent considerable time reading and researching the various articles and videos on this topic produced by Microsoft and the broader SharePoint community and made a large IBIS map of it. As I slowly got my head around it all, subtle but significant implications began to emerge for me. The more I got to know this topic, the more I realised that the opportunities and risks of the apps model hold many important insights and lessons for how organisations should be strategically thinking about SharePoint. In other words, it is not so much about the apps model itself, but what the apps model represents for organisations who have invested in the SharePoint platform.

So these posts are squarely aimed at the business camp. Therefore I am going to skip all sorts of things that I don’t deem necessary to make my points. Developers in particular may find it frustrating that I gloss over certain things, give not quite technically correct explanations and focus on seemingly trivial matters. But like I said, you are not my audience anyway.

So let’s see if we can work out what motivated Microsoft to head in this direction and make such a significant change. As always, context is everything…

As it once was…

I want you to picture Microsoft in 2011. SharePoint 2010 has come out to positive reviews and well and truly cemented itself in the market. It adorns the right place in multiple Gartner magic quadrants, demand for talent is outstripping supply and many organisations are busy embarking on costly projects to migrate from their legacy SharePoint 2007 and 2003 deployments, on the basis that this version has fixed all the problems and that they will definitely get it right this time around. As a result, SharePoint is selling like hotcakes and is about to crack the 2 billion dollar revenue barrier for Microsoft. Consultants are also doing well at this time, since someone has to help organisations get all of that upgrade work done. Life is good… allegedly.

But even before the release of SharePoint 2010, winds of change were starting to blow. In fact, way back in 2008, at my first ever talk at a SharePoint conference, I showed the Microsoft pie chart of buzzwords and asked the crowd what other buzzwords were missing. The answer that I anticipated and received was of course “cloud”, which was good because I had created a new version of the pie and invited Microsoft to license it from me. Unfortunately no-one called.

image

Winds of change…

While my cloud picture was aimed at a few cheap laughs at the time, it holds an important lesson. Early in the release cycle of SharePoint 2007, cloud was already beginning to influence people’s thinking. Quickly, services traditionally hosted within organisations began to appear online, requiring a swipe of the credit card each month from the opex budget, which made CFOs happy. A good example is Dropbox, which was founded in 2008 and by 2010 had won over the hearts and minds of many people who were using FTP. Point solutions such as Salesforce appeared, which further demonstrated how the competitive landscape was starting to heat up. These smaller, more nimble organisations were competing successfully on the basis of simplicity and focus on doing one thing well, while taking implementation complexity away.

Now while these developments were on Microsoft’s radar, there was really only one company that seriously scared them. That was Google via their Google Docs product. Here was a company just as big and powerful as Microsoft, playing in Microsoft’s patch using Microsoft’s own approach of chasing the enterprise by bundling products and services together. This emerged as a credible alternative to not only SharePoint, but to Office and Exchange as well.

Some of you might be thinking that Apple was just as big a threat to Microsoft as Google. But Microsoft viewed Apple through the eyes of envy, as opposed to a straight-out threat. Apple created new markets that Microsoft wanted a piece of, but Apple’s threat to their core enterprise offerings remained limited. Nevertheless, Microsoft’s strong case of crimson green Apple envy did have a strategic element. Firstly, someone at Microsoft read the disruptive innovation book and decided that disruptive was obviously cool. Secondly, they saw the power of the app store and how quickly it enabled a developer ecosystem and community to emerge, which created barriers for the competition wanting to enter the market later.

Meanwhile, deeper in the bowels of Microsoft, two parallel problems were emerging. Firstly, it was taking an eternity to work through an increasingly large backlog of tech support calls for SharePoint. Clients would call, complaining of slow performance, broken deployments after updates, unhandled exceptions and so on. More often than not though, these issues were not caused by the base SharePoint platform, but by a combination of SharePoint and custom code that leaked memory, chewed CPU or just plain broke. Troubleshooting and isolating the root cause was very difficult, which led to the second problem. Some of Microsoft’s biggest enterprise customers were postponing or not bothering with upgrades to SharePoint 2010. They deemed it too complex, costly and not worth the trouble. For others, they were simply too scared to mess with what they had.

A perfect storm of threats

image  image

So to sum up the situation, Microsoft were (and still are) dealing with five major forces:

  • Changing perceptions to cloud technologies (and the opex pricing that comes with it)
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange
  • An increasing number of smaller point solution players who chip away at SharePoint features with cheaper and easier to use offerings
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace
  • Customers unable to contend with the ever increasing complexity of SharePoint and putting off upgrades

So what would you do if you were Microsoft? What would your strategy be to thrive in this paradigm?

Now Microsoft is a big organisation, which affords it the luxury of engaging expensive management consultants, change managers and corporate coaches. Despite the fact that it doesn’t take an MBA to realise that just a couple of these factors alone combine as a threat to the future of SharePoint, lots of strategic workshops were no doubt had, with associated whiteboard diagrams, post-it notes, dodgy team building games and more than one SWOT analysis to confirm the strategic threats they faced were a clear and present danger. The conclusion drawn? Microsoft had to put cloud at the centrepiece of their strategy. After all, how else can you bring the fight to the annoying cloud upstarts and stave off the serious Google threat, all the while reducing the complexity burden on their customers?

A new weapon and new challenges…

In 2011, Microsoft debuted Office365 as the first salvo in their quest to mitigate threats and take on their competitors at their own game. It combined Exchange, Lync and SharePoint 2010, packaging them up in a per-user, per-month approach. The value proposition for some of Microsoft’s customers was pretty compelling: up-front capital costs reduced significantly, they could now get the benefits of better scalability and bigger limits on things like mailboxes, and procurement and deployment were pretty easy compared to doing it in-house. Given the heritage of SharePoint, Exchange and Lync, Microsoft was suddenly competitive enough to put Google firmly on the back foot. My own business dumped Gmail and took up Office365 at this time, and has used it since.

Of course, there were many problems that had to be solved. Microsoft was now switching from a software provider to a service provider, which necessitated new thinking, processes, skills and infrastructure internally. But outside of Microsoft there were bigger implications. The change from software provider to service provider did not go down well with many Microsoft partners who performed that role prior. It also freaked out a lot of sysadmins who suddenly realised their job of maintaining servers was changing. (Many are still in denial to this day.) But more importantly, there was a big implication for development and customisation of SharePoint. This all happened mid-way through the life-cycle of SharePoint 2010, and that version was not fully architected for cloud. First and foremost, Microsoft were not about to transfer the problem of dodgy 3rd party customisations onto their servers. Recall that they were getting heaps of support calls that were not core SharePoint issues, but caused by custom code written by 3rd parties. Imagine that in a cloud scenario where multiple clients share the same servers. This means that one client’s dodgy code could have a detrimental effect on everybody else, affecting SLAs, while not solving the core problem Microsoft were having of wearing the cost and blame via tech support calls for problems not of their doing.

So with Office365, Microsoft had little choice but to disallow the dominant approach to SharePoint customisation that had been used for a long time. Developers could no longer use the methods they had come to know and love if a client wanted to use Office 365. This meant that the consultancies who employed them would have to change their approach too, and customers would have to adjust their expectations as well. Office365 was now a much more restricted environment than the freedom of on-premises.

Is it any wonder then, that one of Microsoft’s big focus areas for SharePoint 2013 was to come up with a way to redress the balance? They needed a customisation model that allowed a consistent approach, whether you chose to keep SharePoint on-premises or move off to the cloudy world of Office 365. As you will see in the next post, that is not a simple challenge. The magnitude of change required was going to be significant and some pain was going to have to happen all around.

Coming next…

So with that background set, I will spend time in the next post explaining the apps model in non-technical terms and how it helps to address all of the above issues. This is actually quite a challenge, but with the right dodgy metaphors, it’s definitely possible. 🙂 Then we will take a more critical viewpoint of the apps model, and finally, see what this whole saga tells us about how we should be thinking about SharePoint in the future…

thanks for reading

Paul Culmsee


Help me visualise the pros and cons of hybrid SharePoint 2013…


Like it or not, there is a tectonic shift going on in the IT industry right now, driven primarily by the availability of a huge variety of services hosted in the cloud. Over the last few years, organisations have increasingly been procuring services that are not hosted locally, much to the chagrin of many a server-hugging IT guy who, understandably, sees various risks with entrusting your fate to someone else.

We all know that Microsoft had a big focus on trying to reach feature parity between on-premises SharePoint 2013 and Office365. In other words, with cloud computing as a centrepiece of their strategy, Microsoft’s SharePoint 2013 aim was for stuff that works on-premises but also works on Office 365 without too much modification. While SharePoint 2013 made significant inroads into meeting this goal (apps model developers might beg to differ), the big theme to really emerge was that feature parity was a relatively small part of the puzzle. What has happened since the release of SharePoint 2013 is that many organisations are much more interested in hybrid scenarios. That is, utilising on-premises SharePoint along with cloud-hosted SharePoint and its associated capabilities like OneDrive and Office Web Applications.

So while it is great that SharePoint online can do the same things as on-prem, it all amounts to naught if they cannot integrate well together. Without decent integration, we are left with a lot of manual work to maintain what is effectively two separate SharePoint farms and we all know what excessive manual maintenance brings over time…

Microsoft to their credit have been quick to recognise that hybrid is where the real action is at, and have been addressing this emerging need with a ton of published material, as well as adding new hybrid functionality with service packs and related updates. But if you have read the material, you can attest that there is a lot of it and it spans many topic areas (authentication alone is a complex area in itself). In fact, the sheer volume and pace of material released by Microsoft show that hybrid is a huge and very complex topic, which begs a really critical question…

Where are we now at with hybrid? Is it a solid enough value proposition for organisations?

This is a question that a) I might be able to help you answer and b) you can probably help me answer…

Visualising complex topics…

A few months back, I started issue mapping all of the material I could get my hands on related to hybrid SharePoint deployments. If you are not aware, Issue mapping is a way of visualising rationale and I find it a brilliant personal learning tool. It allows me to read complex articles and boil them down to the core questions, answers, pros and cons of the various topics. The maps are easy to read for others, and they allow me to make my critical thinking visible. As a result, clients also like these maps because they provide a single integrated place where they can explore topics in an engaging, visual way, instead of working their way through complex whitepapers.

If you wish to jump straight in and have a look around, click here to access my map on Hybrid SharePoint 2013 deployments. You will need to sign in using a Facebook or Gmail ID to do so. But be sure to come back and read the rest of this post, as I need your help…

But for the rest of you, if you are wondering what my hybrid SharePoint map looks like without jumping straight in, check out the screenshots below. The tool I am using is called Glyma (pronounced “glimmer”), which allows these maps to be developed and consumed using SharePoint itself. First up, we have a very simple map, showing the topic we are discussing.

image

If you click the plus sign next to the “Hybrid SharePoint deployments” idea node, we can see that I mapped all of the various hybrid pros and cons I have come across in my readings and discussions. Given that hybrid SharePoint is a complex topic, there are lots of pros and cons as shown in the partial image below…

image

Many of the pros and cons can be expanded further, which delves deeper into the topics. A single click expands one node level, and a double click expands the entire branch. To illustrate, consider the image below. One of the cons is around the many search-related caveats with hybrid that can easily trip people up. I have expanded the con node and the sub-question below it. Also notice that one of the idea nodes has an attachment icon. I will get to that in a moment…

image

As I mentioned above, one of the idea nodes, titled “SPO search sometimes has delays on how long it takes for new content to appear in the index”, has an attachment icon as well as more nodes below it. Let’s click that attachment icon and expand that node. It turns out that I picked this up when I read Chris O’Brien’s excellent article, and so I have embedded his original article in that node. Now you can read the full detail of his article for yourself, as well as understand how his article fits into a broader context.

image

It is not just written content either. If I move further up the map, you will see some nodes have videos tagged to them. When Microsoft released the videos from 2014’s Vegas conference, I found all sorts of interesting nuggets of information that were not in the whitepapers. Below is an example of how I tagged one of the Vegas videos to one of my nodes.

image

 

A call to action…

SharePoint hybrid is a very complex topic and right now, has a lot of material scattered around the place. This map allows people, both technical and non technical, to grasp the issue in a more strategic, bigger picture way, while still providing the necessary detail to aid implementation.

I continually update this map as I learn more about this topic from various sources, and that is where you come in. If you have had to work around a curly issue, or if you have had a massive win with a hybrid deployment, get in touch and let me know about it. It can be a reference to an article, a Skype conversation or anything. The Glyma system can accommodate many sources of information.

More importantly, would you like to help me curate the map on this topic? After all, things move fast and the SharePoint community rarely stands still. So if you are up to speed on this topic or have expertise to share, get in touch with me. I can give you access to this map to help with its ongoing development. With the right meeting of the minds, this map could turn into an incredibly valuable information resource for a great many people.

So get in touch if you want to put your expertise out there…

 

Thanks for reading

 

 

Paul Culmsee


Rewriting the knowledge management rulebook… The story of “Glyma” for SharePoint


“If Jeff ever leaves…”

I’m sure you have experienced the “Oh crap” feeling where you have a problem and Jeff is on vacation or unavailable. Jeff happens to be one of those people who’s worked at your organisation for years and has developed such a deep working knowledge of things, it seems like he has a sixth sense about everything that goes on. As a result, Jeff is one of the informal organisational “go to guys” – the calming influence amongst all the chaos. An oft cited refrain among staff is “If Jeff ever leaves, we are in trouble.”

In Microsoft’s case, this scenario is quite close to home. Jeff Teper, who has been an instrumental part of SharePoint’s evolution, is moving to another area of Microsoft, leaving SharePoint behind. The implications of this are significant enough that I can literally hear Bjorn Furuknap’s howls of protest all the way from here in Perth.

So, what is Microsoft to do?

Enter the discipline of knowledge management to save the day. We have SharePoint, and with all of that metadata and search, we can ask Jeff to write down his knowledge “to get it out of his head.” After all, if we can capture this knowledge, we can then churn out an entire legion of Jeffs and Microsoft’s continued SharePoint success is assured, right?

Right???

There is only one slight problem with this incredibly common scenario that often underpins a SharePoint business case… the entire premise of “getting it out of your head” is seriously flawed. As such, knowledge management initiatives have never really lived up to expectations. While I will save a detailed explanation as to why this is so for another post, let me just say that Nonaka’s SECI model has a lot to answer for as it is based on a misinterpretation of what tacit knowledge is all about.

Tacit knowledge is expert knowledge that is often associated with intuition and cannot be transferred to others by writing it down. It is the “spider senses” that experts often seem to have when they look at a problem and see things that others do not. Little patterns, subtleties or anomalies that are invisible to the untrained eye. Accordingly, it is precisely this form of knowledge that is of the most value in organisations, yet is the hardest to codify and most vulnerable to knowledge drain. If tacit knowledge could truly be captured and codified in writing, then every project manager who has ever studied PMBOK would have flawless projects, because the body of knowledge is supposed to be all the codified wisdom of many project managers and the projects they have delivered. There would also be no need for Agile coaches, Microsoft’s SharePoint documentation should result in flawless SharePoint projects and reading Wictor’s blog would make you a SAML claims guru.

The truth of tacit knowledge is this: you cannot transfer it, you can only acquire it. This is otherwise known as the journey of learning!

Accountants are presently scratching their heads trying to figure out how to measure tacit knowledge. They call it intellectual capital, and the reason it is important to them is that most of the value of organisations today is classified on the books as “intangibles”. According to the book Balanced Scorecard, a company’s physical assets accounted for 62% of its market value in 1982, 38% of its market value in 1992 and only 21% in 2003. This is in part a result of the global shift toward knowledge economies and the resulting rise in the value of intellectual capital. Intellectual capital is the sum total of the skills, knowledge and experience of staff and is critical to sustaining competitiveness, performance and ultimately shareholder value. Organisations must therefore not only protect, but extract maximum value from their intellectual capital.

image

Now consider this. We are in an era where baby boomers are retiring, taking all of their hard-earned knowledge with them. This is often referred to as “the knowledge tsunami”, “the organisational brain drain” and the more nerdy “human capital flight”. The issue of human capital flight is a major risk area for organisations. Not only is the exodus of baby boomers an issue, but there are challenges around recruitment and retention of a younger, technologically savvy and mobile workforce with a different set of values and expectations. One of the most pressing management problems of the coming years is the question of how organisations can transfer the critical expertise and experience of their employees before that knowledge walks out the door.

The failed solutions…

After the knowledge management fad of the late 1990’s, a lot of organisations did come to realise that asking experts to “write it down” only worked in limited situations. As broadband came along, enabling the rise of rich media services like YouTube, a digital storytelling movement arose in the early 2000’s. Digital storytelling is the process by which people share stories and reflections while being captured on video.

Unfortunately though, digital storytelling had its own issues. Users were not prepared to sit through hours of footage of an expert explaining their craft or reflecting on a project. To address this, the material was commonly edited down into much smaller mini-documentaries lasting a few minutes – often by media production companies, so the background music was always nice and inoffensive. But this approach also commonly failed. One reason for the failure was well put by David Snowden when he said “Insight cannot be compressed”. While there was value in the edited videos, much of the rich value within them was lost. After all, how can one judge ahead of time what someone else will find insightful? The other problem with this approach was that people tended not to use the videos. There was no easy way for users to find out they existed, let alone watch them.

Our Aha moment

In 2007, my colleagues and I started using a sensemaking approach called Dialogue Mapping in Perth. Since that time, we have performed dialogue mapping across a wide range of public and private sector organisations in areas such as urban planning, strategic planning, process reengineering, organisational redesign and team alignment. If you have read my blog, you would be familiar with dialogue mapping, but just in case you are not, it looks like this…

Dialogue Mapping has proven to be very popular with clients because of its ability to make knowledge more explicit to participants. This increases the chances of collective breakthroughs in understanding. During one dialogue mapping session a few years back, a soon-to-retire, long-serving employee recalled a project from thirty years prior that he realised was relevant to the problem being discussed. This same employee was spending a considerable amount of time writing procedure manuals to capture his knowledge. No mention of this old project was made in the manuals he spent so much time writing, because there was nothing to give it context when he was writing them. In fact, if he had not been in the room at the time, the relevance of this obscure project would never have been known to the other participants.

My immediate thought at the time when mapping this participant was “There is no way that he has written down what he just said”. My next thought was “Someone ought to give him a beer and film him talking. I can then map the video…”

This idea stuck with me and I told this story to my colleagues later that day. We concluded that asking our retiring expert to write his “memoirs” was not the best use of his limited time. The dialogue mapping session illustrated plainly that much valuable knowledge was not being captured in the manuals. As a result, we seriously started to consider the value of filming this employee discussing his reflections on all of the projects he had worked on, as per the digital storytelling approach. However, rather than create “mini documentaries”, we would utilise the entire footage and visually map the rationale using Dialogue Mapping techniques. In this scenario, the map serves as a navigation mechanism and the full video content is retained. By clicking on a particular node in the map, the video plays from the time that particular point was made. We drew a mock-up of the idea, which looked like the picture below.

image
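
To make the navigation idea concrete, here is a minimal sketch in TypeScript of how a map node could carry a pointer back into the footage. The type and property names are invented for this illustration and are not the actual Glyma data model; the point is simply that each node stores an offset into the full video, so clicking it plays the recording from the moment that point was made.

    // Purely illustrative sketch: names are invented and are not Glyma's schema.
    type NodeType = "Question" | "Idea" | "Pro" | "Con";

    interface MapNode {
      id: string;
      type: NodeType;
      label: string;                // the point captured on the map
      videoUrl: string;             // the full, unedited footage
      videoOffsetSeconds: number;   // when this point was made in the recording
    }

    // Clicking a node seeks the player to the moment the point was raised,
    // so the full footage is preserved but becomes navigable.
    function playFromNode(player: HTMLVideoElement, node: MapNode): void {
      player.src = node.videoUrl;
      player.addEventListener(
        "loadedmetadata",
        () => {
          player.currentTime = node.videoOffsetSeconds;
          void player.play();
        },
        { once: true }
      );
    }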

While thinking the idea would be original and cool to do, we also saw several strategic advantages to this approach…

  • It allows the user to quickly find the key points in the conversation that are of value to them, while presenting the entire rationale of the discussion at a glance.
  • It significantly reduces the codification burden on the person or group with the knowledge. They are not forced to put their thoughts into writing, which enables more effective use of their time.
  • The map and video content can be linked to the in-built search and content aggregation features of SharePoint.
    • Users can enter a search from their intranet home page and retrieve not only traditional content such as documents, but now will also be able to review stories, reflections and anecdotes from past and present experts.
  • The dialogue mapping notation, when stored in a database, also lends itself to more advanced forms of queries (a rough sketch of such a query follows this list). Consider the following examples:
    • “I would like any ideas from lessons learnt discussions in the Calgary area”
    • “What pros or cons have been made about this particular building material?”
  • The applicability of the approach is wide.
    • Any knowledge-related industry could take advantage of it easily because it fits into existing information systems like SharePoint, rather than creating an additional information silo.
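
As promised above, here is a rough sketch of what such a query could look like, reusing the illustrative MapNode shape from the earlier sketch. In practice the nodes would live in a database or SharePoint list and be surfaced through search, but the shape of the question is the same: filter by node type and topic.

    // Illustrative only: builds on the invented MapNode type from the earlier
    // sketch. Once every node records its type ("Pro", "Con", "Idea" etc.) and
    // a link back to its source, "what pros or cons have been raised about X?"
    // becomes a simple query.
    function findProsAndCons(nodes: MapNode[], topic: string): MapNode[] {
      const needle = topic.toLowerCase();
      return nodes.filter(
        (node) =>
          (node.type === "Pro" || node.type === "Con") &&
          node.label.toLowerCase().includes(needle)
      );
    }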

This was the moment the vision for Glyma (pronounced “glimmer”) was born…

Enter Glyma…

Glyma (pronounced “glimmer”) is a software platform for thought leaders, knowledge workers, organisations and other knowledge economy participants to capture and trade their knowledge in a way that reduces effort but preserves rich context. It achieves this by providing a new way for users to visually capture and link their ideas with rich media such as video, documents and web sites. As Glyma is a very visually oriented environment, it’s easier to show Glyma than to talk about it.

image

image

What you’re looking at in the first image above are the concepts and knowledge that were captured from a TED talk on education augmented with additional information from Wikipedia. The second is a map that brings together the rationale from a number of SPC14 Vegas videos on the topic of Hybrid SharePoint deployments.

Glyma brings together different types of media, like geographical maps, video, audio, documents etc. and then “glues” them together by visualising the common concepts they exemplify. The idea is to reduce the burden on the expert for codifying their knowledge, while at the same time improving the opportunity for insight for those who are learning. Glyma is all about understanding context, gaining a deeper understanding of issues, and asking the right questions.

We see that depending on your focus area, Glyma offers multiple benefits.

For individuals…

As knowledge workers, our task is to gather and learn information, sift through it all, and connect the dots between the relevant pieces. We create our knowledge by weaving together all this information. This takes place through reading articles, explaining things on napkins, diagramming on whiteboards and so on. But no one observes us reading, the napkins get thrown away, and the whiteboards are wiped clean for re-use. Our journey is too “disposable”; people only care about the “output” – that is, until someone needs to understand our “quilt of information”.

Glyma provides end users with an environment to catalogue this journey. The techniques it incorporates help knowledge workers with learning and “connecting the dots” – or, as we know it, synthesising. Not only does it help us with these two critical tasks, it then provides a way for us to get recognition for that work.

For teams…

Think of the scenario I started this post with; we have all been on the giving and receiving end of it. That call to Jeff, who has gone on holiday for a month prior to starting his promotion, because you need to know the background to an issue that has arisen on your watch. Whether you were the person under pressure at the office thinking, “Jeff has left me nothing of use!”, or you are Jeff trying to enjoy your new promotion thinking, “Why do they keep on calling me?”, it’s an uncomfortable situation for all involved.

Because Glyma provides a medium and techniques that aid and enhance the learning journey, it can act as the project memory long after the project has completed and the team members have moved on to their next challenge. The context and the lessons it captures can then be searched and used both as a historical record of what happened and, more importantly, as a tool for improving future projects.

For organisations…

As I said earlier, intangible assets now dominate the balance sheets of many organisations. Where in the past we might have valued companies based on how many widgets they sold and how much they held in inventory, nowadays intellectual capital is the key driver of value. Like any asset, organisations need to extract maximum value from intellectual capital and, in doing so, avoid repeat mistakes, foster innovation and continue to grow. Charles G. Sieloff summed this up well in the title of his paper, “If only HP knew what HP knows”.

As Glyma aids, enhances and captures an individual’s learning journey, that journey can now be shared with others. With Glyma, learning is no longer a silo; it becomes a shared journey. Not only does it do this for individuals, it extends to group work so that the dynamics of a group’s learning are also captured. Continuous improvement of organisational processes and procedures then becomes possible with this captured knowledge. With Glyma, your knowledge assets become tangible.

Lemme see it!

So, if you have read this far, I assume you would like to take a look. As luck would have it, we put up a public Glyma site the other day that contains some of my own personal maps. The maps on the SP2013 app model and hybrid SP2013 deployments in particular represent my own learning journey, so they should help you if you want a synthesis of all the pros and cons of these issues. Be sure to check the videos in the getting started area of the site, and check out the sample maps!

glymasite

I hope you like what you see. I have a ton of maps to add to this site, and very soon we will be inviting others to curate their own maps. We are also running a closed beta, so if you want to see this in your organisation, go to the site and then register your interest.

All in all, I am super proud of my colleagues at Seven Sigma for being able to deliver on this vision. I hope that this becomes a valuable knowledge resource for the SharePoint community and that you all like it. I look forward to seeing how history judges this… we think Glyma is innovative, but we are biased! 🙂

 

Thanks for reading…

Paul Culmsee

www.glyma.co

www.hereticsguidebooks.com


“Assumption is the mother of all f**k ups”


My business partner, Chris Tomich, is the John Deacon of Seven Sigma.

In case you do not know who John Deacon is, he is the bass player from Queen who usually said very little publicly and didn’t write that many songs (and by songs I mean blog posts). But when Deacon finally did get around to writing a song, it tended to be big – think Another One Bites the Dust, I Want To Break Free and You’re My Best Friend.

Chris is like that, which is a pity for the SharePoint community, because he is an outstanding SharePoint architect, software engineer and one of the best Dialogue Mappers on the planet. If he had the time to write about his learning and insights, the community would have a very valuable resource. So this is why I am pleased that he has started writing what will be a series of articles on how he utilises Dialogue Mapping in practice, which is guaranteed to be much less verbose than my own hyperbole but probably much more useful to many readers. The title of my post here is a direct quote from his first article, so do yourself a favour and read it if you want a different perspective on sense-making.

The article is called From Analyst to Sense-maker and can be found here:

http://mymemorysucks.wordpress.com/2014/01/07/from-analyst-to-sense-maker/#!

Thanks for reading

 

Paul Culmsee

HGBP_Cover-236x300.jpg

www.hereticsguidebooks.com

p.s. Now all I need to do is get my other business partner, the mild-mannered intellectual juggernaut known as Peter (Yoda) Chow, to start writing 🙂


Trials or tribulation? Inside SharePoint 2013 workflows–conclusion and reflections

This entry is part 13 of 13 in the series Workflow

Hi all

In case you have not been paying attention, I’ve churned out a large series of posts – twelve in all – on the topic of SharePoint Designer 2013 workflows. The premise of the series was to answer a couple of questions:

1.  Is there enough workflow functionality in SharePoint 2013 to avoid having to jump straight to 3rd party tools?

2. Is there enough workflow functionality to enable and empower citizen developers to create lightweight solutions to solve organisational problems?

To answer these questions, I took a relatively simple real world scenario to illustrate what the journey looks like. Well – sort of simple, in the sense that I deliberately chose a scenario that involved managed metadata. Because of this seemingly innocuous information architecture decision, we encountered SharePoint default settings that break stuff and crazy error messages that make no sense, learnt all about REST/OData, JSON and a dash of CAML, and mastered the Fiddler tool to make sense of it all. We learnt a few SharePoint (and non-SharePoint) web services, and played with new features like dictionaries, loops and stages. Hopefully, if you have stuck with me through this series, you now have a much better understanding of the power and the potential perils of this technology.

So where does that leave us with our questions?

In terms of the question of whether this edition enables you to avoid 3rd party tools, I think the answer is an absolute yes for SharePoint Foundation and a qualified yes for everything else. On the plus side, the new architecture certainly addresses some of the previous scalability issues, and the ability to call web services and parse the data returned opens up all sorts of really interesting possibilities. If “no custom development” solutions are your mantra (which usually really means “no managed code”), then you have at your disposal a powerful development tool. Don’t forget that I have only shown you a glimpse of what can be done. Very clever people like Fabian Williams have taken it much further than me, such as creating new SharePoint lists, creating no-code timer jobs and creating your own declarative workflows – probably the most interesting feature of all.
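
To give a feel for what “call web services and parse the data returned” actually involves, here is a hedged sketch. In the workflow itself this is done declaratively with a “Call HTTP Web Service” action whose JSON response is read into a Dictionary variable; it is shown below in TypeScript only so the moving parts – the REST endpoint, the OData accept header and the shape of the JSON – are visible in one place. The list name is just an example, not something from the series.

    // Illustrative only: the 'Processes' list is an invented example. A SharePoint
    // Designer workflow performs the same steps with a "Call HTTP Web Service"
    // action and a Dictionary variable rather than code.
    async function getProcessTitles(siteUrl: string): Promise<string[]> {
      const url =
        siteUrl + "/_api/web/lists/getbytitle('Processes')/items?$select=Title";

      const response = await fetch(url, {
        headers: { Accept: "application/json;odata=verbose" },
      });
      if (!response.ok) {
        throw new Error("Request failed: " + response.status);
      }

      const payload = await response.json();
      // The verbose OData format nests the items under d.results - the same
      // structure you end up inspecting in Fiddler when a workflow misbehaves.
      return payload.d.results.map((item: { Title: string }) => item.Title);
    }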

In a nutshell, with this version, many things that were only possible in Visual Studio now become very doable using SharePoint Designer – especially important for Office365 scenarios.

So then, why a qualified yes as opposed to an enthusiastic yes?

Because it is still all so… how do I put this…  so #$%#ing fiddly!

Fiddly is just a euphemism for complexity, and in SharePoint it manifests in the minefield of caveats and “watch out for…” advice that SharePoint consultants often have to give. It has afflicted SharePoint since the very beginning, and Microsoft are seemingly powerless to address it – their attempts to deal with complexity tend to make things even more complex. As an example, here is my initial workflow action from part 2, which assigns the process owner a task. One single, simple action that looks up the process owner based on the organisation column.

image_thumb43  image

Now the above solution never worked, of course, because managed metadata columns are not supported by the list item filtering capability of SPD workflows. Yes, we were able to work around the issue without sacrificing our information architecture, but take a look below at the price we paid in terms of complexity: from one action to dozens. Whilst I would rather have this in a workflow than in Visual Studio compiled into a WSP file, it required a working knowledge of JSON, REST/OData and CAML, plus debugging HTTP traffic via Fiddler. Not exactly the tools of your average information worker or citizen developer.

image_thumb10  image_thumb18    image_thumb22

image_thumb25  image_thumb27  image_thumb14

A lot of code above just to assign a task to someone, eh?
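
For the curious, here is a rough picture of why the workaround balloons, sketched as TypeScript rather than workflow actions. The list and column names are invented for illustration; the real workflow achieves the same effect with “Call HTTP Web Service” actions, Dictionary variables and a loop. The essence is that because a managed metadata column cannot be used in the built-in list item filter, you have to fetch candidate items and match the term label yourself.

    // Invented names, illustrative only. Because the taxonomy column can't be
    // used in the built-in list item filter, the lookup becomes: fetch items,
    // then compare the term label manually.
    async function findProcessOwner(siteUrl: string, organisationLabel: string) {
      const url =
        siteUrl +
        "/_api/web/lists/getbytitle('ProcessOwners')/items" +
        "?$select=Title,Organisation";

      const response = await fetch(url, {
        headers: { Accept: "application/json;odata=verbose" },
      });
      const items = (await response.json()).d.results;

      // A single-value managed metadata field comes back as an object, so the
      // "filter" is a manual comparison on its Label - the equivalent of the
      // loop and string checks that turn one workflow action into dozens.
      return items.find(
        (item: any) =>
          item.Organisation && item.Organisation.Label === organisationLabel
      );
    }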

Another consideration in the 3rd party vs. out of the box discussion is, of course, all of the features that the 3rd party workflow tools have. The most obvious example is a decent forms solution. Whilst InfoPath is still around, the fact that Microsoft did precisely nothing with it in SharePoint 2013 and removed support for its use in SharePoint 2013 workflows suggests that they won’t have a change of heart anytime soon.

In fact, my prediction is that Microsoft are working on their own forms solution and will be seriously bolstering workflow capability in SharePoint vNext. They will create many additional declarative workflow actions, and will probably deliver a hybrid forms solution that works in a similar way to Nintex Live Forms. Why do I think this? It’s just a hunch, based on the observation that a lot of the plumbing to do this is already there in SharePoint 2013/Workflow Manager, and that there is a serious gap in the forms story in SharePoint 2013. How else will they be able to tell a good multi-device story going forward?

But perhaps the ultimate lead indicator of the suitability of this new functionality for citizen developers is to gauge feedback from the citizen developers who took the time to work through the twelve articles I wrote. In fact, if you are a truly evil IT manager, concerned about the risk of information workers committing SharePoint atrocities, then get your potential citizen developers to read this series of articles as a way to set expectations and test their mettle. If they get through them, give them the benefit of the doubt and let them at it!

So, all you citizen developers: do you feel inspired that we were able to get around the issues, or somewhat shell-shocked by all of the conceptual baggage, caveats and workarounds? If you are in the latter camp, then maybe serious SharePoint 2013 workflow development is not for you. But then again, if you are not blessed with a large budget to invest in 3rd party tools, want to get SharePoint onto your CV and, all the while, want to help organisations escape those annoying project managers and elitist developers, at least you now know what you need to learn!

On a more serious note, if you are on a SharePoint governance, strategy or steering team (which almost by definition means you are only reading this conclusion and not the twelve articles that preceded it), then you should consider how you define value when looking at the ROI of 3rd party tools versus going out of the box for workflow. For me, if part of your intention or strategy is to build deeper knowledge of and capacity with SharePoint in your information workers and citizen developers, then I would look closely at out of the box, because it forces people to better understand how SharePoint works more broadly. But (and it’s a big but) remember that the 3rd party tools are more mature offerings. While they may mitigate the need for workflow authors to learn SharePoint’s deeper plumbing, they nevertheless produce workflows that are much simpler and more understandable than what I produced using out of the box approaches. Therefore, from a resource-based view (i.e. taking the least amount of time to develop and publish workflows), one would lean toward the third party tools.

I hope you enjoyed the series and thanks so much for reading

Paul Culmsee

HGBP_Cover-236x300.jpg

www.hereticsguidebooks.com
