How’s the weather? Using a public API with PowerApps (part 2)

This entry is part 3 of 3 in the series OpenAPI

Introduction

Hi again

This is the second half of a post that uses the OpenWeatherMap API in PowerApps. The business scenario is around performing inspections. In the first post I gave the example of a park ranger or plant operator, both conducting inspections where weather conditions can impact the level of danger or the result of the inspection. In such a scenario it makes sense to capture weather conditions when a photo is taken.

PowerApps has the ability to capture location information such as latitude and longitude, and public weather APIs generally allow you to get weather conditions for a given location. So the purpose of these posts is to show you how you can not only capture this data in PowerApps, but then send it to SharePoint in the form of metadata via Flow.

In Part 1, we got the painful stuff out of the way – that is, getting PowerApps to talk to the OpenWeather web service via a custom connector. Hopefully if you got through that part, you now have a much better understanding of the purpose of the OpenAPI specification and can see how it could be used to get PowerApps to consume many other web services. Now we are going to actually build an app that takes photos and captures weather data.

App prerequisites…

Now to get through this post, we are going to leverage a proof of concept app I built in a separate post. That app was also an inspection scenario, allowing a user to take a bunch of photos, which were then sent to SharePoint via Flow, with correctly named files. If you have not read that post, I suggest you do so now, because I am assuming you have that particular app set up and ready to go.

Go on… it's cool, I will wait for you…

Seriously now, don't come back until you can do the following: take a couple of photos in PowerApps, and see those photos appear in a SharePoint document library.


Now if you have performed the tasks in the aforementioned article, not only do you have a PowerApp that can take photos, you'll also have a connection to Flow, ready to go (yeah, the pun was intended).

First up, let's recap two key parts of the original app.

1. Photo and file name…

When the camera was clicked, a photo was taken and a file name was generated for it. The code is below:

Collect(PictureList,
{
    Photo: Camera1.Photo,
    ID: Concatenate(AuditNumber.Text, "-", Text(Today(), "[$-en-US]dd:mm:yy"), "-", Text(CountRows(PictureList)+1), ".jpg")
})

This code creates an in-memory table (a collection named PictureList) with one row for each photo taken, holding the photo itself and its generated file name.


2. Saving to Flow

The other part of the original app was saving the contents of the above collection to Flow. The Submit button has the following code…

UpdateContext( { FinalData : Concat(PictureList, ID & "|" & Photo & "#") } );
UploadPhotosToAuditLib.Run(FinalData)

The first line takes the PictureList collection above and munges it into a giant string called FinalData. Each row is delimited by a hash “#” and each column delimited by a pipe “|”. The second line then calls the Flow and passes this data to it.
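
To make this concrete, here is an illustrative example of what FinalData might look like after two photos (the audit number "0001" and the date are made up, and the photo data URIs are truncated):

0001-01:06:17-1.jpg|data:image/jpeg;base64,/9j/4AAQ…#0001-01:06:17-2.jpg|data:image/jpeg;base64,/9j/4AAQ…#

Each photo comes across as a data URI, which is why the Flow on the other end uses dataUriToBinary() to turn it back into a file.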

Both of these functions are about to change…

Getting the weather…

The first task is to get the weather. In part 1 we already created the custom connector to the service. Now it is time to use it in our app by adding it as a data source. From the View menu, choose Data Sources. You should see your existing data source that connects to Flow.


Click Add data source and then New connection. If you got everything right in part 1, you should see a data source called OpenWeather. Click on it, and you will be asked to enter an API key. You should have this key from part 1 (and you should understand exactly why you are asked for it at this point), so go ahead, add it here and click the Create button. If all goes to plan, you will now see OpenWeather added as a data source.


Now that we are connected to the API, let's test it by modifying the code that takes a photo. Instead of just capturing the photo and generating a file name, let's also grab the latitude and longitude from PowerApps, call the API and collect the current temperature.

First here is the code and then I will break it down…


UpdateContext( { Weather: OpenAPI.GetWeather(Location.Latitude, Location.Longitude, "metric") } );
Collect(PictureList,
{
    Photo: Camera1.Photo,
    ID: Concatenate(AuditNumber.Text, "-", Text(Today(), "[$-en-US]dd:mm:yy"), "-", Text(CountRows(PictureList)+1), ".jpg"),
    Latitude: Location.Latitude,
    Longitude: Location.Longitude,
    Temp: Weather.main.temp
})

 

The first line is where the weather API is called: OpenAPI.GetWeather(Location.Latitude, Location.Longitude, "metric"). The parameters Location.Latitude and Location.Longitude come straight from PowerApps. I want my temperature in Celsius, so I pass in the string "metric" as the 3rd parameter.

My API call is then wrapped into an UpdateContext() function, which enables us to save the result of the API call into a variable I have called Weather.

Now if you test the app by taking photos, you will notice a couple of things. First up, under variables, you will now see Weather listed. Drill down into it and you will see that a complex data structure is returned by the API; drilling down to Weather -> main finds a table that includes temperature.
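
For reference, the custom connector is calling the OpenWeatherMap current weather endpoint under the covers, and the service returns JSON along these lines (abridged, with illustrative values – the exact fields available to you depend on the response definition you built in part 1):

{
  "coord": { "lon": 115.86, "lat": -31.95 },
  "weather": [ { "main": "Clouds", "description": "broken clouds" } ],
  "main": { "temp": 14.5, "pressure": 1019, "humidity": 77 },
  "wind": { "speed": 5.1, "deg": 220 },
  "name": "Perth"
}

This structure is why the temperature is addressed as Weather.main.temp.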


The second line of code (actually I broke it across multiple lines for readability) is the Collect function which, as its name suggests, creates collections. A collection is essentially an in-memory data table, and the first parameter of Collect() is simply the name of the collection. In our example it is called PictureList. The second parameter is a record to insert into the collection. A record is a comma-delimited set of fields and values inside curly braces, e.g. { Title: "Hi", Description: "Greetings" }. In our example, we are building a table consisting of:

  • Photo
  • File name for Photo
  • Latitude
  • Longitude
  • Temperature

The last field is the most interesting, because we are getting the temperature from the Weather variable. As this variable is a complex data type, we have to be quite specific about the value we want, i.e. Weather.main.temp.

The PictureList collection now holds the photo, file name, latitude, longitude and temperature for each photo taken. If you have understood the above code, you should be able to extend it to grab other interesting weather details like wind speed and direction – see the sketch below.
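
As a sketch of that idea, here is how the code could be extended to also capture wind speed and direction. This assumes the OpenWeatherMap response (and your connector's response definition from part 1) includes the wind element with speed and deg properties:

UpdateContext( { Weather: OpenAPI.GetWeather(Location.Latitude, Location.Longitude, "metric") } );
Collect(PictureList,
{
    Photo: Camera1.Photo,
    ID: Concatenate(AuditNumber.Text, "-", Text(Today(), "[$-en-US]dd:mm:yy"), "-", Text(CountRows(PictureList)+1), ".jpg"),
    Latitude: Location.Latitude,
    Longitude: Location.Longitude,
    Temp: Weather.main.temp,
    WindSpeed: Weather.wind.speed,
    WindDirection: Weather.wind.deg
})

Just remember that any column you add here also needs to be appended to FinalData and handled in the Flow later on.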


Getting ready for Flow…

Okay, so now let's look at the code behind the Submit button. The change made here is to include the additional columns from PictureList in my variable called FinalData. If this is not clear then I suggest you read this post or even Mikael Svenson's work where I got the idea…


UpdateContext( { FinalData : Concat(PictureList, ID & "|" & Photo & "|" & Latitude & "|" & Longitude & "|" & Temp & "#") } );
UploadPhotosToAuditLib.Run(FinalData)

So in case it is not clear, the first line munges each row and column from PictureList into a giant string called FinalData. Each row is delimited by a hash "#" and each column is delimited by a pipe "|". The second line then calls the Flow and passes it FinalData. An illustrative row is shown below.
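
To illustrate, a single row of FinalData now looks something like this (values made up, photo data truncated):

0001-01:06:17-1.jpg|data:image/jpeg;base64,/9j/4AAQ…|-31.95|115.86|14.5#

Counting from zero, the file name is column [0], the photo [1], latitude [2], longitude [3] and temperature [4]. Keep these indexes in mind, as we will use them when we split each row apart in Flow shortly.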

At this point, save your changes to PowerApps and publish, as you are done here. Let's now make some changes to the SharePoint document library that the photos are being uploaded to: add columns for Temperature, Latitude and Longitude. I am going to assume you know enough about SharePoint to do this.


Right! Now it is time to turn our attention to Flow. The intent here is to not only upload the photos to the document library, but also to update the metadata with the location and temperature data. Sounds easy enough, right? Well, remember how I said that we got rid of most of the painful stuff in part 1?

I lied…

Going with the Flow…

Now with Flow, it is easy to die from screenshot hell, so I am going to use some brevity in this section. If you played along with my pre-requisite post, you already have a flow that more or less looks like this:

  1. A PowerApps trigger
  2. A Compose action that splits the photo data via hash: @split(triggerBody()['ProcessPhotos_Inputs'],"#")
  3. An Apply to each statement with a condition inside: @not(empty(item())) (more on why this is needed below)
  4. A Compose action that grabs the file name: @split(item(),'|')[0]
  5. A Compose action that grabs the file contents and converts them to binary: @dataUriToBinary(split(item(),'|')[1])
  6. A SharePoint Create File action that uploads the file to a document library
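
One subtlety worth spelling out in step 3: because the Concat() in PowerApps appends a "#" after every row, splitting FinalData on "#" always produces a trailing empty element. For two photos (now carrying the extra columns), the output of step 2 looks something like this (illustrative and truncated):

[ "0001-01:06:17-1.jpg|data:image/jpeg;base64,…|-31.95|115.86|14.5",
  "0001-01:06:17-2.jpg|data:image/jpeg;base64,…|-31.95|115.86|14.8",
  "" ]

That trailing empty string is exactly why the @not(empty(item())) condition is there – it stops the flow from trying to create a file out of nothing.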


Our task is to now modify this workflow to:

  1. Handle the additional data sent from PowerApps (temperature, latitude and longitude)
  2. Update SharePoint metadata columns on the uploaded photos with this new data.

As I write these lines, Flow has very poor support for doing this. It has been acknowledged on UserVoice and I know the team are working on improvements. So the method I am going to use here is essentially a hack and I actually feel a bit dirty even suggesting it. But I do so for a couple of reasons. Firstly, it helps you understand some of the deeper capabilities of Flow and secondly, I hope this real-world scenario is reviewed by the Flow team to help them understand the implications of their design decisions and priorities.

So what is the issue? Basically, the SharePoint actions in Flow have some severe limitations, namely:

  • The Create File action provides no way to update library metadata when uploading a file
  • The Create Item action provides access to metadata but only works on lists
  • The Update Item action works on document libraries, but requires the item ID of the document to do so. Since Create File does not provide it, we have no reference to the newly created file
  • The Get Items action allows you to search a list/library for content, but cannot match on file name (actually it can! I have documented a much better method here!)

So my temporary “clever” method is to:

  1. Use Create File action to upload a file
  2. Use the Get Items action to bring me back the metadata for the most recently created file in the library
  3. Grab the ID from step 2
  4. Use the Update Item action to set the metadata on the recently uploaded image.

Ugh! This method is crude, I fear what happens if a lot of flow or file activity is happening in this library, and I really hope that the next section is soon made redundant…

Okay so let’s get started. First up let’s make use of some new Flow functionality and use Scopes to make this easier. Expand the condition block and find the Compose action that extracts the file name. If you dutifully followed my pre-req article it will be called “Get File Name”. Above this, add a Scope and rename it to “Get File and Metadata”. Drag the “Get File Name” and “Get File Body” actions to it as shown below.


Now let's sort out the location and temperature data. Under "Get File Body", add a new Compose action and rename it to "Get Latitude". In the compose function add the following:

  • @split(item(),'|')[2]

Under "Get Latitude", add a new Compose action and rename it to "Get Longitude". In the compose function add the following:

  • @split(item(),'|')[3]

Under "Get Longitude", add a new Compose action and rename it to "Get Temperature". In the compose function add the following:

  • @split(item(),'|')[4]

These indexes follow the pipe-delimited column order we set in FinalData: file name [0], photo [1], latitude [2], longitude [3] and temperature [4]. You should now have three Compose actions extracting the latitude, longitude and temperature of each photo.

Now click on the Get File and Metadata scope and collapse it to hide the detail of the metadata extraction (now you know what a scope is for).


So now we have our metadata, we need to apply it to SharePoint. Under the “Create File” action, add a new scope and call it “Get Item ID”. This scope is where the crazy starts…

Inside the scope, add a SharePoint – Get Items action. Enter the URL of your site collection and the name (not the URL) of your document library. In the Order By field, type Created desc and set the Maximum Get Count to 1. Basically, this action calls a SharePoint list web service, and "Created desc" says "order the results by Created date in descending order (newest first)".
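
For the curious, this is roughly equivalent to a SharePoint REST/OData query along the lines of the following (the library name "Audit Photos" is just a placeholder for whatever yours is called):

_api/web/lists/getbytitle('Audit Photos')/items?$orderby=Created desc&$top=1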

Actually, what you should do is set the Filter Query to FileLeafRef eq '[FileName]', as described in this later post!

Now note the plural in the action name: "Get Items". By design, it assumes more than one result will be returned, which means the data comes back as an array. In JSON land this looks like the following:

[ { "Name": "Value" }, { "Name": "Value2" }, { "Name": "Value3" } ]

and so on…

Also note that there is no option in this action to choose which fields to bring back, so it will return a big, ugly JSON array from SharePoint containing lots of information.

Both of these caveats mean we now have to do some data manipulation. For a start, we have to get rid of the array, as many Flow actions cannot handle arrays as data input. Also, we are only interested in the item ID of the newly uploaded photo – all of the other stuff is just noise. So we will add 3 more flow actions that:

  1. clear out all data apart from the ID
  2. turn it from an array back to a regular JSON element
  3. extract the ID from the JSON.

For step 1, under the “Get items” action just added, add a new Data Operations – Select action. We are going to use this to select just the ID field and delete the rest. In the From textbox, choose the Value variable returned by the Get Items action. In the Map field, enter a key called “ID” and set the value to be the ID variable from the “Get Items” action.
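
To illustrate what the Select action does, assume the newest file in the library has an ID of 48. Get Items returns something like the first (heavily abridged – the real output has many more fields) array below, and Select boils it down to the second:

[ { "ID": 48, "Created": "2017-06-01T09:30:00Z", … } ]

[ { "ID": 48 } ]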


For step 2, under the “Select” action, add a Data Operations – Join action. This action allows you to munge an array into a string using a delimiter – much like what we did in PowerApps to send data to Flow. Set the From text box to be the output of the Select action. The “Join with” delimiter can actually be anything, as the array will always have 1 item. Why? In the Get Items action above, we set the Maximum Get Count to 1. We will always get back a single item array.


The net effect of this step will be the removal of the array. I.e., from:

[ { "ID": 48 } ]

to

{ "ID": 48 }

For step 3, under the "Join" action, add a Data Operations – Parse JSON action. This action parses the JSON and makes each element found available to subsequent actions. The easiest way to understand this action is to just do it and see the effect. First, set the Content textbox to the output from the Join action.


Now we need to tell this action which elements we are interested in. We already know that we only have one variable called ID, because the Select action we set up earlier stripped everything else out. So to tell this action we are expecting an ID, click the "use sample payload…" link and paste some sample JSON in our expected format…

{
    "ID": 48
}

If all goes to plan, a schema will have been generated from that sample data that allows us to grab the actual ID value.
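
For what it is worth, the generated schema should look something like the following – standard JSON Schema that simply declares an object with a single integer property:

{
    "type": "object",
    "properties": {
        "ID": {
            "type": "integer"
        }
    }
}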


Okay, so we are done with the Get Item ID scope – collapse it to keep things tidy.


Finally, under the "Get Item ID" scope, add a SharePoint – Update Item action. Add the URL of your site collection and then specify the document library the photos were uploaded to. If this has worked, the additional metadata columns should now be visible. Now set the specific item we want to update by setting the ID parameter to the ID variable from the Parse JSON step.


Now assign the metadata to the library columns. Set Latitude to the output variable from the Get Latitude step, Longitude to the output variable from the Get Longitude step and Temperature to the output variable from the Get Temperature step.


Now save your flow and cross all fingers and toes…

Testing and conclusion!

Return to PowerApps (in the browser or on your mobile device – not the PowerApps studio app). Enter an audit number and take some photos… Woohoo! There they are in the library, along with their metadata. Looks like I need to put on a jacket when I step outside.


So taking a step back, we have managed to achieve quite a lot here.

  1. We have wired up a public web service to PowerApps
  2. We have used PowerApps built-in location data to load weather data each time a photo has been taken
  3. We have used Flow to push this data into SharePoint and included the location and weather data as parameters.

Now on reflection there are a couple of massive issues that really prevent this from living up to its Citizen Developer potential.

  1. I had to use a 3rd party service to generate my OpenAPI file and it was not 100%. Why can’t Microsoft provide this service?
  2. Flow’s poor support for common SharePoint scenarios meant I had to use (non) clever workarounds to achieve my objectives.

Both of these issues were resolvable, which is a good thing, but I think the solutions take this out of the realm of most citizen developers. I had to learn too much JSON, too much Swagger/OpenAPI, and delve too deep into Flow.

In saying all that, I think Flow is the most immature piece of the puzzle at this stage. If I were a program manager, the lack of decent SharePoint support is definitely something I would have pushed up the release schedule to address. It currently feels like the developer of the SharePoint actions has never used SharePoint on a day-to-day basis, and dutifully implemented the web services without lived experience of the typical usage scenarios.

For other citizen developers, I’d certainly be interested in how you went with your own version and variations of this example, so please let me know how you go.

And for Microsoft PowerApps/Flow product team members who (I hope) are reading this, I think you are building something great here. I hope this material is useful to you as well.

 

Thanks for reading

 

Paul Culmsee

www.hereticsguidebooks.com


Glyma is now open source!


Hi all

If you are not aware, my colleagues and I have spent a large chunk of the last few years developing a software tool for SharePoint called Glyma (pronounced "Glimmer"). Glyma is a very powerful knowledge management solution for SharePoint 2010/2013 that deals with knowledge that is highly valuable, yet difficult to capture in writing – all that hard-earned knowledge that tends to walk out the door in organisations.

Glyma was born from Seven Sigma’s Dialogue Mapping skills and it represents a lot of what we do as an organisation, and the culmination of many years of experience in the world of complex problem facilitation. We have been using Glyma as a consultancy value-add for some time, and our clients have gained a lot of benefit from it. Clients have also deployed it in their environments for reasons such as capture of knowledge, lessons learnt, strategic planning, corporate governance as well as business analysis, critical thinking and other knowledge visualisation/knowledge exchange scenarios.


I am very pleased to let people know that we have now decided to release Glyma under an open source license (Apache 2). This means you are free to download the source and use it in any manner you see fit.

You can download the source code from Chris Tomich's github site, or you can contact me or Chris for the binaries. The install/user and admin manuals can be found on the Glyma web site, which also has a really nice help system, tutorial videos and advice on how to build good Glyma maps.

This is not just some sample code we have uploaded. This is a highly featured, well architected and robust product with some really nice SharePoint integration. In particular for my colleague, Chris Tomich, this represents a massive achievement as a developer/product architect. He has created a highly flexible graph database with some real innovation behind it. Technically, Glyma is a hypergraph database, that sits on SQL/SharePoint. Very few databases of this type exist outside of academia/maths nerds and very few people could pull off what he has done.


For those of you that use/have tried Compendium software, Glyma extends the ideas of Compendium (and can import Compendium maps), while bringing it into the world of enterprise information management via SharePoint.

To give you an idea of what Glyma is capable of, check out the videos on YouTube as well as the Glyma site, and be sure to dig deeper.

 

I look forward to hearing how organisations make use of it. Of course, feel free to contact me for training/mentoring and any other value-add services.

 

Regards

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 2)

This entry is part 2 of 3 in the series Apps Model

Hiya

This is the second post in a series of articles aimed at demystifying the SharePoint "apps" model for the strategic or business-focused user. In case you are not aware, Microsoft have gotten a serious case of "app fever" in recent times, introducing the terminology not only into SharePoint, but Office as well. While there are very good reasons for this happening, Microsoft used the "app" terminology in multiple ways, making their message rather confusing. As a result, Microsoft have not communicated their intent particularly well, and customers often fail to understand why they make the changes that they do.

Things are definitely getting better, but I nevertheless see a lot of confusion around the topic. So in part 1 of this series, I explained the reasons Microsoft have adopted the strategy that they have done. To recap, they are trying to respond to five major disruptive forces that challenge their market position:

  • Changing perceptions to cloud technologies and increased adoption; which enables…
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange in the form of Google Apps; as well as…
  • An increasing number of smaller cloud-based “point solution” players who chip away at SharePoint features with cheaper and easier to use offerings; while suffering from…
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace; while dealing with…
  • Customers unable to handle the ever increasing complexity of SharePoint, leading to delaying upgrades for years

Microsoft's answer to this was to go all-in with cloud, as this is the only way to beat the cloud providers at their own game, while reducing the complexity burden on their customers. This of course is in the form of Office365, OneDrive and an ever increasing set of cloud-oriented tools like Delve and Project Online.

But in SharePoint land, this has turned traditional development upside down. More than a decade of customisation "best practices" are no longer best – in fact they are no longer usable in many circumstances. The main reason is that the most common method of customisation applied to SharePoint (full-trust server-side code) is not permitted in the cloud. Microsoft couldn't risk untrusted 3rd party custom code on their servers. What happens if one client's dodgy code affects everybody else sharing the service? This would threaten performance, uptime and Microsoft's ability to upgrade their service over time.

So things had to change. Microsoft’s small army of product architects commandeered a whiteboard and started architecting innovative solutions to deal with these challenges and the apps model is the result. So let’s examine some core bits to the apps model by channelling a much loved children’s TV show.

There is a bear in there…

Now at this point I have to warn the developers or tech people reading this post. I am going to give a simplified version of the apps model intended for a decision-making audience. I will omit many details I don't deem necessary to make my key points. You have been warned…


Any parent of small children in most countries might be familiar with Playschool – a show for toddlers that has been around for eons. It is well known for its theme song starting with the line “There is a bear in there…and a chair as well”. When trying to come up with a suitable way to explain the SharePoint apps model, using Playschool as a metaphor turned out to work brilliantly. You see in each episode of Playschool, there was a segment where viewers were taken through the “magic window” to faraway lands. In the show, the presenter would pick one of the three windows and we would zoom into it, resulting in a transition to another segment. In our case, we have to pick the square window for two reasons. Firstly, a good many apps are in effect windows to somewhere else. Secondly, and much more importantly, it perfectly matches the new Microsoft corporate logo. Perfect metaphor or what eh? 🙂

Like the Playschool magic window, browsers have a similar capability to enable you to visit strange and magical lands… Not only is there a bear in there and a chair as well, but there are plenty of other things like YouTube videos and Yammer discussions. Conceptually, picture a SharePoint team site with a window in it that can be filled with YouTube or Yammer content.


You have no doubt visited web sites that have embedded content like YouTube videos or SlideShare slides in them (this blog site has lots of embedded ads that make me no money!). Essentially, it is possible for browsers to include content from different sites together into a single "page" experience. Users see it all as one page, even though content can come from all over the place. This is really useful, because it means you can leverage the capability of other sites to enhance the functionality of your own sites.

This, my friends, is one of the core tenets of the current SharePoint 2013 apps model. Instead of running on the SharePoint server, many apps now run separately from SharePoint, embedded in SharePoint pages so that they look like they are part of SharePoint. For example, imagine a SharePoint team site containing a remote web site that displays some pretty dashboard data. By loading that remote content into our magical square window, it now appears to be a part of SharePoint.


Going back to Microsoft's core pain points, this helps things a lot. For a start, it means no custom code has to be installed onto the SharePoint server. Instead, SharePoint simply embeds the external content on the page: the SharePoint server ("your server") renders a page with a placeholder in it, then retrieves content from a remote server ("my server") and displays it in the placeholder to render the complete page.

So what feat of Microsoft innovation and general awesomeness enabled this to happen?

Everybody meet “Mr IFrame”.

Inline Frames (IFrames) are windows cut into your webpage that allow your visitor to view content on another site without reloading the entire page. The concept was first implemented in Microsoft Internet Explorer way back in 1997. Yep – you heard right… 1997. So IFrames are not a new concept at all – in fact it's positively ancient when you count time in internet years. For this reason, when developers find this out, their reaction is usually one of disbelief…


But there is more than meets the eye…

Now if the apps model was just IFrames alone, then you might wonder what the big deal is with apps. In fact, IFrames have been used this way in SharePoint for years via the Page Viewer Web Part. For years, companies with SharePoint deployments have embedded stuff like Twitter, YouTube or Facebook widgets via IFrames.

So of course, there is more to it…

Let's revisit the "your server" and "my server" idea used above and consider the question: what if these remote applications displayed inside an IFrame could interact with SharePoint? In other words, what if the remote application running on my remote server were able to connect to your SharePoint server and read/write data? Picture a SharePoint server that could be on-premises or an Office365 tenant, with a Products list somewhere inside it, and my application running on my server that creates a pretty dashboard. What if my remote application queried the SharePoint Products list to create the dashboard? Now we have an application that, while not running inside SharePoint, can nevertheless utilise live data from SharePoint to create a seamless experience for users.


If we now add 3 iframes to a page, the implications should start to become more clear. We can build hybrid solutions leveraging the best of what SharePoint can do, whilst leveraging the best of what other platforms can do. To the user, these are still SharePoint sites, but the reality is that we are now viewing a page that has been delivered by various different platforms. Each can interact with SharePoint data in different ways to deliver a seamless experience. Because these remote apps are not SharePoint at all, developers can write any application they want to, using the platform and tools of their choice. But to the user it is still a SharePoint page… neat huh? I’m sure the Microsoft product team thought that this was a brilliant conceptual masterpiece when they dreamt it up.


A beautiful model…

I don't know if you have ever watched developers come up with APIs, but it tends to be a lot of excitement around a whiteboard as they revel in the glory of their elegant solution designs. So let's quickly re-examine the benefits of this remotely hosted app approach from Microsoft's perspective and see how we are going so far…


First and foremost, Microsoft now have a SharePoint customisation approach that can be fully supported in Office365. Microsoft don't have to put custom code on their online servers, yet can still support extensibility. Now they are much more evenly matched with Google, while at the same time reducing their tech support costs for SharePoint, because they have isolated 3rd party code out of SharePoint. If any problems are encountered with a remote app, SharePoint will keep humming along, and Microsoft can now legitimately tell clients "no really, it is not SharePoint causing your issue – go see your friendly neighbourhood app developer".

More importantly, apps can also be used in on-premises SharePoint deployments, meaning both Microsoft and their customers now have pristine SharePoint servers free of the muck and clutter of 3rd party code. Service packs and cumulative updates should therefore no longer strike fear into admins. Microsoft also now nails Google's ass because Google has no real concept of on-premises at all in the way Microsoft does. Thus when hybrid scenarios come up in conversation, Microsoft has a much stronger story to tell.

But there is a more important implication than all of that. Microsoft can now do the app store thing. Vendors can maintain cloud based services that can be embedded and consumed by on-premises and online SharePoint installs. This means 3rd parties can tap into the customer ecosystem with a captive marketplace and customers can browse the store to examine what options are out there to extend SharePoint functionality. In theory, this should enable hundreds of vendors to do some slight modifications to their existing web based applications and incorporate them into the SharePoint ecosystem.


But reality is not what’s on the whiteboard…

At this point, I hope I have painted a pretty good picture of the advantages offered by this new paradigm, and you can probably appreciate the Microsoft nerds completely falling in love with this conceptual model of future SharePoint customisations. The Microsoft strategy dudes probably loved it too, because it elegantly dealt with all of the challenges they were seeing. Unfortunately though, as with most conceptual models, reality is a very different beast from the convenient fiction of models.

So in the next post, we are going to dig a little deeper. For example, how can a remote app even have permissions to talk to SharePoint in the first place? Do you really want code running in some untrusted 3rd party server to be fiddling with data in your SharePoint lists and libraries? How does that even work anyway in an on-premises scenario when a cloud hosted app has to access data behind your firewall?

Fear not though – the Microsoft guys thought of this (and more) when they were drawing their apps model concept on the big whiteboard. So in the next post, we are going to look at what it takes to bring this conceptual masterpiece into reality.

 

Thanks for reading

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 1)

This entry is part 1 of 3 in the series Apps Model

Hiya

I’ve been meaning to write about the topic of the apps model of SharePoint 2013 for a while now, because it is a topic I am both fascinated and slightly repulsed by. While lots of really excellent material is out there on the apps model (not to mention a few good rants by the usual suspects as well), it is understandably written by developers tasked with making sense of it all, so they can put the model into practice delivering solutions for their clients or organisations.

I spent considerable time reading and researching the various articles and videos on this topic produced by Microsoft and the broader SharePoint community, and made a large IBIS map of it. As I slowly got my head around it all, subtle but significant implications began to emerge for me. The more I got to know this topic, the more I realised that the opportunities and risks of the apps model hold many important insights and lessons for how organisations should be strategically thinking about SharePoint. In other words, it is not so much about the apps model itself, but what the apps model represents for organisations who have invested in the SharePoint platform.

So these posts are squarely aimed at the business camp. Therefore I am going to skip all sorts of things that I don’t deem necessary to make my points. Developers in particular may find it frustrating that I gloss over certain things, give not quite technically correct explanations and focus on seemingly trivial matters. But like I said, you are not my audience anyway.

So let's see if we can work out what motivated Microsoft to head in this direction and make such a significant change. As always, context is everything…

As it once was…

I want you to picture Microsoft in 2011. SharePoint 2010 has come out to positive reviews and well and truly cemented itself in the market. It adorns the right place in multiple Gartner magic quadrants, demand for talent is outstripping supply, and many organisations are busy embarking on costly projects to migrate from their legacy SharePoint 2007 and 2003 deployments, on the basis that this version has fixed all the problems and that they will definitely get it right this time around. As a result, SharePoint is selling like hotcakes and is about to crack the 2 billion dollar revenue barrier for Microsoft. Consultants are also doing well at this time, since someone has to help organisations get all of that upgrade work done. Life is good… allegedly.

But even before the release of SharePoint 2010, winds of change were starting to blow. In fact, way back in 2008, at my first ever talk at a SharePoint conference, I showed the Microsoft pie chart of buzzwords and asked the crowd what other buzzwords were missing. The answer that I anticipated and received was of course “cloud”, which was good because I had created a new version of the pie and invited Microsoft to license it from me. Unfortunately no-one called.


Winds of change…

While my cloud picture was aimed at a few cheap laughs at the time, it holds an important lesson. Early in the release cycle of SharePoint 2007, cloud was already beginning to influence people's thinking. Quickly, services traditionally hosted within organisations began to appear online, requiring a swipe of the credit card each month from the opex budget, which made CFOs happy. A good example is Dropbox, which was founded in 2008 and by 2010 had won over the hearts and minds of many people who were using FTP. Point solutions such as Salesforce appeared, which further demonstrated how the competitive landscape was starting to heat up. These smaller, more nimble organisations were competing successfully on the basis of simplicity and focus on doing one thing well, while taking implementation complexity away.

Now while these developments were on Microsoft’s radar, there was really only one company that seriously scared them. That was Google via their Google Docs product. Here was a company just as big and powerful as Microsoft, playing in Microsoft’s patch using Microsoft’s own approach of chasing the enterprise by bundling products and services together. This emerged as a credible alternative to not only SharePoint, but to Office and Exchange as well.

Some of you might be thinking that Apple was just as big a threat to Microsoft as Google. But Microsoft viewed Apple through the eyes of envy, as opposed to a straight-out threat. Apple created new markets that Microsoft wanted a piece of, but Apple's threat to their core enterprise offerings remained limited. Nevertheless, Microsoft's strong case of crimson-green Apple envy did have a strategic element. Firstly, someone at Microsoft read the disruptive innovation book and decided that disruption was obviously cool. Secondly, they saw the power of the app store and how quickly it enabled a developer ecosystem and community to emerge, which created barriers for competition wanting to enter the market later.

Meanwhile, deeper in the bowels of Microsoft, two parallel problems were emerging. Firstly, it was taking an eternity to work through an increasingly large backlog of tech support calls for SharePoint. Clients would call, complaining of slow performance, broken deployments after updates, unhandled exceptions and so on. More often than not though, these issues were not caused by the base SharePoint platform, but by a combination of SharePoint and custom code that leaked memory, chewed CPU or just plain broke. Troubleshooting and isolating the root cause was very difficult, which led to the second problem: some of Microsoft's biggest enterprise customers were postponing or not bothering with upgrades to SharePoint 2010. They deemed it too complex, too costly and not worth the trouble. Others were simply too scared to mess with what they had.

A perfect storm of threats


So to sum up the situation, Microsoft were (and still are) dealing with five major forces:

  • Changing perceptions to cloud technologies (and the opex pricing that comes with it)
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange
  • An increasing number of smaller point solution players who chip away at SharePoint features with cheaper and easier to use offerings
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace
  • Customers unable to contend with the ever increasing complexity of SharePoint and putting off upgrades

So what would you do if you were Microsoft? What would your strategy be to thrive in this paradigm?

Now Microsoft is a big organisation, which affords it the luxury of engaging expensive management consultants, change managers and corporate coaches. Despite the fact that it doesn't take an MBA to realise that just a couple of these factors alone combine as a threat to the future of SharePoint, lots of strategic workshops were no doubt had, with associated whiteboard diagrams, post-it notes, dodgy team building games and more than one SWOT analysis to confirm that the strategic threats they faced were a clear and present danger. The conclusion drawn? Microsoft had to put cloud at the centrepiece of their strategy. After all, how else can you bring the fight to the annoying cloud upstarts and stave off the serious Google threat, all the while reducing the complexity burden on their customers?

A new weapon and new challenges…

In 2011, Microsoft debuted Office365 as the first salvo in their quest to mitigate threats and take on their competitors at their own game. It combined Exchange, Lync and SharePoint 2010, packaging them up in a per-user, per-month approach. The value proposition for some of Microsoft's customers was pretty compelling: up-front capital costs reduced significantly, they got the benefits of better scalability and bigger limits on things like mailboxes, and procurement and deployment were pretty easy compared to doing it in-house. Given the heritage of SharePoint, Exchange and Lync, Microsoft was suddenly competitive enough to put Google firmly on the back foot. My own business dumped Gmail and took up Office365 at this time, and we have used it ever since.

Of course, there were many problems that had to be solved. Microsoft was now switching from a software provider to a service provider, which necessitated new thinking, processes, skills and infrastructure internally. But outside of Microsoft there were bigger implications. The change from software provider to service provider did not go down well with many Microsoft partners who performed that role prior. It also freaked out a lot of sysadmins who suddenly realised their job of maintaining servers was changing. (Many are still in denial to this day.) But more importantly, there was a big implication for development and customisation of SharePoint. This all happened mid-way through the life-cycle of SharePoint 2010, and that version was not fully architected for cloud. First and foremost, Microsoft were not about to transfer the problem of dodgy 3rd party customisations onto their servers. Recall that they were getting heaps of support calls that were not core SharePoint issues, but were caused by custom code written by 3rd parties. Imagine that in a cloud scenario where multiple clients share the same servers: one client's dodgy code could have a detrimental effect on everybody else, affecting SLAs, while not solving Microsoft's core problem of wearing the cost and blame via tech support calls for problems not of their doing.

So with Office365, Microsoft had little choice but to disallow the dominant approach to SharePoint customisation that had been used for a long time. Developers could no longer use the methods they had come to know and love if a client wanted to use Office 365. This meant that the consultancies who employed them would have to change their approach too, and customers would have to adjust their expectations as well. Office365 was now a much more restricted environment than the freedom of on-premises.

Is it little wonder then, that one of Microsoft's big focus areas for SharePoint 2013 was to come up with a way to redress the balance? They needed a customisation model that allowed a consistent approach, whether you chose to keep SharePoint on-premises or move to the cloudy world of Office365. As you will see in the next post, that is not a simple challenge. The magnitude of change required was going to be significant, and some pain was going to have to happen all around.

Coming next…

So with that background set, I will spend time in the next post explaining the apps model in non-technical terms, explaining how it helps to address all of the above issues. This is actually quite a challenge, but with the right dodgy metaphors, it's definitely possible. 🙂 Then we will take a more critical viewpoint of the apps model and, finally, see what this whole saga tells us about how we should be thinking about SharePoint in the future…

Thanks for reading

Paul Culmsee


Help me visualise the pros and cons of hybrid SharePoint 2013…


Like it or not, there is a tectonic shift going on in the IT industry right now, driven primarily by the availability of a huge variety of services hosted in the cloud. Over the last few years, organisations have increasingly procured services that are not hosted locally, much to the chagrin of many a server-hugging IT guy who, understandably, sees various risks in entrusting your fate to someone else.

We all know that Microsoft had a big focus on trying to reach feature parity between on-premises SharePoint 2013 and Office365. In other words, with cloud computing as a centrepiece of their strategy, Microsoft's SharePoint 2013 aim was for stuff that works on-premises but also works on Office365 without too much modification. While SharePoint 2013 made significant inroads into meeting this goal (apps model developers might beg to differ), the big theme to really emerge was that feature parity was a relatively small part of the puzzle. What has happened since the release of SharePoint 2013 is that many organisations are much more interested in hybrid scenarios – that is, utilising on-premises SharePoint along with cloud-hosted SharePoint and its associated capabilities like OneDrive and Office Web Applications.

So while it is great that SharePoint online can do the same things as on-prem, it all amounts to naught if they cannot integrate well together. Without decent integration, we are left with a lot of manual work to maintain what is effectively two separate SharePoint farms and we all know what excessive manual maintenance brings over time…

Microsoft to their credit have been quick to recognise that hybrid is where the real action is at, and have been addressing this emerging need with a ton of published material, as well as adding new hybrid functionality with service packs and related updates. But if you have read the material, you can attest that there is a lot of it and it spans many topic areas (authentication alone is a complex area in itself). In fact, the sheer volume and pace of material released by Microsoft show that hybrid is a huge and very complex topic, which begs a really critical question…

Where are we now at with hybrid? Is it a solid enough value proposition for organisations?

This is a question that a) I might be able to help you answer and b) you can probably help me answer…

Visualising complex topics…

A few months back, I started issue mapping all of the material I could get my hands on related to hybrid SharePoint deployments. If you are not aware, Issue mapping is a way of visualising rationale and I find it a brilliant personal learning tool. It allows me to read complex articles and boil them down to the core questions, answers, pros and cons of the various topics. The maps are easy to read for others, and they allow me to make my critical thinking visible. As a result, clients also like these maps because they provide a single integrated place where they can explore topics in an engaging, visual way, instead of working their way through complex whitepapers.

If you wish to jump straight in and have a look around, click here to access my map on Hybrid SharePoint 2013 deployments. You will need to sign in using a facebook or gmail ID to do so. But be sure to come back and read the rest of this post, as I need your help…

But for the rest of you, if you are wondering what my hybrid SharePoint map looks like without jumping straight in, here is a quick tour. The tool I am using is called Glyma (pronounced "glimmer"), which allows these maps to be developed and consumed using SharePoint itself. First up, we have a very simple map, showing the topic we are discussing.


If you click the plus sign next to the "Hybrid SharePoint deployments" idea node, you can see that I have mapped all of the various hybrid pros and cons I have come across in my readings and discussions. Given that hybrid SharePoint is a complex topic, there are lots of pros and cons…


Many of the pros and cons can be expanded further, which delves deeper into the topics. A single click expands one node level, and a double click expands the entire branch. To illustrate, one of the cons is around the many search-related caveats with hybrid that can easily trip people up; I expanded that con node and the sub-question below it. Also notice that one of the idea nodes has an attachment icon. I will get to that in a moment…


As I mentioned above, one of the idea nodes, titled "SPO search sometimes has delays on how long it takes for new content to appear in the index", has an attachment icon as well as more nodes below it. Clicking that attachment icon and expanding the node reveals that I picked this up when I read Chris O'Brien's excellent article, so I embedded his original article on that node. Now you can read the full detail of his article for yourself, as well as understand how his article fits into a broader context.


It is not just written content either. Moving further up the map, you will see that some nodes have videos tagged to them. When Microsoft released the videos from 2014's Vegas conference, I found all sorts of interesting nuggets of information that were not in the whitepapers, and tagged the relevant videos to my nodes.


 

A call to action…

SharePoint hybrid is a very complex topic and right now, has a lot of material scattered around the place. This map allows people, both technical and non technical, to grasp the issue in a more strategic, bigger picture way, while still providing the necessary detail to aid implementation.

I continually update this map as I learn more about this topic from various sources, and that is where you come in. If you have had to work around a curly issue, or if you have had a massive win with a hybrid deployment, get in touch and let me know about it. It can be a reference to an article, a Skype conversation or anything – the Glyma system can accommodate many sources of information.

More importantly, would you like to help me curate the map on this topic? After all, things move fast and the SharePoint community rarely stands still. So if you are up to speed on this topic or have expertise to share, get in touch with me. I can give you access to this map to help with its ongoing development. With the right meeting of the minds, this map could turn into an incredibly valuable information resource for a great many people.

So get in touch if you want to put your expertise out there…

 

Thanks for reading

 

 

Paul Culmsee


Rewriting the knowledge management rulebook… The story of “Glyma” for SharePoint


“If Jeff ever leaves…”

I’m sure you have experienced the “Oh crap” feeling where you have a problem and Jeff is on vacation or unavailable. Jeff happens to be one of those people who’s worked at your organisation for years and has developed such a deep working knowledge of things, it seems like he has a sixth sense about everything that goes on. As a result, Jeff is one of the informal organisational “go to guys” – the calming influence amongst all the chaos. An oft cited refrain among staff is “If Jeff ever leaves, we are in trouble.”

In Microsoft’s case, this scenario is quite close to home. Jeff Teper, who has been an instrumental part of SharePoint’s evolution is moving to another area of Microsoft, leaving SharePoint behind. The implications of this are significant enough that I can literally hear Bjorn Furuknap’s howls of protest all the way from here in Perth.

So, what is Microsoft to do?

Enter the discipline of knowledge management to save the day. We have SharePoint, and with all of that metadata and search, we can ask Jeff to write down his knowledge “to get it out of his head.” After all, if we can capture this knowledge, we can then churn out an entire legion of Jeffs and Microsoft’s continued SharePoint success is assured, right?

Right???

There is only one slight problem with this incredibly common scenario that often underpins a SharePoint business case… the entire premise of “getting it out of your head” is seriously flawed. As such, knowledge management initiatives have never really lived up to expectations. While I will save a detailed explanation as to why this is so for another post, let me just say that Nonaka’s SECI model has a lot to answer for as it is based on a misinterpretation of what tacit knowledge is all about.

Tacit knowledge is expert knowledge that is often associated with intuition and cannot be transferred to others by writing it down. It is the “spider senses” that experts often seem to have when they look at a problem and see things that others do not. Little patterns, subtleties or anomalies that are invisible to the untrained eye. Accordingly, it is precisely this form of knowledge that is of the most value in organisations, yet is the hardest to codify and most vulnerable to knowledge drain. If tacit knowledge could truly be captured and codified in writing, then every project manager who has ever studied PMBOK would have flawless projects, because the body of knowledge is supposed to be all the codified wisdom of many project managers and the projects they have delivered. There would also be no need for Agile coaches, Microsoft’s SharePoint documentation should result in flawless SharePoint projects and reading Wictor’s blog would make you a SAML claims guru.

The truth of tacit knowledge is this: You cannot transfer it, but you acquire it. This is otherwise known as the journey of learning!

Accountants are presently scratching their heads trying to figure out how to measure tacit knowledge. They call it intellectual capital, and the reason it is important to them is that most of the value of organisations today is classified on the books as “intangibles”. According to the book Balanced Scorecard, a company’s physical assets accounted for 62% of its market value in 1982, 38% of its market value in 1992 and only 21% in 2003. This is in part a result of the global shift toward knowledge economies and the resulting rise in the value of intellectual capital. Intellectual capital is the sum total of the skills, knowledge and experience of staff and is critical to sustaining competitiveness, performance and ultimately shareholder value. Organisations must therefore not only protect, but extract maximum value from their intellectual capital.


Now consider this. We are in an era where baby boomers are retiring, taking all of their hard-earned knowledge with them. This is often referred to as “the knowledge tsunami”, “the organisational brain drain” and the more nerdy “human capital flight”. The issue of human capital flight is a major risk area for organisations. Not only is the exodus of baby boomers an issue, but there are challenges around recruitment and retention of a younger, technologically savvy and mobile workforce with a different set of values and expectations. One of the most pressing management problems of the coming years is the question of how organisations can transfer the critical expertise and experience of their employees before that knowledge walks out the door.

The failed solutions…

After the knowledge management fad of the late 1990’s, a lot of organisations did come to realise that asking experts to “write it down” only worked in limited situations. As broadband came along, enabling the rise of rich media services like YouTube, a digital storytelling movement arose in the early 2000’s. Digital storytelling is the process by which people share stories and reflections while being captured on video.

Unfortunately though, digital storytelling had its own issues. Users were not prepared to sit through hours of footage of an expert explaining their craft or reflecting on a project. To address this, the material was commonly edited down to create much smaller mini-documentaries lasting a few minutes – often by media production companies, so the background music was always nice and inoffensive. But this approach also commonly failed. One reason for failure was well put by David Snowden when he said "Insight cannot be compressed". While there was value in the edited videos, much of the rich value within the videos was lost. After all, how can one judge ahead of time what someone else will find insightful? The other problem with this approach was that people tended not to use the videos. There was little means for users to find out these videos existed, let alone watch them.

Our Aha moment…

In 2007, my colleagues and I started using a sensemaking approach called Dialogue Mapping in Perth. Since that time, we have performed dialogue mapping across a wide range of public and private sector organisations in areas such as urban planning, strategic planning, process reengineering, organisational redesign and team alignment. If you have read my blog, you will be familiar with dialogue mapping, but just in case you are not, it looks like this…

Dialogue Mapping has proven to be very popular with clients because of its ability to make knowledge more explicit to participants. This increases the chances of collective breakthroughs in understanding. During one dialogue mapping session a few years back, a soon-to-be-retiring, long-serving employee relived a project from thirty years prior that he realised was relevant to the problem being discussed. This same employee was spending a considerable amount of time writing procedure manuals to capture his knowledge. No mention of this old project was made in the manuals he spent so much time writing, because there was no context for it when he was writing it down. In fact, if he had not been in the room at the time, the relevance of this obscure project would never have been known to the other participants.

My immediate thought at the time when mapping this participant was “There is no way that he has written down what he just said”. My next thought was “Someone ought to give him a beer and film him talking. I can then map the video…”

This idea stuck with me and I told this story to my colleagues later that day. We concluded that asking our retiring expert to write his “memoirs” was not making the best use of his limited time. The dialogue mapping session illustrated plainly that much valuable knowledge was not being captured in the manuals. As a result, we seriously started to consider the value of filming this employee discussing his reflections on all of the projects he had worked on, as per the digital storytelling approach. However, rather than create ‘mini documentaries’, we would utilise the entire footage and instead visually map the rationale using Dialogue Mapping techniques. In this scenario, the map serves as a navigation mechanism and the full video content is retained. By clicking on a particular node in the map, the video is played from the time that particular point was made. We drew a mock-up of the idea, which looked like the picture below.

image

While thinking the idea would be original and cool to do, we also saw several strategic advantages to this approach…

  • It allows the user to quickly find the key points in the conversation that are of value to them, while presenting the entire rationale of the discussion at a glance.
  • It significantly reduces the codification burden on the person or group with the knowledge. They are not forced to put their thoughts into writing, which enables more effective use of their time.
  • The map and video content can be linked to the in-built search and content aggregation features of SharePoint.
    • Users can enter a search from their intranet home page and retrieve not only traditional content such as documents, but also stories, reflections and anecdotes from past and present experts.
  • The dialogue mapping notation, when stored in a database, also lends itself to more advanced forms of queries. Consider the following examples:
    • “I would like any ideas from lessons learnt discussions in the Calgary area”
    • “What pros or cons have been raised about this particular building material?”
  • The applicability of the approach is wide.
    • Any knowledge-related industry could take advantage of it easily, because it fits into existing information systems like SharePoint rather than creating an additional information silo.

This was the moment the vision for Glyma (pronounced “glimmer”) was born…

Enter Glyma…

Glyma is a software platform for ‘thought leaders’, knowledge workers, organisations, and other ‘knowledge economy participants’ to capture and trade their knowledge in a way that reduces effort but preserves rich context. It achieves this by providing a new way for users to visually capture and link their ideas with rich media such as video, documents and web sites. As Glyma is a very visually oriented environment, it is easier to show it than to talk about it.

Ted

image

What you’re looking at in the first image above are the concepts and knowledge that were captured from a TED talk on education augmented with additional information from Wikipedia. The second is a map that brings together the rationale from a number of SPC14 Vegas videos on the topic of Hybrid SharePoint deployments.

Glyma brings together different types of media, like geographical maps, video, audio, documents etc. and then “glues” them together by visualising the common concepts they exemplify. The idea is to reduce the burden on the expert for codifying their knowledge, while at the same time improving the opportunity for insight for those who are learning. Glyma is all about understanding context, gaining a deeper understanding of issues, and asking the right questions.

We see that depending on your focus area, Glyma offers multiple benefits.

For individuals…

As knowledge workers, our task is to gather and learn information, sift through it all, and connect the dots between the relevant pieces. We create our knowledge by weaving together all this information. This takes place through reading articles, explaining on napkins, diagramming on whiteboards and so on. But no one observes us reading; people throw away napkins; whiteboards are wiped clean for re-use. Our journey is too “disposable” – people only care about the “output” – that is, until someone needs to understand our “quilt of information”.

Glyma provides end users with an environment to catalogue this journey. The techniques it incorporates help knowledge workers with learning and “connecting the dots” – or, as we know it, synthesising. Not only does it help us with these two critical tasks, it then provides a way for us to get recognition for that work.

For teams…

Like the scenario I started this post with, we’ve all been on the giving and receiving end of it. That call to Jeff who has gone on holiday for a month prior to starting his promotion and now you need to know the background to solving an issue that has arisen on your watch. Whether you were the person under pressure at the office thinking, “Jeff has left me nothing of use!”, or you are Jeff trying to enjoy your new promotion thinking, “Why do they keep on calling me!”, it’s an uncomfortable situation for all involved.

Because Glyma provides a medium and techniques that aid and enhance the learning journey, it can then act as the project memory long after the project has completed and the team members have moved on to their next challenge. The context and the lessons it captures can then be searched and used both as a historical look at what has happened and, more importantly, as a tool for improving future projects.

For organisations…

As I said earlier, intangible assets now dominate the balance sheets of many organisations. Where in the past we might have valued companies based on how many widgets they sold and how much they had in their inventory, nowadays intellectual capital is the key driver of value. Like any asset, organisations need to extract maximum value from intellectual capital and, in doing so, avoid repeat mistakes, foster innovation and continue growth. Charles G. Sieloff summed this up well in the title of his paper, “If only HP knew what HP knows”.

As Glyma aids, enhances, and captures an individual’s learning journey, that journey can now be shared with others. With Glyma, learning is no longer a silo; it becomes a shared journey. Not only does it do this for individuals, it extends to group work so that the dynamics of a group’s learning are also captured. Continuous improvement of organisational processes and procedures is then possible with this captured knowledge. With Glyma, your knowledge assets become tangible.

Lemme see it!

If you have read this far, I assume that you would like to take a look. Well, as luck would have it, we put out a public Glyma site the other day that contains some of my own personal maps. The maps on the SP2013 apps model and hybrid SP2013 deployments in particular represent my own learning journey, so they should hopefully help you if you want a synthesis of all the pros and cons of these issues. Be sure to check the videos in the getting started area of the site, and check out the sample maps! Smile

glymasite

I hope you like what you see. I have a ton of maps to add to this site, and very soon we will be inviting others to curate their own maps. We are also running a closed beta, so if you want to see this in your organisation, go to the site and then register your interest.

All in all, I am super proud of my colleagues at Seven Sigma for being able to deliver on this vision. I hope that this becomes a valuable knowledge resource for the SharePoint community and that you all like it. I look forward to seeing how history judges this… we think Glyma is innovative, but we are biased! 🙂

 

Thanks for reading…

Paul Culmsee

www.glyma.co

www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–conclusion and reflections

This entry is part 13 of 13 in the series Workflow

Hi all

In case you have not been paying attention, I’ve churned out a large series of posts – twelve in all – on the topic of SharePoint Designer 2013 workflows. The premise of the series was to answer a couple of questions:

1.  Is there enough workflow functionality in SharePoint 2013 to avoid having to jump straight to 3rd party tools?

2. Is there enough workflow functionality to enable and empower citizen developers to create lightweight solutions to solve organisational problems?

To answer these questions, I took a relatively simple real-world scenario to illustrate what the journey looks like. Well – sort of simple, in the sense that I deliberately chose a scenario that involved managed metadata. Because of this seemingly innocuous information architecture decision, we encountered SharePoint default settings that break stuff and crazy error messages that make no sense, learnt all about REST/oData, JSON and a dash of CAML, and mastered the Fiddler tool to make sense of it all. We learnt a few SharePoint (and non-SharePoint) web services, and played with new features like dictionaries, loops and stages. Hopefully, if you have stuck with me as we progressed through this series, you have a much better understanding of the power and potential peril of this technology.

So where does that leave us with our questions?

In terms of the question of whether this edition enables you to avoid 3rd party tools – I think the answer is an absolute yes for SharePoint Foundation and a qualified yes for everything else. On the plus side, the new architecture certainly addresses some of the previous scalability issues, and the ability to call web services and parse the data returned opens up all sorts of really interesting possibilities. If “no custom development” solutions are your mantra (which usually really means “no managed code”), then you have at your disposal a powerful development tool. Don’t forget that I have shown you only a glimpse of what can be done. Very clever people like Fabian Williams have taken it much further than me, such as creating new SharePoint lists, creating no-code timer jobs and creating your own declarative workflows – probably the most interesting feature of all.

In a nutshell, with this version, many things that were only possible in Visual Studio now become very doable using SharePoint Designer – especially important for Office365 scenarios.

So then, why a qualified yes as opposed to an enthusiastic yes?

Because it is still all so… how do I put this…  so #$%#ing fiddly!

Fiddly is just a euphemism for complexity, and in SharePoint it manifests in the minefield of caveats and “watch out for…” type advice that SharePoint consultants often have to give. It has afflicted SharePoint since the very beginning, and Microsoft seem powerless to address it – indeed, they often address issues of complexity by making things more complex. As an example, here is my initial workflow action from part 2, which assigns the process owner a task. One single, simple action that looks up the process owner based on the organisation column.

image_thumb43  image

Now the above solution never worked, of course, because managed metadata columns are not supported in the list item filtering capability of SPD workflows. Yes, we were able to work around the issue successfully without sacrificing our information architecture, but take a look below at the price we paid in terms of complexity to achieve it: from one action to dozens. Whilst I prefer this in a workflow rather than in Visual Studio and compiled to a WSP file, it required a working knowledge of JSON, REST/oData, CAML and debugging HTTP traffic via Fiddler. Not exactly the tools of your average information worker or citizen developer.

image_thumb10  image_thumb18    image_thumb22

image_thumb25  image_thumb27  image_thumb14

A lot of code above to assign a task to someone eh?

Another consideration in the 3rd party vs. out of the box discussion is, of course, all of the features that the 3rd party workflow tools have. The most obvious example is a decent forms solution. Whilst InfoPath is still around, the fact that Microsoft did precisely nothing with it in SharePoint 2013 and removed support for its use in SharePoint 2013 workflows suggests that they won’t have a change of heart anytime soon.

In fact, my prediction is that Microsoft are working on their own forms solution and will be seriously bolstering workflow capability in SharePoint vNext. They will create many additional declarative workflow actions, and probably deliver a hybrid forms solution that works in a similar way to Nintex Live Forms. Why do I think this? It’s just a hunch, based on the observation that a lot of the plumbing to do this is already there in SharePoint 2013/Workflow Manager, and that there is a serious gap in the forms story in SharePoint 2013. How else will they be able to tell a good multi-device story going forward?

But perhaps the ultimate lead indicator of the suitability of this new functionality for citizen developers is to gauge feedback from citizen developers who took the time to work through the twelve articles I wrote. In fact, if you are a truly evil IT manager, concerned about the risk of information workers committing SharePoint atrocities, then get your potential citizen developers to read this series of articles as a way to set expectations and test their mettle. If they get through them, give them the benefit of the doubt and let them at it!

So all you citizen developers, do you feel inspired that we were able to get around the issues, or somewhat shell-shocked at all of the conceptual baggage, caveats and workarounds? If you are in the latter camp, then maybe serious SharePoint 2013 workflow development is not for you. But then again, if you are not blessed with a large budget to invest in 3rd party tools, want to get SharePoint onto your CV, and would like to help organisations escape those annoying project managers and elitist developers, at least you now know what you need to learn!

On a more serious note, if you are on a SharePoint governance, strategy or steering team (which almost by definition means you are only reading this conclusion and not the twelve articles that preceded it), then you should consider how you define value when looking at the ROI of 3rd party versus out of the box workflow. For me, if part of your intention or strategy is to build deeper SharePoint knowledge and capacity in your information workers and citizen developers, then I would look closely at out of the box, because it forces people to better understand how SharePoint works more broadly. But (and it’s a big but), remember that the 3rd party tools are more mature offerings. While they may mitigate the need for workflow authors to learn SharePoint’s deeper plumbing, they nevertheless produce workflows that are much simpler and more understandable than what I produced using out of the box approaches. Therefore, from a resource-based view (i.e. taking the least amount of time to develop and publish workflows), one would lean toward the third party tools.

I hope you enjoyed the series and thanks so much for reading

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 12

This entry is part 12 of 13 in the series Workflow

Hi all, and welcome to part 12 of my articles about SharePoint 2013 Workflows and whether they are ready for prime time. Along the way we have learnt all about CAML, REST, JSON, calling web services, Fiddler, Dictionary objects and a heap of scenarios that can derail aspiring workflow developers. All this just to assign a task to a user!

Anyways, since it has been such a long journey, I felt it worthwhile to remind you of the goal here. We have a fictitious company called Megacorp trying to develop a solution for controlled documents management. The site structure is as follows:

image

The business process we have been working through looks like this:

Snapshot_thumb3

The big issue that has caused me to have to write 12 articles all boils down to the information architecture decision to use a managed metadata column to store the Organisation hierarchy.

Right now, we are in the middle of implementing an approach of calling a web service to perform step 3 in the above diagram. In part 9 and part 10 of this series, I explained the theory of embedding a CAML query into a REST query and in part 11, we built out most of the workflow. Currently the workflow has 4 stages and we have completed the first three of them.

  • 1) Got the organisation name of the current item
  • 2) Obtained an X-RequestDigest via a web service call
  • 3) Constructed the URL to search the Process Owners list and called the web service

The next stage will parse the results of the web service call to get the AssignedToID, and then call another web service to get the actual userid of the user. Then we will finally have what we need to assign an approval task. So let’s get into it…

Obtaining the UserID

In the previous post, I showed how we constructed a URL similar to this one:

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>Megacorp%20Burgers</Value></Eq></Where></Query></View>”}

This URL uses the CAML-in-REST method of querying the Process Owners list and returns any items where Organisation equals “Megacorp Burgers”. The JSON data returned shows the AssignedToID entry with a value of 8. Via the work we did in the last post, we already have this data available to us in a dictionary variable called ProcessOwnerJSON.

The rightmost JSON output below illustrates taking that AssignedToID value and calling another web service to return the username, i.e.: http://megacorp/iso9001/_api/Web/GetUserById(8).

image   image_thumb52

Confused at this point? Then I suggest you go back and re-read parts 8 and 10 in particular for a recap.

So our immediate task is to extract the AssignedToId from the dictionary variable called ProcessOwnerJSON. Now that you are a JSON guru, you should be able to figure out that the query will be d/results(0)/AssignedToId.
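Incidentally, if you want to sanity-check a dictionary path like this outside of SharePoint Designer, it only takes a few lines of code. Here is a minimal Python sketch (the JSON below is trimmed to just the fields we care about – the real response contains many more properties):

# Trimmed shape of the JSON sitting in the ProcessOwnerJSON variable
process_owner_json = {
    "d": {
        "results": [
            {"Organisation": "Megacorp Burgers", "AssignedToId": 8}
        ]
    }
}

# The dictionary-action path d/results(0)/AssignedToId is equivalent to:
assigned_to_id = process_owner_json["d"]["results"][0]["AssignedToId"]
print(assigned_to_id)  # prints 8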

Step 1:

Add a Get an Item from a Dictionary action as the first action in the Obtain Userid workflow stage. Click the item by name or path hyperlink and click the ellipses to bring up the string builder screen. Type in d/results(0)/AssignedToId.

image

Step 2:

Click on the dictionary hyperlink and choose the ProcessOwnerJSON variable from the list.

Step 3:

Click the item hyperlink and use the AssignedToID variable.

image

That is basically it for this workflow stage, as the rest of it remains unchanged from when we constructed it in part 8. At this point, the Obtain Userid stage should look like this:

image

If you look closely, you can see that it calls the GetUserById method and the JSON response is added to the dictionary variable called UserDetail. Then if the HTTP response code is OK (code 200), it will pull out the LoginName from the UserDetail variable and log it to the workflow history before assigning a task.
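If you prefer to see that logic outside of SharePoint Designer, here is a rough Python sketch of what the stage boils down to. To be clear, this is illustrative, not the workflow itself: the requests library is just one way to make the call, the auth placeholder stands in for whatever credentials your farm expects (typically NTLM or Kerberos on-premises), and the login name in the comment is hypothetical:

import requests

auth = None  # substitute credentials appropriate to your environment

# 8 is the AssignedToID we extracted from ProcessOwnerJSON above
resp = requests.get(
    "http://megacorp/iso9001/_api/Web/GetUserById(8)",
    headers={"Accept": "application/json;odata=verbose"},
    auth=auth,
)

if resp.status_code == 200:  # the workflow's "responseCode equals OK" check
    # d/LoginName is the same path the workflow pulls from UserDetail
    login_name = resp.json()["d"]["LoginName"]
    print(login_name)  # e.g. i:0#.w|MEGACORP\ctomich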

Phew! Are we there yet? Let’s see if it all works!

Testing the workflow

So now that we have the essential bits of the workflow done, let’s run a test. This time I will use one of the documents owned by Megacorp Iron Man Suits – the Jarvis backup and recovery procedure. The process owner for Megacorp Iron Man suits is Chris Tomich (Chris reviewed this series and insisted he be in charge of Iron Man suits!).

image  image

If we run the workflow against the Jarvis backup and recovery procedure, we should expect a task to be created and assigned to Chris Tomich. Looking at the workflow information below, it worked! HOLY CRAP IT WORKED!!!

image

So finally, after eleven and a half posts, we have a working workflow! We have gotten around the issues of using managed metadata columns to filter lists, and we have learnt a heck of a lot about REST/oData, JSON, CAML and various other stuff along the way. So having climbed this managed metadata induced mountain, is there anything left to talk about?

Of course there is! But let’s summarise the workflow in text format, rather than death by screenshot:

Stage: Get Organisation Name
   Find | in the Current Item: Organisation_0 (Output to Variable:Index)
   then Copy Variable:Index characters from start of Current Item: Organisation_0 (Output to Variable: Organisation)
   then Replace " " with "%20" in Variable: Organisation (Output to Variable: Organisation)
   then Log Variable: Organisation to the workflow history list
   If Variable: Organisation is not empty
      Go to Get X-RequestDigest
   else
      Go to End of Workflow

Stage: Get X-RequestDigest
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Call [%Workflow Context: Current Site URL%]_api/contextinfo HTTP Web Service with request
       (ResponseContent to Variable: ContextInfo
        |ResponseHeaders to responseheaders
        |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/GetContextWebInformation/FormDigestValue from Variable: ContextInfo (Output to Variable: X-RequestDigest )
   If Variable: X-RequestDigest is empty
      Go to End of Workflow
   else
      Go to Prepare and execute process owners web service call

Stage: Prepare and execute process owners web service call
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Set Variable:URLStart to _api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>
   then Set Variable:URLEnd to </Value></Eq></Where></Query></View>"}
   then Call [%Workflow Context: Current Site URL%][Variable: URLStart][Variable: Organisation][Variable: URLEnd] HTTP Web Service with request
      (ResponseContent to Variable: ProcessOwnerJSON
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   then Log Variable: responseCode to the workflow history list
   If Variable: responseCode equals OK
      Go to Obtain Userid
   else
      Go to End of Workflow

Stage: Obtain Userid
   Get d/results(0)/AssignedToId from Variable: ProcessOwnerJSON (Output to Variable: AssignedToID)
   then Call [%Workflow Context: Current Site URL%]_api/Web/GetUserById([Variable: AssignedToID]) HTTP Web Service with request
      (ResponseContent to Variable: userDetail 
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/LoginName from Variable: UserDetail (Output to Variable: AssignedToName)
      then Log The User to assign a task to is [%Variable: AssignedToName]
      then assign a task to Variable: AssignedToName (Task outcome to Variable:Outcome | Task ID to Variable: TaskID )
   Go to End of Workflow

Tidying up…

Just because we have our workflow working does not mean it is optimally set up. In the above workflow, there are a whole heap of areas where I have not done any error checking. Additionally, the logging I have done is poor and not overly helpful for someone troubleshooting later. So I will finish this post by making the workflow a bit more robust. I will not go through this step by step – instead I will paste the screenshots and summarise what I have done. Feel free to use these ideas and add your own good practices in the comments…

First up, I added a new stage at the start of the workflow for anything relating to initialisation activities. Right now, all it does is check out the current item (recall in part 3 we covered issues related to check in/out), and then set a Boolean workflow variable called EndWorkflow to No. You will see how I use this soon enough. I also added a new stage at the end of the workflow to tidy things up. I called it Clean up Workflow, and its only operation is to check the current item back in.

image   image

In the Get Organisation Name stage, I changed it so that any error condition logs to the history list and then sets the EndWorkflow variable to Yes. Then in the Transition to stage section, I use the EndWorkflow variable to decide whether to move to the next stage or end the workflow by calling the Clean up Workflow stage that I created earlier. My logic here is that there can be any number of error conditions that we might check for, and it’s easier to use a single variable to signify when to abort the workflow.

image

In the Get X-RequestDigest stage, I have added additional error checking. I check that the HTTP response code from the contextinfo web service call is indeed 200 (OK), and then if it is, I also check that we successfully extracted the X-RequestDigest from the response. Once again I use the EndWorkflow variable to flag which stage to move to in the transition section.

image

In the Prepare and execute process owners web service call stage, I also added more error checking – specifically with the AssignedToID variable. This variable is an integer and its default value is zero (0). If the value is still 0 after the web service call, it means that there was no process owner entry for the Organisation specified. If this happens, we need to handle it…

image

Finally, we come to the Obtain Userid stage. Here we are checking both the HTTP code from the GetUserById web service call, as well as the login name that comes back via the AssignedToName variable. We assign the task to the user and then set the workflow status to “Completed workflow”. (Remember that we checked out the current item in the Workflow Initialisation stage, so we can now update the workflow status without all that check out crap that we hit in part 3.)

image

Conclusion…

So there we have it. Twelve posts in and we have met the requirements for Megacorp. While there is still a heap of work to do in terms of customising the behaviour of the task itself, I am going to leave that to you!

Additionally, there are a lot of additional things we can do to make these workflows much more robust and easier to manage. To that end, I strongly urge you to check out Fabian Williams’ blog and his brilliant set of articles on this topic, which take it much (much) further than I do here. He has written a ton of stuff, and it was his work in particular that inspired me to write this series. He also provided me with counsel and feedback on this series and I can’t thank him enough.

Now that we have gotten to where I wanted to be, I’ll write one more article to conclude the series – reflecting on what we have covered and its implications for organisations wanting to leverage out of the box SharePoint workflow, as well as implications for all of you citizen developers out there.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 11

This entry is part 11 of 13 in the series Workflow

Hi all, and welcome to the penultimate article in what has turned into a fairly epic series about SharePoint 2013 workflows. From part 6 to part 8 of this series, we implemented a workflow that made use of web service calls as well as the new looping capabilities of SharePoint Designer 2013. We used the web service call to get all of the items in the Process Owners list, and then looped through them to find the process owner we needed based on organisation. While that method worked, the concern was that it was potentially inefficient: with a large list of process owners, it might consume excessive resources. This is why I referred to the approach in part 6 as the “easy but flawed” way.

Now we are going to use the “better but harder” way. To that end, parts 9 and 10 have set the scene for this one, where we are going to implement pretty much all of the theory we covered in them. I will not rehash the theory of the journey we took to get here, but I cannot stress enough that you really should have read them before going through this article.

With that said, we are going to make a bunch of changes to the current workflow by doing the following:

  • 1) Change the existing workflow to grab the Organisation name as opposed to the GUID
  • 2) Create a new workflow stage that gets us the X-RequestDigest (explained in part 9).
  • 3) Build the URL that we will use to implement the “CAML in REST” approach (explained in part 9 and part 10)
  • 4) Call the aforementioned web service
  • 5) Extract the AssignedToId of the process owner for a given organisation
  • 6) Call the GetUserById web service to grab the actual userID of the process owner and assign them an approval task

In this post, we will cover the first four of the above steps…

Get the Name not the GUID…

Here is the first stage of the workflow as it is now, assuming you followed parts 6 to 8.

image

First let’s make a few changes so that we get the name of the Organisation stored with the current item, rather than the GUID as we are doing now. If you recall from part 4, the column Organisation_0 is a hidden column that got created because Organisation is a managed metadata column. This column stores the name and ID of the managed metadata term(s) that have been assigned, in the format <term name>|<term GUID>. For example: “Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”.

To get the GUID, we grabbed everything to the right of the pipe symbol (“|”). Now to get the name, we need everything to the left of it.
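If it helps to see that string handling in one place, here is a tiny Python sketch of what this stage will end up doing (the GUID is the illustrative one from above, and the %20 encoding is the subject of step 4 below):

organisation_0 = "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f"

# Everything left of the pipe is the term name; right of it is the GUID
name, guid = organisation_0.split("|", 1)

# Encode spaces so the name can be embedded in the REST URL later on
organisation = name.replace(" ", "%20")
print(organisation)  # Megacorp%20Burgers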

Step 1:

Rename the stage from “Obtain Term GUID” to “Get Organisation Name” (I trust that by part 11 a screenshot is not required for this)

Step 2:

Delete the second workflow action called Calculate Variable: index plus 1 (Output to Variable:calc) as we don’t need the variable calc anymore. In addition, delete the workflow action “Copy from Current Item: Organisation_0”. You should be left with two actions and the transition to stage logic as shown below.

image

Step 3:

Add an Extract Substring from Start of String workflow action in between the two remaining actions. Click the “0” hyperlink and click the fx button. In the Lookup for Integer dialog, set it to the existing variable Index. Click on the “string” hyperlink and set it to the Organisation_0 column from the Current Item. Finally, click the (Output to…) hyperlink and create a new string variable called Organisation.

image

Now, at this point we need to pause and think about what we are doing. If you recall part 10, I had trouble getting the format right for the URL that embeds CAML inside a REST web service call. The culprit was spaces: I had to replace any occurrence of a space in the URL with a percent-encoded space (%20). Take a look at the URL that was tested in Fiddler below to see this in action…

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>Megacorp%20Burgers</Value></Eq></Where></Query></View>”}

Look toward the end of the URL where the organisation is specified (marked in bold). What do you notice?

Yep – the space between Megacorp and Burgers is also encoded. But this causes a problem, since the current value of the Organisation variable contains the space. So let’s deal with this now by encoding spaces.

Step 4:

Add a Replace Substring in String workflow action. Click the first string hyperlink and type in a single space. In the second string hyperlink, type in %20. In the third string hyperlink, click the fx button and add the Organisation variable. In the final hyperlink (Output to Variable:Output), choose the variable Organisation.

image

After all this manipulation of the Organisation variable, it is probably worthwhile logging it to the workflow history list so we can see if the above steps work as expected.

Step 5:

Click the Log Variable:TermGUID to the workflow history list action and change the variable from TermGUID to Organisation. The action will now be called Log Variable:Organisation to the workflow history list

image

Step 6:

In the Transition to stage section, find the “If Variable: TermGUID is not empty” condition and change the variable from TermGUID to Organisation

image

Step 7:

Create a new workflow stage and call it “Get X-RequestDigest”. Then in the Transition to stage section of the Get Organisation Name stage, find the “Go to Get Process Owners” and change the stage from Get Process Owners to Get X-RequestDigest.

The adjusted workflow should now look like the image below…

image

Getting the X-RequestDigest…

If you recall from part 9, we need to call the contextinfo web service so we can extract the FormDigestValue to use in our CAML-embedded web service call to the Process Owners list. If that statement makes no sense, then go back and read part 9; otherwise, you should already know what to do! Bring on the dictionary variables and the Call HTTP Web Service action!

Step 1:

Go to the Get Process Owners stage further down and find the very first action – a Build Dictionary action that creates a variable called RequestHeader. Right click on it and choose Move Action Up. This will move the action into the Get X-RequestDigest stage as shown below.

image  image

What are we doing here? This action was the one we created in part 9 that asks SharePoint to bring back data in JSON format. We first learnt all about this in part 4 when I explained JSON and part 5 when I explained how dictionary variables work.

Step 2:

Add a Call HTTP Web Service action after the build dictionary action. For the URL, use the string builder and add a lookup to the Current Site URL (found under Workflow Context in the data source dropdown). Then append the string “_api/contextinfo” to complete the URL of the web service. Also, make sure the method chosen is HTTP POST and not GET.

image  image

image

This will construct the URL based on which SharePoint site the workflow is run from (e.g. http://megacorp/iso9001/_api/contextinfo) without hard-coding the URL.

Step 3:

Make sure the workflow action from step 2 is selected and in the ribbon, choose the Advanced Properties icon. In the Call HTTP Web Service Parameters dialog, click the RequestHeaders dropdown and choose the RequestHeader variable and click OK. (Now you know why we moved the build dictionary action in step 1)

image

Step 4:

Click the response hyperlink in the Call HTTP Web Service action and choose to create a new variable. Call it ContextInfo. Also check the name of the variable for the response code and make sure it is set to the responseCode and not something like responseCode2.

image  image

Step 5:

Add an If any value equals value condition below the web service call. For the first value hyperlink, choose the variable responseCode as per step 4. Click the second value hyperlink and type in “OK” as shown below:

image

This condition ensures that the response to the web service call was valid (OK is the same as an HTTP 200 code). If we get anything other than an OK, there is no point continuing with the workflow.

Step 6:

Inside the condition we created in step 5, add a Get an Item from a Dictionary action. Then do the following:

  • In the item by name or path hyperlink, type in exactly “d/GetContextWebInformation/FormDigestValue” without the quotes.
  • In the dictionary hyperlink, choose the variable ContextInfo that was specified in step 4.
  • In the item hyperlink in the “Output To” section, create a new string variable called X-RequestDigest.

All this should result in the action below.

image

Now let’s take a quick pause to understand what we did in this step. You should recognise d/GetContextWebInformation/FormDigestValue as parsing the JSON output: we get the value of FormDigestValue and assign it to the variable X-RequestDigest. As a reminder, here is the JSON output from calling the contextinfo web service using Fiddler. Note the path from d –> GetContextWebInformation –> FormDigestValue.

image_thumb17
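By the way, if you would rather test this call outside of the workflow, it is easy to reproduce in code. Here is a rough Python sketch using the requests library (authentication is environment specific – typically NTLM or Kerberos on-premises – so auth is just a placeholder):

import requests

auth = None  # substitute credentials appropriate to your farm

resp = requests.post(
    "http://megacorp/iso9001/_api/contextinfo",
    headers={"Accept": "application/json;odata=verbose"},
    auth=auth,
)

# The same d/GetContextWebInformation/FormDigestValue path as the
# Get an Item from a Dictionary action above
digest = resp.json()["d"]["GetContextWebInformation"]["FormDigestValue"]
print(digest)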

Step 7:

In the transition to stage section, add an If any value equals value condition. For the first value hyperlink, choose the variable X-RequestDigest that we created in step 6. Click the equals hyperlink and change it to is empty.

image

Step 8:

Under the newly created If Variable: X-Request is empty condition, add a Go to a stage action and set it to End of Workflow. In the Else section of the condition, add a Go to a stage action and set it to the Get Process Owners stage.

image

Cool! We have our X-Request Digest stage all done. Here is what it looks like…

image

This has all been very easy so far, hasn’t it? A big difference from some of the previous posts. But now it’s time to wire up the CAML-inside-REST web service call, and SharePoint is about to throw us another curveball…

Get the Process Owner…

Our next step is to rip the guts out of the existing stage that gets the process owner. Unlike our first solution, we no longer need to loop through the Process Owners list, which means the entire Find Matching Process Owner stage is no longer needed. So before we add new actions, let’s do some tidying up.

Step 1:

Delete the entire stage called “Find Matching Process Owner”. Do this by clicking the stage to select all actions within it, and then choose delete from the SharePoint Designer ribbon. SPD will warn you that this will delete all actions. Go ahead and click OK.

image

Our next step is to attempt to make the CAML-inside-REST web service call. To remind you of what the URL will look like, here is the one we successfully tested in part 10. Ugly, isn’t it? Now you know why developers are an odd bunch – they deal with this stuff all day!

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>Megacorp%20Burgers</Value></Eq></Where></Query></View>”}

Let’s take our time here, because as you can see, the URL we have to craft is complex. First up, we need to use a Build a Dictionary action to create the HTTP headers we need (including the X-RequestDigest). Recall from part 9 that we also need to set Content-length to 0 and Accept to application/json;odata=verbose.
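Before we wire this up in SharePoint Designer, it may help to see the whole request in code form. The Python sketch below is purely illustrative – it assumes you already have the form digest from the previous stage, auth is again a placeholder for your farm’s credentials, and note that an HTTP client like requests works out Content-Length for you, whereas the workflow action has to set it to 0 explicitly:

import requests

auth = None    # farm credentials go here
digest = "..." # the X-RequestDigest value obtained in the previous stage

headers = {
    "Accept": "application/json;odata=verbose",
    "X-RequestDigest": digest,
}

# The CAML-in-REST URL from part 10, spaces already encoded as %20
url = (
    "http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')"
    "/GetItems(query=@v1)?@v1={\"ViewXml\":\"<View><Query><ViewFields>"
    "<FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/>"
    "</ViewFields><Where><Eq><FieldRef%20Name='Organisation'/>"
    "<Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value>"
    "</Eq></Where></Query></View>\"}"
)

process_owner_json = requests.post(url, headers=headers, auth=auth).json()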

Step 2:

Add a Build dictionary action as the first action in the Get Process Owners section. Click the this hyperlink and then the add button in the Build a Dictionary dialog. Add the following dictionary items:

  • Add a string called Accept and a value of: application/json;odata=verbose

image

  • Add a string called Content-length and a value of 0

image

  • Add a string called X-RequestDigest. In the value textbox, click the fx button and choose the workflow variable called X-RequestDigest.

image  image  image

Your dictionary should look like this:

image

Click OK and set the dictionary variable name to the existing variable called RequestHeader. The completed action should look like the image below:

image

Now let’s turn our attention to creating the web service URL we need.

Step 3:

Find the existing Call HTTP Web Service action in the Get Process Owners stage. Click the URL hyperlink and click the ellipses to bring up the string builder dialog. Delete the existing URL so we can start over. Add the following entries back (carefully!)

  • 1) A lookup to the Site URL from the Workflow Context
  • 2) The string “_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>”
  • 3) A lookup to the Organisation workflow variable
  • 4) The string “</Value></Eq></Where></Query></View>”}”

This should look like the image below:

image

A snag…

Click OK and see what happens. Uh-oh. We are informed that “Using the special characters ‘[%%]’ or [%xxx%]’ in any string, or using the special character ‘{‘ in a string that also contains a workflow lookup may corrupt the string and cause an unexpected error when the workflow runs” – Ouch!

image

How do we get out of this issue?

Well, we are using two workflow lookups in the string – the first being the site URL at the start, and the second being the Organisation variable embedded in the CAML bit of the URL. Since it is complaining about using certain special characters in combination with workflow lookups, let’s break the URL up into pieces by creating a couple of string variables. At the start of step 3 above, we listed 4 elements that make up the URL. Let’s use that as the basis for doing this…

Step 4:

Add a Set Workflow Variable action below the build dictionary action in the Get Process Owners stage. Call the variable URLStart and set its value to: _api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>

image   image

Step 5:

Add another Set Workflow Variable action in the Get Process Owners stage. Call the variable URLEnd and set its value to: “</Value></Eq></Where></Query></View>”}”

image

Step 6:

Edit the existing Call HTTP Web Service action in the Get Process Owners stage. Click the URL hyperlink and add the following entries back (carefully!)

  • 1) A lookup to the Site URL from the Workflow Context
  • 2) A lookup to the URLStart workflow variable
  • 3) A lookup to the Organisation workflow variable
  • 4) A lookup to the URLEnd workflow variable

This should look like the image below:

image

Click OK and in the Call HTTP Web Service dialog, make sure the HTTP method is set to HTTP POST. Click OK

image  image

Step 7:

Select the Call HTTP Web Service action and click the Advanced Properties icon in the ribbon. In the Call HTTP Web Service Properties dialog box, click the RequestHeaders parameter and, in the drop down list to the right of it, choose the RequestHeader variable created in step 2. Click OK.

image_thumb97    image_thumb103

 

Step 8:

Select the Call HTTP Web Service action and click the variable next to the ResponseContent to section. Create a variable called ProcessOwnerJSON. This variable will store the JSON returned from the web service call.

image    image

Step 9:

In the Transition to stage section of the Get Process Owners stage, look for the If responseCode equals OK condition. Set the stage to Obtain Userid as shown below:

image

Step 10:

To make the workflow better labelled, rename the existing Get Process Owners stage to Prepare and execute process owners web service call. This workflow stage is going to end when it has attempted the call, and we will create a new stage to extract the process owner and create the approval task. At this point the workflow stage should look like the image below:

image

Conclusion

We will end the post at this point, as it is already very long. In the next post, we will make a couple of tweaks to the Obtain Userid workflow stage and test the workflow out. For your reference, here is the complete workflow as it stands…

image

image

image

Thanks for reading

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 10

This entry is part 10 of 13 in the series Workflow

Hi there, and welcome back to my series of articles that puts a real-world viewpoint on SharePoint 2013 workflow capabilities. This series is pitched at Business Analysts, SharePoint hackers and generally anyone who might be termed a citizen developer. It shows the highs and lows of out of the box SharePoint Designer workflows, and hopefully helps organisations make an informed decision about whether to use what SharePoint provides or move to the 3rd party landscape.

By now you should be well aware of some of the useful new workflow capabilities, such as stages, looping, and support for calling web services and parsing the data returned via dictionary objects. You also now understand the basics of REST/oData and CAML. At the end of the last post, we learnt that it is possible to embed CAML queries into REST/oData, which gets around the issue of not being able to filter lists via managed metadata columns. We proved this could be done, but we did not actually try it with the actual CAML query that filters a managed metadata column. It is now time to rectify this.

Building CAML queries

Now if you are a SharePoint developer worth your salt, you already know CAML, because there are mountains of documentation on this topic on MSDN as well as various blogs. But a useful shortcut for all you non-coders out there is to make use of a free tool called CAMLDesigner 2013. This tool, although unstable at times, is really easy to use, and in this section I will show you how I used it to create the CAML XML we need to filter the Process Owners list via the Organisation column.

After you have downloaded CAMLDesigner and successfully gotten it installed, follow these steps to build your query.

Step 1:

Start CAMLDesigner 2013 and on the home screen, click the Connections menu.

image

Step 2:

In the connections screen that slides out from the right, enter http://megacorp/iso9001 into the top textbox, then click the SharePoint 2013 and Web Services buttons. Enter the credentials of a site administrator account and then click the Connect icon at the bottom. If you successfully connect to the site, CAMLDesigner will show the site in the left hand navigation.

image  image

Step 3:

Click the arrow to the left of the Megacorp site and find the Process Owners list. Click it, and all of the fields in the list will be displayed as blue boxes below the These are the fields of the list section.

image

Step 4:

Drag the Organisation column to the These are the selected fields section to the right. Then do the same for the Assigned To column. If you look closely at the second image, you will see that the CAML XML is already being built for you below.

image     image

Step 5:

Now click on the Where menu item above the columns. Drag the Organisation column across to the These are the selected fields section. As you can see in the second image below, once it is dragged across, a textbox appears, along with a blue button showing an operator. Also take note of the CAML XML being built below – you can see that it has added a <Where></Where> section.

image

image

image

Step 6:

In the textbox in the Organisation column you just dragged, type in one of the Megacorp organisations, e.g. Megacorp Burgers. Note how the XML changes…

image

Step 7:

Click the Execute button (the Play icon across the top). The CAML query will be run, and any matching data will be returned. In the example below, you can see that the user Teresa Culmsee is the process owner for Megacorp Burgers.

image

image

Step 8:

Copy the XML from the window to clipboard. We now have the XML we need to add to the REST web service call. Exit CAMLDesigner 2013.

image

Building the REST query…

Armed with our newly minted CAML XML as shown below, we need to return to Fiddler and draft it into the final URL.

<ViewFields>
   <FieldRef Name='Organisation' />
   <FieldRef Name='AssignedTo' />
</ViewFields>
<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>

As a reminder, the URL that we had working in the previous post looked like this:

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query></Query></View>”}

Let’s now munge them together by stripping the carriage returns from the XML and inserting it between the <Query> and </Query> sections. This gives us the following large and scary looking URL.

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields> <FieldRef Name=’Organisation’ /> <FieldRef Name=’AssignedTo’ /> </ViewFields> <Where> <Eq> <FieldRef Name=’Organisation’ /> <Value Type=’TaxonomyFieldType’>Megacorp Burgers</Value> </Eq> </Where></Query></View>”}

Are we done? Unfortunately not. If you paste this into the Fiddler composer, Fiddler will get really upset and display a red warning in the URL textbox…

image

If, despite Fiddler’s warning, you try to execute this request, you will get a curt response from SharePoint in the form of an HTTP/1.1 400 Bad Request response with the message HTTP Error 400. The request is badly formed.

The fact that Fiddler is complaining about this URL before it has even been submitted to SharePoint allows us to work out the issue via trial and error. If you cut out a chunk of the URL, Fiddler is okay with it. For example, this trimmed URL is considered acceptable by Fiddler:

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query>

But adding this little bit makes it go red again.

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields> <FieldRef Name=’Organisation’ />

Any ideas what the issue could be? Well, it turns out that the use of spaces was the issue. I removed all the spaces from the URL where I could, and where I could not, I percent-encoded them as %20. Thus the URL above turned into the URL below, and Fiddler accepted it:

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’ />

So returning to our original big URL, it now looks like this (and Fiddler is no longer showing me a red angry textbox):

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>Megacorp%20Burgers</Value></Eq></Where></Query></View>”}

image
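As an aside, if you wanted to reproduce this encoding step in code rather than by hand, a blunt replace does the trick. Here is a small Python sketch (the field and value names are the ones from this walkthrough):

view_xml = (
    "<View><Query><ViewFields>"
    "<FieldRef Name='Organisation'/>"
    "<FieldRef Name='AssignedTo'/>"
    "</ViewFields><Where><Eq>"
    "<FieldRef Name='Organisation'/>"
    "<Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>"
    "</Eq></Where></Query></View>"
)

# Replace every space with %20 - the same trick that satisfied Fiddler
encoded = view_xml.replace(" ", "%20")

url = (
    "http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')"
    "/GetItems(query=@v1)?@v1={\"ViewXml\":\"" + encoded + "\"}"
)
print(url)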

So let’s see what happens. We click the execute button. Woohoo! It works! Below you can see a single matching entry, and it appears to be the entry we saw in CAMLDesigner 2013. We can’t tell for sure, because the Assigned To column is returned as AssignedToId and we have to call another web service to return the actual username. We covered this issue and the web service to call extensively in part 8, but to quickly recap, we need to pass the value of AssignedToId to the http://megacorp/iso9001/_api/Web/GetUserById() web service. In this case, http://megacorp/iso9001/_api/Web/GetUserById(8), because the value of AssignedToId is 8.

The images below illustrate this. The first one shows the process owner for Megacorp Burgers – note that the value of AssignedToId is 8. The second image shows what happens when 8 is passed to the GetUserById web service call – check the Title and LoginName fields.

image image

Conclusion

Okay, so now we have our web service URLs all sorted. In the next post we are going to modify the existing workflow. Right now it has four stages:

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

We will change it to the following stages:

  • Stage 1: Obtain Term Name (extracts the name of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get the X-RequestDigest (We will grab the request digest we need to do our HTTP POST to query the Process Owners list. If successful move to stage 3)
  • Stage 3: Get Process Owner (makes the REST web service call to grab the Process Owners for the organisation specified by the Term name from stage 1. Grab the value of AssignedToID and move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

One final note: after the epic journey we have taken, you might think that doing this in a SharePoint Designer workflow should be a walk in the park. Unfortunately this is not quite the case and, as you will see, there are a couple more hurdles to cross.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com
