A Clever Workaround for Saving Photos to SharePoint from PowerApps


At the time of writing, a common request for PowerApps is the ability to upload photos to SharePoint. It makes perfect sense, especially now that it's really easy to make a PowerApp that is bound to a SharePoint list. Sadly, although Microsoft have long acknowledged the need in the PowerUsers forum, a solution has not been forthcoming.

I have looked at the various workarounds, such as using the OneDrive connector or a custom web API, but for me these were fiddly. Thanks to ideas from John Liu, I’ve come up with a method that is more flexible and less fiddly to implement, provided you are okay with a bit of PowerShell, and (hopefully) with PnP PowerShell. One advantage of this method is that it handles an entire gallery of photos in a single transaction, rather than just a single photo at a time.

Now in the old days I would have meticulously planned out a multi-part series of posts on a topic like this, because I have to pull together quite a lot of conceptual threads into a single solution. But since the pace of change in the world of Office365 is so rapid, my solution may be out of date by the time I publish it. So instead I offer a single summary post of my solution and leave the rest for you to figure out. Sorry followers, it's just too hard to do epic multi-part articles these days – times have changed.

What you need

  1. An understanding of JSON and basic idea of web services
  2. An Azure subscription
  3. Access to Azure functions
  4. The PnP PowerShell cmdlets
  5. A Swagger file (don’t worry if this makes no sense now)
  6. To be signed up to PowerApps

How we are going to solve this…

In a nutshell, we will create an Azure function that uses PowerShell to receive photos from PowerApps and upload them to a SharePoint library. Here is my conceptual diagram that I spent hours and hours drawing…

[conceptual diagram]

To do this we will need to do a few things.

  1. Customise PowerApps to store photo data in the way we need
  2. Create and configure our Azure function
  3. Write and test the PowerShell code to upload to SharePoint
  4. Create a Swagger file so that PowerApps can talk to our Azure function
  5. Create a custom PowerApps connection/datasource to use the Azure function
  6. Test successfully and bask in the glory of your awesomeness

Step 1: Customise PowerApps to store photo data in the way we need

Let’s set up a basic proof of concept PowerApp. We will add a camera control to take photos, a picture gallery to view the photos and a button to submit the photos to SharePoint. I’ll use the PowerApps desktop client rather than the web page for this task and create a blank app using the Phone Layout.

[screenshot]

From the Insert menu, choose Camera from the Media dropdown to add a camera control to the screen. Leave it up near the top…

[screenshot]

From the Insert Menu, add a Gallery control. For my demo I will use the vertical gallery. Move it down below the camera control so it looks like the second image below.

[screenshots]

From the Insert Menu, add two buttons below the gallery. Set the text property on one to “Submit” and the other to “Clear”. I realise the resulting layout will not win any design awards but just go with it. Use the picture below to guide you.

[screenshots]

Now let’s wire up some magic. Firstly, we will set it up so that clicking the camera control takes a photo and saves it to PowerApps storage. To do this we will use the Collect function. Assuming your camera control is called “Camera1”, select it, and set the OnSelect property to:

Collect(PictureList,Camera1.Photo)

[screenshot]

Now when a photo is taken, it is added to an in-memory PowerApps data source called PictureList. To see this in action, preview the PowerApp and click the camera control a couple of times. Exit the preview and choose “Collections” from the left hand menu. You will now see the PictureList collection with the photos you just took, stored in a field called Url. (The reason it is called Url and not “Photo” will become clear later.)

[screenshot]

Now let’s wire up the clear button to clear out this collection. Choose the button labelled “Clear” and set its OnSelect property to:

Clear(PictureList)

[screenshot]

If you preview the app and click this button, you will see that the collection is now empty of pictures.

The next step is to wire up the Gallery to the PictureList collection so that you can see the photos being taken. To do this, select the gallery control and set the Items property to PictureList as shown below. Preview this and you should be able to take a set of photos, see them added to the gallery and be able to clear the gallery via the button.

[screenshots]

Now we get to a task that will not necessarily make sense until later. We need to massage the PictureList collection to get it into the right format to send to SharePoint. For example, each photo needs a filename, and in a real-world scenario, we would likely further customise the gallery to capture additional information about each photo. For this post I will not do this, but I want to show you how you can manipulate data structures in PowerApps. To do this, we are going to now wire up some logic to the “Submit” button. First I will give you the code before I explain it.

Clear(SubmitData);
ForAll(PictureList,Collect(SubmitData, { filename: "a file.jpg", filebody: Url }))

[screenshot]

In PowerApps, it is common to add multiple statements to controls, separated by a semicolon. Thus, the first line above initialises a new collection I have called “SubmitData”. If this collection already has data in it, the Clear function wipes it out. The second line uses two functions, ForAll and the previously introduced Collect. ForAll([collection],[formula]) iterates through [collection] and performs the tasks specified in [formula]. In our case we are adding records to the SubmitData collection. Each record consists of two fields and is in JSON format – hence the curly braces. The first field is called filename and the second is called filebody. In my example the filename is a fixed string, but filebody grabs the Url field from the current item in PictureList.

To see the effect, run the app, click submit and then re-examine the collections. Now you will see two collections listed – the original one that captures the photos from the camera (PictureList), and one called SubmitData that now has a field for filename and a field called filebody with the photo. I realise that setting a static filename of “a file.jpg” is not particularly useful to anybody, and I will address this a little later; the point is we now have the data in the format we need.

[screenshot]

By the way, behind the scenes, PowerApps stores the photo using the Data URI scheme. This is essentially a Base64 encoded version of the image with a descriptor at the start, and it can be embedded directly in HTML. When you think about it, this makes sense in some situations because it reduces the number of HTTP round trips between browser and server. For example, here is a small image encoded and embedded directly in HTML using this technique.

<img src="data:image/png;base64,iVBORw0KGgoAAA
ANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4
//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU
5ErkJggg==" alt="Red dot" />

The implication of this format is that when PowerApps talks to our Azure function, it will send this sort of JSON…

[ {  "filename": "boo.jpg",
"filebody": "data:image/jpeg;base64,/9j/4AAQSkZJR [ snip heaps of Base64 ] T//Z" },
{  "filename": "boo3.jpg",
"filebody": "data:image/jpeg;base64,/9j/TwBAQMEBA [ snip heaps of Base64 ] m22I" } ]

In the next section we will set up an Azure Function and write the code to handle the above format, so save the app in its current state and give it a nice name. We are done with PowerApps for now…

Step 2: Create and configure our Azure function

Next we are going to create an Azure function. This is the bit that is likely new knowledge for many readers. You can read all about them on their Azure page, but my quick explanation is that they allow you to take a script or small piece of code and turn it into a fully fledged webservice. As you will soon see, this is very useful indeed (as well as very cost effective).

Now like many IT Pros, I am a PowerShell hacker and I have been using the PowerShell PnP libraries for all sorts of administrative purposes for quite some time. In fact if you are administering an Office365 tenant and you are not using PnP, then I can honestly say you are missing out on some amazing time-saving tools and you owe it to yourself to skill up in this area. Of course, I realise that many readers will not be familiar with PowerShell, let alone PnP, and I expect some readers have not done much coding at all. Luckily the code we are going to use is just a few lines and I think I can sufficiently explain it.
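To give you a sense of why I rate PnP so highly, here is the sort of thing it reduces to a couple of lines (the tenant URL below is a placeholder – substitute your own):

Connect-PnPOnline -Url "https://yourtenant.sharepoint.com" -Credentials (Get-Credential)
Get-PnPList | Select-Object Title, ItemCount   # every list in the site, with item counts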

But we are getting ahead of ourselves; let’s create the Azure function and then revisit PowerShell. Assuming you have an Azure subscription, visit functions.azure.com and log in. If not, sign up for the free account and then create a function app to host the function. I called mine MyFunctionsDemo but yours will have to be something different. This will take a minute or two to complete and you will be redirected to the Azure functions portal.

[screenshots]

Once the web application to host your functions is created, click the + next to the Functions button to create a new function. PowerShell is still in preview, so you have to click the option to create a custom function. On the next screen, in the Language dropdown, choose PowerShell.

[screenshots]

Our function is going to be triggered from PowerApps when a user clicks the submit button. PowerApps will make a HTTP request so this is a HttpTrigger scenario. Click the HttpTrigger-PowerShell template, give it a name (I called mine PhotoSendSP) and click the Create button. If all goes to plan you will be presented with a screen with some basic PowerShell code… essentially a “Hello World” web service.

[screenshots]

Let’s test this newly minted Azure function before we customise it. If you look to the right of the screen above, you will see a vertical “Test” label. Click it and you will be presented with a screen that allows you to craft some data to send to your shiny new function. You can see that the test is going to be a HTTP POST by default. As you can see below, there is a basic JSON entry with a single name/value pair, “name”: “Azure”. Change the Azure string to something else and then click the Run button. The result will be displayed below the JSON as shown below.

[screenshots]

Now let’s take a quick look at the PowerShell code provided to you by default. Only lines 2, 3 and 11 matter for our purposes. What lines 2 and 3 show is that all of the details posted to this web service are stored in a variable called $req. Line 2 reads that request and converts it from JSON into an object stored in a variable called $requestbody. Line 3 then asks $requestbody for the value of “name”, which if you look at the screenshots above is what you set in the test. Line 11 then writes the output to a variable called $res, which is the response back to the caller of this web service. In this case you can see it returns “Hello ” with $name appended to it.

[screenshot]
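In case the screenshot is hard to read, the default HttpTrigger-PowerShell template at the time of writing looked approximately like this (numbered to match the discussion above):

1.  # POST method: $req
2.  $requestBody = Get-Content $req -Raw | ConvertFrom-Json
3.  $name = $requestBody.name
4.
5.  # GET method: each querystring parameter is its own variable
6.  if ($req_query_name)
7.  {
8.      $name = $req_query_name
9.  }
10.
11. Out-File -Encoding Ascii -FilePath $res -inputObject "Hello $name"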
Now that we have seen the PowerShell code, let’s now update it with code to receive data from PowerApps and send it to SharePoint.

Step 3: Write and test the PowerShell code that uploads to SharePoint

If you recall from the PowerApps section, the data we are sending to SharePoint is one or more photos. The data will look like this…

[ {  "filename": "boo.jpg",
"filebody": "data:image/jpeg;base64,/9j/4AAQSkZJR [ snip heaps of Base64 ] T//Z" },
{  "filename": "boo3.jpg",
"filebody": "data:image/jpeg;base64,/9j/TwBDAQMEBAUEBQ  m22I" } ]

In addition, for the purposes of keeping things simple, I am going to hard code various things like the document library to save the files to and not worry about exception handling. Below is my sample code with annotations at the end…

1.  Import-Module "D:\home\site\wwwroot\modules\SharePointPnPPowerShellOnline\2.15.1705.0\SharePointPnPPowerShellOnline.psd1" -Global
2.  $requestBody = Get-Content $req -Raw | ConvertFrom-Json
3.  $username = "paul@tenant.onmicrosoft.com"
4.  $password = $env:PW;
5.  $siteUrl = "https://tenant.sharepoint.com"
6.  $secpasswd = ConvertTo-SecureString $password -AsPlainText -Force
7.  $creds = New-Object System.Management.Automation.PSCredential ($username, $secpasswd)
8.  Connect-PnPOnline -url $siteUrl -Credentials $creds
9.  $ctx = get-pnpcontext
10. $doclib = $ctx.Web.Lists.GetByTitle("Documents")
11. ForEach ($item in $requestbody)
12. {
13.    $filename = $item.filename
14.    $rawfiledata = $item.filebody
15.    $rawfiledata = $rawfiledata -replace 'data:image/jpeg;base64,', ''
16.    $bytes = [System.Convert]::FromBase64String($rawfiledata)
17.    # uses comma notation related to .net reflection http://piers7.blogspot.com.au/2010/03/3-powershell-array-gotchas.html
18.    $memoryStream = New-Object System.IO.MemoryStream (,$bytes)
19.    $FileCreationInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
20.    $FileCreationInfo.Overwrite = $true
21.    $FileCreationInfo.ContentStream = $memoryStream
22.    $FileCreationInfo.URL = $filename
23.    $Upload = $doclib.RootFolder.Files.Add($FileCreationInfo)
24.    $ctx.Load($Upload)
25.    $ctx.ExecuteQuery()
26. }

 

  • Line 1 loads the PnP PowerShell module. Without this, commands like Connect-PnPOnline and Get-PnPContext will not work. I’ll show how this is done after explaining the rest of the code.
  • Line 2 takes the JSON data sent by PowerApps and converts it into an object called $requestbody.
  • Lines 3-7 are all about connecting to my SharePoint online tenant. Line 4 references a variable called $env:PW. The idea here is to avoid passwords being stored in the code in clear text. The password is instead pulled from an environment variable that I will show later.
  • Lines 8-10 connect to the site collection and then to the default document library within it.
  • Line 11 looks at the data sent from PowerApps and loops through each image/filename pair.
  • Lines 13-18 grab the file name and file data. The file data is converted into a MemoryStream, which is a way to represent a file in memory.
  • Lines 19-25 then upload the in-memory image to the document library in SharePoint, based on the filename provided. (Note for any PnP gurus wondering why I did not use Add-PnPFile: this cmdlet did not properly handle the MemoryStream, and the uploaded images were not proper binary and always broken. See the sketch below.)
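For the record, the Add-PnPFile attempt I abandoned looked something like the line below. The cmdlet does accept a stream parameter, so your mileage may vary with later PnP releases:

Add-PnPFile -FileName $filename -Folder "Shared Documents" -Stream $memoryStream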

So now that we have seen the code, let’s sort out some final configuration to make this all work. A lot of the next section I learnt from John Liu and from watching the excellent Office Patterns and Practices Special Interest Group webinar he recently did with my all-time SharePoint hero, Vesa Juvonen.

Installing PnP PowerShell Components

First up, none of this will work without the PnP PowerShell module deployed to the Azure Function App. The easiest way to do this is to install the PnP PowerShell cmdlets locally and then copy the entire installation up to the Azure function environment. John Liu explains this in the aforementioned webinar but in summary, the easiest way to do it is to use the Kudu tool that comes bolted onto Azure functions. You can find it by clicking the Azure function app name (“MyFunctionsDemo” in my case) and choosing the “Platform Features” menu. From here you will find Kudu hiding under the Development Tools section. When the Kudu tab loads, click the Debug console menu and create a CMD or PowerShell console (it doesn’t matter which).

[screenshots]

We are going to use this console to copy up the PnP PowerShell components. You can ignore the debug console and focus on the top half of the screen. This is showing you the top level folder structure for the Azure function application. Click on the site and then wwwroot folders. This is the folder where all of your functions are stored (you will see a folder matching the name of the function we made in step 2). What we will do is install the PnP modules at this level, so they can be used by other functions that you are sure to develop.

[screenshots]

So click the + icon to create a folder here and call it Modules. From here, drag and drop the PnP PowerShell install location into this folder – in my case C:\Program Files\WindowsPowerShell\Modules\SharePointPnPPowerShellOnline\2.15.1705.0. I copied the SharePointPnPPowerShellOnline\2.15.1705.0 folder and all of its content here, as I want to be able to maintain multiple versions of PnP as I develop functions over time.

[screenshots]

Now that you have done this, the first line of the PowerShell script will make sense. Make sure you update the version number in the Import-Module command to the version of PnP you uploaded.

Import-Module "D:\home\site\wwwroot\modules\SharePointPnPPowerShellOnline\2.15.1705.0\SharePointPnPPowerShellOnline.psd1"

Handling passwords

The next thing we have to do is address the issue of passwords. This is where the $env:PW comes in on line 4 of my code. You see, when you set up an Azure function application, you can create your own settings that can drive the behaviour of your functions. In this case, we have made an environment variable called PW in which we will store the password to access this site collection. This keeps clear text passwords out of the code, but unfortunately it is a security by obscurity scenario, since anyone with access to the Azure function can review the environment variable and retrieve it. If I get time, I will revisit this via App Only Authentication and see how I go. I suspect though that this problem will get “properly” solved when Azure functions support the Azure Key Vault.

In any event, you will find this under the Application settings link in the Platform Features tab. Scroll down until you find the “App settings” section and add your password in as shown in the second image below.

[screenshots]

Testing it out…

Right! At this point, we have all the plumbing done. Let’s test to see how it goes. First we need to create a JSON file in the required format that I explained earlier (the array of filename and filebody pairs). I crafted these by hand in Notepad as they are pretty simple. To remind you, the format was:

[ {  "filename": "boo.jpg",
"filebody": "data:image/jpeg;base64,/9j/4AAQSkZJR [ snip heaps of Base64 ] T//Z" },
{  "filename": "boo3.jpg",
"filebody": "data:image/jpeg;base64,/9j/TwBDAQMEB [ snip heaps of Base64 ] m22I" } ]

To generate the filebody elements, I used the covers of my two books (they are awesome – buy them!) and called them HG2BP and HG2M respectively. To create the Base64 encoded images in the Data URI scheme, I went to https://www.base64-image.de and generated the encoded versions. If you are lazy and want to use a pre-prepared file, just download the one I used for testing.

[book cover images]
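Incidentally, if you would rather not upload your photos to a third party website, a few lines of PowerShell will produce the same Data URI encoding. This is just a sketch, and the file paths are hypothetical:

# read the image and Base64 encode it
$bytes = [System.IO.File]::ReadAllBytes("C:\Temp\HG2BP.jpg")
$encoded = [System.Convert]::ToBase64String($bytes)
# prepend the Data URI descriptor and save the result for pasting into the test JSON
"data:image/jpeg;base64,$encoded" | Out-File "C:\Temp\HG2BP.txt"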

To test it, all we need to do is click the right hand Test link and paste the JSON into it. Click the Run button and hope for the best! As you can see in my example below, the web service returned a 200 status which means hunky dory, and the logs showed the script executing successfully.

[screenshot]

Checking my document library and they are there… woohoo!!

[screenshot]

So basically we have a lot of the bits in place. We have proven that our Azure function can take a JSON file with encoded images, process that file and save the images to a SharePoint document library. You might be thinking that all we need to do now is wire up PowerApps to this function? Yeah, so did I, but little did I realise the pain I was about to endure…

Step 4: Create a Swagger file so that PowerApps can talk to our Azure function

Now we come to the most painful part of this whole saga. We need to describe our Azure function using a standard called Swagger (or OpenAPI). This provides important metadata so that PowerApps can make it easy for users to consume. This will make sense soon enough, but first we have to create it, which is a royal pain in the ass. I found the online documentation for Swagger to be lacking, and it took me a while to understand enough of the format to get it working.

So first up, to make things simpler, let’s reduce some of the complexity. Our Azure function is a simple HTTP POST. We have not defined any other type of request, so let’s make this formal, as it will generate a much less ugly Swagger definition. Expand your function and choose the “Integrate” option. On the next screen, under Triggers, you will find a dropdown labelled “Allowed HTTP methods”. Change the default value to “Selected methods” and then untick all HTTP methods except for POST. Click Save.

[screenshots]

Now click back to your function app, and choose “API Definition” from the top menu. This will take you to the screen where you create/define your Swagger file.

[screenshot]

On the initial screen, set your API definition source to function if asked, and you should see a screen that looks somewhat like this…

[screenshot]

Click the Generate API definition template button as suggested by the comment in the code box in the middle. This will generate a Swagger file, and on the right side of the screen the file will be used to generate a summary of your API. You can see the URL of your Azure app, some information about an API key (which we will deal with later) and below that, the PhotoSendSP function exposed as a web service (/api/PhotoSendSP).

[screenshot]

Now at this point you are probably thinking “okay, so it’s unfamiliar, but this is pretty easy”, and you would be right. Where things got nightmarish for me was working out how to understand and customise the Swagger file, as the generated template is incomplete. All it has done is define our function (note the paths section in the above screenshot – can you see all those empty square brackets? That’s what you need to fill in now).

For the sake of brevity, I am not going to describe the ins and outs of this format (and I don’t fully know it yet anyway!). What I can tell you is that getting this right is a painful and time-consuming combination of trial and error, reading the swagger spec and testing in PowerApps. Let’s hope my hints here save you some frustration.

The first step is to create a definition for the format of the data that our Azure function accepts as input. If you look closely above, you will see a section called definitions. Paste the following into that section so it looks like the screenshot below. Note: if you see any symbol apart from a benign warning message next to the “Photos:” line, then you do not have it right!

definitions:
   Photos:
      type: object
      required:
         - filename
         - filebody
      properties:
         filename:
            type: string
            example: image.jpg
         filebody:
            type: string

[screenshot]

So what we have defined here is an object called Photos which consists of two required properties, filename and filebody. Both are assumed to be in string format, and filename also has an example to illustrate what is expected. Depending on how this Swagger file is consumed by another application that supports Swagger, one can imagine that example showing up in online help or intellisense when our function is being called.

Now let’s make use of this definition. Paste this into the parameters and description sections. Note the “schema:” section. Here we have told the Swagger file that our function is expecting an array of objects based on the Photos definition that we created earlier.

parameters:
  - name: photocollection
    in: body
    description: “The encoded files”
    required: true
    schema:
       type: array
       items:
          $ref: '#/definitions/Photos'
description: "A collection of photos and filenames"

[screenshot]

Finally, let’s finish off by defining that our Azure function consumes and produces its data in JSON format. Although our sample code is not producing anything back to PowerApps, you can imagine situations where we might do something like send back a JSON array of the SharePoint URLs of each photo.

produces:
- application/json
consumes:
- application/json

[screenshot]
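To help you check your work, once the three fragments above are pasted in, the relevant slice of the file should end up looking roughly like this. I have omitted the boilerplate the template generates around it (host, security definitions, responses and so on):

paths:
  /api/PhotoSendSP:
    post:
      description: "A collection of photos and filenames"
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        - name: photocollection
          in: body
          description: "The encoded files"
          required: true
          schema:
            type: array
            items:
              $ref: '#/definitions/Photos'
definitions:
  Photos:
    type: object
    required:
      - filename
      - filebody
    properties:
      filename:
        type: string
        example: image.jpg
      filebody:
        type: string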

Okay, so we are all set. Now to be clear, there is a lot more to Swagger, especially if you wanted to call our function from flow, but for now this is enough. Click the Save button, and then click the button “Export to PowerApps + Flow”. You will be presented with a new panel that explains the process we are about to do. Feel free to read it, but the key step is to download the swagger file we just created.

[screenshot]

Okay, so if you have made it this far, you have a swagger JSON file and you are in the home stretch. Let’s now head back to PowerApps!

Step 5: Create a custom PowerApps connection/datasource to use the Azure function

Back in PowerApps, we need to make a new custom connection to our Azure function. Click the Connections menu item and you will be redirected to web.powerapps.com. Click “Manage Custom Connections” and then click “Create a custom connector”. This will take you to a wizard.

[screenshots]

The first step is to upload the swagger file you created in step 4 and before you do anything else, rename your connector to something short and sweet, as this will make it easier when displayed in the PowerApps list of connections.

[screenshot]

Scroll to the bottom of the page and click the “Continue” button. Now you are presented with some security options for your connector. For context, the default setting for the PowerShell Azure function template we chose in step 2 was to use an API key, so you can leave all of the defaults here, although I like to add the more meaningful label “API Key” (this will make sense soon). Click Continue.

[screenshot]

PowerApps has now processed the swagger file and found the PhotoSendSP POST action we defined. It has also pulled some of the data from the swagger file to prepopulate some fields. Note this screen has some UI problems – you need to hover your cursor over the forms in the middle of the screen to see the scrollbar, so there is more to edit than what you initially see…

[screenshot]

For now, do not enter anything into the summary field and scroll down to look at some of the other settings. The Visibility setting does not really matter for PowerApps, but remember that other online services can call our function. This visibility stuff relates more to Microsoft Flow, so you can ignore it for now. Scroll further down and you will see how our swagger schema has been processed by PowerApps. You can explore this but I suggest leaving it well alone. Below I have used all my MSPaint skills to make a montage to show how this relates to your Swagger file…

[screenshot]

Finally, click Create Connector to wire it up. If all has gone to plan, you will see something resembling the following in the list of custom connectors in PowerApps. If you get this far, congratulations! You are almost there!

[screenshot]

6. Test successfully and bask in the glory of your awesomeness

Now that you have a custom connection, let’s try it out. Open your PowerApp that you created in step 1. From the Content menu, click Data sources, click Add Datasource and then click New Connection. Scroll through the list of connections until you find the one you created in step 5. Click on it and you will be prompted for an API key (now you see why I added that friendly label during step 5).

[screenshots]

Where to find this key? Well, it turns out that it was automagically generated for you when you first created your function! Go back to Azure functions portal and find your function. From the function sub menus, click Manage and you will be presented with a “Functions Keys” section with a default key listed. Copy this to the clipboard, and paste it back into the PowerApps API Key text box and click the Create button. Your datasource that connects to your Azure function is now configured in PowerApps!! (yay!).

[screenshots]

Now way back in step 1 (gosh it seems like such a long time ago), we created a button labelled Submit, with the following formula.

Clear(SubmitData);
ForAll(PictureList,Collect(SubmitData, { filename: "a file.jpg", filebody: Url }))

The problem we are going to have is that all files submitted to the web service will be called “a file.jpg”, as I hardcoded the filename parameter for simplicity. Now if I fully developed this app, I would add a textbox to the gallery so that each photo has to be named prior to being submitted to SharePoint. I am not going to do that here as this post is already too long, so instead I will use a trick I saw online to create a semi-random filename from the date and two random characters. I know it is not truly unique, but it will suffice for our demo.

Here is the formula in all its ugliness.

Clear(SubmitData);
ForAll(PictureList,Collect(SubmitData, { filename: Concatenate(Text( Now(), DateTimeFormat.LongDate ),Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1),Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1),".jpg"), filebody: Url } ) )

Yeah I know… let me break it down for you…

  • Text( Now(), DateTimeFormat.LongDate ) produces a string containing today’s date
  • Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1) selects a random letter/number from the string
  • Concatenate takes the above date and random letters and adds “.jpg” to it.

PowerApps does allow for line breaks in code, so it looks less ugly…

[screenshot]

Let’s quickly test this before the final step of sending it off to SharePoint. Preview the app, take some photos and then examine the collections. As you can see, the SubmitData collection now has unique filenames assigned. (By the way, yes I took these in a car and no, I was not driving at the time! 🙂)

[screenshot]

We are now ready for the final step. We need to call the Azure function!

Go back to your Submit button and add the following line to the end of the existing code, taking into account the name you gave to your connection/datasource. You should find that PowerApps uses intellisense to help you add the line, because it has a lot of metadata from the Swagger file.

'myfunctionsdemo.azurewebsites.net'.apiPhotoSendSPpost(SubmitData)

[screenshot]
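Putting it all together, the complete OnSelect formula for the Submit button now looks like this (the connection name will be whatever yours is called):

Clear(SubmitData);
ForAll(PictureList,Collect(SubmitData, { filename: Concatenate(Text( Now(), DateTimeFormat.LongDate ),Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1),Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1),".jpg"), filebody: Url } ));
'myfunctionsdemo.azurewebsites.net'.apiPhotoSendSPpost(SubmitData)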

Important! Before we go any further, I strongly suggest you use the PowerApps web based authoring environment and not the desktop application. I have seen a problem where the desktop application does not encode images using the Data URI format, whereas the web based tool (and the PowerApps clients on Android and iOS) work fine.

So log into PowerApps on the web, open your app, preview it and cross your fingers! (If you want to be clever, go back to your function in Azure and keep an eye on the logs.)

[screenshot]

As the above screen shows, things are looking good. Let’s check the SharePoint document library that the script uploads to… YEAH BABY!! We have our photos!!!

[screenshot]

Conclusion

Now you finally get to bask in your awesomeness. If you survived all this plumbing and have gotten it working, then congratulations! You are well on your way to becoming a PowerApps, PowerShell, PnP (and Flow) guru. This means you can now enhance your apps in all sorts of ways. For the non-developers who got this far, you can truly call yourself a citizen developer and design all sorts of innovative solutions via these techniques.

Even though a lot of this stuff is fiddly (especially that god-awful swagger crap), once you have gone through it a couple of times, and understand the intent of the components we have used, this is actually quite an easy solution to put together. I also found the Azure function side of things in particular, really easy to debug and see what was going on.

In terms of where we could take this, there are several avenues that I can immediately think of, but the possibilities are endless.

  • We could update the PowerShell code so it takes the destination document library as a parameter. We would need to update the Swagger definition and then update our datasource in PowerApps, but that is not too hard a task once you have the basics working.
  • Using the same method, we could design a much more sophisticated form and capture lots of useful metadata with the pictures, and get them into SharePoint as metadata on the picture.
  • We could add in a lot more error handling into the script and return much better detail to PowerApps, such as detailed failure information.
  • Similarly, we could return detailed information to PowerApps to make the app richer. For example, we could generate unique filenames in our PowerShell function rather than PowerApps and return those names (and the URL to the image) in the reply to the HTTP Post, which would enable PowerApps to display images inline.
  • We could also take advantage of recent PowerApps enhancements and use local caching, i.e. when an internet connection is available, call the API, but if not, save to local storage and call the API once a connection is available.
  • We could not only upload images, but update lists and any other combination of SharePoint functions supported by PnP.

I hope that this post has helped you to better understand how these components hang together and I look forward to your feedback and how you have adopted/adapted and enhanced the ideas presented here.

 

Thanks for reading

 

Paul Culmsee

www.hereticsguidebooks.com

 


Tips for using SPD Workflows to talk to 3rd party web services


Hi all

One workflow action that anyone getting into SharePoint Designer 2013 workflow development needs to know is the Call HTTP Web Service action. This action opens up many possibilities for workflow developers and, in some ways, turns SPD workflows into an option to be taken seriously for certain types of development in SharePoint – particularly in Office 365, where your customisation options are more restrictive.

Not so long ago, I wrote a large series of posts on the topic of calling web services within workflows and was able to get around some issues encountered when utilising Managed Metadata columns. Fabian Williams and Andrew Connell have also done some excellent work in this area. More recently I have turned my attention to using 3rd party cloud services with SharePoint to create hybrid scenarios. After all, there are tons of fit-for-purpose solutions for various problems out in cloud land, and many have an API that supports REST/JSON. As a result, they are accessible to our workflows, which makes for some cool possibilities.

But if you try this yourself, you find out fairly quickly that there are three universal realities you have to deal with:

  1. Debugging SPD workflows that call web services is a total bitch
  2. SPD Workflows are very fussy when parsing the data returned by web services
  3. Web services themselves can be very fickle

In this brief post, I thought I might expand on each of these problem areas with some pointers and examples of how we got around them. While they may not be applicable or usable for you, they might give you some ideas in your own development and troubleshooting efforts.

Debugging SharePoint Designer Workflows…

The first issue that you are likely to encounter is a by-product of how SharePoint 2013 workflows work. To explain, here is my all-time most dodgy conceptual diagram, which is kind of wrong but has just enough “rightness” not to get me in too much trouble with hardcore SharePoint nerds.

[conceptual diagram]

In this scenario, the SharePoint deployment consists of a web front end server and a middle tier server running Workflow Manager. Each time a workflow step runs, it is executed on the Workflow Manager server, not on the SharePoint server, and most certainly not in the user’s browser. The implication of this is that if you wish to get a debug trace of the web service call made by the Workflow Manager server, you need to do it on that server, which necessitates access to it.

Now typically this is not going to happen in production, and it is sure as hell never going to happen on Office365, so you have to do this in a development environment. There are two approaches I have used to trace HTTP conversations to and from Workflow Manager: the first is to use a network sniffer like Wireshark, and the second is to use a HTTP-level trace tool like Fiddler. The Wireshark approach is oft overlooked by SharePoint peeps, but that’s likely because most developers tend not to operate at the TCP/IP layer. Its main pro is that it is fairly easy to set up and capture a trace, but its major disadvantage is that if the remote web service uses HTTPS, the packet data will be encrypted at the point where the trace is operating. This rules it out when talking to most 3rd party APIs out in cloud land.

Therefore the preferred method is to install Fiddler onto the Workflow Manager server and configure it to trace HTTP calls from Workflow Manager. This method is more reliable and easier to work with, but is relatively tricky to set up. Luckily for all of us, Andrew Connell wrote comprehensive and clear instructions for this approach.

SPD Workflows are very fussy when parsing the data returned by web services

If you have never seen the joys of JSON data, it looks like this…

{"d":{"results":[{"__metadata":{"id":"71deada5-6100-48a5-b2e3-42b97b9052a2","uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)","etag":"\"6\"","type":"SP.Data.DocumentsItem"},"FirstUniqueAncestorSecurableObject":{"__deferred":{"uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)/FirstUniqueAncestorSecurableObject"}},"RoleAssignments" …

Hard to read? Yeah, totally – but it was never meant to be human readable anyway. The point here is that when calling a HTTP web service, SharePoint Designer will parse JSON data like the example above into a dictionary variable for you to work with. In part 7 of my big workflow series I demonstrated how you can parse the returned data to make decisions in your workflow.

But as Fabian has noted, some web services return additional data other than the JSON itself. For example, sometimes an API will return JSON in a JavaScript variable like so:

var Variable = {"d":{"results":[{"__metadata":{"id":"71deada5-6100-48a5-b2e3-42b97b9052a2","uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)","etag":"\"6\"","type":"SP.Data.DocumentsItem"},"FirstUniqueAncestorSecurableObject":{"__deferred":{"uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)/FirstUniqueAncestorSecurableObject"}},"RoleAssignments" …

The problem here is that the parser code used by the Call HTTP Web Service action does not handle this well at all. Fabian had this to say in his research

What we found out is that if you have anything under the Root of the JSON node other than a JSON Array, for example as in the case of a few, the Version Number [returned as a JSON Object], although it works perfectly in a browser, or JSON Viewer, or Fiddler, it doesn’t make the right call when using SPD2013 or VS2012. If after modifying the data output and removing anything that is NOT a JSON Array from the Root of the Node, it should work as expected.

Microsoft documented an approach to getting around this when demonstrating how to pull data from eBay, which comes back in XML format instead of JSON. They used a transformer web service provided by firstamong.com and passed the URL of the eBay web service as a parameter to the transformer web service (i.e. http://www.firstamong.com/json/index.php?q=http://deals.ebay.com/feeds/xml). A similar thing could be done to get cleaned-up JSON.

So what to do if you have a web service that includes data you are not interested in? Before you swear too much or use Microsoft’s approach, check with the web service provider to see if they offer a way to return “raw” JSON data back to you. I have found with a little digging that some cloud providers can do this. Usually it is a variation of the URL you call or something you add to the HTTP request header. In short, do whatever you can to get back a simple JSON array and nothing else if you want it easily parsed by SharePoint.
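As an illustration only – the endpoint and parameter below are made up – a provider might let you request plain JSON via a URL parameter or the Accept header, so the cleaner request could look like this:

GET /api/v1/deals?format=raw HTTP/1.1
Host: api.example.com
Accept: application/json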

Now speaking of request headers…

Web Services can be fickle…

It is very common to get a HTTP 500 internal server error when calling 3rd party web services. One common reason for this is that SPD workflows add lots of stuff to the HTTP request when calling a web service, because they assume they are talking to SharePoint and that some authorisation needs to take place. But 3rd party web services are likely not to care, as they tend to use API keys. In fact, sometimes this extra stuff in the header can cause you problems if the remote web server is not expecting it.

Here is an example of a problem that my colleague Hendry Khouw had. After successfully crafting a REST request to a 3rd party service on his workstation, he tried to call the same web service using the Call HTTP Web Service workflow action. But when the web service was called from the workflow, the HTTP response code was 500 (internal server error). Using Andrew Connell’s method of Fiddler tracing explained earlier, he captured the request that was returning a HTTP 500 error.

[screenshots]

Hendry then pasted the web service URL directly into Fiddler and captured the following trace that returned a successful HTTP 200 response and the JSON data expected.

[screenshot]

By comparing the request headers between the failed and successful requests, it becomes clear that Workflow Manager sends OAuth data, as well as various other things in the header, that were not sent when the URL was called manually. Through a little trial and error using Fiddler’s Composer function, Hendry isolated the problem down to one particular OAuth header variable: Authorization: Bearer. By using Fiddler Composer and removing the Authorization variable (or setting it to a blank value), the request was successful, as shown in the second and third images below:

[screenshots]

Now that we have worked out the offending HTTP header variable that was causing the remote end to spit the dummy, we can craft a custom request header in the workflow to prevent the default value of Bearer being set.

Step 1. Build a dictionary to be included in the RequestHeaders for the web service call. Note the blank value, shown in the sketch below.

[screenshot]
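If the screenshot is hard to make out, the dictionary contains a single entry along these lines, with the value deliberately left empty:

Name: Authorization  |  Type: String  |  Value: (blank)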

Step 2. Set the RequestHeaders to use the dictionary we’ve just created.

[screenshots]

Hendry then republished his workflow, the request was successful and he was able to parse the results into a dictionary variable.

Hope this helps others…

Paul Culmsee and Hendry Khouw

www.sevensigma.com.au


Trials or tribulation? Inside SharePoint 2013 workflows–Part 12

This entry is part 12 of 13 in the series Workflow

Hi all, and welcome to part 12 of my articles about SharePoint 2013 Workflows and whether they are ready for prime time. Along the way we have learnt all about CAML, REST, JSON, calling web services, Fiddler, Dictionary objects and a heap of scenarios that can derail aspiring workflow developers. All this just to assign a task to a user!

Anyways, since it has been such a long journey, I felt it worthwhile to remind you of the goal here. We have a fictitious company called Megacorp trying to develop a solution to controlled documents management. The site structure is as follows:

[diagram]

The business process we have been working through looks like this:

[diagram]

The big issue that has caused me to have to write 12 articles all boils down to the information architecture decision to use a managed metadata column to store the Organisation hierarchy.

Right now, we are in the middle of implementing an approach of calling a web service to perform step 3 in the above diagram. In part 9 and part 10 of this series, I explained the theory of embedding a CAML query into a REST query and in part 11, we built out most of the workflow. Currently the workflow has 4 stages and we have completed the first three of them.

  • 1) Got the organisation name of the current item
  • 2) Obtained an X-RequestDigest via a web service call
  • 3) Constructed the URL to search the Process Owner list and called the web service

The next stage will parse the results of the web service call to get the AssignedToID and then call another web service to get the actual userid of the user. Then we can finally have what we need to assign an approval task. So let’s get into it…

Obtaining the UserID

In the previous post, I showed how we constructed a URL similar to this one:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

This URL uses the CAML-in-REST method of querying the Process Owners list and returns any items where Organisation equals “Megacorp Burgers”. The JSON data returned shows the AssignedToId entry with a value of 8. Via the work we did in the last post, we already have this data available to us in a dictionary variable called ProcessOwnerJSON.

The rightmost JSON output below illustrates taking that AssignedToId value and calling another web service to return the username, i.e. http://megacorp/iso9001/_api/Web/GetUserById(8).

[screenshots]

Confused at this point? Then I suggest you go back and re-read parts 8 and 10 in particular for a recap.

So our immediate task is to extract the AssignedToId from the dictionary variable called ProcessOwnerJSON. Now that you are a JSON guru, you should be able to figure out that the query will be d/results(0)/AssignedToId.
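To see why that path works, here is the rough shape of the JSON sitting in ProcessOwnerJSON, trimmed to the parts that matter (other fields omitted):

{
  "d": {
    "results": [
      { "AssignedToId": 8, ... }
    ]
  }
}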

Step 1:

Add a Get an Item from a Dictionary action as the first action in the Obtain Userid workflow stage. Click the item by name or path hyperlink and click the ellipses to bring up the string builder screen. Type in d/results(0)/AssignedToId.

[screenshot]

Step 2:

Click on the dictionary hyperlink and choose the ProcessOwnerJSON variable from the list.

Step 3:

Click the item hyperlink and use the AssignedToID variable

[screenshot]

That is basically it for now with this workflow stage as the rest of it remains unchanged from when we constructed it in part 8. At this point, the Obtain Userid stage should look like this:

[screenshot]

If you look closely, you can see that it calls the GetUserById method and the JSON response is added to the dictionary variable called UserDetail. Then if the HTTP response code is OK (code 200), it will pull out the LoginName from the UserDetail variable and log it to the workflow history before assigning a task.
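For reference, the JSON that GetUserById puts into UserDetail is shaped something like this – the account details here are invented, but the claims-style LoginName format is typical:

{
  "d": {
    "Id": 8,
    "LoginName": "i:0#.w|megacorp\\ctomich",
    "Title": "Chris Tomich"
  }
}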

Phew! Are we there yet? Let’s see if it all works!

Testing the workflow

So now that we have the essential bits of the workflow done, let’s run a test. This time I will use one of the documents owned by Megacorp Iron Man Suits – the Jarvis backup and recovery procedure. The process owner for Megacorp Iron Man suits is Chris Tomich (Chris reviewed this series and insisted he be in charge of Iron Man suits!).

[screenshots]

If we run the workflow against the Jarvis backup and recovery procedure, we should expect a task to be created and assigned to Chris Tomich. Looking at the workflow information below, it worked! HOLY CRAP IT WORKED!!!

[screenshot]

So finally, after eleven and a half posts, we have a working workflow! We have gotten around the issues of using managed metadata columns to filter lists, and we have learnt a heck of a lot about REST/OData, JSON, CAML and various other stuff along the way. So having climbed this managed metadata induced mountain, is there anything left to talk about?

Of course there is! But let’s summarise the workflow in text format rather than death by screenshot

Stage: Get Organisation Name
   Find | in the Current Item: Organisation_0 (Output to Variable:Index)
   then Copy Variable:Index characters from start of Current Item: Organisation_0 (Output to Variable: Organisation)
   then Replace " " with "%20" in Variable: Organisation (Output to Variable: Organisation)
   then Log Variable: Organisation to the workflow history list
   If Variable: Organisation is not empty
      Go to Get X-RequestDigest
   else
      Go to End of Workflow

Stage: Get X-RequestDigest
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Call [%Workflow Context: Current Site URL%]_api/contextinfo HTTP Web Service with request
       (ResponseContent to Variable: ContextInfo
        |ResponseHeaders to responseheaders
        |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/GetContextWebInformation/FormDigestValue from Variable: ContextInfo (Output to Variable: X-RequestDigest )
   If Variable: X-RequestDigest is empty
      Go to End of Workflow
   else
      Go to Prepare and execute process owners web service call

Stage: Prepare and execute process owners web service call
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Set Variable:URLStart to _api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>
   then Set Variable:URLEnd to </Value></Eq></Where></Query></View>"}
   then Call [%Workflow Context: Current Site URL%][Variable: URLStart][Variable: Organisation][Variable: URLEnd] HTTP Web Service with request
      (ResponseContent to Variable: ProcessOwnerJSON
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   then Log Variable: responseCode to the workflow history list
   If Variable: responseCode equals OK
      Go to Obtain Userid
   else
      Go to End of Workflow

Stage: Obtain Userid
   Get d/results(0)/AssignedToId from Variable: ProcessOwnerJSON (Output to Variable: AssignedToID)
   then Call [%Workflow Context: Current Site URL%]_api/Web/GetUserByID([Variable: AssignedToID]) HTTP Web Service with request
      (ResponseContent to Variable: userDetail 
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/LoginName from Variable: UserDetail (Output to Variable: AssignedToName)
      then Log The User to assign a task to is [%Variable: AssignedToName]
      then assign a task to Variable: AssignedToName (Task outcome to Variable:Outcome | Task ID to Variable: TaskID )
   Go to End of Workflow

Tidying up…

Just because we have our workflow working does not mean it is optimally set up. In the above workflow, there are a whole heap of areas where I have not done any error checking. Additionally, the logging I have done is poor and not overly helpful for someone troubleshooting later. So I will finish this post by making the workflow a bit more robust. I will not go through this step by step – instead I will paste the screenshots and summarise what I have done. Feel free to use these ideas and add your own good practices in the comments…

First up, I added a new stage at the start of the workflow for anything relating to initialisation activities. Right now, all it does is check out the current item (recall that in part 3 we covered issues related to check in/out), and then set a Boolean workflow variable called EndWorkflow to No. You will see how I use this soon enough. I also added a new stage at the end of the workflow to tidy things up. I called it Clean up Workflow and its only operation is to check the current item back in.

[screenshots]

In the Get Organisation Name stage, I changed it so that any error condition logs to the history list and then sets the EndWorkflow variable to Yes. Then, in the Transition to stage section, I use the EndWorkflow variable to decide whether to move to the next stage or end the workflow by calling the Clean up Workflow stage that I created earlier. My logic here is that there can be any number of error conditions that we might check for, and it’s easier to use a single variable to signify when to abort the workflow.

[screenshot]

In the Get X-RequestDigest stage, I have added additional error checking. I check that the HTTP response code from the contextinfo web service call is indeed 200 (OK), and then if it is, I also check that we successfully extracted the X-RequestDigest from the response. Once again I use the EndWorkflow variable to flag which stage to move to in the transition section.

[screenshot]

In the Prepare and execute process owners web service call stage, I also added more error checking – specifically around the AssignedToID variable. This variable is an integer and its default value is set to zero (0). If the value is still 0 after the web service call, it means that there was no process owner entry for the Organisation specified. If this happens, we need to handle it…

[screenshot]

Finally, we come to the Obtain Userid stage. Here we are checking both the HTTP code from the GetUserById web service call and the userid that comes back via the AssignedToName variable. We assign the task to the user and then set the workflow status to “Completed workflow”. (Remember that we checked out the current item in the Workflow Initialisation stage, so we can now update the workflow status without all that check out crap that we hit in part 3.)

[screenshot]

Conclusion…

So there we have it. Twelve posts in and we have met the requirements for Megacorp. While there is still a heap of work to do in terms of customising the behaviour of the task itself, I am going to leave that to you!

Additionally, there are a lot of things we can do to make these workflows much more robust and easier to manage. To that end, I strongly urge you to check out Fabian Williams’ blog and his brilliant set of articles on this topic, which take it much (much) further than I do here. He has written a ton of stuff, and it was his work in particular that inspired me to write this series. He also provided me with counsel and feedback on this series and I can’t thank him enough.

Now that we have gotten to where I wanted to be, I’ll write one more article to conclude the series – reflecting on what we have covered, and its implications for organisations wanting to leverage out of the box SharePoint workflow, as well as implications for all of you citizen developers out there.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 11

This entry is part 11 of 13 in the series Workflow

Hi all, and welcome to the penultimate article in what has turned into a fairly epic series about SharePoint 2013 workflows. From part 6 to part 8 of this series, we implemented a workflow that made use of web service calls as well as the new looping capabilities of SharePoint Designer 2013. We used the web service call to get all of the items in the Process Owners list, and then looped through them to find the process owner we needed based on organisation. While that method worked, the concern was that it was potentially inefficient, because with a large list of process owners it might consume excessive resources. This is why I referred to the approach in part 6 as the “easy but flawed” way.

Now we are going to use the “better but harder” way. To that end, parts 9 and 10 have set the scene for this one, where we are going to implement pretty much all of the theory covered in them. I will not rehash any of that theory here, but I cannot stress enough that you really should have read those posts before going through this article.

With that said, we are going to make a bunch of changes to the current workflow by doing the following:

  • 1) Change the existing workflow to grab the Organisation name as opposed to the GUID
  • 2) Create a new workflow stage that gets us the X-RequestDigest (explained in part 9)
  • 3) Build the URL that we will use to implement the “CAML in REST” approach (explained in parts 9 and 10)
  • 4) Call the aforementioned web service
  • 5) Extract the AssignedToId of the process owner for a given organisation
  • 6) Call the GetUserById web service to grab the actual userid of the process owner and assign them an approval task

In this post, we will cover the first four of the above steps…

Get the Name not the GUID…

Here is the first stage of the workflow as it is now, assuming you followed parts 6 to 8.

image

First let's make a few changes so that we get the Name of the Organisation stored with the current item, rather than the GUID as we are doing now. If you recall from part 4, the column Organisation_0 is a hidden column that got created because Organisation is a managed metadata column. This column stores the names and IDs of managed metadata term(s) that have been assigned, in the format <term name>|<term GUID>. For example, "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f".

To get the GUID, we grabbed everything to the right of the pipe symbol (“|”). Now to get the name, we need everything to the left of it.

Step 1:

Rename the stage from “Obtain Term GUID” to “Get Organisation Name” (I trust that by part 11 a screenshot is not required for this)

Step 2:

Delete the second workflow action called Calculate Variable: index plus 1 (Output to Variable:calc) as we don’t need the variable calc anymore. In addition, delete the workflow action “Copy from Current Item: Organisation_0”. You should be left with two actions and the transition to stage logic as shown below.

image

Step 3:

Add an Extract Substring from Start of String workflow action in between the two remaining actions. Click the “0” hyperlink and click the fx button. In the Lookup for Integer dialog, set it to the existing variable Index. Click on the “string” hyperlink and set it to the Organisation_0 column from the Current Item. Finally, click the (Output to…) hyperlink and create a new string variable called Organisation.

image

Now, at this point we need to pause and think about what we are doing. If you recall part 10, I had trouble getting the format right for the URL that embeds CAML inside a REST web service call. The culprit was that I had to encode any occurrence of a space in the URL as a URL-encoded space (a %20). Take a look at the URL that was tested in Fiddler below to see this in action…

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

Look toward the end of the URL where the organisation is specified (marked in bold). What do you notice?

Yep – the space between Megacorp and Burgers is also encoded. But this causes a problem since the current value of the Organisation variable contains the space. So let’s deal with this now by encoding spaces.

Step 4:

Add a Replace Substring in String workflow action. Click the first string hyperlink and type in a single space. In the second string hyperlink, type in %20. In the third string hyperlink, click the fx button and add the Organisation variable. In the final hyperlink (Output to Variable:Output), choose the variable Organisation.

image
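By the way, if you want to sanity-check this string surgery outside of SharePoint Designer, here is the same logic as a short Python sketch (illustrative only – the workflow actions above are what actually run):

# The hidden Organisation_0 column stores "<term name>|<term GUID>"
organisation_0 = "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f"

# Step 3: everything to the left of the pipe is the term name
organisation = organisation_0[:organisation_0.find("|")]   # "Megacorp Burgers"

# Step 4: encode spaces so the name is safe to embed in the URL
organisation = organisation.replace(" ", "%20")            # "Megacorp%20Burgers"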

After all this manipulation of the Organisation variable, it is probably worthwhile logging it to the workflow history list so we can see if the above steps work as expected.

Step 5:

Click the Log Variable:TermGUID to the workflow history list action and change the variable from TermGUID to Organisation. The action will now be called Log Variable:Organisation to the workflow history list

image

Step 6:

In the Transition to stage section, find the “If Variable: TermGUID is not empty” condition and change the variable from TermGUID to Organisation

image

Step 7:

Create a new workflow stage and call it "Get X-RequestDigest". Then in the Transition to stage section of the Get Organisation Name stage, find the Go to Get Process Owners action and change the stage from Get Process Owners to Get X-RequestDigest.

The adjusted workflow should now look like the image below…

image

Getting the X-RequestDigest…

If you recall from part 9, we need to call the contextinfo web service so we can extract the FormDigestValue to use in our CAML-embedded web service call to the Process Owners list. If that statement makes no sense then go back and read part 9; otherwise, you should already know what to do! Bring on the dictionary variables and the Call HTTP Web Service action!

Step 1:

Go to the Get Process Owners stage further down and find the very first action – a Build Dictionary action that creates a variable called RequestHeader. Right click on it and choose Move Action Up. This will move the action into the Get X-RequestDigest stage as shown below.

image  image

What are we doing here? This action was the one we created in part 9 that asks SharePoint to bring back data in JSON format. We first learnt all about this in part 4 when I explained JSON and part 5 when I explained how dictionary variables work.

Step 2:

Add a Call HTTP Web Service action after the build dictionary action. For the URL, use the string builder and add a lookup to the Current Site URL (found in Workflow Context in the data source dropdown). Then add the string “_api/contextinfo” to it to complete the URL of the web service. Also, make sure the method chosen is a HTTP POST and not a GET.

image  image

image

This will construct the URL based on which SharePoint site the workflow is run from (e.g. http://megacorp/iso9001/_api/contextinfo) without hard-coding the URL.

Step 3:

Make sure the workflow action from step 2 is selected and in the ribbon, choose the Advanced Properties icon. In the Call HTTP Web Service Parameters dialog, click the RequestHeaders dropdown and choose the RequestHeader variable and click OK. (Now you know why we moved the build dictionary action in step 1)

image

Step 4:

Click the response hyperlink in the Call HTTP Web Service action and choose to create a new variable. Call it ContextInfo. Also check the name of the variable for the response code and make sure it is set to the responseCode and not something like responseCode2.

image  image

Step 5:

Add an If any value equals value condition below the web service call. For the first value hyperlink, choose the variable responseCode as per step 4. Click the second value hyperlink, type in “OK” as shown below:

image

This condition ensures that the response to the web service call was valid (OK is the same as an HTTP 200 code). If we get anything other than an OK, there is no point continuing with the workflow.

Step 6:

Inside the condition we created in step 5, add a Get an Item from a Dictionary action. Then do the following:

  • In the item by name or path hyperlink, type in exactly “d/GetContextWebInformation/FormDigestValue” without the quotes.
  • In the dictionary hyperlink, choose the variable ContextInfo that was specified in step 4.
  • In the item hyperlink in the “Output To” section, create a new string variable called X-RequestDigest.

All this should result in the action below.

image

Now let’s take a quick pause to understand what we did in this step. You should recognise the d/GetContextWebInformation/FormDigestValue as parsing the JSON output. We get the value of FormDigestValue and assign it to the variable X-RequestDigest. As a reminder, here is the JSON output from calling the contextinfo web service using Fiddler. Note the path from d –> GetContextWebInformation –> FormDigestValue.

image_thumb17
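If that path syntax seems a little abstract, here is the same traversal written in Python against a trimmed, illustrative copy of the contextinfo JSON (the digest value is a made-up placeholder):

import json

# Trimmed version of the contextinfo response shown above (digest is fake)
response_text = """{
  "d": {
    "GetContextWebInformation": {
      "FormDigestTimeoutSeconds": 1800,
      "FormDigestValue": "0x2B4D9C...placeholder...,09 Jan 2014 06:00:00 -0000"
    }
  }
}"""

context_info = json.loads(response_text)

# "d/GetContextWebInformation/FormDigestValue" is just three nested lookups
x_request_digest = context_info["d"]["GetContextWebInformation"]["FormDigestValue"]
print(x_request_digest)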

Step 7:

In the transition to stage section, add an If any value equals value condition. For the first value hyperlink, choose the variable X-RequestDigest that we created in step 6. Click the equals hyperlink and change it to is empty.

image

Step 8:

Under the newly created If Variable: X-RequestDigest is empty condition, add a Go to a stage action and set it to End of Workflow. In the Else section of the condition, add a Go to a stage action and set it to the Get Process Owners stage.

image

Cool! We have our X-Request Digest stage all done. Here is what it looks like…

image

This has all been very easy so far, hasn't it! A big difference to some of the previous posts. But now it's time to wire up the CAML-inside-REST web service call, and SharePoint is about to throw us another curveball…

Get the Process Owner…

Our next step is to rip the guts out of the existing stage to get the process owner. Unlike our first solution, we no longer need to loop through the process owners list, which means the entire Find Matching Process Owner stage is no longer needed. So before we add new actions, let's do some tidying up.

Step 1:

Delete the entire stage called “Find Matching Process Owner”. Do this by clicking the stage to select all actions within it, and then choose delete from the SharePoint Designer ribbon. SPD will warn you that this will delete all actions. Go ahead and click OK.

image

Our next step is to attempt to make the CAML inside REST web service call. To remind you of what the URL will look like, here is the one we successfully tested in part 10. Ugly isn’t it. Now you know why developers are an odd bunch – they deal with this stuff all day!

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

Let's take our time here, because as you can see the URL we have to craft is complex. First up, we need to use a Build a Dictionary action to create the HTTP headers we need (including the X-RequestDigest). Recall from part 9 that we also need to set Content-length to 0 and Accept to application/json;odata=verbose.
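In plain HTTP terms, the dictionary we are about to build in step 2 is nothing more than the following set of request headers, shown here as a Python sketch (the digest value is a placeholder for whatever came back in the previous stage):

# The RequestHeader dictionary, expressed as plain HTTP headers
x_request_digest = "0x2B4D9C...placeholder..."   # extracted in the previous stage

request_header = {
    "Accept": "application/json;odata=verbose",  # bring back JSON, not XML
    "Content-length": "0",                       # we POST with an empty body
    "X-RequestDigest": x_request_digest,         # the form digest security token
}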

Step 2:

Add a Build dictionary action as the first action in the Get Process Owners section. Click the this hyperlink and then the Add button in the Build a Dictionary dialog. Add the following dictionary items:

  • Add a string called Accept and a value of: application/json;odata=verbose

image

  • Add a string called Content-length and a value of 0

image

  • Add a string called X-RequestDigest. In the value textbox, click the fx button and choose the workflow variable called X-RequestDigest.

image  image  image

Your dictionary should look like this:

image

Click OK and set the dictionary variable name to be the existing variable called RequestHeader. The completed action should look like the image below:

image

Now let’s turn our attention to creating the web service URL we need.

Step 3:

Find the existing Call HTTP Web Service action in the Get Process Owner stage. Click the URL hyperlink and click the ellipsis button to bring up the string builder dialog. Delete the existing URL so we can start over. Add the following entries back (carefully!):

  1. A lookup to the Site URL from the Workflow Context
  2. The string: _api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>
  3. A lookup to the Organisation workflow variable
  4. The string: </Value></Eq></Where></Query></View>"}

This should look like the image below:

image

A snag…

Click OK and see what happens. Uh-oh. We are informed that "Using the special characters '[%%]' or '[%xxx%]' in any string, or using the special character '{' in a string that also contains a workflow lookup, may corrupt the string and cause an unexpected error when the workflow runs" – Ouch!

image

How do we get out of this issue?

Well, we are using two workflow lookups in the string – the first being the site URL at the start, and the second being the Organisation variable embedded in the CAML bit of the URL. Since SharePoint Designer is complaining about using certain special characters in combination with workflow lookups, let's break the URL up into pieces by creating a couple of string variables. At the start of step 3 above, we listed four elements that make up the URL. Let's use that as a basis to do this…
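Conceptually, what steps 4 to 6 below will do is plain string concatenation – SharePoint Designer just refuses to let us do it in one hit. Here is a sketch of the assembly in Python (the site URL is assumed for the example):

# The four pieces the workflow glues together. site_url comes from the Workflow
# Context lookup; url_start and url_end are the literals created in steps 4 and 5.
site_url = "http://megacorp/iso9001/"
organisation = "Megacorp%20Burgers"   # the space-encoded Organisation variable

url_start = ("_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)"
             "?@v1={\"ViewXml\":\"<View><Query><ViewFields>"
             "<FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/>"
             "</ViewFields><Where><Eq><FieldRef%20Name='Organisation'/>"
             "<Value%20Type='TaxonomyFieldType'>")
url_end = "</Value></Eq></Where></Query></View>\"}"

full_url = site_url + url_start + organisation + url_end
print(full_url)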

Step 4:

Add a Set Workflow Variable action below the build dictionary action in the Get Process Owner stage. Call the variable URLStart and set its value to: _api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>

image   image

Step 5:

Add another Set Workflow Variable action in the Get Process Owner stage. Call the variable URLEnd and set its value to: </Value></Eq></Where></Query></View>"}

image

Step 6:

Edit the existing Call HTTP Web Service action in the Get Process Owner stage. Click the URL hyperlink and add the following entries back (carefully!)

  • 1) A lookup to the Site URL from the Workflow Context
  • 2) A lookup to the URLStart workflow variable
  • 3) A lookup to the Organisation workflow variable
  • 4) A lookup to the URLEnd workflow variable

This should look like the image below:

image

Click OK and in the Call HTTP Web Service dialog, make sure the HTTP method is set to HTTP POST. Click OK.

image  image

Step 7:

Select the Call HTTP Web Service action and click the Advanced Properties icon in the ribbon. In the Call HTTP Web Service Properties dialog box, click the RequestHeaders parameter and in the drop down list to the right of it, choose the RequestHeader variable created in step 2. Click OK.

image_thumb97    image_thumb103

 

Step 8:

Select the Call HTTP Web Service action and click the variable next to the ResponseContent to section. Create a variable called ProcessOwnerJSON. This variable will store the JSON returned from the web service call.

image    image

Step 9:

In the Transition to stage section of the Get Process Owners stage, look for the If responseCode equals OK condition. Set the stage to Obtain Userid as shown below:

image

Step 10:

To make the workflow better labelled, rename the existing Get Process Owners stage to Prepare and execute Process Owner web service call. This workflow stage is going to end when it has attempted the call and we will create a new stage to extract the process owner and create the approval task. At this point the workflow stage should look like the image below:

image

Conclusion

We will end the post at this point as it is already very long. In the next post, we will make a couple of tweaks to the Obtain Userid workflow stage and test the workflow out. For your reference, here is the complete workflow as it stands…

image

image

image

Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com

Trials or tribulation? Inside SharePoint 2013 workflows–Part 10

This entry is part 10 of 13 in the series Workflow

Hi there and welcome back to my series of articles that puts a real-world viewpoint to SharePoint 2013 workflow capabilities. This series is pitched more to Business Analysts, SharePoint Hackers and generally anyone who might be termed a citizen developer. This series shows the highs and lows of out of the box SharePoint Designer workflows, and hopefully helps organisations make an informed decision around whether to use what SharePoint provides or to move to the 3rd party landscape.

By now you should be well aware of some of the useful new workflow capabilities such as stages, looping, support for calling web services and parsing the data via dictionary objects. You also now understand the basics of REST/oData and CAML. At the end of the last post, we just learnt that it is possible to embed CAML queries into REST/oData, which gets around the issue of not being able to filter lists via Managed metadata columns. We proved this could be done, but we did not actually try it with the actual CAML query that can filter managed metadata columns. It is now time to rectify this.

Building CAML queries

Now if you are a SharePoint developer worth your salt, you already know CAML, because there are mountains of documentation on this topic on MSDN as well as various blogs. But a useful shortcut for all you non-coders out there is to make use of a free tool called CAMLDesigner 2013. This tool, although unstable at times, is really easy to use, and in this section I will show you how I used it to create the CAML XML we need to filter the Process Owners list via the Organisation column.

After you have downloaded and installed CAMLDesigner, follow these steps to build your query.

Step 1:

Start CAMLDesigner 2013 and on the home screen, click the Connections menu.

image

Step 2:

In the connections screen that slides out from the right, enter http://megacorp/iso9001 into the top textbox, then click the SharePoint 2013 and Web Services buttons. Enter the credentials of a site administrator account and then click the Connect icon at the bottom. If you successfully connect to the site, CAMLDesigner will show the site in the left hand navigation.

image  image

Step 3:

Click the arrow to the left of the Megacorp site and find the Process Owners list. Click it, and all of the fields in the list will be displayed as blue boxes below the These are the fields of the list section.

image

Step 4:

Drag the Organisation column to the These are the selected fields section to the right. Then do the same for the Assigned To column. If you look closely at the second image, you will see that the CAML XML is already being built for you below.

image     image

Step 5:

Now click on the Where menu item above the columns. Drag the Organisation column across to the These are the selected fields section. As you can see in the second image below, once dragged across, a textbox appears, along with a blue button with an operator. Also take note of the CAML XML being built below. You can see that it has added a <Where></Where> section.

image

image

image

Step 6:

In the textbox in the Organisation column you just dragged, type in one of the Megacorp organisations, e.g. Megacorp Burgers. Note the XML changes…

image

Step 7:

Click the Execute button (the Play icon across the top). The CAML query will be run, and any matching data will be returned. In the example below, you can see that the user Teresa Culmsee is the process owner for Megacorp Burgers.

image

image

Step 8:

Copy the XML from the window to the clipboard. We now have the XML we need to add to the REST web service call. Exit CAMLDesigner 2013.

image

Building the REST query…

Armed with your newly minted CAML XML as shown below, we need to return to Fiddler and draft it into the final URL.

<ViewFields>
   <FieldRef Name='Organisation' />
   <FieldRef Name='AssignedTo' />
</ViewFields>
<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>

As a reminder, the URL that we had working in the previous post looked like this:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}

Let’s now munge them together by stripping the carriage returns from the XML and putting it between the <Query> and </Query> sections. This gives us the following large and scary looking URL.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields> <FieldRef Name='Organisation' /> <FieldRef Name='AssignedTo' /> </ViewFields> <Where> <Eq> <FieldRef Name='Organisation' /> <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value> </Eq> </Where></Query></View>"}

Are we done? Unfortunately not. If you paste this into Fiddler composer, Fiddler will get really upset and display a red warning in the URL textbox…

image

If, despite Fiddler's warning, you try to execute this request, you will get a curt response from SharePoint in the form of an HTTP/1.1 400 Bad Request response with the message HTTP Error 400. The request is badly formed.

The fact that Fiddler is complaining about this URL before it has even been submitted to SharePoint allows us to work out the issue via trial and error. If you cut out a chunk of the URL, Fiddler is okay with it. For example, this trimmed URL is considered acceptable by Fiddler:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query>

But adding this little bit makes it go red again.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields> <FieldRef Name='Organisation' />

Any ideas what the issue could be? Well, it turns out that the use of spaces was the issue. I removed all the spaces from the URL above, and where I could not, I encoded them as %20. Thus the above URL turned into the URL below and Fiddler accepted it:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation' />

So returning to our original big URL, it now looks like this (and Fiddler is no longer showing me a red angry textbox):

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

image

So let's see what happens. We click the execute button. Woohoo! It works! Below you can see a single matching entry, and it appears to be the entry we saw in CAMLDesigner 2013. We can't tell for sure because the Assigned To column is returned as AssignedToId and we have to call another web service to return the actual username. We covered this issue and the web service to call extensively in part 8, but to quickly recap, we need to pass the value of AssignedToId to the http://megacorp/iso9001/_api/Web/GetUserById() web service. In this case, http://megacorp/iso9001/_api/Web/GetUserById(8), because the value of AssignedToId is 8.

The images below illustrate. The first one shows the Process Owner for Megacorp Burgers. Note the value of AssignedToId is 8. The second image shows what happens when 8 is passed to the GetUserById web service call. Check the Title and LoginName fields.

image image
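As an aside, if you would rather script this end-to-end test than drive Fiddler by hand, the sketch below does the same dance using Python's requests library. Treat it as illustrative only – the host and credentials are placeholders, and requests_ntlm is just one way of handling Windows authentication (requests also sets the Content-Length header for us):

import requests
from requests_ntlm import HttpNtlmAuth

SITE = "http://megacorp/iso9001"
AUTH = HttpNtlmAuth("MEGACORP\\someuser", "password")   # placeholder credentials
JSON_HEADER = {"Accept": "application/json;odata=verbose"}

# Grab the form digest from contextinfo (the part 9 dance)
digest = requests.post(SITE + "/_api/contextinfo", auth=AUTH,
                       headers=JSON_HEADER).json()["d"]["GetContextWebInformation"]["FormDigestValue"]

# POST the CAML-in-REST query, passing the digest in the X-RequestDigest header
url = (SITE + "/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)"
       "?@v1={\"ViewXml\":\"<View><Query><ViewFields>"
       "<FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields>"
       "<Where><Eq><FieldRef%20Name='Organisation'/>"
       "<Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value>"
       "</Eq></Where></Query></View>\"}")
items = requests.post(url, auth=AUTH,
                      headers=dict(JSON_HEADER, **{"X-RequestDigest": digest}))

# Resolve each AssignedToId to an actual user via GetUserById
for item in items.json()["d"]["results"]:
    user = requests.get(SITE + "/_api/Web/GetUserById(%d)" % item["AssignedToId"],
                        auth=AUTH, headers=JSON_HEADER).json()
    print(user["d"]["Title"], user["d"]["LoginName"])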

Conclusion

Okay, so now we have our web service URLs all sorted. In the next post we are going to modify the existing workflow. Right now it has four stages:

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

We will change it to the following  stages:

  • Stage 1: Obtain Term Name (extracts the name of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get the X-RequestDigest (We will grab the request digest we need to do our HTTP POST to query the Process Owners list. If successful move to stage 3)
  • Stage 3: Get Process Owner (makes the REST web service call to grab the Process Owners for the organisation specified by the Term name from stage 1. Grab the value of AssignedToID and move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

One final note: After this epic journey we have taken, you might think that doing this in SharePoint Designer workflow should be a walk in the park. Unfortunately this is not quite the case and as you will see, there are a couple more hurdles to cross.

Until then, thanks for reading…

Paul Culmsee

www.hereticsguidebooks.com

Trials or tribulation? Inside SharePoint 2013 workflows–Part 9

This entry is part 9 of 13 in the series Workflow

Hi all and welcome to my series that aims to illustrate the trials and tribulations of SharePoint 2013 workflow to those who consider themselves citizen developers. In case you don't want to go all the way back to part 1, a citizen developer is basically a user who creates new business applications themselves, without the use of developers (or IT in general). Since there is no Visual Studio in sight in this series, I think it's safe to say that SharePoint 2013 workflow has the potential to be a popular citizen developer tool, but it is important that people know what they are in for.

We start part 9 of this series having just finally assigned a task to a user in part 8. While this in itself is not particularly earth shattering, if you have followed this series to now, you will appreciate that we have had to navigate some serious potholes to get here, but along the way it is clear that there are some very powerful features available.

Currently the workflow as it stands consists of four stages.

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

As mentioned at the end of part 8, one flaw in this workflow is the issue that if the process owners list has a large number of entries, the workflow has to iterate each process owner to find the one with a matching organisation. This causes a bit of concern, because in general, iterating through SharePoint lists in this way is not overly great on performance. In fact SharePoint has an  unfortunate heritage of newbie developers causing all sorts of disk and memory performance issues because of code that iterates a list in a similar way.

So this post is going to explore how we can do better. How about changing the workflow behaviour so that rather than grabbing the entire process owners list, we grab just the entry we need from it?

But wait – didn’t you say something about this not working?

Now if you have dutifully read this series in the way I intend you to do, you might recall the issue that cropped up in parts 4 and 5. I pointed out that since the Organisation column we are using is a managed metadata column, we cannot use it to filter a list using REST/oData. So while the first query below would happily filter a list by a title of “something”, the second one will result in a big fat error.

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbyid(guid'0ffc4b79-1dd0-4c36-83bc-c31e88cb1c3a')/Items?$filter=Title eq 'something' (this works)

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbyid(guid'0ffc4b79-1dd0-4c36-83bc-c31e88cb1c3a')/Items?$filter=Organisation eq 'something' (this fails)

So this is a pickle, isn't it – how can we filter the process owners list by organisation when it's not supported by REST/oData?

The thing about managed metadata columns…

Going back in time a bit, SharePoint 2010 was the first version with support for REST, and in SharePoint 2013, REST support was extended significantly. As you now know, it seems the managed metadata people never got that memo. However, one of the older methods that can be used to query lists, called Collaborative Application Markup Language (CAML for short), does support filtering on managed metadata columns.

CAML, in case you are not aware of it, has been used by SharePoint since the very first version. It is based on a defined XML schema that can be used to query lists and libraries – much like a SQL query does on a database table. Being XML, it is more verbose than a SQL query and, for me, harder to read. As an example, the SQL statement "SELECT * from TABLE WHERE field=VALUE" would look something like:

<Query><Where><Eq><FieldRef Name='field' /><Value Type='Text'>VALUE</Value></Eq></Where></Query>

Turning our attention back to the Organisation column that we are having trouble with, a CAML query to bring back all documents tagged as owned by “Megacorp Burgers” would look something like this…

<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>

Note: By the way, if you want to prove that this works, use CAMLDesigner 2013 to connect to a list, apply a filter, and it will generate the CAML XML it used. I will cover this in the next post.

So here is where we are at. We definitely can filter a list with a managed metadata column by using the CAML language. But we cannot filter a list using managed metadata via the REST/oData methods that I outlined in part 4. I wonder if there is a way to embed a CAML query into a REST web service call?

Turns out there is… only problem is that there is some more conceptual baggage required to understand it properly, so have a strong coffee and lets go…

A journey of CAML in REST…

A while back I came across an MSDN forum thread where Christophe Humbert asked whether CAML queries could be done via the REST API. Erik C. Jordan provided this answer:

POST https://<site>/_api/web/Lists/GetByTitle('[list name]')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query>[other CAML query elements]</Query></View>"}

Take a close look at the above URL. We are still talking to SharePoint via REST and we are calling a method called GetItems. As part of the GetItems call, we see CAML XML inside some curly braces: @v1={"ViewXml":"<View><Query>[other CAML query elements]</Query></View>"}.

Hmmm – this looks to have potential. If I can embed a valid CAML query that filters list items based on a managed metadata column, we can very likely have the workflow do that using the Call HTTP Web Service workflow action.

So let's test this web service and see if we can get it to work. Let's try entering the above URL on the MegaCorp process owners site.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}

Uh-oh, error 400. Dammit, what now?

image_thumb3_thumb_thumb

Turns out that we cannot use a browser to test this particular web service, because it requires an HTTP POST operation, but when we type a URL into a browser, we are performing an HTTP GET operation – hence the error 400. If you really want to know the difference between a GET and POST in relation to HTTP, go and visit this link. But for the purpose of this article, we need to find a way to compose HTTP POST web service calls, and guess what – you already know exactly how to do it because we covered it in part 7 – the Fiddler composer function.

So start up Fiddler and let’s craft ourselves a call to this web service….

Step 1:

Start Fiddler and click the Composer Tab. Paste in the web service call we just tried – http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}. Make sure you change the request from a GET to a POST by clicking the dropdown to the left of the URL.

image

Step 2:

Type in the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below. If you recall the HTTP interlude from part 6, this tells SharePoint to bring back the data in JSON format, rather than XML

image

Step 3:

Click the execute button to execute the request and then click the Response Headers tab as shown below to see what happened. As you can see below, things did not go so well. We got a response code of HTTP/1.1 411 Length Required

image

Hmm – so what are we missing here? It turns out that some HTTP queries require the use of a ‘Content-Length‘ field in the HTTP header. The standard for the HTTP protocol states that: “Any Content-Length greater than or equal to zero is a valid value”, so let’s add a value of zero to the request header.

Step 4:

Click the composer tab again and add the string “Content-length: 0” to the request header textbox as shown below and click execute again:

image

Checking the response, it looks like we are still not quite there as we have another error code: HTTP/1.1 403 FORBIDDEN (The security validation for this page is invalid and might be corrupted. Please use your web browser's Back button to try your operation again). *sigh* will this ever just work?

image

The reason for this error is a little more complex than the last one. It turns out that we are missing another required HTTP header in the POST request that we are crafting. This header has the cool sounding name of X-RequestDigest and it holds something called the form digest. What is the form digest? Here is what Nadeem Ishqair from Microsoft says:

The form digest is an object that is inserted into a page by SharePoint and is used to validate client requests. Validation of client requests is specific to a user, a site and time-frame. SharePoint relies on the form digest as a form of security validation that helps prevent replay attacks wherein users may be tricked into posting data to the server. As described on this page on MSDN, you can retrieve this value by making a POST request with an empty body to http://site/_api/contextinfo and extracting the value of the “d:FormDigestValue” node in the XML that the contextinfo endpoint returns.

So there you go – it is a security function that validates web service requests. So our workflow is going to have to make yet another web service call to handle this. We will make a POST request with an empty body to http://megacorp/iso9001/_api/contextinfo and then extract the value of the “d:FormDigestValue” node in the information returned.

This probably sounds as clear as mud, so let’s use Fiddler to do it so we know what we have to do in the workflow.

Step 5:

Start Fiddler and click the Composer Tab. Paste in the web service of http://megacorp/iso9001/_api/contextinfo. Make sure you change the request from a GET to a POST and add the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below

image

Step 6:

Click the execute button to execute the request and make sure the response headers tab is selected. Confirm that the response you get from the server is 200 OK.

image

Step 7:

Click the JSON button and look for an entry called FormDigestValue in the response.

image

Step 8:

Right click on the FormDigestValue entry and choose copy to get it into the clipboard.

image

Step 9:

Click on the composer tab again and paste the FormDigestValue into the Request Headers textbox as shown below. Replace the string “FormDigestValue =” with “X-RequestDigest: “ to make it the correct format needed as shown in the second image below.

image    image

Step 10:

Paste the original request into the URL textbox: http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"} and click Execute.

image

Check that the HTTP return code is 200 (OK) and then click the JSON tab to see what has come back. You should see a JSON result set that looks like the image below. If you examine the JSON data returned in this image, you will see it is exactly the same JSON structure that was returned when we used Fiddler in part 7.

image  image
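For the record, everything we just did in Fiddler boils down to a few lines of script. Here is a Python sketch of the digest dance (host and credentials are placeholders, and requests_ntlm is just one way of handling Windows authentication):

import requests
from requests_ntlm import HttpNtlmAuth

site = "http://megacorp/iso9001"
auth = HttpNtlmAuth("MEGACORP\\someuser", "password")   # placeholder credentials

# POST (not GET!) with an empty body, asking for JSON back
response = requests.post(site + "/_api/contextinfo", auth=auth,
                         headers={"Accept": "application/json;odata=verbose"})

# Walk d -> GetContextWebInformation -> FormDigestValue, as in the Fiddler output
form_digest = response.json()["d"]["GetContextWebInformation"]["FormDigestValue"]
print(form_digest)   # this value goes into the X-RequestDigest header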

Let’s pause here for a moment and reflect. If you have made it this far, you have pretty much nailed the hard bit. If you had an issue, fear not as there are a couple of common problems that are usually easy to rectify.

Firstly, if you receive an error HTTP/1.1 400 Bad Request with a message that looks something like “Invalid JSON. A colon character ‘:’ is expected after the property name ‘â’, but none was found.”, just double check the use of quotes (“”) in the URL. Sometimes when you paste strings from your browser or RSS reader, the quotes can get messed up because of autocorrect. Look closely at the URL below and note the quotes are angled:

{“ViewXml”:”<View><Query></Query></View>”}

To resolve this issue, simply replace the angled quotes with a regular boring old quote so they are not angled and the problem will go away. Compare the string below to the one above to see what I mean…

{"ViewXml":"<View><Query></Query></View>"}

The second common problem is a HTTP/1.1 403 FORBIDDEN response with the message: “The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again”. If you see this error, your X-RequestDigest may have expired and you need to regenerate it via repeating steps 5 to 9 above. The other possibility is that you did not properly paste the FormDigest into the request header. Double check this as well.

Conclusion

Okay, so that was a rather large dollop of conceptual baggage I handed out in this post. You got introduced to the older method of querying SharePoint lists called CAML, and we have successfully been able to call a REST web service, pass in a CAML XML string and get back data. We learnt about the HTTP POST request and some of the additional HTTP headers that need to be sent, like Content-length and X-RequestDigest, to make it all work. As an added bonus, we are all Fiddler composer gurus now.

However all we sent across was an empty CAML string. The string <View><Query></Query></View> pretty much says “give me everything and don’t filter”, which is not what we want. So in the next post, we will learn how to create a valid CAML string that filters by the Organisation column. Once we have successfully tested it, we will modify the workflow to use this method instead.

Until then, thanks for reading…

Paul Culmsee

www.hereticsguidebooks.com

Trials or tribulation? Inside SharePoint 2013 workflows–Part 7

This entry is part 7 of 13 in the series Workflow

Hi and welcome to part 7 of my series of articles around SharePoint 2013 workflows. We have been examining the trials and tribulations (mainly trials so far) of implementing a relatively simple document approval workflow for a mythical organisation called Megacorp Inc. In the last post, I started illustrating the first of two approaches to filtering a list based on a managed metadata column using the Call HTTP Web Service capability. If you have been following along with me, we just successfully called the SharePoint lists web service via REST. The diagram below shows where we are at… Now it is time to work with the data that is returned.

image

Very soon we are going to use the loop function of workflows to go through each process owner and check the GUID of the Organisation field. This is probably best done as a new workflow stage that only runs if the response code from the webservice is 200 (OK). So let’s make some minor changes to the workflow…

Step 1:

Add a new workflow stage and name it "Find Matching Process Owner".

Step 2:

In the Transition to stage section of the new stage, add a Go to a stage action and set the stage as "End of Workflow".

image

Step 3:

In the Transition to stage section of “Get Process Owners” Stage, delete the “Go to End of Workflow” action. In its place, add an “If any value equals value” condition

Step 4:

Click the first value hyperlink of the newly added condition from step 3 and click the fx button. In the Define Workflow lookup dialog, choose Workflow Variables and Parameters as the Data source and choose the responseCode variable in the Field from Source dropdown. Click OK

image  image

Step 5:

Click the second value link of the newly added condition from step 3. Type in the value “OK” and press enter.

image

Step 6:

Under the If Variable: responseCode equals OK condition, add a Go to a stage action. Set the stage as Find Matching Process Owner (the stage created in step 1).

Step 7.

Under the Else condition, add a Go to a stage action. Set the stage as End of Workflow

image

Ok, so now we have a new workflow stage to work with called Find Matching Process Owner. This is where we are going to manipulate dictionary variables and in particular, the ProcessOwnersList variable that contains the data returned by the call we made to the REST webservice in part 6. The first thing we will do is count how many items are in the ProcessOwnersList dictionary, otherwise we won’t know when to stop the loop. This is where the Count Items in a Dictionary workflow action comes in. Let’s try it on the ProcessOwnerList variable now.

Step 8:

In the Find Matching Process Owners stage, add a Count Items in a Dictionary action. Click the dictionary hyperlink and choose Variable: ProcessOwnersList from the list of dictionary variables.

image

Now we should check to see if the value of count is what  we are expecting. For your reference, here are the 9 entries currently in the Process Owners list…

Megacorp Defense – Paul Culmsee
Megacorp Burgers – Teresa Culmsee
Megacorp Iron Man suits – Chris Tomich
Megacorp Gamma Radiation Experiments – Peter Chow
Megacorp Pharmaceutical – Hendry Khouw
Megacorp Fossil Fuels – Sam Zhou
Perth – Du Le
Sydney – Paul Culmsee
Alaska – Du Le

Step 9:

Add a Log to Workflow History action. Click the message hyperlink and click the ellipses button. In the string builder dialog, type in “Process Owners count: “

image

Step 10:

Click the Add or Change Lookup button. In the Lookup for string dialog, choose Workflow Variables and Parameters from the data source dropdown and choose the variable count. Click OK. You should see the following text in the string builder dialog. Click OK. Review the workflow stage and confirm it looks like the second image below.

image  image

Publish the workflow and run it. Take a look at the workflow history and see what we get. Uh-oh… what now? It says we only have 1 item in the dictionary, but we have 9 process owners. What the…?

image

A JSON (and fiddler) interlude…

In part 6, I stressed the point that dictionary variables can contain any type of variable available in the SharePoint 2013 Workflow platform, including other dictionary variables! This now becomes important because of the way JSON works. To explain better, it is time for me to formally introduce you to Fiddler.

Fiddler is a multipurpose utility that sits between the web browser and web sites. It logs all HTTP(S) traffic and is used to view and debug web traffic. Any web developer worth their salt has Fiddler as part of the troubleshooting toolkit because of its ability to manipulate the data as it travels between browser and server. Best of all, it can be used to compose your own HTTP requests.

So first up, install Fiddler and then start it. In your browser, head over to a web site like Google. Go back to Fiddler and you should see entries logged in the left hand panel. Click on an entry and click the inspectors tab. From here you can see all the gory detail of the conversation between your web browser and the web site.  Neat eh?

Snapshot

If you look in the tabs in the Fiddler window, you will see one labelled Composer. Let's use it right now… Paste the web service URL that we used in part 6 into the composer textbox as shown below. Also, paste the string "Accept: application/json;odata=verbose" into the Request headers textbox as shown below. If you recall the HTTP interlude from part 6, this tells SharePoint to bring back the data in JSON format, rather than XML.

image

Click execute and Fiddler will send the request to the server. It will also switch to inspector mode for you to see the request and the response as it happened. If you look closely at the response (by clicking the JSON tab), you will see a structure that looks a bit like the one I have pasted below. This is the same data that is now being stored in the ProcessOwnersList dictionary variable.

image

So why did the workflow report that there was only one item in the dictionary then? The short answer is, because there is only one item in the dictionary! A dictionary called “d”. To understand better, take another look at the JSON structure below. What is the first entry in the JSON data? a section called “d”. If you were to collapse d, all other data would be inside it. Therefore, as far as the dictionary variable is concerned, there truly is only one entry.

- JSON
  + d

Note: In case you are wondering what the deal is with the whole "d" thing, given that it appears to be somewhat redundant: the answer is that Microsoft's OData standard stipulates that all data returned is wrapped in an outer "d". Why, you may ask? Well, you really don't want to know, but if you must insist, it is for security reasons related to JavaScript (see, I told you that you didn't want to know!).

So if there is only one entry in the dictionary, then how can we get to the data we need buried deeper? The answer is that this is one of those "dictionary in a dictionary" situations that I described in part 6. The dictionary variable ProcessOwnersList has a single item. That item happens to be another dictionary with multiple items in it. So our first task is to get to the inner dictionary that contains the data we need!
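Here is the same situation expressed as a Python sketch, using a trimmed two-item version of the JSON structure (the values are illustrative):

import json

# Trimmed version of what the lists web service returns: everything sits
# under "d", and the items themselves live under d/results
response_text = """{
  "d": {
    "results": [
      {"Organisation": {"TermGuid": "c1d2e3f4-0000-1111-2222-333344445555"}, "AssignedToId": 7},
      {"Organisation": {"TermGuid": "e2f8e2e0-9521-4c7c-95a2-f195ccad160f"}, "AssignedToId": 8}
    ]
  }
}"""

process_owners_list = json.loads(response_text)

print(len(process_owners_list))               # 1 -- just the single "d" entry
inner = process_owners_list["d"]["results"]   # the workflow's "d/results" lookup
print(len(inner))                             # 2 here; 9 for the real Megacorp list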

Getting to the real count…

Now we will make use of the Get Item from a Dictionary action to get stuff under the d branch. Looking at the fiddler JSON output above, we need to get to the children of the results branch.

Step 1:

In the Find Matching Process Owner stage, add a Get Item from a Dictionary action as the first action in the stage.

image

Step 2:

Click the item by name or path hyperlink and type in "d/results". Note: this is case sensitive, so it has to match the JSON data shown in the Fiddler screenshots. Click the dictionary hyperlink and choose the ProcessOwnersList dictionary variable as the source data.

image

Step 3:

Click the item hyperlink and choose to create a new variable (the bottom option in the dropdown). Call the variable InnerProcessOwnerList and make sure it is set to type dictionary.

image

Step 4:

Now we need to modify the count items action to count the new variable InnerProcessOwnerList. In the Count Items action below the action we just created, click the Variable: ProcessOwnersList and change it to InnerProcessOwnerList.

image

Right! Let’s retest this workflow by republishing it and running it. Now that’s more like it!!!

image

Building the loop…

Now we get to a powerful new feature of workflows in SharePoint 2013 – the ability to perform loops. What we need to do here is loop through the 9 process owners, and for each, compare the GUID of the Organisation column to the GUID on the document from which the workflow was triggered. If the term matches, we exit the loop and assign a task to the person in the Assigned To column.

Let’s go through the process step by step.

Step 1:

Make sure that your cursor is below the last action in the Find Matching Process Owner stage. Add a Set Workflow Variable action. Click the workflow variable hyperlink and choose to create a new variable. Call the variable loop and make it an integer. Click the value hyperlink and set it to zero “0”.

image  image

We are going to use this variable in the looping section of the workflow to get to each process owner. Its purpose will become clear soon…

Step 2:

In the ribbon, click the Loop button and choose Loop with Condition. This will add a loop step into the stage. The name "1" is not exactly meaningful, so let's change it. Click the title bar of the loop and change the name to For each process owner…

image  image

Now we will set the condition for how long this loop will run for. Note that we have a variable called count that stores the number of entries in the process owners list and in step 1, we created a variable called loop. We will set the workflow to loop around while the value of loop is less than the value of count.

Step 3:

Click the leftmost value hyperlink. In the LoopCondition Properties dialog, click the fx button for the top value. In the Define Workflow Lookup dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop. Click the equals dropdown and choose Is less than. Finally, click the fx button for the bottom value. In the Define Workflow Lookup dialog, choose Workflow Variables and Parameters as the data source and find the variable called count.

image  image  image   image

image

Step 4:

Inside the newly created loop, add a Get Item from a Dictionary action. Click the item by name or path hyperlink and click the ellipses button to display the string builder. Type in a left bracket “(“ and then click the Add or Change lookup button. In the Lookup for String dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop  and click OK. Finally, type in the following string “)/Organisation/TermGuid”.

image

If you look closely at this string, it is referring to the TermGuid in the JSON data. The value of loop (currently 0) will be used to create the string. Hence "(0)/Organisation/TermGuid" will be used to grab the first Organisation GUID, "(1)/Organisation/TermGuid" for the second, and so on…
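In effect, the loop we are about to build does what this Python sketch does (reusing the trimmed two-item structure from the earlier sketch – the values are illustrative):

# inner holds the d/results items from the earlier sketch
inner = [
    {"Organisation": {"TermGuid": "c1d2e3f4-0000-1111-2222-333344445555"}, "AssignedToId": 7},
    {"Organisation": {"TermGuid": "e2f8e2e0-9521-4c7c-95a2-f195ccad160f"}, "AssignedToId": 8},
]

count = len(inner)
loop = 0
while loop < count:                              # the "loop is less than count" condition
    path = "(%d)/Organisation/TermGuid" % loop   # the string the workflow builds
    term_guid = inner[loop]["Organisation"]["TermGuid"]
    print(path, "->", term_guid)                 # the Log to Workflow History action
    loop = loop + 1                              # Do Calculation + Set Workflow Variable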

Step 5:

Click the dictionary hyperlink and choose InnerProcessOwnerList.

Step 6:

Click the item hyperlink and choose to create a new variable. Call the variable ProcessOwnerTermGUID and set it to a string.

image  image

By this stage, we should have the value of the term GUID for the process owner. Let’s log the value to workflow history so we can confirm things are working…

Step 7:

Add a log to workflow history action and click the message hyperlink. Click the fx button and in the Lookup for String dialog, choose Workflow Variables and Parameters as the data source and find the variable called ProcessOwnerTermGUID.

Step 8:

Now we need to increment the value of the loop variable so it can select the next process owner from the InnerProcessOwnerList variable. Add a Do calculation workflow action, click the first value hyperlink and click the fx button. In the Lookup for Number dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop. Click the second value hyperlink and type in the value “1”.

image

Note that this action creates a new workflow variable (called calc1 by default) that stores the value of loop + 1. We will use this variable in the next workflow step.

Step 9:

Add a Set Workflow Variable action. Click the workflow variable hyperlink and choose loop. Click the value hyperlink and click the fx button. In the Lookup for Number dialog, choose Workflow Variables and Parameters as the data source and find the variable called calc1.

image

Right, that should be enough to test. Each iteration of the loop will log the Organisation term GUID for each process owner. It then increments the value of loop and does it again. Once the value of loop is equal to the value of the variable count, the loop will finish. So publish the workflow and let's see what happens… Brilliant! Below the count of process owners (line 3), we have looped through the process owners list and extracted the GUID for the organisation term!

image

Conclusion…

Well, it has taken us a while to get here, but we now have all of the information we need to be able to assign a task to the process owner. If you look in the log above, the first entry was the GUID for the organisation assigned to the document this workflow was run against. Scanning the list of GUIDs from the process owners list, we can see that the matching GUID was on line 5. So in the next post, we will modify our loop to stop when it has a match, and we will examine what we need to do to assign a task to the appropriate process owner.

Until then thanks for reading…

Paul Culmsee

www.hereticsguidebooks.com

Trials or tribulation? Inside SharePoint 2013 workflows–Part 6

This entry is part 6 of 13 in the series Workflow

Hi and welcome to part 6 of my series of articles aimed at demystifying various aspects to SharePoint 2013 workflows. We have been using a mythical example of a document approval workflow from our mythical multinational called Megacorp Inc. We have been trying to create a workflow attempting to implement the process below…

Snapshot_thumb3

Seems straightforward enough, but in part 3, we were foiled by the use of check in/check out on document libraries, and a completely useless error message didn't help matters. We eventually worked around that issue, but in part 4, we got stuck on a bigger snag because of our chosen information architecture. The Organisation column we created is a managed metadata column, and it turns out that you cannot use a managed metadata column as a filter for a list (steps 2 and 3 above). In the last article, we took a detour into the world of dictionary variables and a very powerful new workflow action called "Call HTTP Web Service". We learnt that in situations where a built-in workflow action does not cut it for you, you might be able to use Call HTTP Web Service to do what you need. This sets the scene for our next exciting instalment. Perhaps we can get around this managed metadata issue with one of SharePoint's many web services? If so, which one do I need to use and why?

In this post and the next few, I am going to show you two ways that we can get around the problem of not being able to filter via Managed metadata using the Call HTTP Web Service capability. The first method is a little easier to build than the second method, but it has a flaw that hopefully will become self evident as we proceed. Having said this, I feel it is really important to cover both approaches, because each showcases different features and capabilities of SharePoint Designer 2013 workflows. Therefore, this article and the next two will show the easier but flawed way, and articles 9, 10, 11 and 12 will show what I think is the better way to go.

The workflow looping method…

The gist of the approach we are going to take is to:

  • Get the unique ID of the Organisation for the selected document in the Documents library
  • Using the SharePoint lists REST web service, load the Assigned to and Organisation columns from the Process Owners list and store them in a Dictionary variable
  • Using the workflow looping capability, step through each item in the dictionary and find the first entry where the unique ID of the Organisation from step 1 matches the Organisation in process owners
  • For the matching entry, assign a task to the person mentioned in the Assigned to column

Now to pull this off, we are going to bring together all of the topics that I have covered in this series. I am also going to be a little less verbose with screenshots, because by now some aspects of workflow creation using SharePoint Designer should be getting more familiar. Speaking of more familiar, let's take a closer look at the lists web service again. In my second REST interlude in part 4, I demonstrated how you could specify the columns that you want to bring back from a web service call, rather than all columns. In the example below, I am showing how you can bring back just the Organisation and Assigned to columns from the Process Owners list (AssignedToId is a REST-specific thing that represents the Assigned To column – more about that in part 8).

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation

Here is the XML for a single process owner entry… Note that we never get to see the name of the Organisation in the XML for the Organisation column (for that matter, we don't see the name of the person in the Assigned To column either; an issue I will deal with later). Instead, we have the GUID for the Organisation in the <d:TermGuid> section.

<content type="application/xml">
  <m:properties>
    <d:Organisation m:type="SP.Taxonomy.TaxonomyFieldValue">
      <d:Label>14</d:Label>
      <d:TermGuid>e2f8e2e0-9521-4c7c-95a2-f195ccad160f</d:TermGuid>
      <d:WssId m:type="Edm.Int32">14</d:WssId>
    </d:Organisation>
    <d:AssignedToId m:type="Edm.Int32">7</d:AssignedToId>
  </m:properties>
</content>

Now, also in part 4, I explained the Organisation_0 hidden column and showed that it stores both the organisation name and the GUID of that organisation. So if Organisation has been set to Megacorp Burgers for a document, the value of Organisation_0 for that document would be:

Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f

The common element between the XML from the Process Owners list and the value of Organisation_0 from the Documents library is the term GUID. Therefore, if we can extract the GUID part of Organisation_0, we can use it to search the Process Owners list and find the entry whose <d:TermGuid> matches. So first up, let's clean things up, then use some workflow actions to get hold of the GUID from the Organisation_0 column.

Getting the GUID…

Step 1:

Turning our attention back to the Process Owners Approval workflow, let's delete our existing workflow actions and variables and start afresh. Click on any existing workflow action and choose Delete Action from the dropdown menu as shown below. To delete variables, click the Local Variables ribbon icon and remove any listed…

image  image  image

Now you should be looking at a clean workflow.

Step 2:

Add the workflow action Find substring in string. To complete the configuration of this action, click the substring hyperlink and add a pipe symbol “|”. Click the string hyperlink, the fx button and from Current Item, choose Organisation_0 as shown below…

image  image

image

The result of this workflow action is that the position of the pipe symbol in the string will be stored in a variable called index. For example, if you count the number of characters in the string "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f" until you get to the pipe symbol, the answer is 17.

Our next step is to grab all of the characters in the string after the pipe symbol, because that is the GUID we need. To do this, we will use another workflow action called Extract substring from index of string. This action takes a string and an index position, and returns all characters to the right of the index. Thus, with the string "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f", if we start at position 17 we will end up with "|e2f8e2e0-9521-4c7c-95a2-f195ccad160f". This is not quite right because we do not want the pipe symbol, so we will first use another workflow action called Do Calculation to add 1 to the index variable.
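
If it helps to see those three actions expressed in code, here is a tiny PowerShell sketch of the same logic (illustrative only; note that .NET string indexes are zero-based, so the intermediate numbers differ slightly from the workflow's):

$org = "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f"
$index = $org.IndexOf("|")            # Find substring in string
$index = $index + 1                   # Do Calculation: step past the pipe symbol
$termGuid = $org.Substring($index)    # Extract substring from index of string
$termGuid                             # e2f8e2e0-9521-4c7c-95a2-f195ccad160f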

Step 3:

Add the Do Calculation action, click the value hyperlink and click the fx button. Change the data source to Workflow Variables and Parameters and choose the variable called index. Click the value hyperlink and type in the number 1.

image

image

The net result of this is a variable called calc that stores the position immediately after the pipe symbol in Organisation_0.

Step 4:

Add the Extract substring from index of string workflow action. Click the string hyperlink, the fx button and from Current Item, choose Organisation_0. Click the “0” hyperlink next to “starting from” and click the fx button. Change the data source to Workflow Variables and Parameters and choose the variable called calc. Finally, click on Variable: substring and choose to Create a new variable… and call it TermGUID as shown below…

image  image

At this point, it might be handy to log the value of TermGUID to the workflow history to make sure that things are working as we expect. We can delete this step later…

Step 5:

Add a log to workflow history action and log the value of TermGUID. The final workflow should look like this…

image

Step 6:

Publish this workflow, confirm there are no errors, and then run it against a document in the Documents library. Wohoo! We now have the GUID!

image

Using stages…

Now that we have the GUID, it makes sense to turn this sequence of actions into its own workflow stage. Then we can add a new stage for the rest of the workflow, along with some error checking logic.

Step 1:

Click the stage header and rename the stage to Obtain Term GUID.

image

Step 2:

Click outside the stage and from the ribbon, click the Stage icon. A new stage will be added to the workflow. Call this stage Get Process Owners.

image

Now let's create the logic that connects the stages. We will set it up so that we only move to the Get Process Owners stage if the TermGUID variable has a value. After all, if there is no valid GUID, there is no point continuing the workflow.

Step 3:

In the Obtain Term GUID stage, select the Go to End of Workflow action and delete it. In the ribbon, click the Condition button and choose If any value equals value from the drop down menu. Confirm that the condition section has been added to the Transition to stage section of the workflow stage…

image   image

image

Step 4:

Click the value hyperlink, click the fx button and choose Workflow Variables and Parameters from the Data source dropdown. Find the TermGUID variable in the Field from source dropdown. Click the equals hyperlink and from the dropdown, choose “is not empty”.

image  image  image

Step 5:

Click on the top "Insert go-to actions with conditions" section, and add a Go to a Stage action. From the stage dropdown, choose "Get Process Owners".

image

Step 6:

Click on the bottom “Insert go-to actions with conditions” section, and add a Go to a Stage action. From the stage dropdown, choose “End of Workflow”. The complete workflow should look like the image below:

image

A HTTP interlude…

Our next task is to get all of the process owners into a dictionary variable. Before we do this, I am going to give you a little lesson on how the HTTP protocol works, because we are literally going to be hand-crafting our own request, so it is handy to understand the basics.

When your browser makes a request to a website or web service, it does not just say "Gimme this URL". Often the server will change its behaviour based on the nature of the request. For example, if the requestor is a mobile device, the server will send back different HTML compared to a PC browser. So how can the server tell whether a request is made from a mobile device or a PC? The answer is that when the browser makes a request, it sends additional information in the form of request headers. Request headers are used for all sorts of things, and we are going to need to make use of them. Why? Remember in part 4, I mentioned the JSON data format and that we need to tell SharePoint that any data it sends us has to be in JSON format. Here is another of my dodgy diagrams explaining this by example…

Snapshot
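
In raw terms, the request we are about to craft will look something like this (a simplified illustration, with most headers omitted):

GET /iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation HTTP/1.1
Host: megacorp
Accept: application/json;odata=verbose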

Technically, we have to send the string "Accept: application/json;odata=verbose" in the request header to make this happen. So let's see how we can put this request together via SharePoint Designer. Just to remind you, the URL of the web service to get all of the process owners is:

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation

Crafting the request…

The first thing we need to do is create the request header that tells SharePoint to return the data in JSON format. This is done by creating a dictionary variable.

Step 1:

In the Get Process Owners workflow stage, add a Build Dictionary action. Click the "this" hyperlink to display the Build Dictionary window. Click the Add button and type "Accept" into the Name textbox and "application/json;odata=verbose" into the Value textbox. Click OK twice, then click the Variable: dictionary hyperlink and create a new variable called RequestHeader.

image  image  image

image  image  image

Step 2:

Add a Call HTTP Web Service action and then click to select it as shown below. If you look at the parameters you can set, there is no mention of request header. To set it, click the Advanced Properties icon in the ribbon. In the Call HTTP Web Service Properties dialog box, click the RequestHeaders parameter and in the drop down list to the right of it, choose the RequestHeader variable created in step 1. Click OK. Now the request for JSON format has been set.

image   image

image  image

Step 3:

Click on the "this" hyperlink to the left of "HTTP web service". This will bring up the HTTP Web Service Details screen. At this point we could paste in the URL above, but we are going to do this a little smarter than that. Click the ellipsis next to the textbox to bring up the String Builder dialog box.

image  image

Click the Add or Change Lookup button and in the Data source dropdown, choose Workflow Context. This data source comes built in with any workflow you create and, as you will see, contains some very handy information that we can use in our workflows. From the Field from source dropdown, choose Current Site URL and click OK. What this does is take whatever site the workflow is run from and return it as a string – in this case http://megacorp/iso9001/. This is a good thing when you want to use the workflow on another site, such as when moving from development to production. By using the Current Site URL workflow context, we are not hard-coding the site name into the workflow.

image  image  image

Anyhow, now that we have the site name, let's complete the rest of the URL. In the string builder dialog, add "_vti_bin/client.svc/web/lists/getbytitle('Process Owners')/Items?$select=AssignedToId,Organisation" and click OK.

image

Our Call HTTP Web Service action now looks like this:

image

Now we are expecting a JSON data feed as a response to this request, so we need to create another dictionary variable to handle it.

Step 4:

Click the response hyperlink and choose to Create a new variable and call it ProcessOwnersList.

image  image

image

Right! At this point, we have built the Call HTTP Web Service action and we should test that it is working. If you look closely at the Call HTTP Web Service action, one of the variables that gets created is called responseCode, which is how the HTTP protocol reports whether the request worked or not. If the response code is 200 (OK), then the query worked. So let's log the response code to the workflow history and run a test.
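
For reference, here is a rough PowerShell equivalent of the whole stage we just built (a sketch only, using Invoke-WebRequest as a stand-in for the Call HTTP Web Service action):

$siteUrl = "http://megacorp/iso9001/"   # the workflow gets this from Workflow Context
$uri = $siteUrl + "_vti_bin/client.svc/web/lists/getbytitle('Process Owners')/Items?" +
       '$select=AssignedToId,Organisation'

# Equivalent of the RequestHeader dictionary built in step 1
$requestHeader = @{ Accept = "application/json;odata=verbose" }

$result = Invoke-WebRequest -Uri $uri -Headers $requestHeader -UseDefaultCredentials
$result.StatusCode   # equivalent of responseCode; 200 means the request worked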

Step 5:

Add a Log to History List action. Click the message hyperlink and click the fx button. In the Lookup for String dialog box, choose Workflow Variables and Parameters from the Data source dropdown, choose responseCode from the Field from source dropdown and click OK.

image

Step 6:

In the Transition to stage section, add a Go to a Stage action and set the stage as End of Workflow. Click OK and review the workflow. It should look like the screen below.

image

Testing our progress and next steps…

Publish the workflow, run it and check the results in workflow history.  Wohoo! Our HTTP call worked! Note the OK in the workflow history!

image

At this point I will stop, as this post is getting rather long and we still have a bit to do. Although we know that the HTTP call worked, we have not yet looked at the data that came back. In the next post, we will use some more workflow actions to loop through the returned data and find the matching process owner.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 5

This entry is part 5 of 13 in the series Workflow

Hi and welcome to part 5 of my series of articles that take a peek under the hood of SharePoint 2013 workflows. These articles are pitched at a wide audience, and my hope is that they give you some insight into what SharePoint Designer 2013 workflows can (and cannot) do. This is a long tale of initially dashed hopes and then redemption, based around what I think is a fairly common business scenario. To that end, the scenario we are using for this series is a basic document approval workflow for a fictitious diversified multinational company called Megacorp. It consists of a Documents library and a Process Owners list, with a managed metadata based site column called Organisation added to each of them. In the second post we created a very basic workflow using the task approval action. In part 3 and part 4, we tried to get around various issues encountered. At the end of the last post, we had just learnt that a Managed Metadata column cannot be filtered via the REST calls used by the built-in SharePoint Designer workflow actions.

…or can they?

In this post, we are going to take a look at two particular capabilities of the new SharePoint 2013 workflow regime and see if we can use them to get out of the pickle we are in. Once again, a reminder that this article is pitched at a wide audience, some of whom are non-developers. As a result, I am taking things slowly…

Capability 1: Dictionaries

SharePoint workflows have always been able to store data in variables, which allows for temporary storage of data within the workflow. When creating a variable, you have to specify what format the data is in, such as string, integer or date. In SharePoint 2013, there is a new variable type called a Dictionary. A dictionary can be used to store quite complex data, because it is, in effect, a collection of other variables. Why does this matter? Well, consider the small snippet of XML data below. I could store all of this data in a single dictionary variable called CD, rather than in multiple stand-alone variables.

Snapshot

<CD>
  <ARTIST>Bob Dylan</ARTIST>
  <COUNTRY>USA</COUNTRY>
  <COMPANY>Columbia</COMPANY>
  <YEAR>1985</YEAR>
</CD>

Now storing complex data in a single variable is all well and good, but what about manipulating it once you have it? As it happens, three new workflow actions have been specifically designed to work with dictionary data, namely:

  • Build Dictionary
  • Get an Item from a Dictionary
  • Count Items in a Dictionary

The diagram below illustrates these actions (this figure came from an excellent MSDN article on the dictionary capability). You will be using the “Build Dictionary” and “Get an item from a dictionary” actions quite a bit before we are done with this series.

image

There is one additional thing worth noting with dictionaries; something subtle but super important. A Dictionary can contain any type of variable available in the SharePoint 2013 Workflow platform, including other dictionary variables! If this messes with your head, let's extend the XML example of the Bob Dylan album above. Let's say you have an entire catalog of CDs. For example:

<CATALOG>
   <CD>
     <ARTIST>Bob Dylan</ARTIST>
     <COUNTRY>USA</COUNTRY>
     <COMPANY>Columbia</COMPANY>
     <YEAR>1985</YEAR>
   </CD>
   <CD>
     <ARTIST>Keith Urban</ARTIST>
     <COUNTRY>USA</COUNTRY>
     <COMPANY>Columbia</COMPANY>
     <YEAR>2006</YEAR>
   </CD>
</CATALOG>

Using a dictionary, we can create a single variable to store details about all of the CDs in the catalog. We could make a Dictionary variable called Catalog, which contains a dictionary variable called CD. The CD variable contains the string and date/time details of each individual CD. This structure enables the Catalog dictionary to store details of many CDs. Below is a representation of what 3 CDs would look like in this model…

Snapshot
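
If you are more comfortable reading code than XML, nested PowerShell hashtables behave in much the same way (an illustrative analogy only – a workflow Dictionary is not literally a hashtable):

# A dictionary of dictionaries, mirroring the Catalog/CD structure above
$catalog = @{
    CD1 = @{ Artist = "Bob Dylan";   Country = "USA"; Company = "Columbia"; Year = 1985 }
    CD2 = @{ Artist = "Keith Urban"; Country = "USA"; Company = "Columbia"; Year = 2006 }
}
$catalog.CD1.Artist   # returns "Bob Dylan", like "Get an Item from a Dictionary"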

Okay, so after explaining this idea of a dictionary, you might be thinking "What has all of this got to do with our workflow?" To answer that, have another look at the JSON output from part 4 in this series. If you recall, this is the output from talking to SharePoint via REST and asking for all documents in the document library. What do you notice about the information that has come back? To give you a hint, I will remind you what I said in the last post…

Now let's take a closer look at the Organisation entry in the JSON data. What you might notice is that some of the other data entries have data values specified, such as EditorID = 1, AuthorID = 1 and Modified = 2013-11-10. But not so with Organisation. Rather than a data value, it has sub-entries. Expanding the Organisation section, you can see that we have more data elements within it.

image_thumb14  image_thumb16

In case it is still not clear, essentially we are looking at a data structure that is perfectly suited to being stored in a dictionary variable. In the case of the Organisation column, it is a “dictionary within a dictionary” scenario like my CD catalog example.

Okay, I hear you say – “I get that a dictionary can store complex data structures. Why is this important?”

The answer to that, my friends, is that there is a new, powerful workflow action that makes extensive use of dictionaries. You will come to love this particular workflow action for its versatility.

Capability 2: The one workflow action to rule them all…

image

In part 3 and part 4 of this series, I have shown examples of talking to SharePoint via REST web services and what the returned JSON data looks like. This was quite deliberate, because in SharePoint 2013, Microsoft has included a workflow action called Call HTTP Web Service to do exactly that. This is a huge advance on previous versions of SharePoint, because it means the actions that workflows can take are only limited by the web services they can talk to. This goes way beyond SharePoint too, as many other platforms expose data via a REST API, such as YouTube, eBay and Twitter, as well as various enterprise systems. Already, various examples exist online where people have wired up SharePoint workflows to other systems. Fabian Williams in particular has done a brilliant job in this regard.

The workflow action can be seen below. Take a moment to examine the bits you need to fill in, as there are several parameters. The first parameter (the hyperlink labelled "this") is the URL of the web service that you wish to access. The request parameter is a dictionary variable that is sometimes used when making a request to the web service. The response and responseHeaders variables are also dictionaries, and they store the response received from the web service. The responseCode parameter represents the HTTP response code that came back, which enables us to check if there was an error (like a HTTP 400 bad request).

image
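
To map those parameters onto something you can try at a PowerShell prompt, here is a loose analogy (a sketch only; the URL is a placeholder, not a real endpoint):

$r = Invoke-WebRequest -Uri "http://example.com/some/webservice"   # the "this" hyperlink
$r.Content      # roughly the "response" dictionary (the returned payload)
$r.Headers      # roughly the "responseheaders" dictionary
$r.StatusCode   # the "responseCode" (e.g. 200 OK, or 400 bad request)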

Dictionaries and web services – a simple example…

The best way to understand what we can do with this workflow action (and the dictionary variables it requires) is via example. So let's leave our document approval workflow for the time being and quickly make a site workflow that calls a public web service, grabs some data and displays it in SharePoint. The public web service we will use is called Feedzilla. Feedzilla is a news aggregator service that lets you search for articles of interest and bring them back as a data feed. For example, the following Feedzilla URL will return any top news (category 26) that has SharePoint in the content, in JSON format:

http://api.feedzilla.com/v1/categories/26/articles/search.json?q=SharePoint

Here is a Fiddler trace of the above URL, showing the JSON output. Note the structure of articles: below the JSON label at the top we have articles –> {} and then the properties of author, publish_date, source, source_url, summary, title and url.

image

Therefore the string "articles(0)/title" should return the string "Microsoft Certifications for High School Students in Australia (Slashdot)", as it is the title of the first article. The string "articles(1)/title" should bring back "Microsoft to deliver Office 2013 SP1 in early '14 (InfoWorld)", as it is the second article. So with this in mind, let's see if we can get SharePoint to extract the title of the first article in the feed.
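
Before building the workflow, you can sanity-check that path with a couple of lines of PowerShell (a sketch; the article titles will obviously differ whenever the feed updates):

# Fetch the feed and pull out the first article title - the code equivalent
# of "Get an Item from a Dictionary" with the path articles(0)/title
$feed = Invoke-RestMethod -Uri "http://api.feedzilla.com/v1/categories/26/articles/search.json?q=SharePoint"
$feed.articles[0].title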

Testing it out…

So let's make a new site workflow.

Step 1:

From the ribbon, choose Site Workflow. Call the site workflow “Feedzilla test” as shown below…

image  image

Now we will add our Call HTTP Web Service action. This time, we will add the action a different way.

Step 2:

Click the flashing cursor underneath the workflow stage and type in “Call”. As you type in each letter, SharePoint Designer will suggest matching actions. By the time you write “call”,  there is only the Call HTTP Web Service action to choose from. Pressing enter will add it to the workflow.

image

image

Step 3:

Now click on the “this” hyperlink and paste in the feedzilla URL of: http://api.feedzilla.com/v1/categories/26/articles/search.json?q=SharePoint. Then click OK.

image

Next, we need to create a dictionary variable to store the JSON data that is going to come back from Feedzilla.

Step 4:

Click on the “response” hyperlink next to the “ResponseContent to” label and choose to Create a new variable…

image

Step 5:

Call the variable JSONResponse and confirm that its type is Dictionary. We are now done with the Call HTTP Web Service action.

image  image

Step 6:

The next step in the workflow is to extract just the article title from the JSON data returned by the web service call. For this, we need the Get an Item from a Dictionary action. We will use this action to extract the title property from the very first article in the feed. Type in the word "get" and press enter – the action we want will be added…

image

Step 7:

In the “item by name or path” hyperlink next to the Get label, type in this exactly: articles(0)/title as shown below. Then click on the “dictionary” hyperlink next to the from label and choose the JSONResponse variable that was created earlier. Finally, we need to save the extracted article title to a string variable. Click on the “this” hyperlink next to the Output to label and choose Create a new variable… In the edit variable screen, name the variable ArticleTitle and set its Type to String.

image

image

Step 8:

The next step is to log the contents of the variable ArticleTitle to the workflow history list. The action is called Log to History List as shown below.  Click the “message” hyperlink for this action and click the fx button. Choose Workflow Variables and Parameters from the Data Source dropdown and in the Field from Source dropdown, choose the variable called ArticleTitle. Click OK.

image

image  image

Step 9:

Finally, add a Go to End of Workflow action in the Transition to Stage section. The workflow is now complete and ready for testing.

image

Testing the workflow…

To run a site workflow, navigate to site contents in SharePoint, and click on the SITE WORKFLOWS link to the right of the “Lists, Libraries and other Apps” label. Your newly minted workflow should be listed under the Start a New Workflow link. Click your workflow to run it.

image  image

The workflow will be fired off in the background, and you will be redirected back to the workflow status screen. Click to refresh the page and you should see your workflow listed as completed as shown below…

image

Click on the workflow link in the “My Completed Workflows” section and examine the detailed workflow output. Look to the bottom where the workflow history is stored. Wohoo! There is our article name! It worked!

image

Conclusion…

It was nice to have a post that was more tribulation than trial, eh?

By now you should be more familiar with the idea of calling HTTP web services within workflows and parsing dictionary variables for the output. This functionality is really important because it opens up possibilities for SharePoint workflows that simply did not exist in previous versions. For citizen developers, the implication (at least in a SharePoint context) is that understanding how to call a web service and parse the result is a must. Therefore, all that REST/OData stuff you skipped in part 4 will need to be understood to progress through this series.

Speaking of progressing, in the next post we are going to revisit our approval workflow and see if we can use this newfound knowledge to move forward. First up, we need to find out whether there is a web service available that can help us look up the process owner for an organisation. To achieve this, we are going to need to learn more about the Fiddler web debugging tool, as well as delve deeper into web services than you ever thought possible. Along the way, SharePoint will continue to put up some roadblocks, but fear not – we are turning the corner…

Until then, thanks for reading and I hope these articles have been helpful to you.

Paul Culmsee


www.hereticsguidebooks.com
