Glyma is now open source!


Hi all

If you are not aware, my colleagues and I have spent a large chunk of the last few years developing a software tool for SharePoint called Glyma (pronounced “Glimmer”). Glyma is a very powerful knowledge management solution for SharePoint 2010/2013 that deals with knowledge that is highly valuable yet difficult to capture in writing – all that hard-earned knowledge that tends to walk out the door in organisations.

Glyma was born from Seven Sigma’s Dialogue Mapping skills. It represents a lot of what we do as an organisation and the culmination of many years of experience in the world of complex problem facilitation. We have been using Glyma as a consultancy value-add for some time, and our clients have gained a lot of benefit from it. Clients have also deployed it in their environments for purposes such as knowledge capture, lessons learnt, strategic planning, corporate governance, business analysis, critical thinking and other knowledge visualisation/knowledge exchange scenarios.

image

I am very pleased to let people know that we have now decided to release Glyma under an open source license (Apache 2). This means you are free to download the source and use it in any manner you see fit.

You can download the source code from Chris Tomich’s GitHub site, or you can contact me or Chris for the binaries. The install, user and admin manuals can be found on the Glyma web site, which also has a really nice help system, tutorial videos and advice on how to build good Glyma maps.

This is not just some sample code we have uploaded. This is a highly featured, well-architected and robust product with some really nice SharePoint integration. For my colleague Chris Tomich in particular, this represents a massive achievement as a developer/product architect. He has created a highly flexible graph database with some real innovation behind it. Technically, Glyma is a hypergraph database that sits on SQL Server/SharePoint. Very few databases of this type exist outside of academia/maths nerds, and very few people could pull off what he has done.

image

For those of you that use or have tried the Compendium software, Glyma extends the ideas of Compendium (and can import Compendium maps) while bringing them into the world of enterprise information management via SharePoint.

Below I have embedded a video to give you an idea of what Glyma is capable of. More videos exist on YouTube as well as the Glyma site, so be sure to dig deeper.

 

I look forward to hearing how organisations make use of it. Of course, feel free to contact me for training/mentoring and any other value-add services 🙂

 

Regards

Paul Culmsee


Rewriting the knowledge management rulebook… The story of “Glyma” for SharePoint


“If Jeff ever leaves…”

I’m sure you have experienced that “Oh crap” feeling when you have a problem and Jeff is on vacation or unavailable. Jeff happens to be one of those people who has worked at your organisation for years and has developed such a deep working knowledge of things that it seems like he has a sixth sense about everything that goes on. As a result, Jeff is one of the informal organisational “go to guys” – the calming influence amongst all the chaos. An oft-cited refrain among staff is “If Jeff ever leaves, we are in trouble.”

In Microsoft’s case, this scenario is quite close to home. Jeff Teper, who has been an instrumental part of SharePoint’s evolution, is moving to another area of Microsoft, leaving SharePoint behind. The implications of this are significant enough that I can literally hear Bjorn Furuknap’s howls of protest all the way from here in Perth.

So, what is Microsoft to do?

Enter the discipline of knowledge management to save the day. We have SharePoint, and with all of that metadata and search, we can ask Jeff to write down his knowledge “to get it out of his head.” After all, if we can capture this knowledge, we can then churn out an entire legion of Jeffs and Microsoft’s continued SharePoint success is assured, right?

Right???

There is only one slight problem with this incredibly common scenario that often underpins a SharePoint business case… the entire premise of “getting it out of your head” is seriously flawed. As such, knowledge management initiatives have never really lived up to expectations. While I will save a detailed explanation as to why this is so for another post, let me just say that Nonaka’s SECI model has a lot to answer for as it is based on a misinterpretation of what tacit knowledge is all about.

Tacit knowledge is expert knowledge that is often associated with intuition and cannot be transferred to others by writing it down. It is the “spider senses” that experts often seem to have when they look at a problem and see things that others do not: little patterns, subtleties or anomalies that are invisible to the untrained eye. Accordingly, it is precisely this form of knowledge that is of the most value in organisations, yet it is the hardest to codify and the most vulnerable to knowledge drain. If tacit knowledge could truly be captured and codified in writing, then every project manager who has ever studied PMBOK would have flawless projects, because the body of knowledge is supposed to be all the codified wisdom of many project managers and the projects they have delivered. There would also be no need for Agile coaches, Microsoft’s SharePoint documentation would result in flawless SharePoint projects, and reading Wictor’s blog would make you a SAML claims guru.

The truth of tacit knowledge is this: you cannot transfer it, but you can acquire it. This is otherwise known as the journey of learning!

Accountants are presently scratching their heads trying to figure out how to measure tacit knowledge. They call it intellectual capital, and the reason it is important to them is that most of the value of organisations today is classified on the books as “intangibles”. According to the book Balanced Scorecard, a company’s physical assets accounted for 62% of its market value in 1982, 38% of its market value in 1992 and only 21% in 2003. This is in part a result of the global shift toward knowledge economies and the resulting rise in the value of intellectual capital. Intellectual capital is the sum total of the skills, knowledge and experience of staff and is critical to sustaining competitiveness, performance and ultimately shareholder value. Organisations must therefore not only protect, but extract maximum value from their intellectual capital.

image

Now consider this. We are in an era where baby boomers are retiring, taking all of their hard-earned knowledge with them. This is often referred to as “the knowledge tsunami”, “the organisational brain drain” and the more nerdy “human capital flight”. The issue of human capital flight is a major risk area for organisations. Not only is the exodus of baby boomers an issue, but there are challenges around recruitment and retention of a younger, technologically savvy and mobile workforce with a different set of values and expectations. One of the most pressing management problems of the coming years is the question of how organisations can transfer the critical expertise and experience of their employees before that knowledge walks out the door.

The failed solutions…

After the knowledge management fad of the late 1990s, a lot of organisations did come to realise that asking experts to “write it down” only worked in limited situations. As broadband came along, enabling the rise of rich media services like YouTube, a digital storytelling movement arose in the early 2000s. Digital storytelling is the process by which people share stories and reflections while being captured on video.

Unfortunately though, digital storytelling had its own issues. Users were not prepared to sit through hours of footage of an expert explaining their craft or reflecting on a project. To address this, the material was commonly edited down into much smaller mini-documentaries lasting a few minutes – often by media production companies, so the background music was always nice and inoffensive. But this approach also commonly failed. One reason for the failure was well put by David Snowden when he said “Insight cannot be compressed”. While there was value in the edited videos, much of the rich value within them was lost. After all, how can one judge ahead of time what someone else will find insightful? The other problem with this approach was that people tended not to use the videos. There was little means for users to find out these videos existed, let alone watch them.

Our Aha moment

In 2007, my colleagues and I started using a sensemaking approach called Dialogue Mapping in Perth. Since that time, we have performed dialogue mapping across a wide range of public and private sector organisations in areas such as urban planning, strategic planning, process reengineering, organisational redesign and team alignment. If you have read my blog, you would be familiar with dialogue mapping, but just in case you are not, it looks like this…

Dialogue Mapping has proven to be very popular with clients because of its ability to make knowledge more explicit to participants. This increases the chances of collective breakthroughs in understanding. During one dialogue mapping session a few years back, a soon-to-retire, long-serving employee relived a project from thirty years prior that he realised was relevant to the problem being discussed. This same employee was spending a considerable amount of time writing procedure manuals to capture his knowledge. No mention of this old project was made in the manuals he spent so much time writing, because there was no context to it when he was writing it down. In fact, if he had not been in the room at the time, the relevance of this obscure project would never have been known to the other participants.

My immediate thought at the time when mapping this participant was “There is no way that he has written down what he just said”. My next thought was “Someone ought to give him a beer and film him talking. I can then map the video…”

This idea stuck with me and I told this story to my colleagues later that day. We concluded that asking our retiring expert to write his “memoirs” was not making the best use of his limited time. The dialogue mapping session illustrated plainly that much valuable knowledge was not being captured in the manuals. As a result, we seriously started to consider the value of filming this employee discussing his reflections on all of the projects he had worked on, as per the digital storytelling approach. However, rather than create ‘mini documentaries’, we would utilise the entire footage and instead visually map the rationale using Dialogue Mapping techniques. In this scenario, the map serves as a navigation mechanism and the full video content is retained. By clicking on a particular node in the map, the video is played from the time that particular point was made. We drew a mock-up of the idea, which looked like the picture below.

image

While we thought the idea would be original and cool to do, we also saw several strategic advantages to this approach…

  • It allows the user to quickly find the key points in the conversation that are of value to them, while presenting the entire rationale of the discussion at a glance.
  • It significantly reduces the codification burden on the person or group with the knowledge. They are not forced to put their thoughts into writing, which enables more effective use of their time.
  • The map and video content can be linked to the in-built search and content aggregation features of SharePoint.
    • Users can enter a search from their intranet home page and retrieve not only traditional content such as documents, but now will also be able to review stories, reflections and anecdotes from past and present experts.
  • The dialogue mapping notation, when stored in a database, also lends itself to more advanced forms of queries. Consider the following examples:
    • “I would like any ideas from lessons learnt discussions in the Calgary area”
    • “What pros or cons have been made about this particular building material?”
  • The applicability of the approach is wide.
    • Any knowledge-related industry could take advantage of it easily because it fits into existing information systems like SharePoint, rather than creating an additional information silo.

This was the moment the vision for Glyma (pronounced “glimmer”) was born…

Enter Glyma…

Glyma is a software platform for ‘thought leaders’, knowledge workers, organisations, and other ‘knowledge economy participants’ to capture and trade their knowledge in a way that reduces effort but preserves rich context. It achieves this by providing a new way for users to visually capture and link their ideas with rich media such as video, documents and web sites. As Glyma is a very visually oriented environment, it’s easier to show Glyma than to talk about it.

Ted

image

What you’re looking at in the first image above are the concepts and knowledge that were captured from a TED talk on education, augmented with additional information from Wikipedia. The second is a map that brings together the rationale from a number of SPC14 Vegas videos on the topic of hybrid SharePoint deployments.

Glyma brings together different types of media, like geographical maps, video, audio, documents etc. and then “glues” them together by visualising the common concepts they exemplify. The idea is to reduce the burden on the expert for codifying their knowledge, while at the same time improving the opportunity for insight for those who are learning. Glyma is all about understanding context, gaining a deeper understanding of issues, and asking the right questions.

We see that depending on your focus area, Glyma offers multiple benefits.

For individuals…

As knowledge workers, our task is to gather and learn information, sift through it all, and connect the dots between the relevant pieces. We create our knowledge by weaving together all this information. This takes place through reading articles, explaining on napkins, diagramming on whiteboards and so on. But no one observes us reading, napkins get thrown away, and whiteboards are wiped clean for re-use. Our journey is too “disposable”; people only care about the “output” – that is, until someone needs to understand our “quilt of information”.

Glyma provides end users with an environment to catalogue this journey. The techniques it incorporates help knowledge workers with learning and “connecting the dots”, or as we know it, synthesising. Not only does it help us with these two critical tasks, it then provides a way for us to get recognition for that work.

For teams…

Think back to the scenario I started this post with – we’ve all been on the giving and receiving end of it: that call to Jeff, who has gone on holiday for a month prior to starting his promotion, when you need to know the background to an issue that has arisen on your watch. Whether you were the person under pressure at the office thinking, “Jeff has left me nothing of use!”, or you are Jeff trying to enjoy your new promotion thinking, “Why do they keep on calling me!”, it’s an uncomfortable situation for all involved.

Because Glyma provides a medium and techniques that aid and enhance the learning journey, it can act as the project memory long after the project has completed and the team members have moved on to their next challenge. The context and the lessons it captures can then be searched and used both as a historical look at what has happened and, more importantly, as a tool for improving future projects.

For organisations…

As I said earlier, intangible assets now dominate the balance sheets of many organisations. Where in the past we might have valued companies based on how many widgets they sold and how much they held in inventory, nowadays intellectual capital is the key driver of value. Like any asset, organisations need to extract maximum value from intellectual capital and, in doing so, avoid repeat mistakes, foster innovation and continue growth. Charles G. Sieloff summed this up well in the title of his paper, “if only HP knew what HP knows”.

As Glyma aids, enhances, and captures an individual’s learning journey, that journey can now be shared with others. With Glyma, learning is no longer a silo; it becomes a shared journey. Not only does it do this for individuals, but it extends to group work so that the dynamics of a group’s learning are also captured. Continuous improvement of organisational processes and procedures is then possible with this captured knowledge. With Glyma, your knowledge assets are now tangible.

Lemme see it!

So if you have read this far, I assume that you would like to take a look. Well, as luck would have it, we put out a public Glyma site the other day that contains some of my own personal maps. The maps on the SP2013 apps model and hybrid SP2013 deployments in particular represent my own learning journey, so they should hopefully help you if you want a synthesis of all the pros and cons of these issues. Be sure to check the videos in the getting started area of the site, and check the sample maps! 🙂

glymasite

I hope you like what you see. I have a ton of maps to add to this site, and very soon we will be inviting others to curate their own maps. We are also running a closed beta, so if you want to see this in your organisation, go to the site and then register your interest.

All in all, I am super proud of my colleagues at Seven Sigma for being able to deliver on this vision. I hope that this becomes a valuable knowledge resource for the SharePoint community and that you all like it. I look forward to seeing how history judges this… we think Glyma is innovative, but we are biased! 🙂

 

Thanks for reading…

Paul Culmsee

www.glyma.co

www.hereticsguidebooks.com


“Assumption is the mother of all f**k ups”


My business partner, Chris Tomich, is the John Deacon of Seven Sigma.

In case you do not know who John Deacon is, he is the bass player from Queen who usually said very little publicly and didn’t write that many songs (and by songs I mean blog posts). But when Deacon finally did get around to writing a song, they tended to be big – think Another One Bites the Dust, I Want To Break Free and You’re My Best Friend.

Chris is like that, which is a pity for the SharePoint community, because he is an outstanding SharePoint architect, software engineer and one of the best Dialogue Mappers on the planet. If he had the time to write about his learning and insight, the community would have a very valuable resource. So this is why I am pleased that he has started writing what will be a series of articles on how he utilises Dialogue Mapping in practice, which is guaranteed to be much less verbose than my own hyperbole but probably much more useful to many readers. The title of my post here is a direct quote from his first article, so do yourself a favour and have a read of it if you want a different perspective on sense-making.

The article is called From Analyst to Sense-maker and can be found here:

http://mymemorysucks.wordpress.com/2014/01/07/from-analyst-to-sense-maker/#!

Thanks for reading

 

Paul Culmsee


www.hereticsguidebooks.com

P.S. Now all I need to do is get my other business partner, the mild-mannered intellectual juggernaut known as Peter (Yoda) Chow, to start writing 🙂


Trials or tribulation? Inside SharePoint 2013 workflows–Part 12

This entry is part 12 of 13 in the series Workflow

Hi all, and welcome to part 12 of my articles about SharePoint 2013 Workflows and whether they are ready for prime time. Along the way we have learnt all about CAML, REST, JSON, calling web services, Fiddler, Dictionary objects and a heap of scenarios that can derail aspiring workflow developers. All this just to assign a task to a user!

Anyways, since it has been such a long journey, I felt it worthwhile to remind you of the goal. We have a fictitious company called Megacorp trying to develop a controlled documents management solution. The site structure is as follows:

image

The business process we have been working through looks like this:

image

The big issue that has caused me to have to write 12 articles all boils down to the information architecture decision to use a managed metadata column to store the Organisation hierarchy.

Right now, we are in the middle of implementing an approach of calling a web service to perform step 3 in the above diagram. In part 9 and part 10 of this series, I explained the theory of embedding a CAML query into a REST query and in part 11, we built out most of the workflow. Currently the workflow has 4 stages and we have completed the first three of them.

  • 1) Get the organisation name of the current item
  • 2) Obtain an X-RequestDigest via a web service call
  • 3) Construct the URL to search the Process Owner list and call the web service

The next stage will parse the results of the web service call to get the AssignedToID and then call another web service to get the actual userid of the user. Then we can finally have what we need to assign an approval task. So let’s get into it…

Obtaining the UserID

In the previous post, I showed how we constructed a URL similar to this one:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

This URL uses the CAML-in-REST method of querying the Process Owners list and returns any items where Organisation equals “Megacorp Burgers”. The JSON data returned shows the AssignedToID entry with a value of 8. Via the work we did in the last post, we already have this data available to us in a dictionary variable called ProcessOwnerJSON.

The rightmost JSON output below illustrates taking that AssignedToID value and calling another web service to return the username, i.e. http://megacorp/iso9001/_api/Web/GetUserById(8).

image  image

Confused at this point? Then I suggest you go back and re-read parts 8 and 10 in particular for a recap.

So our immediate task is to extract the AssignedToId from the dictionary variable called ProcessOwnerJSON. Now that you are a JSON guru, you should be able to figure out that the query will be d/results(0)/AssignedToId.
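
To make this concrete outside of SharePoint Designer, here is a minimal Python sketch of what that dictionary lookup is doing. The response body is a heavily trimmed stand-in for the JSON shown earlier; the code is purely illustrative and not part of the workflow itself.

import json

# Trimmed stand-in for the JSON that the GetItems call returns
# (the real response carries many more fields per item)
response_body = '{"d":{"results":[{"AssignedToId":8}]}}'

data = json.loads(response_body)

# The workflow lookup "d/results(0)/AssignedToId" is just this traversal:
assigned_to_id = data["d"]["results"][0]["AssignedToId"]
print(assigned_to_id)  # 8 - ready to feed into GetUserById, whose response
                       # yields d/LoginName via the same kind of traversal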

Step 1:

Add a Get an Item from a Dictionary action as the first action in the Obtain Userid workflow stage. Click the item by name or path hyperlink and click the ellipses to bring up the string builder screen. Type in d/results(0)/AssignedToId.

image

Step 2:

Click on the dictionary hyperlink and choose the ProcessOwnerJSON variable from the list.

Step 3:

Click the item hyperlink and use the AssignedToID variable.

image

That is basically it for now with this workflow stage as the rest of it remains unchanged from when we constructed it in part 8. At this point, the Obtain Userid stage should look like this:

image

If you look closely, you can see that it calls the GetUserById method and the JSON response is added to the dictionary variable called UserDetail. Then if the HTTP response code is OK (code 200), it will pull out the LoginName from the UserDetail variable and log it to the workflow history before assigning a task.

Phew! Are we there yet? Let’s see if it all works!

Testing the workflow

So now that we have the essential bits of the workflow done, let’s run a test. This time I will use one of the documents owned by Megacorp Iron Man Suits – the Jarvis backup and recovery procedure. The process owner for Megacorp Iron Man suits is Chris Tomich (Chris reviewed this series and insisted he be in charge of Iron Man suits!).

image  image

If we run the workflow against the Jarvis backup and recovery procedure, we should expect a task to be created and assigned to Chris Tomich. Looking at the workflow information below, it worked! HOLY CRAP IT WORKED!!!

image

So finally, after eleven and a half posts, we have a working workflow! We have gotten around the issues of using managed metadata columns to filter lists, and we have learnt a heck of a lot about REST/oData, JSON, CAML and various other stuff along the way. So having climbed this managed metadata induced mountain, is there anything left to talk about?

Of course there is! But first, let’s summarise the workflow in text format rather than via death-by-screenshot…

Stage: Get Organisation Name
   Find | in the Current Item: Organisation_0 (Output to Variable:Index)
   then Copy Variable:Index characters from start of Current Item: Organisation_0 (Output to Variable: Organisation)
   then Replace " " with "%20" in Variable: Organisation (Output to Variable: Organisation)
   then Log Variable: Organisation to the workflow history list
   If Variable: Organisation is not empty
      Go to Get X-RequestDigest
   else
      Go to End of Workflow

Stage: Get X-RequestDigest
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Call [%Workflow Context: Current Site URL%]_api/contextinfo HTTP Web Service with request
       (ResponseContent to Variable: ContextInfo
        |ResponseHeaders to responseheaders
        |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/GetContextWebInformation/FormDigestValue from Variable: ContextInfo (Output to Variable: X-RequestDigest )
   If Variable: X-RequestDigest is empty
      Go to End of Workflow
   else
      Go to Prepare and execute process owners web service call

Stage: Prepare and execute process owners web service call
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Set Variable:URLStart to _api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>
   then Set Variable:URLEnd to </Value></Eq></Where></Query></View>"}
   then Call [%Workflow Context: Current Site URL%][Variable: URLStart][Variable: Organisation][Variable: URLEnd] HTTP Web Service with request
      (ResponseContent to Variable: ProcessOwnerJSON
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   then Log Variable: responseCode to the workflow history list
   If Variable: responseCode equals OK
      Go to Obtain Userid
   else
      Go to End of Workflow

Stage: Obtain Userid
   Get d/results(0)/AssignedToId from Variable: ProcessOwnerJSON (Output to Variable: AssignedToID)
   then Call [%Workflow Context: Current Site URL%]_api/Web/GetUserByID([Variable: AssignedToID]) HTTP Web Service with request
      (ResponseContent to Variable: userDetail 
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/LoginName from Variable: UserDetail (Output to Variable: AssignedToName)
      then Log The User to assign a task to is [%Variable: AssignedToName]
      then assign a task to Variable: AssignedToName (Task outcome to Variable:Outcome | Task ID to Variable: TaskID )
   Go to End of Workflow

Tidying up…

Just because we have our workflow working does not mean it is optimally set up. In the above workflow, there are a whole heap of areas where I have not done any error checking. Additionally, the logging I have done is poor and not overly helpful for someone troubleshooting later. So I will finish this post by making the workflow a bit more robust. I will not go through this step by step – instead I will paste the screenshots and summarise what I have done. Feel free to use these ideas and add your own good practices in the comments…

First up, I added a new stage at the start of the workflow for anything relating to initialisation activities. Right now, all it does is check out the current item (recall in part 3 we covered issues related to check in/out), and then set a Boolean workflow variable called EndWorkflow to No. You will see how I use this soon enough. I also added a new stage at the end of the workflow to tidy things up. I called it Clean up Workflow, and its only operation is to check the current item back in.

image   image

In the Get Organisation Name stage, I changed it so that any error condition is logged to the history list and the EndWorkflow variable is set to Yes. Then, in the Transition to stage section, I use the EndWorkflow variable to decide whether to move to the next stage or end the workflow by calling the Clean up Workflow stage that I created earlier. My logic here is that there can be any number of error conditions that we might check for, and it’s easier to use a single variable to signify when to abort the workflow.

image

In the Get X-RequestDigest stage, I have added additional error checking. I check that the HTTP response code from the contextinfo web service call is indeed 200 (OK), and then if it is, I also check that we successfully extracted the X-RequestDigest from the response. Once again I use the EndWorkflow variable to flag which stage to move to in the transition section.

image

In the Prepare and execute process owners web service call stage, I also added more error checking – specifically with the AssignedToID variable. This variable is an integer and its default value is set to zero (0). If the value is still 0 after the web service call, it means that there was no process owner entry for the Organisation specified. If this happens, we need to handle it…

image

Finally, we come to the Obtain Userid stage. Here we are checking both the HTTP code from the GetUserById web service call, as well as the login name that comes back via the AssignedToName variable. We assign the task to the user and then set the workflow status to “Completed workflow”. (Remember that we checked out the current item in the Workflow Initialisation stage, so we can now update the workflow status without all that check-out crap that we hit in part 3.)

image

Conclusion…

So there we have it. Twelve posts in and we have met the requirements for Megacorp. While there is still a heap of work to do in terms of customising the behaviour of the task itself, I am going to leave that to you!

There are also a lot of additional things we can do to make these workflows much more robust and easier to manage. To that end, I strongly urge you to check out Fabian Williams’ blog and his brilliant set of articles on this topic that take it much (much) further than I do here. He has written a ton of stuff, and it was his work in particular that inspired me to write this series. He also provided me with counsel and feedback on this series and I can’t thank him enough.

Now that we have gotten to where I wanted to be, I’ll write one more article to conclude the series – reflecting on what we have covered and its implications for organisations wanting to leverage out-of-the-box SharePoint workflow, as well as implications for all of you citizen developers out there.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 10

This entry is part 10 of 13 in the series Workflow

Hi there, and welcome back to my series of articles that puts a real-world viewpoint to SharePoint 2013 workflow capabilities. This series is pitched more to Business Analysts, SharePoint Hackers and generally anyone who might be termed a citizen developer. It shows the highs and lows of out-of-the-box SharePoint Designer workflows, and hopefully helps organisations make an informed decision about whether to use what SharePoint provides or to move to the 3rd party landscape.

By now you should be well aware of some of the useful new workflow capabilities such as stages, looping, support for calling web services and parsing the returned data via dictionary objects. You also now understand the basics of REST/oData and CAML. At the end of the last post, we learnt that it is possible to embed CAML queries into REST/oData calls, which gets around the issue of not being able to filter lists via managed metadata columns. We proved this could be done, but we did not actually try it with the specific CAML query that filters managed metadata columns. It is now time to rectify this.

Building CAML queries

Now if you are a SharePoint developer worth your salt, you already know CAML, because there are mountains of documentation on this topic on MSDN as well as various blogs. But a useful shortcut for all you non-coders out there is to make use of a free tool called CAMLDesigner 2013. This tool, although unstable at times, is really easy to use, and in this section I will show you how I used it to create the CAML XML we need to filter the Process Owners list via the Organisation column.

After you have downloaded CAMLDesigner and successfully gotten it installed, follow these steps to build your query.

Step 1:

Start CAMLDesigner 2013 and on the home screen, click the Connections menu.

image

Step 2:

In the connections screen that slides out from the right, enter http://megacorp/iso9001 into the top textbox, then click the SharePoint 2013 and Web Services buttons. Enter the credentials of a site administrator account and then click the Connect icon at the bottom. If you successfully connect to the site, CAMLDesigner will show the site in the left hand navigation.

image  image

Step 3:

Click the arrow to the left of the Megacorp site and find the Process Owners list. Click it, and all of the fields in the list will be displayed as blue boxes below the These are the fields of the list section.

image

Step 4:

Drag the Organisation column to the These are the selected fields section to the right. Then do the same for the Assigned To column. If you look closely at the second image, you will see that the CAML XML is already being built for you below.

image     image

Step 5:

Now click on the Where menu item above the columns. Drag the Organisation column across to the These are the selected fields section. As you can see in the second image below, once dragged across, a textbox appears, along with a blue button with an operator. Also take note of the CAML XML being built below. You can see that it has added a <Where></Where> section.

image

image

image

Step 6:

In the textbox of the Organisation column you just dragged, type in one of the Megacorp organisations, e.g. Megacorp Burgers. Note how the XML changes…

image

Step 7:

Click the Execute button (the Play icon across the top). The CAML query will be run, and any matching data will be returned. In the example below, you can see that the user Teresa Culmsee is the process owner for Megacorp Burgers.

image

image

Step 8:

Copy the XML from the window to the clipboard. We now have the XML we need to add to the REST web service call. Exit CAMLDesigner 2013.

image

Building the REST query…

Armed with our newly minted CAML XML as shown below, we need to return to Fiddler and draft it into the final URL.

<ViewFields>
   <FieldRef Name='Organisation' />
   <FieldRef Name='AssignedTo' />
</ViewFields>
<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>

As a reminder, the URL that we had working in the previous post looked like this:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}

Let’s now munge them together by stripping the carriage returns from the XML and putting it between the <Query> and </Query> sections. This gives us the following large and scary looking URL.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields> <FieldRef Name='Organisation' /> <FieldRef Name='AssignedTo' /> </ViewFields> <Where> <Eq> <FieldRef Name='Organisation' /> <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value> </Eq> </Where></Query></View>"}

Are we done? Unfortunately not. If you paste this into Fiddler composer, Fiddler will get really upset and display a red warning in the URL textbox…

image

If, despite Fiddler’s warning, you try to execute this request, you will get a curt response from SharePoint in the form of an HTTP/1.1 400 Bad Request response with the message HTTP Error 400. The request is badly formed.

The fact that Fiddler is complaining about this URL before it has even been submitted to SharePoint allows us to work out the issue via trial and error. If you cut out a chunk of the URL, Fiddler is okay with it. For example, this trimmed URL is considered acceptable by Fiddler:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query>

But adding this little bit makes it go red again.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields> <FieldRef Name='Organisation' />

Any ideas what the issue could be? Well, it turns out that the use of spaces was the issue. I removed all the spaces from the URL above and, where I could not, I URL-encoded them as %20. Thus the above URL turned into the URL below, and Fiddler accepted it:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/>

So returning to our original big URL, it now looks like this (and Fiddler is no longer showing me a red angry textbox):

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

image

So let’s see what happens. We click the execute button. Woohoo! It works! Below you can see a single matching entry, and it appears to be the entry we saw in CAMLDesigner 2013. We can’t tell for sure, because the Assigned To column is returned as AssignedToID and we have to call another web service to return the actual username. We covered this issue and the web service to call extensively in part 8, but to quickly recap, we need to pass the value of AssignedToID to the http://megacorp/iso9001/_api/Web/GetUserById() web service. In this case, http://megacorp/iso9001/_api/Web/GetUserById(8), because the value of AssignedToId is 8.

The images below illustrate this. The first one shows the Process Owner for Megacorp Burgers – note that the value of AssignedToID is 8. The second image shows what happens when 8 is passed to the GetUserById web service call. Check the Title and LoginName fields.

image image
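
If you would rather see the whole round trip as code than as Fiddler screenshots, here is a rough Python sketch of the same two calls. It is illustrative only: it assumes your HTTP client is already authenticating to SharePoint (e.g. via NTLM – the plumbing is omitted), requests is an assumed library choice, and the form digest step is explained properly in part 9 of this series.

from urllib.parse import quote
import requests  # assumed HTTP client; authentication omitted for brevity

SITE = "http://megacorp/iso9001"
HDRS = {"Accept": "application/json;odata=verbose"}

# The CAML payload built in CAMLDesigner, wrapped in the {"ViewXml": ...}
# envelope that GetItems expects
view_xml = ('{"ViewXml":"<View><Query><ViewFields>'
            "<FieldRef Name='Organisation'/><FieldRef Name='AssignedTo'/>"
            "</ViewFields><Where><Eq><FieldRef Name='Organisation'/>"
            "<Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>"
            '</Eq></Where></Query></View>"}')

# quote() does the %20 munging for us - everything not listed in safe=
# gets percent-encoded, which here means just the spaces
query = quote(view_xml, safe='{}:"\'<>/=,')

# Form digest first (see part 9 for what this is and why it is needed)
info = requests.post(SITE + "/_api/contextinfo", headers=HDRS).json()
digest = info["d"]["GetContextWebInformation"]["FormDigestValue"]

# The CAML-in-REST call...
items = requests.post(
    SITE + "/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1=" + query,
    headers=dict(HDRS, **{"X-RequestDigest": digest})).json()
assigned_to_id = items["d"]["results"][0]["AssignedToId"]  # 8 in our example

# ...then resolve that id to an actual user
user = requests.get(SITE + "/_api/Web/GetUserById(%d)" % assigned_to_id,
                    headers=HDRS).json()
print(user["d"]["LoginName"])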

Conclusion

Okay, so now we have our web service URLs all sorted. In the next post we are going to modify the existing workflow. Right now it has four stages:

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

We will change it to the following stages:

  • Stage 1: Obtain Term Name (extracts the name of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get the X-RequestDigest (We will grab the request digest we need to do our HTTP POST to query the Process Owners list. If successful move to stage 3)
  • Stage 3: Get Process Owner (makes the REST web service call to grab the Process Owners for the organisation specified by the Term name from stage 1. Grab the value of AssignedToID and move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

One final note: after this epic journey we have taken, you might think that doing this in SharePoint Designer workflow should be a walk in the park. Unfortunately, this is not quite the case and, as you will see, there are a couple more hurdles to cross.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 9

This entry is part 9 of 13 in the series Workflow

Hi all and welcome to my series that aims to illustrate the trials and tribulations of SharePoint 2013 workflow to those who consider themselves citizen developers. In case you don’t want to go all the way back to part 1, a citizen developer is basically a user who creates new business applications themselves, without the use of developers (or IT in general). Since there is no Visual Studio in sight in this series, I think it’s safe to say that SharePoint 2013 workflow has the potential to be a popular citizen developer tool, but it is important that people know what they are in for.

We start part 9 of this series having just (finally!) assigned a task to a user in part 8. While this in itself is not particularly earth shattering, if you have followed this series to now, you will appreciate that we have had to navigate some serious potholes to get here. Along the way, though, it has become clear that there are some very powerful features available.

The workflow as it stands consists of four stages.

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

As mentioned at the end of part 8, one flaw in this workflow is that if the process owners list has a large number of entries, the workflow has to iterate through each process owner to find the one with a matching organisation. This is a bit of a concern because, in general, iterating through SharePoint lists in this way is not great for performance. In fact, SharePoint has an unfortunate heritage of newbie developers causing all sorts of disk and memory performance issues with code that iterates a list in a similar way.

So this post is going to explore how we can do better. How about changing the workflow behaviour so that rather than grabbing the entire process owners list, we grab just the entry we need?

But wait – didn’t you say something about this not working?

Now if you have dutifully read this series in the way I intend you to do, you might recall the issue that cropped up in parts 4 and 5. I pointed out that since the Organisation column we are using is a managed metadata column, we cannot use it to filter a list using REST/oData. So while the first query below would happily filter a list by a title of “something”, the second one will result in a big fat error.

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbyid(guid'0ffc4b79-1dd0-4c36-83bc-c31e88cb1c3a')/Items?$filter=Title eq 'something' 🙂

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbyid(guid'0ffc4b79-1dd0-4c36-83bc-c31e88cb1c3a')/Items?$filter=Organisation eq 'something' 🙁

So this is a pickle, isn’t it – how can we filter the process owners list by organisation when it’s not supported by REST/oData?

The thing about managed metadata columns…

Going back in time a bit, SharePoint 2010 was the first version with support for REST, and in SharePoint 2013 REST support was extended significantly. As you now know, it seems the managed metadata people never got that memo. However, one of the older methods that can be used to query lists, called Collaborative Application Markup Language (CAML for short), does support filtering on managed metadata columns.

CAML, in case you are not aware of it, has been used by SharePoint since the very first version. It is based on a defined XML schema that can be used to query lists and libraries – much like a SQL query does on a database table. Being XML, it is more verbose than a SQL statement and, for me, harder to read. As an example, the SQL statement “SELECT * from TABLE WHERE field=VALUE” would look something like:

<Query><Where><Eq><FieldRef Name='field' /><Value Type='Text'>VALUE</Value></Eq></Where></Query>

Turning our attention back to the Organisation column that we are having trouble with, a CAML query to bring back all documents tagged as owned by “Megacorp Burgers” would look something like this…

<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>

Note: By the way, if you want to prove that this works, use CAMLDesigner 2013 to connect to a list and apply a filter, and it will generate the CAML XML it used. I will cover this in the next post.

So here is where we are at. We definitely can filter a list with a managed metadata column by using the CAML language, but we cannot filter a list using managed metadata via the REST/oData methods that I outlined in part 4. I wonder if there is a way to embed a CAML query into a REST web service call?

Turns out there is… the only problem is that there is some more conceptual baggage required to understand it properly, so grab a strong coffee and let’s go…

A journey of CAML in REST…

A while back I came across an MSDN forum thread where Christophe Humbert asked whether CAML queries could be done via the REST API. Erik C. Jordan provided this answer:

POST https://<site>/_api/web/Lists/GetByTitle('[list name]')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query>[other CAML query elements]</Query></View>"}

Take a close look at the above URL. We are still talking to SharePoint via REST, and we are calling a method called GetItems. As part of the GetItems call, we see CAML XML inside some curly braces: @v1={"ViewXml":"<View><Query>[other CAML query elements]</Query></View>"}.

Hmmm – this looks to have potential. If I can embed a valid CAML query that filters list items based on a managed metadata column, we can very likely have the workflow do that using the Call HTTP Web Service workflow action.

So let’s test this web service and see if we can get it to work. Let’s try entering the above URL on the MegaCorp process owners site.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}

Uh-oh, error 400. Dammit, what now?

image

It turns out that we cannot use a browser to test this particular web service, because it requires an HTTP POST operation, and when we type a URL into a browser, we are performing an HTTP GET operation – hence the error 400. If you really want to know the difference between a GET and a POST in relation to HTTP, go and visit this link. But for the purpose of this article, we need a way to compose HTTP POST web service calls, and guess what – you already know exactly how to do it, because we covered it in part 7: the Fiddler composer function.
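
To make the GET versus POST distinction concrete, here is the same experiment in Python (an illustrative sketch only – requests is an assumed library choice and the authentication plumbing is omitted):

import requests  # assumed HTTP client; authentication omitted

url = ("http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')"
       '/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}')

# A GET is rejected outright - this is the browser experiment above
print(requests.get(url).status_code)   # 400

# A POST gets past the 400, but still needs the extra headers that the
# following steps add before SharePoint will accept the request
print(requests.post(url).status_code)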

So start up Fiddler and let’s craft ourselves a call to this web service…

Step 1:

Start Fiddler and click the Composer Tab. Paste in the web service call we just tried – http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}. Make sure you change the request from a GET to a POST by clicking the dropdown to the left of the URL.

image

Step 2:

Type the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below. If you recall the HTTP interlude from part 6, this tells SharePoint to bring back the data in JSON format rather than XML.

image

Step 3:

Click the execute button to execute the request, and then click the Response Headers tab as shown below to see what happened. As you can see, things did not go so well: we got a response code of HTTP/1.1 411 Length Required.

image

Hmm – so what are we missing here? It turns out that some HTTP queries require the use of a ‘Content-Length‘ field in the HTTP header. The standard for the HTTP protocol states that: “Any Content-Length greater than or equal to zero is a valid value”, so let’s add a value of zero to the request header.

Step 4:

Click the composer tab again and add the string “Content-length: 0” to the request header textbox as shown below and click execute again:

image

Checking the response, it looks like we are still not quite there, as we have another error code: HTTP/1.1 403 FORBIDDEN (The security validation for this page is invalid and might be corrupted. Please use your web browser’s Back button to try your operation again). *sigh* will this ever just work?

image

The reason for this error is a little more complex than the last one. It turns out that we are missing another required HTTP header in the POST request that we are crafting. This header has the cool-sounding name of X-RequestDigest, and it holds something called the form digest. What is the form digest? Here is what Nadeem Ishqair from Microsoft says:

The form digest is an object that is inserted into a page by SharePoint and is used to validate client requests. Validation of client requests is specific to a user, a site and time-frame. SharePoint relies on the form digest as a form of security validation that helps prevent replay attacks wherein users may be tricked into posting data to the server. As described on this page on MSDN, you can retrieve this value by making a POST request with an empty body to http://site/_api/contextinfo and extracting the value of the “d:FormDigestValue” node in the XML that the contextinfo endpoint returns.

So there you go – it is a security function that validates web service requests. So our workflow is going to have to make yet another web service call to handle this. We will make a POST request with an empty body to http://megacorp/iso9001/_api/contextinfo and then extract the value of the “d:FormDigestValue” node in the information returned.

This probably sounds as clear as mud, so let’s use Fiddler to do it so we know what we have to do in the workflow.

Step 5:

Start Fiddler and click the Composer Tab. Paste in the web service URL http://megacorp/iso9001/_api/contextinfo. Make sure you change the request from a GET to a POST and add the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below.

image

Step 6:

Click the execute button to execute the request and make sure the response headers tab is selected. Confirm that the response you get from the server is 200 OK.

image

Step 7:

Click the JSON button and look for an entry called FormDigestValue in the response.

image

Step 8:

Right click on the FormDigestValue entry and choose copy to get it into the clipboard.

image

Step 9:

Click on the composer tab again and paste the FormDigestValue into the Request Headers textbox as shown below. Replace the string “FormDigestValue =” with “X-RequestDigest: “ to make it the correct format needed as shown in the second image below.

image    image

Step 10:

Paste the original request into the URL textbox: http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"} and click Execute.

image

Check that the HTTP return code is 200 (OK) and then click the JSON tab to see what has come back. You should see a JSON result set that looks like the image below. If you examine the JSON data returned in this image, you will see it is exactly the same JSON structure that was returned when we used Fiddler in part 7.

image  image

Let’s pause here for a moment and reflect. If you have made it this far, you have pretty much nailed the hard bit. If you had an issue, fear not as there are a couple of common problems that are usually easy to rectify.

Firstly, if you receive an error HTTP/1.1 400 Bad Request with a message that looks something like “Invalid JSON. A colon character ‘:’ is expected after the property name ‘â’, but none was found.”, just double check the use of quotes (“”) in the URL. Sometimes when you paste strings from your browser or RSS reader, the quotes can get messed up because of autocorrect. Look closely at the URL below and note the quotes are angled:

{“ViewXml”:”<View><Query></Query></View>”}

To resolve this issue, simply replace the angled quotes with regular boring old quotes so they are not angled, and the problem will go away. Compare the string below to the one above to see what I mean…

{"ViewXml":"<View><Query></Query></View>"}

The second common problem is an HTTP/1.1 403 FORBIDDEN response with the message: “The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again”. If you see this error, your X-RequestDigest may have expired and you need to regenerate it by repeating steps 5 to 9 above. The other possibility is that you did not properly paste the FormDigest into the request header. Double check this as well.
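
If you would rather script this than drive Fiddler by hand, the following Python sketch reproduces steps 5 to 10 above: an empty-bodied POST to /_api/contextinfo to obtain the form digest, then the GetItems call with the digest in the X-RequestDigest header. It is illustrative only – requests is an assumed library choice and authentication (e.g. NTLM) is omitted – and note the straight quotes in the ViewXml string, as just discussed.

import requests  # assumed HTTP client; authentication omitted

SITE = "http://megacorp/iso9001"
HDRS = {"Accept": "application/json;odata=verbose"}

# Steps 5-8: an empty-bodied POST to /_api/contextinfo, then pull
# FormDigestValue out of the JSON that comes back
info = requests.post(SITE + "/_api/contextinfo", headers=HDRS).json()
digest = info["d"]["GetContextWebInformation"]["FormDigestValue"]

# Steps 9-10: replay the GetItems request with the digest in the
# X-RequestDigest header. The empty <Query></Query> means no filter yet
url = (SITE + "/_api/web/Lists/GetByTitle('Process%20Owners')"
       '/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}')
result = requests.post(url, headers=dict(HDRS, **{"X-RequestDigest": digest}))
print(result.status_code)                # expecting 200
print(result.json()["d"]["results"][0])  # the first process owner item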

Conclusion

Okay, so that was a rather large dollop of conceptual baggage I handed out in this post. You got introduced to the older method of querying SharePoint lists called CAML, and we have successfully been able to call a REST web service, pass in a CAML XML string and get back data. We learnt about the HTTP POST request and some of the additional HTTP headers that need to be sent, like Content-Length and X-RequestDigest, to make it all work. As an added bonus, we are all Fiddler composer gurus now.

However all we sent across was an empty CAML string. The string <View><Query></Query></View> pretty much says “give me everything and don’t filter”, which is not what we want. So in the next post, we will learn how to create a valid CAML string that filters by the Organisation column. Once we have successfully tested it, we will modify the workflow to use this method instead.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 7

This entry is part 7 of 13 in the series Workflow

Hi and welcome to part 7 of my series of articles around SharePoint 2013 workflows. We have been examining the trials and tribulations (mainly trials so far) of implementing a relatively simple document approval workflow for a mythical organisation called Megacorp Inc. In the last post, I started illustrating the first of two approaches to filtering a list based on a managed metadata column using the Call HTTP Web Service capability. If you have been following along with me, we just successfully called the SharePoint lists web service via REST. The diagram below shows where we are at… Now it is time to work with the data that is returned.

image

Very soon we are going to use the loop function of workflows to go through each process owner and check the GUID of the Organisation field. This is probably best done as a new workflow stage that only runs if the response code from the webservice is 200 (OK). So let’s make some minor changes to the workflow…

Step 1:

Add a new workflow stage and name it “Find Matching Process Owner”.

Step 2:

In the Transition to stage section of the new stage, add a Go to a stage action and set the stage as “End of Workflow”.

image

Step 3:

In the Transition to stage section of the “Get Process Owners” stage, delete the “Go to End of Workflow” action. In its place, add an “If any value equals value” condition.

Step 4:

Click the first value hyperlink of the newly added condition from step 3 and click the fx button. In the Define Workflow lookup dialog, choose Workflow Variables and Parameters as the Data source and choose the responseCode variable in the Field from Source dropdown. Click OK

image  image

Step 5:

Click the second value link of the newly added condition from step 3. Type in the value “OK” and press enter.

image

Step 6:

Under the If Variable: responseCode equals OK condition, add a Go to a stage action. Set the stage as Find Matching Process Owner (the stage created in step 1).

Step 7:

Under the Else condition, add a Go to a stage action. Set the stage as End of Workflow.

image

Ok, so now we have a new workflow stage to work with called Find Matching Process Owner. This is where we are going to manipulate dictionary variables – in particular, the ProcessOwnersList variable that contains the data returned by the call we made to the REST web service in part 6. The first thing we will do is count how many items are in the ProcessOwnersList dictionary, otherwise we won’t know when to stop the loop. This is where the Count Items in a Dictionary workflow action comes in. Let’s try it on the ProcessOwnersList variable now.

Step 8:

In the Find Matching Process Owner stage, add a Count Items in a Dictionary action. Click the dictionary hyperlink and choose Variable: ProcessOwnersList from the list of dictionary variables.

image

Now we should check to see if the value of count is what we are expecting. For your reference, here are the 9 entries currently in the Process Owners list (Organisation, then Assigned To)…

Megacorp Defense – Paul Culmsee
Megacorp Burgers – Teresa Culmsee
Megacorp Iron Man suits – Chris Tomich
Megacorp Gamma Radiation Experiments – Peter Chow
Megacorp Pharmaceutical – Hendry Khouw
Megacorp Fossil Fuels – Sam Zhou
Perth – Du Le
Sydney – Paul Culmsee
Alaska – Du Le

Step 9:

Add a Log to Workflow History action. Click the message hyperlink and click the ellipsis button. In the string builder dialog, type in “Process Owners count: “

image

Step 10:

Click the Add or Change Lookup button. In the Lookup for string dialog, choose Workflow Variables and Parameters from the data source dropdown and choose the variable count. Click OK. You should see the following text in the string builder dialog. Click OK. Review the workflow stage and confirm it looks like the second image below.

image  image

Publish the workflow and run it. Take a look at the workflow history and see what we get. Uh-oh… what now? It says we only have 1 item in the dictionary, but we have 9 process owners. What the…?

image

A JSON (and fiddler) interlude…

In part 6, I stressed the point that dictionary variables can contain any type of variable available in the SharePoint 2013 Workflow platform, including other dictionary variables! This now becomes important because of the way JSON works. To explain better, it is time for me to formally introduce you to Fiddler.

Fiddler is a multipurpose utility that sits between the web browser and web sites. It logs all HTTP(S) traffic and is used to view and debug web traffic. Any web developer worth their salt has Fiddler in their troubleshooting toolkit because of its ability to manipulate data as it travels between browser and server. Best of all, it can be used to compose your own HTTP requests.

So first up, install Fiddler and then start it. In your browser, head over to a web site like Google. Go back to Fiddler and you should see entries logged in the left hand panel. Click on an entry and click the Inspectors tab. From here you can see all the gory detail of the conversation between your web browser and the web site. Neat eh?

Snapshot

If you look in the tabs in the Fiddler window, you will see one labelled Composer. Let’s use it right now… Paste the webservice URL that we used in part 6 into the composer textbox as shown below. Also, paste the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below. If you recall the HTTP interlude from part 6, this tells SharePoint to bring back the data in JSON format, rather than XML.

image

Click execute and Fiddler will send the request to the server. It will also switch to inspector mode for you to see the request and the response as it happened. If you look closely at the response (by clicking the JSON tab), you will see a structure that looks a bit like the one I have pasted below. This is the same data that is now being stored in the ProcessOwnersList dictionary variable.

image
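As an aside, if you prefer scripting to GUI tools, the same request we composed in Fiddler can be reproduced in a few lines of Python using the requests library. The credentials below are placeholders; on-premises SharePoint commonly authenticates via NTLM, available through the requests_ntlm package:

    import requests
    from requests_ntlm import HttpNtlmAuth

    url = ("http://megacorp/iso9001/_vti_bin/client.svc/web/lists/"
           "getbytitle('Process Owners')/Items?$select=AssignedToId,Organisation")
    headers = {"Accept": "application/json;odata=verbose"}  # ask for JSON rather than XML
    response = requests.get(url, headers=headers,
                            auth=HttpNtlmAuth("MEGACORP\\user", "password"))
    print(response.status_code)  # expect 200
    print(response.json())       # the same JSON structure you see in Fiddler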

So why did the workflow report that there was only one item in the dictionary then? The short answer is: because there really is only one item in the dictionary – a dictionary called “d”. To understand better, take another look at the JSON structure below. What is the first entry in the JSON data? A section called “d”. If you were to collapse d, all other data would be inside it. Therefore, as far as the dictionary variable is concerned, there truly is only one entry.

- JSON
  + d

Note: In case you are wondering what the deal is with the whole “d” thing, given that it appears to be somewhat redundant: the answer is that Microsoft’s OData standard stipulates that all data returned is wrapped in an outer “d”. Why, you may ask? Well, you really don’t want to know, but if you must insist, it is for security reasons related to JavaScript (see, I told you that you didn’t want to know!).

So if there is only one entry in the dictionary, then how can we get to the data we need, buried deeper? The answer is that this is one of those “dictionary in a dictionary” situations that I described in part 6. The dictionary variable ProcessOwnersList has a single item. That item happens to be another dictionary with multiple items in it. So our first task is to get to the inner dictionary that contains the data we need!
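To make the nesting concrete, here is a trimmed sketch of the shape of that JSON, using the values from the single process owner entry we saw in part 6 (the real response has nine entries under results, and various metadata properties are omitted here):

    {
      "d": {
        "results": [
          {
            "AssignedToId": 7,
            "Organisation": {
              "Label": "14",
              "TermGuid": "e2f8e2e0-9521-4c7c-95a2-f195ccad160f",
              "WssId": 14
            }
          }
        ]
      }
    }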

Getting to the real count…

Now we will make use of the Get Item from a Dictionary action to get at the data under the d branch. Looking at the Fiddler JSON output above, we need to get to the children of the results branch.

Step 1:

In the Find Matching Process Owner stage, add a Get Item from a Dictionary action as the first action in the stage.

image

Step 2:

Click the item by name or path hyperlink and type in “d/results”. Note: this is case-sensitive, so it has to match the JSON data shown in the Fiddler screenshots. Click the dictionary hyperlink and choose the ProcessOwnersList dictionary variable as the source data.

image

Step 3:

Click the item hyperlink and choose to create a new variable (the bottom option in the dropdown). Call the variable InnerProcessOwnerList and make sure it is set to type dictionary.

image

Step 4:

Now we need to modify the count items action to count the new variable InnerProcessOwnerList. In the Count Items action below the action we just created, click the Variable: ProcessOwnersList and change it to InnerProcessOwnerList.

image

Right! Let’s retest this workflow by republishing it and running it. Now that’s more like it!!!

image

Building the loop…

Now we get to a powerful new feature of workflows in SharePoint 2013 – the ability to perform loops. What we need to do here is loop through the 9 process owners, and for each, compare the GUID of the Organisation column to the GUID on the document from which the workflow was triggered. If the term matches, we exit the loop and assign a task to the person in the Assigned To column.

Let’s go through the process step by step.

Step 1:

Make sure that your cursor is below the last action in the Find Matching Process Owner stage. Add a Set Workflow Variable action. Click the workflow variable hyperlink and choose to create a new variable. Call the variable loop and make it an integer. Click the value hyperlink and set it to zero “0”.

image  image

We are going to use this variable in the looping section of the workflow to get to each process owner. Its purpose will become clear soon…

Step 2:

In the ribbon, click the Loop button and choose Loop with Condition. This will add a loop step into the stage. The name “1” is not exactly meaningful, so let’s change it. Click the title bar of the loop and change the name to For each process owner…

image  image

Now we will set the condition that determines how long this loop will run. Note that we have a variable called count that stores the number of entries in the process owners list, and in step 1 we created a variable called loop. We will set the workflow to loop around while the value of loop is less than the value of count.

Step 3:

Click the leftmost value hyperlink. In the LoopCondition Properties dialog, click the fx button for the top value. In the Define Workflow Lookup dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop. Click the equals dropdown and choose Is less than. Finally, click the fx button for the bottom value. In the Define Workflow Lookup dialog, choose Workflow Variables and Parameters as the data source and find the variable called count.

image  image  image   image

image

Step 4:

Inside the newly created loop, add a Get Item from a Dictionary action. Click the item by name or path hyperlink and click the ellipses button to display the string builder. Type in a left bracket “(“ and then click the Add or Change lookup button. In the Lookup for String dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop  and click OK. Finally, type in the following string “)/Organisation/TermGuid”.

image

If you look closely at this string, it is referring to the TermGuid in the JSON data. The value of loop (currently 0) will be used to create the string. Hence “(0)/Organisation/TermGuid” will be used to grab the first Organisation GUID, “(1)/Organisation/TermGuid” the second, and so on…

Step 5:

Click the dictionary hyperlink and choose InnerProcessOwnerList.

Step 6:

Click the item hyperlink and choose to create a new variable. Call the variable ProcessOwnerTermGUID and set it to a string.

image  image

By this stage, we should have the value of the term GUID for the process owner. Let’s log the value to workflow history so we can confirm things are working…

Step 7:

Add a log to workflow history action and click the message hyperlink. Click the fx button and in the Lookup for String dialog, choose Workflow Variables and Parameters as the data source and find the variable called ProcessOwnerTermGUID.

Step 8:

Now we need to increment the value of the loop variable so it can select the next process owner from the InnerProcessOwnerList variable. Add a Do calculation workflow action, click the first value hyperlink and click the fx button. In the Lookup for Number dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop. Click the second value hyperlink and type in the value “1”.

image

Note that this action creates a new workflow variable (called calc1 by default) that stores the value of loop + 1. We will use this variable in the next workflow step.

Step 9:

Add a Set Workflow Variable action. Click the workflow variable hyperlink and choose loop. Click the value hyperlink and click the fx button. In the Lookup for Number dialog, choose Workflow Variables and Parameters as the data source and find the variable called calc1.

image

Right, that should be enough to test. Each iteration of the loop will log the Organisation term GUID for a process owner. It then increments the value of loop and does it again. Once the value of loop is equal to the value of the variable count, the loop finishes. So publish the workflow and let’s see what happens… Brilliant! Below the count of process owners (line 3), we have looped through the process owners list and extracted the GUID for the organisation term!

image
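If it helps to see the looping logic outside of SharePoint Designer, here is a rough Python analogue of the stage we have built so far. It is purely illustrative – the variable names mirror the workflow variables, and the sample data stands in for the d/results array returned by the web service:

    # inner_process_owner_list stands in for the InnerProcessOwnerList dictionary
    inner_process_owner_list = [
        {"AssignedToId": 7,
         "Organisation": {"TermGuid": "e2f8e2e0-9521-4c7c-95a2-f195ccad160f"}},
        # ...eight more entries in the real Process Owners list...
    ]

    count = len(inner_process_owner_list)  # Count Items in a Dictionary
    loop = 0                               # Set Workflow Variable
    while loop < count:                    # Loop with Condition: loop Is less than count
        # Get Item from a Dictionary: (loop)/Organisation/TermGuid
        term_guid = inner_process_owner_list[loop]["Organisation"]["TermGuid"]
        print(term_guid)                   # Log to Workflow History
        loop = loop + 1                    # Do Calculation, then Set Workflow Variable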

Conclusion…

Well, it has taken us a while to get here, but we now have all of the information we need to be able to assign a task to the process owner. If you look in the log above, the first entry was the GUID for the organisation assigned to the document this workflow was run against. Scanning the list of GUIDs from the process owners list, we can see that the matching GUID was on line 5. So in the next post, we will modify our loop to stop when it has a match, and we will examine what we need to do to assign a task to the appropriate process owner.

Until then, thanks for reading…

Paul Culmsee

HGBP_Cover-236x300.jpg

www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 6

This entry is part 6 of 13 in the series Workflow

Hi and welcome to part 6 of my series of articles aimed at demystifying various aspects of SharePoint 2013 workflows. We have been using the example of a document approval workflow for our mythical multinational, Megacorp Inc, and trying to create a workflow implementing the process below…

Snapshot_thumb3

Seems straightforward enough, but in part 3 we were foiled by the use of check in/check out on document libraries, and a completely useless error message didn’t help matters. We eventually worked around that issue, but in part 4 we got stuck on a bigger snag because of our chosen information architecture. The Organisation column we created is a managed metadata column, and it turns out that you cannot use a managed metadata column as a filter for a list (steps 2 and 3 above). In the last article, we took a detour into the world of dictionary variables and a very powerful new workflow action called “Call HTTP Web Service”. We learnt that in situations where a built-in workflow action does not cut it for you, you might be able to use Call HTTP Web Service to do what you need. This sets the scene for our next exciting instalment. Perhaps we can get around this managed metadata issue with one of SharePoint’s many web services? If so, which one do we need to use and why?

In this post and the next few, I am going to show you two ways that we can get around the problem of not being able to filter via managed metadata, using the Call HTTP Web Service capability. The first method is a little easier to build than the second, but it has a flaw that hopefully will become self-evident as we proceed. Having said this, I feel it is really important to cover both approaches, because each showcases different features and capabilities of SharePoint Designer 2013 workflows. Therefore, this article and the next two will show the easier but flawed way, and articles 9, 10, 11 and 12 will show what I think is the better way to go.

The workflow looping method…

The gist of the approach we are going to take is to:

  • Get the unique ID of the Organisation for the selected document in the Documents library
  • Using the SharePoint lists REST web service, load the Assigned To and Organisation columns from the Process Owners list and store them in a dictionary variable
  • Using the workflow looping capability, step through each item in the dictionary and find the first entry where the unique ID of the Organisation from step 1 matches the Organisation in Process Owners
  • For the matching entry, assign a task to the person mentioned in the Assigned To column.

Now to pull this off, we are going to bring together all of the topics that I have covered in this series. I am also going to be a little less verbose with screenshots, because by now some aspects of workflow creation using SharePoint Designer should be getting more familiar. Speaking of more familiar, let’s take a closer look at the lists web service again. In my second REST interlude in part 4, I demonstrated how you could specify the columns that you want to bring back from a web service call, rather than all columns. In the example below, I am showing how you can bring back just the Organisation and Assigned To columns from the Process Owners list (AssignedToId is a REST-specific name that represents the Assigned To column – more about that in part 8).

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation

Here is the XML for a single process owner entry… Note that we never get to see the name of the Organisation in the XML for the Organisation column (for that matter, we don’t see the name of the person in the Assigned To column either – an issue I will deal with later). Instead, we have the GUID for the Organisation in the <d:TermGuid> section.

  <content type="application/xml">
    <m:properties>
      <d:Organisation m:type="SP.Taxonomy.TaxonomyFieldValue">
        <d:Label>14</d:Label>
        <d:TermGuid>e2f8e2e0-9521-4c7c-95a2-f195ccad160f</d:TermGuid>
        <d:WssId m:type="Edm.Int32">14</d:WssId>
      </d:Organisation>
      <d:AssignedToId m:type="Edm.Int32">7</d:AssignedToId>
    </m:properties>
  </content>

Now, also in part 4, I explained the Organisation_0 hidden column and showed that it stores both the organisation name and the GUID of that organisation. So if Organisation has been set to Megacorp Burgers for a document, the value of Organisation_0 for that document would be:

Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f

The common element between the XML from the Process Owners list and the value of Organisation_0 from the Documents library is the term GUID. Therefore, if we can extract the GUID part of Organisation_0, we can use it to search the Process Owners list and find the entry where the GUID specified in the <d:TermGuid> section matches. So first up, let’s clean things up, then use some workflow actions to get hold of the GUID from the Organisation_0 column.

Getting the GUID…

Step 1:

Turning our attention back to the Process Owners Approval workflow, let’s delete our existing workflow actions, workflow variables and start afresh. Click on any existing workflow actions and choose Delete Action from the dropdown menu as shown below. To delete variables, click the local variables ribbon icon and remove any listed…

image  image  image

Now you should be looking at a clean workflow.

Step 2:

Add the workflow action Find substring in string. To complete the configuration of this action, click the substring hyperlink and add a pipe symbol “|”. Click the string hyperlink, the fx button and from Current Item, choose Organisation_0 as shown below…

image  image

image

The result of this workflow action is that the position of the pipe symbol in the string will be stored in a variable called index. For example, if you count the number of characters until you get to the pipe symbol in the string “Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”, the answer is 17.

Our next step is to grab all of the characters in the string after the pipe symbol, because that is the GUID we need. To do this, we will use another workflow action called Extract substring from index of string. This action takes a string and an index position, and returns all characters to the right of the index. Thus, with the string “Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”, if we start at position 17 we will end up with “|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”. This is not quite right because we do not want the pipe symbol, so we will use another workflow action called Do Calculation to add 1 to the index variable first.
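If it is easier to picture in code, the combination of these three actions boils down to something like the Python below. This is illustrative only – Python counts characters from zero, whereas the workflow actions count as described above, but the net effect is the same:

    organisation_0 = "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f"
    index = organisation_0.find("|")        # Find substring in string
    term_guid = organisation_0[index + 1:]  # Do Calculation (+1), then Extract substring from index of string
    print(term_guid)                        # e2f8e2e0-9521-4c7c-95a2-f195ccad160f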

Step 3:

Add the Do Calculation action, click the value hyperlink and click the fx button. Change the data source to Workflow Variables and Parameters and choose the variable called index. Click the value hyperlink and type in the number 1.

image

image

The net result of this is we have a variable called calc that stores the position immediately after the pipe symbol in Organisation_0.

Step 4:

Add the Extract substring from index of string workflow action. Click the string hyperlink, the fx button and from Current Item, choose Organisation_0. Click the “0” hyperlink next to “starting from” and click the fx button. Change the data source to Workflow Variables and Parameters and choose the variable called calc. Finally, click on Variable: substring and choose to Create a new variable… and call it TermGUID as shown below…

image  image

At this point, it might be handy to log the value of TermGUID to the workflow history to make sure that things are working as we expect. We can delete this step later…

Step 5:

Add a log to workflow history action and log the value of TermGUID. The final workflow should look like this…

image

Step 6:

Publish this workflow, confirm there are no errors and then run it against a document in the documents library. Wohoo! We now have the GUID!

image

Using stages…

Now that we have the GUID, it makes sense to turn this sequence of actions into its own workflow stage. Then we can add a new stage for the rest of the workflow and add some error-checking logic.

Step 1:

Click the stage header and rename the stage to Obtain Term GUID.

image

Step 2:

Click outside the stage and from the ribbon, click the Stage icon. A new stage will be added to the workflow. Call this stage Get Process Owners.

image

Now let’s create the logic that connects up the stages. We will set it up so that we only move to the Get Process Owners stage if the TermGUID variable has a value. After all, if there is not a valid GUID, there is no point continuing the workflow.

Step 3:

In the Obtain Term GUID stage, select the Go to End of Workflow action and delete it. In the ribbon, click the Condition button and choose If any value equals value from the drop down menu. Confirm that the condition section has been added to the Transition to stage section of the workflow stage…

image   image

image

Step 4:

Click the value hyperlink, click the fx button and choose Workflow Variables and Parameters from the Data source dropdown. Find the TermGUID variable in the Field from source dropdown. Click the equals hyperlink and from the dropdown, choose “is not empty”.

image  image  image

Step 5:

Click on the top “Insert go-to actions with conditions” section, and add a Go to a Stage action. From the stage dropdown, choose “Get Process Owners”

image

Step 6:

Click on the bottom “Insert go-to actions with conditions” section, and add a Go to a Stage action. From the stage dropdown, choose “End of Workflow”. The complete workflow should look like the image below:

image

A HTTP interlude…

Our next task is to get all of the process owners into a dictionary variable. Before we do this, I am going to give you a little lesson on how the HTTP protocol works, because we are literally going to be hand-crafting our own request, so it is handy to understand the basics.

When your browser makes a request to a website or web service, it does not just say “Gimme this URL”. Often the server will change its behaviour based on the nature of the request. For example, if the requestor is a mobile device, the server will send back different HTML compared to a PC browser. So how can the server tell if a request is made from a mobile device versus a PC? The answer is that when the browser makes a request, it sends additional information in the form of request headers. Request headers are used for all sorts of things, and we are going to need to make use of them. Why? Remember in part 4 that I mentioned the JSON data format, and that we need to tell SharePoint that any data it sends us has to be in JSON format. Here is another of my dodgy diagrams explaining this by example…

Snapshot

Technically, we have to send the string “Accept: application/json;odata=verbose” in the request header to make this happen. So let’s see how we can put this request together via SharePoint Designer. Just to remind you, the URL of the web service to get all of the process owners is:

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation
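Putting the URL and the header together, the raw request we are about to build in SharePoint Designer looks something like this:

    GET /iso9001/_vti_bin/client.svc/web/lists/getbytitle('Process%20Owners')/Items?$select=AssignedToId,Organisation HTTP/1.1
    Host: megacorp
    Accept: application/json;odata=verbose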

Crafting the request…

The first thing we need to do is create the request header that tells SharePoint to return the data in JSON format. This is done by creating a dictionary variable.

Step 1:

In the Get Process Owners workflow stage, add a Build Dictionary action. Click the this hyperlink to display the Build Dictionary window. Click the Add button and type “Accept” into the Name textbox and application/json;odata=verbose into the Value textbox. Click OK twice, then click the Variable: dictionary hyperlink and create a new variable called RequestHeader.

image  image  image

image  image  image
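In effect, the RequestHeader variable is nothing more than a one-entry dictionary, which you could picture like this:

    { "Accept": "application/json;odata=verbose" }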

Step 2:

Add a Call HTTP Web Service action and then click to select it as shown below. If you look at the parameters you can set, there is no mention of request header. To set it, click the Advanced Properties icon in the ribbon. In the Call HTTP Web Service Properties dialog box, click the RequestHeaders parameter and in the drop down list to the right of it, choose the RequestHeader variable created in step 1. Click OK. Now the request for JSON format has been set.

image   image

image  image

Step 3:

Click on the “this” hyperlink to the left of “HTTP web service”. This will bring up the HTTP Web Service Details screen. At this point we could paste in the URL above, but we are going to do this a little smarter than that. Click the ellipsis next to the textbox to bring up the String Builder dialog box.

image  image

Click the Add or Change Lookup button and in the Data source dropdown, choose Workflow context. This data source comes built in with any workflow you create and, as you will see, contains some very handy information that we can use in our workflows. From the Field from source dropdown, choose Current Site URL and click OK. What this will do is take whatever site the workflow is run from and bring it back as a string – in this case http://megacorp/iso9001/. This matters when you want to use the workflow on another site, such as when moving from development to production: because we use the Current Site URL workflow context, we are not hard-coding the site name into the workflow.

image  image  image

Anyhow, now that we have the site name, let’s complete the rest of the URL. In the string builder dialog, add “_vti_bin/client.svc/web/lists/getbytitle('Process Owners')/Items?$select=AssignedToId,Organisation” and click OK.

image

Our call HTTP Web service now looks like this:

image

Now we are expecting a JSON data feed as a response to this request, so we need to create another dictionary variable to handle it.

Step 4:

Click the response hyperlink and choose to Create a new variable and call it ProcessOwnersList.

image  image

image

Right! At this point, we have built the Call HTTP Web Service action and we should test things to make sure it is working. If you look closely at the Call HTTP Web Service action, one of the variables that gets created is called responseCode, which is how the HTTP protocol reports whether the request worked or not. If the response code is 200 (OK), then the query worked. So let’s log the response code to the workflow history and run a test.

Step 5:

Add a Log to History List action. Click the message hyperlink and click the fx button. In the Lookup for String dialog box, choose Workflow Variables and Parameters from the Data source dropdown, choose responseCode from the Field from source dropdown and click OK.

image

Step 6:

In the Transition to stage section, add a Go to a Stage action and set the stage as End of Workflow. Click OK and review the workflow. It should look like the screen below.

image

Testing our progress and next steps…

Publish the workflow, run it and check the results in workflow history. Wohoo! Our HTTP call worked! Note the OK in the workflow history!

image

At this point I will stop with this post, as it is getting rather long and we still have a bit to do. Although we know that the HTTP call worked, we have not looked at the data that came back. In the next post, we will use some more workflow actions to loop through the data returned to find the matching process owner.

Until then, thanks for reading…

Paul Culmsee

HGBP_Cover-236x300.jpg

www.hereticsguidebooks.com
