 

Oct 30 2014

So what is this newfangled apps model anyway and why do I care? (part 1)


Hiya

I’ve been meaning to write about the topic of the apps model of SharePoint 2013 for a while now, because it is a topic I am both fascinated and slightly repulsed by. While lots of really excellent material is out there on the apps model (not to mention a few good rants by the usual suspects as well), it is understandably written by developers tasked with making sense of it all, so they can put the model into practice delivering solutions for their clients or organisations.

I spent considerable time reading and researching the various articles and videos on this topic produced by Microsoft and the broader SharePoint community and made a large IBIS map of it. As I slowly got my head around it all, subtle but significant implications began to emerge for me. The more I got to know this topic, the more I realised that the opportunities and risks of the apps model hold many important insights and lessons for how organisations should be thinking strategically about SharePoint. In other words, it is not so much about the apps model itself, but what the apps model represents for organisations that have invested in the SharePoint platform.

So these posts are squarely aimed at the business camp. Therefore I am going to skip all sorts of things that I don’t deem necessary to make my points. Developers in particular may find it frustrating that I gloss over certain things, give not quite technically correct explanations and focus on seemingly trivial matters. But like I said, you are not my audience anyway.

So let’s see if we can work out what motivated Microsoft to head in this direction and make such a significant change. As always, context is everything…

As it once was…

I want you to picture Microsoft in 2011. SharePoint 2010 has come out to positive reviews and has well and truly cemented itself in the market. It sits in the right place in multiple Gartner magic quadrants, demand for talent is outstripping supply and many organisations are busy embarking on costly projects to migrate from their legacy SharePoint 2007 and 2003 deployments, on the basis that this version has fixed all the problems and that they will definitely get it right this time around. As a result, SharePoint is selling like hotcakes and is about to crack the 2 billion dollar revenue barrier for Microsoft. Consultants are also doing well at this time, since someone has to help organisations get all of that upgrade work done. Life is good… allegedly.

But even before the release of SharePoint 2010, winds of change were starting to blow. In fact, way back in 2008, at my first ever talk at a SharePoint conference, I showed the Microsoft pie chart of buzzwords and asked the crowd what other buzzwords were missing. The answer that I anticipated and received was of course “cloud”, which was good because I had created a new version of the pie and invited Microsoft to license it from me. Unfortunately no-one called.


Winds of change…

While my cloud picture was aimed at a few cheap laughs at the time, it holds an important lesson. Early in the release cycle of SharePoint 2007, cloud was already beginning to influence people’s thinking. Services traditionally hosted within organisations quickly began to appear online, requiring a swipe of the credit card each month from the opex budget, which made CFOs happy. A good example is Dropbox, which was founded in 2008 and by 2010 had won over the hearts and minds of many people who were using FTP. Point solutions such as Salesforce appeared, which further demonstrated how the competitive landscape was starting to heat up. These smaller, more nimble organisations were competing successfully on the basis of simplicity and a focus on doing one thing well, while taking implementation complexity away.

Now while these developments were on Microsoft’s radar, there was really only one company that seriously scared them. That was Google via their Google Docs product. Here was a company just as big and powerful as Microsoft, playing in Microsoft’s patch using Microsoft’s own approach of chasing the enterprise by bundling products and services together. This emerged as a credible alternative to not only SharePoint, but to Office and Exchange as well.

Some of you might be thinking that Apple was just as big a threat to Microsoft as Google. But Microsoft viewed Apple through the eyes of envy, as opposed to a straight-out threat. Apple created new markets that Microsoft wanted a piece of, but Apple’s threat to their core enterprise offerings remained limited. Nevertheless, Microsoft’s strong case of crimson green Apple envy did have a strategic element. Firstly, someone at Microsoft decided to read the disruptive innovation book and decided that disruption was obviously cool. Secondly, they saw the power of the app store and how quickly it enabled a developer ecosystem and community to emerge, which created barriers for competitors wanting to enter the market later.

Meanwhile, deeper in the bowels of Microsoft, two parallel problems were emerging. Firstly, it was taking an eternity to work through an increasingly large backlog of tech support calls for SharePoint. Clients would call, complaining of slow performance, broken deployments after updates, unhandled exceptions and so on. More often than not though, these issues were not caused by the base SharePoint platform, but by a combination of SharePoint and custom code that leaked memory, chewed CPU or just plain broke. Troubleshooting and isolating the root cause was very difficult, which led to the second problem. Some of Microsoft’s biggest enterprise customers were postponing or not bothering with upgrades to SharePoint 2010, deeming it too complex, too costly and not worth the trouble. Others were simply too scared to mess with what they had.

A perfect storm of threats


So to sum up the situation, Microsoft were (and still are) dealing with five major forces:

  • Changing perceptions to cloud technologies (and the opex pricing that comes with it)
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange
  • An increasing number of smaller point solution players who chip away at SharePoint features with cheaper and easier to use offerings
  • A serious case of Apple envy and in particular the rise of the app and the app marketplace
  • Customers unable to contend with the ever increasing complexity of SharePoint and putting off upgrades

So what would you do if you were Microsoft? What would your strategy be to thrive in this paradigm?

Now Microsoft is a big organisation, which affords it the luxury of engaging expensive management consultants, change managers and corporate coaches. It doesn’t take an MBA to realise that just a couple of these factors alone combine as a threat to the future of SharePoint, but no doubt lots of strategic workshops were had anyway, with the associated whiteboard diagrams, post-it notes, dodgy team building games and more than one SWOT analysis to confirm that the strategic threats were a clear and present danger. The conclusion drawn? Microsoft had to put cloud at the centrepiece of their strategy. After all, how else can you bring the fight to the annoying cloud upstarts and stave off the serious Google threat, all the while reducing the complexity burden on customers?

A new weapon and new challenges…

In 2011, Microsoft debuted Office365 as the first salvo in their quest to mitigate threats and take on their competitors at their own game. It combined Exchange, Lync and SharePoint 2010, packaging them up in a per-user, per-month subscription. The value proposition for some of Microsoft’s customers was pretty compelling: up-front capital costs reduced significantly, and they could now get the benefits of better scalability and bigger limits on things like mailboxes, while procurement and deployment were pretty easy compared to doing it in-house. Given the heritage of SharePoint, Exchange and Lync, Microsoft was suddenly competitive enough to put Google firmly on the back foot. My own business dumped gmail and took up Office365 at this time, and has used it since.

Of course, there were many problems that had to be solved. Microsoft was now switching from a software provider to a service provider, which necessitated new thinking, processes, skills and infrastructure internally. But outside of Microsoft there were bigger implications. The change from software provider to service provider did not go down well with the many Microsoft partners who had performed that role prior. It also freaked out a lot of sysadmins who suddenly realised their job of maintaining servers was changing (many are still in denial to this day). But more importantly, there was a big implication for development and customisation of SharePoint. This all happened mid-way through the life-cycle of SharePoint 2010, and that version was not fully architected for cloud. First and foremost, Microsoft were not about to transfer the problem of dodgy 3rd party customisations onto their servers. Recall that they were getting heaps of support calls that were not core SharePoint issues, but were caused by custom code written by 3rd parties. Imagine that in a cloud scenario where multiple clients share the same servers: one client’s dodgy code could have a detrimental effect on everybody else, affecting SLAs while doing nothing to solve the core problem of Microsoft wearing the cost and blame, via tech support calls, for problems not of their doing.

So with Office365, Microsoft had little choice but to disallow the dominant approach to SharePoint customisation that had been used for a long time. Developers could no longer use the methods they had come to know and love if a client wanted to use Office 365. This meant that the consultancies who employed them would have to change their approach too, and customers would have to adjust their expectations as well. Office365 was now a much more restricted environment than the freedom of on-premises.

Is it little wonder then, that one of Microsoft’s big focus areas for SharePoint 2013 was to come up with a way to redress the balance. They needed a customisation model that allowed a consistent approach, whether you chose to keep SharePoint on-premises or move to the cloudy world of Office 365. As you will see in the next post, that is not a simple challenge. The magnitude of change required was going to be significant, and some pain was going to have to happen all around.

Coming next…

So with that background set, I will spend time in the next post explaining the apps model in non-technical terms, showing how it helps to address all of the above issues. This is actually quite a challenge, but with the right dodgy metaphors, it’s definitely possible. :-) Then we will take a more critical viewpoint of the apps model, and finally, see what this whole saga tells us about how we should be thinking about SharePoint in the future…

thanks for reading

Paul Culmsee



Jun 04 2014

Rewriting the knowledge management rulebook… The story of “Glyma” for SharePoint


“If Jeff ever leaves…”

I’m sure you have experienced the “Oh crap” feeling where you have a problem and Jeff is on vacation or unavailable. Jeff happens to be one of those people who’s worked at your organisation for years and has developed such a deep working knowledge of things, it seems like he has a sixth sense about everything that goes on. As a result, Jeff is one of the informal organisational “go to guys” – the calming influence amongst all the chaos. An oft cited refrain among staff is “If Jeff ever leaves, we are in trouble.”

In Microsoft’s case, this scenario is quite close to home. Jeff Teper, who has been an instrumental part of SharePoint’s evolution, is moving to another area of Microsoft, leaving SharePoint behind. The implications of this are significant enough that I can literally hear Bjorn Furuknap’s howls of protest all the way from here in Perth.

So, what is Microsoft to do?

Enter the discipline of knowledge management to save the day. We have SharePoint, and with all of that metadata and search, we can ask Jeff to write down his knowledge “to get it out of his head.” After all, if we can capture this knowledge, we can then churn out an entire legion of Jeffs and Microsoft’s continued SharePoint success is assured, right?

Right???

There is only one slight problem with this incredibly common scenario that often underpins a SharePoint business case… the entire premise of “getting it out of your head” is seriously flawed. As such, knowledge management initiatives have never really lived up to expectations. While I will save a detailed explanation as to why this is so for another post, let me just say that Nonaka’s SECI model has a lot to answer for as it is based on a misinterpretation of what tacit knowledge is all about.

Tacit knowledge is expert knowledge that is often associated with intuition and cannot be transferred to others by writing it down. It is the “spider senses” that experts often seem to have when they look at a problem and see things that others do not. Little patterns, subtleties or anomalies that are invisible to the untrained eye. Accordingly, it is precisely this form of knowledge that is of the most value in organisations, yet is the hardest to codify and most vulnerable to knowledge drain. If tacit knowledge could truly be captured and codified in writing, then every project manager who has ever studied PMBOK would have flawless projects, because the body of knowledge is supposed to be all the codified wisdom of many project managers and the projects they have delivered. There would also be no need for Agile coaches, Microsoft’s SharePoint documentation should result in flawless SharePoint projects and reading Wictor’s blog would make you a SAML claims guru.

The truth of tacit knowledge is this: you cannot transfer it; it can only be acquired. This is otherwise known as the journey of learning!

Accountants are presently scratching their heads trying to figure out how to measure tacit knowledge. They call it intellectual capital, and the reason it is important to them is that most of the value of organisations today is classified on the books as “intangibles”. According to the book Balanced Scorecard, a company’s physical assets accounted for 62% of its market value in 1982, 38% of its market value in 1992 and only 21% in 2003. This is in part a result of the global shift toward knowledge economies and the resulting rise in the value of intellectual capital. Intellectual capital is the sum total of the skills, knowledge and experience of staff and is critical to sustaining competitiveness, performance and ultimately shareholder value. Organisations must therefore not only protect, but extract maximum value from their intellectual capital.


Now consider this. We are in an era where baby boomers are retiring, taking all of their hard-earned knowledge with them. This is often referred to as “the knowledge tsunami”, “the organisational brain drain” and the more nerdy “human capital flight”. The issue of human capital flight is a major risk area for organisations. Not only is the exodus of baby boomers an issue, but there are challenges around recruitment and retention of a younger, technologically savvy and mobile workforce with a different set of values and expectations. One of the most pressing management problems of the coming years is the question of how organisations can transfer the critical expertise and experience of their employees before that knowledge walks out the door.

The failed solutions…

After the knowledge management fad of the late 1990’s, a lot of organisations did come to realise that asking experts to “write it down” only worked in limited situations. As broadband came along, enabling the rise of rich media services like YouTube, a digital storytelling movement arose in the early 2000’s. Digital storytelling is the process by which people share stories and reflections while being captured on video.

Unfortunately though, digital storytelling had its own issues. Users were not prepared to sit through hours of footage of an expert explaining their craft or reflecting on a project. To address this, the material was commonly edited down to create much smaller mini-documentaries lasting a few minutes – often by media production companies, so the background music was always nice and inoffensive. But this approach also commonly failed. One reason for failure was well put by David Snowden when he said “Insight cannot be compressed”. While there was value in the edited videos, much of the rich value within them was lost. After all, how can one judge ahead of time what someone else will find insightful? The other problem with this approach was that people tended not to use the videos. There was little means for users to find out these videos existed, let alone watch them.

Our Aha moment

In 2007, my colleagues and I started using a sensemaking approach called Dialogue Mapping in Perth. Since that time, we have performed dialogue mapping across a wide range of public and private sector organisations in areas such as urban planning, strategic planning, process reengineering, organisational redesign and team alignment. If you have read my blog, you would be familiar with dialogue mapping, but just in case you are not, it looks like this…

Dialogue Mapping has proven to be very popular with clients because of its ability to make knowledge more explicit to participants. This increases the chances of collective breakthroughs in understanding. During one dialogue mapping session a few years back, a soon-to-be retiring, long-serving employee recalled a project from thirty years prior that he realised was relevant to the problem being discussed. This same employee was spending a considerable amount of time writing procedure manuals to capture his knowledge. No mention of this old project was made in the manuals he spent so much time writing, because there was no context to it when he was writing it down. In fact, if he had not been in the room at the time, the relevance of this obscure project would never have been known to the other participants.

My immediate thought at the time when mapping this participant was “There is no way that he has written down what he just said”. My next thought was “Someone ought to give him a beer and film him talking. I can then map the video…”

This idea stuck with me and I told this story to my colleagues later that day. We concluded that asking our retiring expert to write his “memoirs” was not making the best use of his limited time. The dialogue mapping session illustrated plainly that much valuable knowledge was not being captured in the manuals. As a result, we seriously started to consider the value of filming this employee discussing his reflections on all of the projects he had worked on, as per the digital storytelling approach. However, rather than create ‘mini documentaries’, we would utilise the entire footage and visually map the rationale using Dialogue Mapping techniques. In this scenario, the map serves as a navigation mechanism and the full video content is retained. By clicking on a particular node in the map, the video is played from the time that particular point was made. We drew a mock-up of the idea, which looked like the picture below.

[Image: mock-up of a dialogue map used to navigate video footage]

While thinking the idea would be original and cool to do, we also saw several strategic advantages to this approach…

  • It allows the user to quickly find the key points in the conversation that are of value to them, while presenting the entire rationale of the discussion at a glance.
  • It significantly reduces the codification burden on the person or group with the knowledge. They are not forced to put their thoughts into writing, which enables more effective use of their time.
  • The map and video content can be linked to the in-built search and content aggregation features of SharePoint.
    • Users can enter a search from their intranet home page and retrieve not only traditional content such as documents, but now will also be able to review stories, reflections and anecdotes from past and present experts.
  • The dialogue mapping notation, in combination with the Glyma rationale database also lends itself to more advanced forms of queries. Consider the following examples:
    • “I would like any ideas from lessons learnt discussions in the Calgary area”
    • “What pros or cons have been made about this particular building material?”
  • The applicability of the approach is wide.
    • Any knowledge-related industry could take advantage of it easily because it fits into existing information systems like SharePoint, rather than creating an additional information silo.

This was the moment the vision for Glyma (pronounced “glimmer”) was born…

Enter Glyma…

Glyma (pronounced ‘glimmer’) is a software platform for ‘thought leaders’, knowledge workers, organisations, and other ‘knowledge economy participants’ to capture and trade their knowledge in a way that reduces effort but preserves rich context. It achieves this by providing a new way for users to visually capture and link their ideas with rich media such as video, documents and web sites. As Glyma is a very visually oriented environment, it’s easier to show Glyma than to talk about it.

[Images: a Glyma map of a TED talk on education; a Glyma map of SPC14 Vegas videos on hybrid SharePoint deployments]

What you’re looking at in the first image above are the concepts and knowledge that were captured from a TED talk on education augmented with additional information from Wikipedia. The second is a map that brings together the rationale from a number of SPC14 Vegas videos on the topic of Hybrid SharePoint deployments.

Glyma brings together different types of media, like geographical maps, video, audio, documents etc. and then “glues” them together by visualising the common concepts they exemplify. The idea is to reduce the burden on the expert for codifying their knowledge, while at the same time improving the opportunity for insight for those who are learning. Glyma is all about understanding context, gaining a deeper understanding of issues, and asking the right questions.

We see that depending on your focus area, Glyma offers multiple benefits.

For individuals…

As knowledge workers, our task is to gather and learn information, sift through it all, and connect the dots between the relevant pieces. We create our knowledge by weaving together all this information. This takes place through reading articles, explaining on napkins, diagramming on whiteboards and so on. But no one observes us reading, people throw away napkins, and whiteboards are wiped clean for re-use. Our journey is too “disposable”; people only care about the “output” – that is, until someone needs to understand our “quilt of information”.

Glyma provides end users with an environment to catalogue this journey. The techniques it incorporates help knowledge workers with learning and “connecting the dots”, or as we know it, synthesising. Not only does it help us with these two critical tasks, it then provides a way for us to get recognition for that work.

For teams…

Like the scenario I started this post with, we’ve all been on the giving and receiving end of it: that call to Jeff, who has gone on holiday for a month prior to starting his promotion, and now you need to know the background to an issue that has arisen on your watch. Whether you were the person under pressure at the office thinking, “Jeff has left me nothing of use!”, or you are Jeff trying to enjoy your new promotion thinking, “Why do they keep on calling me?!”, it’s an uncomfortable situation for all involved.

Because Glyma provides a medium and techniques that aid and enhance the learning journey, it can then act as the project memory long after the project has completed and the team members have moved onto their next challenge. The context and the lessons it captures can then be searched and used both as a historical look at what has happened and, more importantly, as a tool for improving future projects.

For organisations…

As I said earlier, intangible assets now dominate the balance sheets of many organisations. Where in the past, we might have valued companies based on how many widgets they sold and how much they have in their inventory, nowadays intellectual capital is the key driver of value. Like any asset, organisations need to extract maximum value from intellectual capital and in doing so, avoid repeat mistakes, foster innovation and continue growth. Charles G. Sieloff summed this up well in the name of his paper, “if only HP knew what HP knows”.

As Glyma aids, enhances, and captures an individual’s learning journey, that journey can now be shared with others. With Glyma, learning is no longer a silo, it becomes a shared journey. Not only does it do this for individuals but it extends to group work so that the dynamics of a group’s learning is also captured. Continuous improvement of organisational processes and procedures is then possible with this captured knowledge. With Glyma, your knowledge assets are now tangible.

Lemme see it!

So if you have read this far, I assume that you would like to take a look. Well, as luck would have it, we put out a public Glyma site the other day that contains some of my own personal maps. The maps on the SP2013 apps model and hybrid SP2013 deployments in particular represent my own learning journey, so they should help you if you want a synthesis of all the pros and cons of these issues. Be sure to check the videos in the getting started area of the site, and make sure you try the search! :-)


I hope you like what you see. I have a ton of maps to add to this site, and very soon we will be inviting others to curate their own maps. We are also running a closed beta, so if you want to see this in your organisation, go to the site and then register your interest.

All in all, I am super proud of my colleagues at Seven Sigma for being able to deliver on this vision. I hope that this becomes a valuable knowledge resource for the SharePoint community and that you all like it. I look forward to seeing how history judges this… we think Glyma is innovative, but we are biased! :)

 

Thanks for reading…

Paul Culmsee

www.glyma.co

www.hereticsguidebooks.com



Feb 05 2014

Tips for using SPD Workflows to talk to 3rd party web services


Hi all

One workflow action that anyone getting into SharePoint Designer 2013 workflow development needs to know is the Call HTTP Web Service action. This action opens up many possibilities for workflow developers and, in some ways, turns SPD workflows into an option to be taken seriously for certain types of development in SharePoint – particularly in Office 365, where your customisation options are more restrictive.

Not so long ago, I wrote a large series of posts on the topic of calling web services within workflows and was able to get around some issues encountered when utilising Managed Metadata columns. Fabian Williams and Andrew Connell have also done some excellent work in this area. More recently I have turned my attention to using 3rd party cloud services with SharePoint to create hybrid scenarios. After all, there are tons of fit-for-purpose solutions for various problems in cloud land, and many have an API that supports REST/JSON. As a result, they are accessible to our workflows, which makes for some cool possibilities.

But if you try this yourself, you find out fairly quickly that there are three universal realities you have to deal with:

  1. Debugging SPD workflows that call web services is a total bitch
  2. SPD Workflows are very fussy when parsing the data returned by web services
  3. Web services themselves can be very fickle

In this brief post, I thought I might expand on each of these problem areas with some pointers and examples of how we got around them. While they may not be applicable or usable for you, they might give you some ideas in your own development and troubleshooting efforts.

Debugging SharePoint Designer Workflows…

The first issue that you are likely to encounter is a by-product of how SharePoint 2013 workflows work. To explain, here is my all-time most dodgy conceptual diagram, which is kind of wrong, but has just enough “rightness” not to get me in too much trouble with hardcore SharePoint nerds.

[Diagram: workflow steps executing on the Workflow Manager server rather than the SharePoint web front end]

In this scenario, the SharePoint deployment consists of a web front end server and a middle tier server running Workflow Manager. Each time a workflow step runs, it is executed on the workflow manager server – not on the SharePoint server, and most certainly not in the user’s browser. The implication is that if you wish to get a debug trace of the web service call made by the workflow manager server, you need to do it on the workflow manager server, which necessitates access to it.

Now typically this is not going to happen in production, and it is sure as hell never going to happen on Office365, so you have to do this in a development environment. There are two approaches I have used to trace HTTP conversations to and from workflow manager: the first is to use a network sniffer tool like Wireshark, and the second is to use an HTTP-level trace tool like Fiddler. The Wireshark approach is oft overlooked by SharePoint peeps, but that’s likely because most developers tend not to operate at the TCP/IP layer. Its main pro is that it is fairly easy to set up and capture a trace, but its major disadvantage is that if the remote web service uses HTTPS, the packet data will be encrypted at the point where the trace is operating. This rules it out when talking to most 3rd party APIs out in cloud land.

Therefore the preferred method is to install Fiddler onto the workflow manager server and configure it to trace HTTP calls from workflow manager. This method is more reliable and easier to work with, but is relatively tricky to set up. Luckily for all of us, Andrew Connell wrote comprehensive and clear instructions for using this approach.
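For reference, the crux of Andrew Connell’s approach is routing the Workflow Manager service’s outbound .NET traffic through Fiddler’s proxy. The configuration fragment involved looks something like the following sketch (8888 is Fiddler’s default listening port; treat this as an assumption-laden outline and follow his instructions for exactly which config files to edit and how to handle HTTPS certificates):

```xml
<!-- Sketch only: routes this .NET process's HTTP traffic via Fiddler -->
<system.net>
  <defaultProxy>
    <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="false" />
  </defaultProxy>
</system.net>
```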

SPD Workflows are very fussy when parsing the data returned by web services

If you have never seen the joys of JSON data, it looks like this…

{"d":{"results":[{"__metadata":{"id":"71deada5-6100-48a5-b2e3-42b97b9052a2","uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)","etag":"\"6\"","type":"SP.Data.DocumentsItem"},"FirstUniqueAncestorSecurableObject":{"__deferred":{"uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)/FirstUniqueAncestorSecurableObject"}},"RoleAssignments"

Hard to read? Yeah, totally – but it was never meant to be human readable anyway. The point here is that when calling an HTTP web service, SharePoint Designer will parse JSON data like the example above into a dictionary variable for you to then work with. In part 7 of my big workflow series I demonstrated how you can parse the data returned to make decisions in your workflow.
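To make that nested structure easier to see, here is a sketch in Python of the kind of traversal SPD’s dictionary actions perform for you. The response is a trimmed-down version of the SharePoint example above (the `Title` value is illustrative, not from the original response):

```python
import json

# A trimmed-down version of the SharePoint REST response shown above
# (the Title value is illustrative)
raw = (
    '{"d": {"results": [{"__metadata": '
    '{"id": "71deada5-6100-48a5-b2e3-42b97b9052a2", '
    '"type": "SP.Data.DocumentsItem"}, '
    '"Title": "Quality manual"}]}}'
)

data = json.loads(raw)

# SPD's "Get an Item from a Dictionary" action walks this structure with a
# path such as d/results(0)/__metadata/id; in Python the same walk is:
item_id = data["d"]["results"][0]["__metadata"]["id"]
item_type = data["d"]["results"][0]["__metadata"]["type"]
print(item_id)    # 71deada5-6100-48a5-b2e3-42b97b9052a2
print(item_type)  # SP.Data.DocumentsItem
```

The takeaway is that the dictionary variable is just this tree of objects and arrays, which is why a response that is not a clean tree trips the parser up, as the next section shows.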

But as Fabian has noted, some web services return additional data other than the JSON itself. For example, sometimes an API will return JSON in a JavaScript variable like so:

var Variable = {"d":{"results":[{"__metadata":{"id":"71deada5-6100-48a5-b2e3-42b97b9052a2","uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)","etag":"\"6\"","type":"SP.Data.DocumentsItem"},"FirstUniqueAncestorSecurableObject":{"__deferred":{"uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)/FirstUniqueAncestorSecurableObject"}},"RoleAssignments"

The problem here is that the parser code used by the Call HTTP Web Service action does not handle this well at all. Fabian had this to say in his research:

What we found out is that if you have anything under the Root of the JSON node other than a JSON Array, for example as in the case of a few, the Version Number [returned as a JSON Object], although it works perfectly in a browser, or JSON Viewer, or Fiddler, it doesn’t make the right call when using SPD2013 or VS2012. If after modifying the data output and removing anything that is NOT a JSON Array from the Root of the Node, it should work as expected.

Microsoft documented an approach to getting around this when demonstrating how to pull data from eBay, which returns XML rather than JSON. They used a transformer web service provided by firstamong.com and passed the URL of the eBay web service as a parameter to the transformer web service (i.e. http://www.firstamong.com/json/index.php?q=http://deals.ebay.com/feeds/xml). A similar thing could be done for getting cleaned JSON.

So what to do if you have a web service that includes data you are not interested in? Before you swear too much or use Microsoft’s approach, check with the web service provider to see if they offer a way to return “raw” JSON data back to you. I have found with a little digging that some cloud providers can do this. Usually it is a variation of the URL you call or something you add to the HTTP request header. In short, do whatever you can to get back a simple JSON array and nothing else if you want SharePoint to parse it easily.
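SharePoint Designer itself cannot clean up such a response, but if you do end up routing the call through your own transformer service, the cleanup it needs to perform is straightforward. A hedged sketch in Python (the wrapped response below is invented for illustration):

```python
import json

# Hypothetical response that wraps JSON in a JavaScript assignment,
# similar to the "var Variable = {...}" example above
response_text = 'var Variable = {"d": {"results": [{"Id": 1}]}};'

# Keep only the substring from the first "{" to the last "}" so that
# what remains is plain JSON with nothing before or after it
cleaned = response_text[response_text.index("{"):response_text.rindex("}") + 1]

data = json.loads(cleaned)
print(data["d"]["results"][0]["Id"])  # 1
```

A real transformer would also need to handle responses where the root is not a JSON array or object, but the principle is the same: hand SPD nothing but the JSON.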

Now speaking of request headers…

Web Services can be fickle…

It is very common to get an HTTP 500 internal server error when calling 3rd party web services. One common reason is that SPD workflows add lots of extra material to the HTTP request when calling a web service, because they assume the destination is SharePoint and some authorisation needs to take place. But 3rd party web services are likely not to care, as they tend to use API keys. In fact, sometimes this extra header content can cause you problems if the remote web server is not expecting it.

Here is an example of a problem my colleague Hendry Khouw had. After successfully crafting a REST request to a 3rd party service on his workstation, he tried to call the same web service using the Call HTTP Web Service workflow action. But when the web service was called from the workflow, the HTTP response code was 500 (internal server error). Using Andrew Connell’s method of Fiddler tracing explained earlier, he captured the request that was returning the HTTP 500 error.

image

image

Hendry then pasted the web service URL directly into Fiddler and captured the following trace that returned a successful HTTP 200 response and the JSON data expected.

image

By comparing the request headers between the failed and successful requests, it becomes clear that Workflow Manager sends OAuth data, as well as various other things in the header that were not sent when the URL was called manually. Through a little trial and error using Fiddler’s composer function, Hendry isolated the problem to one particular OAuth header variable: Authorization: Bearer. By removing the Authorization variable in Fiddler composer (or setting it to a blank value), the request was successful, as shown in the second and third images below:

image  image

image

Now that we have worked out the offending HTTP header variable that was causing the remote end to spit the dummy, we can craft a custom request header in the workflow to prevent the default value of Bearer being set.

Step 1. Build a dictionary to be included in the RequestHeaders for the web service call. Note the blank value.

image

Step 2. Set the RequestHeaders to use the dictionary we’ve just created.

image

image
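Outside of SPD, the same trick – sending a deliberately blank Authorization header so the remote service never sees a Bearer token – can be sketched in Python like this (the URL and API key are placeholders, not a real endpoint):

```python
from urllib.request import Request

# Placeholder URL for a hypothetical 3rd party REST endpoint that
# authenticates with an API key in the query string
url = "https://api.example.com/v1/data?apikey=YOUR_KEY"

# Equivalent of the SPD request header dictionary above:
# Authorization is present but blank, overriding any default value
req = Request(url, headers={"Authorization": ""})

print(req.get_header("Authorization"))  # ""
```

The point is not the library used, but that the header is explicitly set to an empty string rather than omitted, which is exactly what the blank dictionary value achieves in the workflow.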

Hendry then republished his workflow, the request was successful and he was able to parse the results into a dictionary variable.

Hope this helps others…

Paul Culmsee and Hendry Khouw

www.sevensigma.com.au



Jan 04 2014

Trials or tribulation? Inside SharePoint 2013 workflows–conclusion and reflections


Hi all

In case you have not been paying attention, I’ve churned out a large series of posts – twelve in all – on the topic of SharePoint Designer 2013 workflows. The premise of the series was to answer a couple of questions:

1.  Is there enough workflow functionality in SharePoint 2013 to avoid having to jump straight to 3rd party tools?

2. Is there enough workflow functionality to enable and empower citizen developers to create lightweight solutions to solve organisational problems?

To answer these questions, I took a relatively simple real-world scenario to illustrate what the journey looks like. Well – sort of simple, in the sense that I deliberately chose a scenario that involved managed metadata. Because of this seemingly innocuous information architecture decision, we encountered SharePoint default settings that break stuff, crazy error messages that make no sense, learnt all about REST/oData, JSON and a dash of CAML, and mastered the Fiddler tool to make sense of it all. We learnt a few SharePoint (and non-SharePoint) web services and played with new features like dictionaries, loops and stages. Hopefully, if you have stuck with me through this series, you have a much better understanding of the power and potential peril of this technology.

So where does that leave us with our questions?

In terms of whether this edition enables you to avoid 3rd party tools – I think the answer is an absolute yes for SharePoint Foundation and a qualified yes for everything else. On the plus side, the new architecture certainly addresses some of the previous scalability issues, and the ability to call web services and parse the data returned opens up all sorts of really interesting possibilities. If “no custom development” solutions are your mantra (which usually really means “no managed code”), then you have at your disposal a powerful development tool. Don’t forget that I have shown you only a glimpse of what can be done. Very clever people like Fabian Williams have taken it much further than me, such as creating new SharePoint lists, creating no-code timer jobs and creating your own declarative workflows – probably the most interesting feature of all.

In a nutshell, with this version, many things that were only possible in Visual Studio now become very doable using SharePoint Designer – especially important for Office365 scenarios.

So then, why a qualified yes as opposed to an enthusiastic yes?

Because it is still all so… how do I put this…  so #$%#ing fiddly!

Fiddly is just a euphemism for complexity, and in SharePoint it manifests in the minefield of caveats and “watch out for…” type advice that SharePoint consultants often have to give. It has afflicted SharePoint since the very beginning, and Microsoft seem powerless to address it – their attempts to address complexity often end up making things more complex. As an example, here is my initial workflow action from part 2 to assign the process owner a task. One single, simple action that looks up the process owner based on the organisation column.

image_thumb43  image

Now the above solution never worked, of course, because managed metadata columns are not supported in the list item filtering capability of SPD workflows. Yes, we were able to work around the issue without sacrificing our information architecture, but take a look below at the price we paid in terms of complexity. From one action to dozens. Whilst I prefer this in a workflow rather than in Visual Studio compiled to a WSP file, it required a working knowledge of JSON, REST/oData, CAML and debugging HTTP traffic via Fiddler. Not exactly the tools of your average information worker or citizen developer.

image_thumb10  image_thumb18    image_thumb22

image_thumb25  image_thumb27  image_thumb14

A lot of code above to assign a task to someone eh?

Another consideration in the 3rd party vs. out of the box discussion is of course all of the features that the 3rd party workflow tools have. The most obvious example is a decent forms solution. Whilst InfoPath is still around, the fact that Microsoft did precisely nothing with it in SharePoint 2013 and removed support for its use in SharePoint 2013 workflows suggests that they won’t have a change of heart anytime soon.

In fact, my prediction is that Microsoft are working on their own forms-based solution and will seriously bolster workflow capability in SharePoint vNext. They will create many additional declarative workflow actions, and probably model a hybrid forms solution that works in a similar way to Nintex Live Forms. Why do I think this? It’s just a hunch, based on the observation that a lot of the plumbing to do this is already there in SharePoint 2013/Workflow Manager, and that there is a serious gap in the forms story in SharePoint 2013. How else will they be able to tell a good multi-device story going forward?

But perhaps the ultimate lead indicator of the suitability of this new functionality for citizen developers is to gauge feedback from citizen developers who took the time to understand the twelve articles I wrote. In fact, if you are a truly evil IT manager, concerned about the risk of information workers committing SharePoint atrocities, then get your potential citizen developers to read this series of articles as a way to set expectations and test their mettle. If they get through them, give them the benefit of the doubt and let them at it!

So, all you citizen developers: do you feel inspired that we were able to get around the issues, or somewhat shell-shocked at all of the conceptual baggage, caveats and workarounds? If you are in the latter camp, then maybe serious SharePoint 2013 workflow development is not for you. But then again, if you are not blessed with a large budget to invest in 3rd party tools, want to get SharePoint onto your CV, and would like to help organisations escape those annoying project managers and elitist developers, at least you now know what you need to learn!

On a more serious note, if you are on a SharePoint governance, strategy or steering team (which almost by definition means you are only reading this conclusion and not the twelve articles that preceded it), then you should consider how you define value when looking at the ROI of 3rd party versus out of the box for workflow. For me, if part of your intention or strategy is to build deeper knowledge and capacity in your information workers and citizen developers, then I would look closely at out of the box, because it forces people to better understand how SharePoint works more broadly. But (and it’s a big but), remember that the 3rd party tools are more mature offerings. While they may mitigate the need for workflow authors to learn SharePoint’s deeper plumbing, they nevertheless produce workflows that are much simpler and more understandable than what I produced using out of the box approaches. Therefore, from a resource-based view (i.e. taking the least amount of time to develop and publish workflows), one would lean toward the third party tools.

I hope you enjoyed the series and thanks so much for reading

Paul Culmsee


www.hereticsguidebooks.com



Jan 03 2014

Trials or tribulation? Inside SharePoint 2013 workflows–Part 12


Hi all, and welcome to part 12 of my articles about SharePoint 2013 Workflows and whether they are ready for prime time. Along the way we have learnt all about CAML, REST, JSON, calling web services, Fiddler, Dictionary objects and a heap of scenarios that can derail aspiring workflow developers. All this just to assign a task to a user!

Anyways, since it has been such a long journey, I felt it worthwhile to remind you of the goal. We have a fictitious company called Megacorp trying to develop a solution for controlled documents management. The site structure is as follows:

image

The business process we have been working through looks like this:

Snapshot_thumb3

The big issue that has caused me to have to write 12 articles all boils down to the information architecture decision to use a managed metadata column to store the Organisation hierarchy.

Right now, we are in the middle of implementing an approach of calling a web service to perform step 3 in the above diagram. In part 9 and part 10 of this series, I explained the theory of embedding a CAML query into a REST query and in part 11, we built out most of the workflow. Currently the workflow has 4 stages and we have completed the first three of them.

  • 1) Get the organisation name of the current item
  • 2) Obtain an X-RequestDigest via a web service call
  • 3) Construct the URL to search the Process Owner list and call the web service

The next stage will parse the results of the web service call to get the AssignedToID and then call another web service to get the actual userid of the user. Then we can finally have what we need to assign an approval task. So let’s get into it…

Obtaining the UserID

In the previous post, I showed how we constructed a URL similar to this one:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

This URL uses the CAML-in-REST method of querying the Process Owners list and returns any items where Organisation equals “Megacorp Burgers”. The JSON data returned shows the AssignedToId entry with a value of 8. Via the work we did in the last post, we already have this data available to us in a dictionary variable called ProcessOwnerJSON.
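If you want to sanity-check how such a URL is put together before pasting it into Fiddler, here is a sketch in Python that assembles the same query, mirroring the workflow’s approach of encoding only the spaces as %20 (site URL and organisation are the sample values from this series):

```python
site = "http://megacorp/iso9001"
organisation = "Megacorp Burgers"

# The CAML view XML, written with normal spaces first
caml = ("<View><Query><ViewFields>"
        "<FieldRef Name='Organisation'/><FieldRef Name='AssignedTo'/>"
        "</ViewFields><Where><Eq><FieldRef Name='Organisation'/>"
        "<Value Type='TaxonomyFieldType'>" + organisation + "</Value>"
        "</Eq></Where></Query></View>")

# Encode spaces as %20, just like the Replace action in the workflow
encoded = caml.replace(" ", "%20")

url = (site + "/_api/web/Lists/GetByTitle('Process%20Owners')"
       '/GetItems(query=@v1)?@v1={"ViewXml":"' + encoded + '"}')
print(url)
```

Only the spaces need this treatment here; the angle brackets and quotes survive as-is in the URL that was tested in Fiddler.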

The rightmost JSON output below illustrates taking that AssignedToId value and calling another web service to return the username, i.e.: http://megacorp/iso9001/_api/Web/GetUserById(8).

image   image_thumb52

Confused at this point? Then I suggest you go back and re-read parts 8 and 10 in particular for a recap.

So our immediate task is to extract the AssignedToId from the dictionary variable called ProcessOwnerJSON. Now that you are a JSON guru, you should be able to figure out that the query will be d/results(0)/AssignedToId.

Step 1:

Add a Get an Item from a Dictionary action as the first action in the Obtain Userid workflow stage. Click the item by name or path hyperlink and click the ellipses to bring up the string builder screen. Type in d/results(0)/AssignedToId.

image

Step 2:

Click on the dictionary hyperlink and choose the ProcessOwnerJSON variable from the list.

Step 3:

Click the item hyperlink and use the AssignedToID variable

image

That is basically it for now with this workflow stage as the rest of it remains unchanged from when we constructed it in part 8. At this point, the Obtain Userid stage should look like this:

image

If you look closely, you can see that it calls the GetUserById method and the JSON response is added to the dictionary variable called UserDetail. Then if the HTTP response code is OK (code 200), it will pull out the LoginName from the UserDetail variable and log it to the workflow history before assigning a task.

Phew! Are we there yet? Let’s see if it all works!

Testing the workflow

So now that we have the essential bits of the workflow done, let’s run a test. This time I will use one of the documents owned by Megacorp Iron Man Suits – the Jarvis backup and recovery procedure. The process owner for Megacorp Iron Man suits is Chris Tomich (Chris reviewed this series and insisted he be in charge of Iron Man suits!).

image  image

If we run the workflow against the Jarvis backup and recovery procedure, we should expect a task to be created and assigned to Chris Tomich. Looking at the workflow information below, it worked! HOLY CRAP IT WORKED!!!

image

So finally, after eleven and a half posts, we have a working workflow! We have gotten around the issues of using managed metadata columns to filter lists, and we have learnt a heck of a lot about REST/oData, JSON, CAML and various other stuff along the way. So having climbed this managed metadata induced mountain, is there anything left to talk about?

Of course there is! But let’s summarise the workflow in text format rather than death by screenshot

Stage: Get Organisation Name
   Find | in the Current Item: Organisation_0 (Output to Variable:Index)
   then Copy Variable:Index characters from start of Current Item: Organisation_0 (Output to Variable: Organisation)
   then Replace " " with "%20" in Variable: Organisation (Output to Variable: Organisation)
   then Log Variable: Organisation to the workflow history list
   If Variable: Organisation is not empty
      Go to Get X-RequestDigest
   else
      Go to End of Workflow

Stage: Get-X-RequestDigest
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Call [%Workflow Context: Current Site URL%]_api/contextinfo HTTP Web Service with request
       (ResponseContent to Variable: ContextInfo
        |ResponseHeaders to responseheaders
        |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/GetContextWebInformation/FormDigestValue from Variable: ContextInfo (Output to Variable: X-RequestDigest )
   If Variable: X-RequestDigest is empty
      Go to End of Workflow
   else
      Go to Prepare and execute process owners web service call

Stage: Prepare and execute process owners web service call
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Set Variable:URLStart to _api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>
   then Set Variable:URLEnd to </Value></Eq></Where></Query></View>"}
   then Call [%Workflow Context: Current Site URL%][Variable: URLStart][Variable: Organisation][Variable: URLEnd] HTTP Web Service with request
      (ResponseContent to Variable: ProcessOwnerJSON
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   then Log Variable: responseCode to the workflow history list
   If Variable: responseCode equals OK
      Go to Obtain Userid
   else
      Go to End of Workflow

Stage: Obtain Userid
   Get d/results(0)/AssignedToId from Variable: ProcessOwnerJSON (Output to Variable: AssignedToID)
   then Call [%Workflow Context: Current Site URL%]_api/Web/GetUserByID([Variable: AssignedToID]) HTTP Web Service with request
      (ResponseContent to Variable: userDetail 
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/LoginName from Variable: UserDetail (Output to Variable: AssignedToName)
      then Log The User to assign a task to is [%Variable: AssignedToName]
      then assign a task to Variable: AssignedToName (Task outcome to Variable:Outcome | Task ID to Variable: TaskID )
   Go to End of Workflow

Tidying up…

Just because we have our workflow working does not mean it is optimally set up. In the above workflow there are a whole heap of areas where I have done no error checking. Additionally, the logging I have done is poor and not overly helpful for troubleshooting later. So I will finish this post by making the workflow a bit more robust. I will not go through this step by step – instead I will paste the screenshots and summarise what I have done. Feel free to use these ideas and add your own good practices in the comments…

First up, I added a new stage at the start of the workflow for anything relating to initialisation activities. Right now, all it does is check out the current item (recall in part 3 we covered issues related to check in/out) and then set a Boolean workflow variable called EndWorkflow to No. You will see how I use this soon enough. I also added a new stage at the end of the workflow to tidy things up. I called it Clean up Workflow and its only operation is to check the current item back in.

image   image

In the Get Organisation Name stage, I changed it so that any error condition logs to the history list and then sets the EndWorkflow variable to Yes. Then, in the Transition to stage section, I use the EndWorkflow variable to decide whether to move to the next stage or end the workflow by calling the Clean up Workflow stage created earlier. My logic here is that there can be any number of error conditions we might check for, and it’s easier to use a single variable to signify when to abort the workflow.

image

In the Get X-RequestDigest stage, I have added additional error checking. I check that the HTTP response code from the contextinfo web service call is indeed 200 (OK), and then if it is, I also check that we successfully extracted the X-RequestDigest from the response. Once again I use the EndWorkflow variable to flag which stage to move to in the transition section.

image

In the Prepare and execute process owners web service call stage, I also added more error checking – specifically with the AssignedToID variable. This variable is an integer and its default value is zero (0). If the value is still 0, it means there was no process owner entry for the Organisation specified. If this happens, we need to handle it…

image

Finally, we come to the Obtain Userid stage. Here we are checking both the HTTP code from the GetUserById web service call and the login name that comes back via the AssignedToName variable. We assign the task to the user and then set the workflow status to “Completed workflow”. (Remember that we checked out the current item in the Workflow Initialisation stage, so we can now update the workflow status without all that check out crap that we hit in part 3.)

image

Conclusion…

So there we have it. Twelve posts in and we have met the requirements for Megacorp. While there is still a heap of work to do in terms of customising the behaviour of the task itself, I am going to leave that to you!

Additionally, there is a lot more we can do to make these workflows more robust and easier to manage. To that end, I strongly urge you to check out Fabian Williams’ blog and his brilliant set of articles on this topic, which take it much (much) further than I do here. He has written a ton of stuff, and it was his work in particular that inspired me to write this series. He also provided me with counsel and feedback on this series and I can’t thank him enough.

Now that we have gotten to where I wanted to get to, I’ll write one more article to conclude the series – reflecting on what we have covered, and its implications for organisations wanting to leverage out of the box SharePoint workflow, as well as implications for all of you citizen developers out there.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com



Dec 30 2013

Trials or tribulation? Inside SharePoint 2013 workflows–Part 11


Hi all, and welcome to the penultimate article in what has turned into a fairly epic series about SharePoint 2013 workflows. From part 6 to part 8 of this series, we implemented a workflow that made use of web service calls as well as the new looping capabilities of SharePoint Designer 2013. We used a web service call to get all of the items in the Process Owners list, and then looped through them to find the process owner we needed based on organisation. While that method worked, the concern was that it was potentially inefficient: with a large list of process owners, it might consume excessive resources. This is why I referred to the approach in part 6 as the “easy but flawed” way.

Now we are going to use the “better but harder” way. Parts 9 and 10 set the scene for this one, where we are going to implement pretty much all of the theory we covered in them. I will not rehash that theory here, but I cannot stress enough that you really should have read those posts before going through this article.

With that said, we are going to make a bunch of changes to the current workflow by doing the following:

  • 1) Change the existing workflow to grab the Organisation name as opposed to the GUID
  • 2) Create a new workflow stage that gets us the X-RequestDigest (explained in part 9)
  • 3) Build the URL that we will use to implement the “CAML in REST” approach (explained in parts 9 and 10)
  • 4) Call the aforementioned web service
  • 5) Extract the AssignedToId of the process owner for a given organisation
  • 6) Call the GetUserById web service to grab the actual user ID of the process owner and assign them an approval task

In this post, we will cover the first four of the above steps…

Get the Name not the GUID…

Here is the first stage of the workflow as it is now, assuming you followed parts 6 to 8.

image

First let’s make a few changes so that we get the name of the Organisation stored with the current item, rather than the GUID as we are doing now. If you recall from part 4, the column Organisation_0 is a hidden column that got created because Organisation is a managed metadata column. This column stores the name and ID of the managed metadata term(s) that have been assigned, in the format <term name>|<term GUID> – for example “Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f”.

To get the GUID, we grabbed everything to the right of the pipe symbol (“|”). Now to get the name, we need everything to the left of it.

Step 1:

Rename the stage from “Obtain Term GUID” to “Get Organisation Name” (I trust that by part 11 a screenshot is not required for this)

Step 2:

Delete the second workflow action, Calculate Variable: index plus 1 (Output to Variable: calc), as we don’t need the calc variable anymore. In addition, delete the workflow action “Copy from Current Item: Organisation_0”. You should be left with two actions and the transition to stage logic as shown below.

image

Step 3:

Add an Extract Substring from Start of String workflow action in between the two remaining actions. Click the “0” hyperlink and click the fx button. In the Lookup for Integer dialog, set it to the existing variable Index. Click on the “string” hyperlink and set it to the Organisation_0 column from the Current Item. Finally, click the (Output to…) hyperlink and create a new string variable called Organisation.

image

Now, at this point we need to pause and think about what we are doing. If you recall part 10, I had trouble getting the format right for the URL that uses CAML inside a REST web service call. The culprit was that I had to encode any occurrence of a space in the URL as an HTML-encoded space (%20). Take a look at the URL that was tested in Fiddler below to see this in action…

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

Look toward the end of the URL where the organisation is specified (marked in bold). What do you notice?

Yep – the space between Megacorp and Burgers is also encoded. But this causes a problem, since the current value of the Organisation variable contains a space. So let’s deal with this now by encoding spaces.

Step 4:

Add a Replace Substring in String workflow action. Click the first string hyperlink and type in a single space. In the second string hyperlink, type in %20. In the third string hyperlink, click the fx button and add the Organisation variable. In the final hyperlink (Output to Variable:Output), choose the variable Organisation.

image
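Steps 3 and 4 boil down to two string operations, which can be sketched in Python as follows (using a sample value in the <term name>|<term GUID> format described earlier):

```python
# Organisation_0 stores "<term name>|<term GUID>", e.g.:
organisation_0 = "Megacorp Burgers|e2f8e2e0-9521-4c7c-95a2-f195ccad160f"

# Step 3 equivalent: take everything to the left of the pipe symbol
organisation = organisation_0.split("|")[0]

# Step 4 equivalent: encode spaces so the value is safe inside the REST URL
organisation = organisation.replace(" ", "%20")

print(organisation)  # Megacorp%20Burgers
```

In SPD the first operation is done with Find and Extract Substring actions rather than a split, but the result is the same.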

After all this manipulation of the Organisation variable, it is probably worthwhile logging it to the workflow history list so we can see if the above steps work as expected.

Step 5:

Click the Log Variable:TermGUID to the workflow history list action and change the variable from TermGUID to Organisation. The action will now be called Log Variable:Organisation to the workflow history list

image

Step 6:

In the Transition to stage section, find the “If Variable: TermGUID is not empty” condition and change the variable from TermGUID to Organisation

image

Step 7:

Create a new workflow stage and call it “Get X-RequestDigest”. Then, in the Transition to stage section of the Get Organisation Name stage, find the “Go to Get Process Owners” action and change the stage from Get Process Owners to Get X-RequestDigest.

The adjusted workflow should now look like the image below…

image

Getting the X-RequestDigest…

If you recall part 9, we need to call the contextinfo web service so we can extract the FormDigestValue to use in our CAML-embedded web service call to the Process Owners list. If that statement makes no sense, go back and read part 9; otherwise, you should already know what to do! Bring on the dictionary variables and the Call HTTP Web Service action!

Step 1:

Go to the Get Process Owners stage further down and find the very first action – a Build Dictionary action that creates a variable called RequestHeader. Right click on it and choose Move Action Up. This will move the action into the Get X-RequestDigest stage as shown below.

image  image

What are we doing here? This action is the one we created in part 9 that asks SharePoint to bring back data in JSON format. We first learnt all about this in part 4, when I explained JSON, and part 5, when I explained how dictionary variables work.

Step 2:

Add a Call HTTP Web Service action after the build dictionary action. For the URL, use the string builder and add a lookup to the Current Site URL (found under Workflow Context in the data source dropdown). Then append the string “_api/contextinfo” to complete the URL of the web service. Also, make sure the method chosen is HTTP POST and not GET.

image  image

image

This constructs the URL based on the SharePoint site the workflow is run from (e.g. http://megacorp/iso9001/_api/contextinfo) without hard-coding the URL.

Step 3:

Make sure the workflow action from step 2 is selected and in the ribbon, choose the Advanced Properties icon. In the Call HTTP Web Service Parameters dialog, click the RequestHeaders dropdown and choose the RequestHeader variable and click OK. (Now you know why we moved the build dictionary action in step 1)

image

Step 4:

Click the response hyperlink in the Call HTTP Web Service action and choose to create a new variable. Call it ContextInfo. Also check the name of the variable for the response code and make sure it is set to responseCode and not something like responseCode2.

image  image

Step 5:

Add an If any value equals value condition below the web service call. For the first value hyperlink, choose the responseCode variable from step 4. Click the second value hyperlink and type in “OK” as shown below:

image

This condition ensures that the response to the web service call was valid (OK is the same as an HTTP 200 code). If we get anything other than an OK, there is no point continuing with the workflow.

Step 6:

Inside the condition we created in step 5, add a Get an Item from a Dictionary action. Then do the following:

  • In the item by name or path hyperlink, type in exactly “d/GetContextWebInformation/FormDigestValue” without the quotes.
  • In the dictionary hyperlink, choose the variable ContextInfo that was specified in step 4.
  • In the item hyperlink in the “Output To” section, create a new string variable called X-RequestDigest.

All this should result in the action below.

image

Now let’s take a quick pause to understand what we did in this step. You should recognise d/GetContextWebInformation/FormDigestValue as the path used to parse the JSON output. We get the value of FormDigestValue and assign it to the variable X-RequestDigest. As a reminder, here is the JSON output from calling the contextinfo web service using Fiddler. Note the path from d –> GetContextWebInformation –> FormDigestValue.

image_thumb17
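If it helps to see this outside of SharePoint Designer, here is a minimal Python sketch of what the Get an Item from a Dictionary action is doing with that slash-separated path. The JSON sample mirrors the shape of the contextinfo response; the digest value itself is made up for illustration:

```python
import json

# Sample of the JSON shape returned by /_api/contextinfo
# (the digest value here is a made-up placeholder)
sample = json.loads("""
{
  "d": {
    "GetContextWebInformation": {
      "FormDigestTimeoutSeconds": 1800,
      "FormDigestValue": "0x1122AABB,26 Dec 2013 01:00:00 -0000"
    }
  }
}
""")

def get_item_by_path(dictionary, path):
    """Walk a nested dictionary using a slash-separated path,
    much like the 'Get an Item from a Dictionary' workflow action."""
    current = dictionary
    for key in path.split("/"):
        current = current[key]
    return current

x_request_digest = get_item_by_path(sample, "d/GetContextWebInformation/FormDigestValue")
```

Each segment of the path simply descends one level into the nested structure, which is exactly what the workflow action does with its dictionary variable.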

Step 7:

In the transition to stage section, add an If any value equals value condition. For the first value hyperlink, choose the variable X-RequestDigest that we created in step 6. Click the equals hyperlink and change it to is empty.

image

Step 8:

Under the newly created If Variable: X-RequestDigest is empty condition, add a Go to a stage action and set it to End of Workflow. In the Else section of the condition, add a Go to a stage action and set it to the Get Process Owners stage.

image

Cool! We have our Get X-RequestDigest stage all done. Here is what it looks like…

image

This has all been very easy so far, hasn’t it? A big difference to some of the previous posts. But now it’s time to wire up the CAML-inside-REST web service call, and SharePoint is about to throw us another curveball…

Get the Process Owner…

Our next step is to rip the guts out of the existing stage to get the process owner. Unlike our first solution, we no longer need to loop through the process owners list, which means the entire Find Matching Process Owner stage is no longer needed. So before we add new actions, let’s do some tidying up.

Step 1:

Delete the entire stage called “Find Matching Process Owner”. Do this by clicking the stage to select all actions within it, and then choose delete from the SharePoint Designer ribbon. SPD will warn you that this will delete all actions. Go ahead and click OK.

image

Our next step is to attempt to make the CAML-inside-REST web service call. To remind you of what the URL will look like, here is the one we successfully tested in part 10. Ugly, isn’t it? Now you know why developers are an odd bunch – they deal with this stuff all day!

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>Megacorp%20Burgers</Value></Eq></Where></Query></View>”}

Let’s take our time here, because as you can see the URL we have to craft is complex. First up, we need to use a Build a Dictionary action to create the HTTP headers we need (including the X-RequestDigest). Recall from part 9 that we also need to set Content-length to 0 and Accept to application/json;odata=verbose.

Step 2:

Add a Build dictionary action as the first action in the Get Process Owners section. Click the this hyperlink and then the add button in the Build a Dictionary dialog. Add the following dictionary items:

  • Add a string called Accept and a value of: application/json;odata=verbose

image

  • Add a string called Content-length and a value of 0

image

  • Add a string called X-RequestDigest. In the value textbox, click the fx button and choose the workflow variable called X-RequestDigest.

image  image  image

Your dictionary should look like this:

image

Click OK and set the dictionary variable name to be the existing variable called RequestHeader. The completed action should look like the image below:

image
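For reference, the completed dictionary is equivalent to the following Python dictionary of HTTP headers. This is just a sketch of the data the workflow builds; the digest value is a placeholder, since in the workflow it comes from the X-RequestDigest variable:

```python
# Placeholder digest - in the workflow this is the X-RequestDigest variable
x_request_digest = "0x1122AABB,30 Oct 2014 01:00:00 -0000"

request_header = {
    "Accept": "application/json;odata=verbose",  # return JSON rather than XML
    "Content-length": "0",                       # we POST with an empty body
    "X-RequestDigest": x_request_digest,         # security validation for the POST
}
```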

Now let’s turn our attention to creating the web service URL we need.

Step 3:

Find the existing Call HTTP Web Service action in the Get Process Owner stage. Click the URL hyperlink and click the ellipsis to bring up the string builder dialog. Delete the existing URL so we can start over. Add the following entries back (carefully!)

  • 1) A lookup to the Site URL from the Workflow Context
  • 2) The string “_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>”
  • 3) A lookup to the Organisation workflow variable
  • 4) The string “</Value></Eq></Where></Query></View>”}”

This should look like the image below:

image

A snag…

Click OK and see what happens. Uh-oh. We are informed that “Using the special characters ‘[%%]’ or ‘[%xxx%]’ in any string, or using the special character ‘{‘ in a string that also contains a workflow lookup may corrupt the string and cause an unexpected error when the workflow runs” – Ouch!

image

How do we get out of this issue?

Well, we are using two workflow lookups in the string – the first being the site URL at the start and the second being the Organisation variable embedded in the CAML bit of the URL. Since it is complaining of using certain special characters in combination with workflow lookups, let’s break up the URL into pieces by creating a couple of string variables. At the start of step 3 above, we listed 4 elements that make up the URL. Let’s use that as a basis to do this…

Step 4:

Add a Set Workflow Variable action below the build dictionary action in the Get Process Owner stage. Call the variable URLStart and set its value to: _api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>

image   image

Step 5:

Add another Set Workflow Variable action in the Get Process Owner stage. Call the variable URLEnd and set its value to: “</Value></Eq></Where></Query></View>”}”

image

Step 6:

Edit the existing Call HTTP Web Service action in the Get Process Owner stage. Click the URL hyperlink and add the following entries back (carefully!)

  • 1) A lookup to the Site URL from the Workflow Context
  • 2) A lookup to the URLStart workflow variable
  • 3) A lookup to the Organisation workflow variable
  • 4) A lookup to the URLEnd workflow variable

This should look like the image below:

image
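To make the four-part concatenation concrete, here is a minimal Python sketch of the same assembly. The values of site_url and organisation are placeholders; the point is that splitting the literal text into URLStart and URLEnd means the ‘{‘ character no longer shares a string with a workflow lookup, which is what sidesteps SharePoint Designer’s special-character warning:

```python
# Placeholder values - in the workflow these are lookups
site_url = "http://megacorp/iso9001/"
organisation = "Megacorp%20Burgers"  # term name with spaces encoded

# URLStart: everything between the site URL and the organisation value
url_start = (
    "_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)"
    '?@v1={"ViewXml":"<View><Query><ViewFields>'
    "<FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/>"
    "</ViewFields><Where><Eq><FieldRef%20Name='Organisation'/>"
    "<Value%20Type='TaxonomyFieldType'>"
)
# URLEnd: everything after the organisation value
url_end = '</Value></Eq></Where></Query></View>"}'

url = site_url + url_start + organisation + url_end
```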

Click OK, and in the Call HTTP Web Service dialog, make sure the HTTP method is set to HTTP POST. Click OK.

image  image

Step 7:

Select the Call HTTP Web Service action and click the Advanced Properties icon in the ribbon. In the Call HTTP Web Service Properties dialog box, click the RequestHeaders parameter and in the drop down list to the right of it, choose the RequestHeader variable created in step 2. Click OK.

image_thumb97    image_thumb103

 

Step 8:

Select the Call HTTP Web Service action and click the variable next to the ResponseContent to section. Create a variable called ProcessOwnerJSON. This variable will store the JSON returned from the web service call.

image    image

Step 9:

In the Transition to stage section of the Get Process Owners stage, look for the If responseCode equals OK condition. Set the stage to Obtain Userid as shown below:

image

Step 10:

To label the workflow better, rename the existing Get Process Owners stage to Prepare and execute Process Owner web service call. This workflow stage is going to end when it has attempted the call, and we will create a new stage to extract the process owner and create the approval task. At this point the workflow stage should look like the image below:

image

Conclusion

We will end the post at this point as it is already very long. In the next post, we will make a couple of tweaks to the Obtain Userid workflow stage and test the workflow out. For your reference, here is the complete workflow as it stands…

image

image

image

Thanks for reading

Paul Culmsee


www.hereticsguidebooks.com



Dec 27 2013

Trials or tribulation? Inside SharePoint 2013 workflows–Part 10


Hi there and welcome back to my series of articles that puts a real-world viewpoint to SharePoint 2013 workflow capabilities. This series is pitched more to Business Analysts, SharePoint Hackers and generally anyone who might be termed a citizen developer. This series shows the highs and lows of out of the box SharePoint Designer workflows, and hopefully helps organisations make an informed decision around whether to use what SharePoint provides or to move to the 3rd party landscape.

By now you should be well aware of some of the useful new workflow capabilities such as stages, looping, support for calling web services and parsing the data via dictionary objects. You also now understand the basics of REST/oData and CAML. At the end of the last post, we just learnt that it is possible to embed CAML queries into REST/oData, which gets around the issue of not being able to filter lists via Managed metadata columns. We proved this could be done, but we did not actually try it with the actual CAML query that can filter managed metadata columns. It is now time to rectify this.

Building CAML queries

Now if you are a SharePoint developer worth your salt, you already know CAML, because there are mountains of documentation on this topic on MSDN as well as various blogs. But a useful shortcut for all you non-coders out there is to make use of a free tool called CAMLDesigner 2013. This tool, although unstable at times, is really easy to use, and in this section I will show you how I used it to create the CAML XML we need to filter the Process Owners list via the Organisation column.

After you have downloaded and installed CAMLDesigner 2013, follow these steps to build your query.

Step 1:

Start CAMLDesigner 2013 and on the home screen, click the Connections menu.

image

Step 2:

In the connections screen that slides out from the right, enter http://megacorp/iso9001 into the top textbox, then click the SharePoint 2013 and Web Services buttons. Enter the credentials of a site administrator account and then click the Connect icon at the bottom. If you successfully connect to the site, CAMLDesigner will show the site in the left hand navigation.

image  image

Step 3:

Click the arrow to the left of the Megacorp site and find the Process Owners list. Click it, and all of the fields in the list will be displayed as blue boxes below the These are the fields of the list section.

image

Step 4:

Drag the Organisation column to the These are the selected fields section to the right. Then do the same for the Assigned To column. If you look closely at the second image, you will see that the CAML XML is already being built for you below.

image     image

Step 5:

Now click on the Where menu item above the columns. Drag the Organisation column across to the These are the selected fields section. As you can see in the second image below, once dragged across, a textbox appears, along with a blue button with an operator. Also take note of the CAML XML being built below. You can see that it has added a <Where></Where> section.

image

image

image

Step 6:

In the textbox in the Organisation column you just dragged, type in one of the Megacorp organisations, e.g. Megacorp Burgers. Note the XML changes…

image

Step 7:

Click the Execute button (the Play icon across the top). The CAML query will be run, and any matching data will be returned. In the example below, you can see that the user Teresa Culmsee is the process owner for Megacorp Burgers.

image

image

Step 8:

Copy the XML from the window to the clipboard. We now have the XML we need to add to the REST web service call. Exit CAMLDesigner 2013.

image

Building the REST query…

Armed with your newly minted CAML XML as shown below, we need to return to Fiddler and draft it into the final URL.

<ViewFields>
   <FieldRef Name='Organisation' />
   <FieldRef Name='AssignedTo' />
</ViewFields>
<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>

As a reminder, the URL that we had working in the previous post looked like this:

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query></Query></View>”}

Let’s now munge them together by stripping the carriage returns from the XML and putting it between the <Query> and </Query> sections. This gives us the following large and scary-looking URL.

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields> <FieldRef Name=’Organisation’ /> <FieldRef Name=’AssignedTo’ /> </ViewFields> <Where> <Eq> <FieldRef Name=’Organisation’ /> <Value Type=’TaxonomyFieldType’>Megacorp Burgers</Value> </Eq> </Where></Query></View>”}

Are we done? Unfortunately not. If you paste this into Fiddler composer, Fiddler will get really upset and display a red warning in the URL textbox…

image

If, despite Fiddler’s warning, you try and execute this request, you will get a curt response from SharePoint in the form of a HTTP/1.1 400 Bad Request response with the message HTTP Error 400. The request is badly formed.

The fact that Fiddler is complaining about this URL before it has even been submitted to SharePoint allows us to work out the issue via trial and error. If you cut out a chunk of the URL, Fiddler is okay with it. For example, this trimmed URL is considered acceptable by Fiddler:

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query>

But adding this little bit makes it go red again.

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields> <FieldRef Name=’Organisation’ />

Any ideas what the issue could be? Well, it turns out that the use of spaces was the issue. I removed all the spaces from the URL where I could, and where I could not, I URL-encoded them as %20. Thus the URL above turned into the URL below and Fiddler accepted it:

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’ />
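If you would rather automate this encoding than do it by hand, a one-line replacement does the job. This Python sketch takes the CAML fragment with its readable spaces and encodes every space as %20:

```python
# The CAML fragment as generated, with readable spaces
caml = ("<View><Query><ViewFields><FieldRef Name='Organisation'/>"
        "<FieldRef Name='AssignedTo'/></ViewFields><Where><Eq>"
        "<FieldRef Name='Organisation'/>"
        "<Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>"
        "</Eq></Where></Query></View>")

# Encode every space so the fragment is safe to embed in the URL
encoded = caml.replace(" ", "%20")
```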

So returning to our original big URL, it now looks like this (and Fiddler is no longer showing me a red angry textbox):

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query><ViewFields><FieldRef%20Name=’Organisation’/><FieldRef%20Name=’AssignedTo’/></ViewFields><Where><Eq><FieldRef%20Name=’Organisation’/><Value%20Type=’TaxonomyFieldType’>Megacorp%20Burgers</Value></Eq></Where></Query></View>”}

image

So let’s see what happens. We click the execute button. Woohoo! It works! Below you can see a single matching entry and it appears to be the entry from CAMLDesigner 2013. We can’t tell for sure because the Assigned To column is returned as AssignedToId and we have to call another web service to return the actual username. We covered this issue and the web service to call extensively in part 8, but to quickly recap, we need to pass the value of AssignedToId to the http://megacorp/iso9001/_api/Web/GetUserById() web service. In this case, http://megacorp/iso9001/_api/Web/GetUserById(8) because the value of AssignedToId is 8.

The images below illustrate. The first one shows the Process Owner for Megacorp Burgers. Note the value of AssignedToId is 8. The second image shows what happens when 8 is passed to the GetUserById web service call. Check the Title and LoginName fields.

image image

Conclusion

Okay, so now we have our web service URLs all sorted. In the next post we are going to modify the existing workflow. Right now it has four stages:

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

We will change it to the following stages:

  • Stage 1: Obtain Term Name (extracts the name of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get the X-RequestDigest (We will grab the request digest we need to do our HTTP POST to query the Process Owners list. If successful move to stage 3)
  • Stage 3: Get Process Owner (makes the REST web service call to grab the Process Owners for the organisation specified by the Term name from stage 1. Grab the value of AssignedToID and move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

One final note: After this epic journey we have taken, you might think that doing this in SharePoint Designer workflow should be a walk in the park. Unfortunately this is not quite the case and as you will see, there are a couple more hurdles to cross.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com



Dec 26 2013

Trials or tribulation? Inside SharePoint 2013 workflows–Part 9


Hi all and welcome to my series that aims to illustrate the trials and tribulations of SharePoint 2013 workflow to those who consider themselves citizen developers. In case you don’t want to go all the way back to part 1, a citizen developer is basically a user who creates new business applications themselves, without the use of developers (or IT in general). Since there is no Visual Studio in sight in this series, I think it’s safe to say that SharePoint 2013 workflow has the potential to be a popular citizen developer tool, but it is important that people know what they are in for.

We start part 9 of this series having just finally assigned a task to a user in part 8. While this in itself is not particularly earth shattering, if you have followed this series to now, you will appreciate that we have had to navigate some serious potholes to get here, but along the way it is clear that there is some very powerful features available.

Currently the workflow as it stands consists of four stages.

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

As mentioned at the end of part 8, one flaw in this workflow is that if the process owners list has a large number of entries, the workflow has to iterate over each process owner to find the one with a matching organisation. This causes a bit of concern because, in general, iterating through SharePoint lists in this way is not great for performance. In fact, SharePoint has an unfortunate heritage of newbie developers causing all sorts of disk and memory performance issues because of code that iterates a list in a similar way.

So this post is going to explore how we can do better. How about changing the workflow behaviour so that rather than grabbing the entire process owners list, we grab just the entry we need?

But wait – didn’t you say something about this not working?

Now if you have dutifully read this series in the way I intend you to do, you might recall the issue that cropped up in parts 4 and 5. I pointed out that since the Organisation column we are using is a managed metadata column, we cannot use it to filter a list using REST/oData. So while the first query below would happily filter a list by a title of “something”, the second one will result in a big fat error.

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbyid(guid’0ffc4b79-1dd0-4c36-83bc-c31e88cb1c3a’)/Items?$filter=Title eq ‘something’ (this works)

http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbyid(guid’0ffc4b79-1dd0-4c36-83bc-c31e88cb1c3a’)/Items?$filter=Organisation eq ‘something’ (this fails)

So this is a pickle, isn’t it – how can we filter the process owners list by organisation when it’s not supported by REST/oData?

The thing about managed metadata columns…

Going back in time a bit, SharePoint 2010 was the first version with support for REST and in SharePoint 2013, REST support was extended significantly. As you now know, it seems the managed metadata people never got that memo because one of the older methods that can be used to query lists is called Collaborative Application Markup Language (CAML for short), and CAML does support filtering on managed metadata columns.

CAML, in case you are not aware of it, has been used by SharePoint since the very first version. It is based on a defined XML schema that can be used to query lists and libraries – much like a SQL query does on a database table. Being XML, it is more verbose than a SQL statement and, for me, harder to read. As an example, the SQL statement “SELECT * from TABLE WHERE field=VALUE” would look something like:

<Query><Where><Eq><FieldRef Name='field' /><Value Type='Text'>VALUE</Value></Eq></Where></Query>

Turning our attention back to the Organisation column that we are having trouble with, a CAML query to bring back all documents tagged as owned by “Megacorp Burgers” would look something like this…

<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>
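For the curious, a filter fragment like this can also be generated programmatically rather than typed by hand. Below is a minimal Python sketch (the function name is my own invention) using the standard library’s ElementTree to build the <Where> clause shown above:

```python
import xml.etree.ElementTree as ET

def build_caml_filter(field, value, value_type="TaxonomyFieldType"):
    """Build a CAML <Where><Eq>...</Eq></Where> fragment that filters
    a list on a single field, like the fragment shown above."""
    where = ET.Element("Where")
    eq = ET.SubElement(where, "Eq")
    ET.SubElement(eq, "FieldRef", Name=field)
    val = ET.SubElement(eq, "Value", Type=value_type)
    val.text = value
    return ET.tostring(where, encoding="unicode")

caml = build_caml_filter("Organisation", "Megacorp Burgers")
```

Building the XML with a library rather than string concatenation also takes care of escaping any special characters in the organisation name.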

Note: By the way, if you want to prove that this works, use CAMLDesigner 2013 to connect to a list and apply a filter, and it will generate the CAML XML it used. I will cover this in the next post.

So here is where we are at. We definitely can filter a list with a managed metadata column by using the CAML language. But we cannot filter a list using managed metadata via the REST/oData methods that I outlined in part 4. I wonder if there is a way to embed a CAML query into a REST web service call?

Turns out there is… the only problem is that there is some more conceptual baggage required to understand it properly, so have a strong coffee and let’s go…

A journey of CAML in REST…

A while back I came across an MSDN forum thread where Christophe Humbert asked whether CAML queries could be done via the REST API. Erik C. Jordan provided this answer:

POST https://<site>/_api/web/Lists/GetByTitle(‘[list name]’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query>[other CAML query elements]</Query></View>”}

Take a close look at the above URL. We are still talking to SharePoint via REST and we are calling a method called GetItems. As part of the GetItems call, we see CAML XML inside some curly braces: ‘@v1={“ViewXml”:”<View><Query>[other CAML query elements]</Query></View>”}’.

Hmmm – this looks to have potential. If I can embed a valid CAML query that filters list items based on a managed metadata column, we can very likely have the workflow do that using the Call HTTP Web Service workflow action.

So let’s test this web service and see if we can get it to work. Let’s try entering the above URL on the MegaCorp process owners site.

http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query></Query></View>”}

Uh-oh, error 400. Dammit, what now?

image_thumb3_thumb_thumb

Turns out that we cannot use a browser to test this particular web service because it requires an HTTP POST operation, but when we type a URL into a browser, we are performing an HTTP GET operation – hence the error 400. If you really want to know the difference between a GET and a POST in relation to HTTP, go and visit this link. But for the purpose of this article, we need to find a way to compose HTTP POST web service calls, and guess what – you already know exactly how to do it because we covered it in part 7 – the Fiddler composer function.

So start up Fiddler and let’s craft ourselves a call to this web service….

Step 1:

Start Fiddler and click the Composer Tab. Paste in the web service call we just tried – http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query></Query></View>”}. Make sure you change the request from a GET to a POST by clicking the dropdown to the left of the URL.

image

Step 2:

Type in the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below. If you recall the HTTP interlude from part 6, this tells SharePoint to bring back the data in JSON format rather than XML.

image

Step 3:

Click the execute button to execute the request and then click the Response Headers tab as shown below to see what happened. As you can see below, things did not go so well. We got a response code of HTTP/1.1 411 Length Required.

image

Hmm – so what are we missing here? It turns out that some HTTP queries require the use of a ‘Content-Length‘ field in the HTTP header. The standard for the HTTP protocol states that: “Any Content-Length greater than or equal to zero is a valid value”, so let’s add a value of zero to the request header.

Step 4:

Click the composer tab again and add the string “Content-length: 0” to the request header textbox as shown below and click execute again:

image

Checking the response, it looks like we are still not quite there, as we have another error code: HTTP/1.1 403 FORBIDDEN (The security validation for this page is invalid and might be corrupted. Please use your web browser’s Back button to try your operation again). *sigh* Will this ever just work?

image

The reason for this error is a little more complex than the last one. It turns out that we are missing another required HTTP header in the POST request that we are crafting. This header has the cool-sounding name of X-RequestDigest and it holds something called the form digest. What is the form digest? Here is what Nadeem Ishqair from Microsoft says:

The form digest is an object that is inserted into a page by SharePoint and is used to validate client requests. Validation of client requests is specific to a user, a site and time-frame. SharePoint relies on the form digest as a form of security validation that helps prevent replay attacks wherein users may be tricked into posting data to the server. As described on this page on MSDN, you can retrieve this value by making a POST request with an empty body to http://site/_api/contextinfo and extracting the value of the “d:FormDigestValue” node in the XML that the contextinfo endpoint returns.

So there you go – it is a security function that validates web service requests. So our workflow is going to have to make yet another web service call to handle this. We will make a POST request with an empty body to http://megacorp/iso9001/_api/contextinfo and then extract the value of the “d:FormDigestValue” node in the information returned.
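The two moves described above – an empty-body POST to /_api/contextinfo, then digging out d/GetContextWebInformation/FormDigestValue from the response – can be sketched in Python with just the standard library. The function names are my own, and the code assumes the same site URL used throughout this series:

```python
import json
import urllib.request

def get_form_digest(site_url):
    """POST an empty body to /_api/contextinfo and return the form digest.
    Assumes site_url ends with a slash, e.g. http://megacorp/iso9001/"""
    req = urllib.request.Request(
        site_url + "_api/contextinfo",
        data=b"",                                              # empty POST body (Content-Length: 0)
        headers={"Accept": "application/json;odata=verbose"},  # ask for JSON rather than XML
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return extract_digest(json.load(resp))

def extract_digest(contextinfo):
    # FormDigestValue lives at d -> GetContextWebInformation -> FormDigestValue
    return contextinfo["d"]["GetContextWebInformation"]["FormDigestValue"]
```

The digest returned by get_form_digest is what goes into the X-RequestDigest header of the subsequent POST request.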

This probably sounds as clear as mud, so let’s use Fiddler to do it so we know what we have to do in the workflow.

Step 5:

Start Fiddler and click the Composer tab. Paste in the web service URL http://megacorp/iso9001/_api/contextinfo. Make sure you change the request from a GET to a POST and add the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below.

image

Step 6:

Click the execute button to execute the request and make sure the response headers tab is selected. Confirm that the response you get from the server is 200 OK.

image

Step 7:

Click the JSON button and look for an entry called FormDigestValue in the response.

image

Step 8:

Right click on the FormDigestValue entry and choose copy to get it into the clipboard.

image

Step 9:

Click on the composer tab again and paste the FormDigestValue into the Request Headers textbox as shown below. Replace the string “FormDigestValue =” with “X-RequestDigest: “ to make it the correct format needed as shown in the second image below.

image    image

Step 10:

Paste in the original request into the URL: http://megacorp/iso9001/_api/web/Lists/GetByTitle(‘Process%20Owners’)/GetItems(query=@v1)?@v1={“ViewXml”:”<View><Query></Query></View>”} and click Execute.

image

Check that the HTTP return code is 200 (OK) and then click the JSON tab to see what has come back. You should see a JSON result set that looks like the image below. If you examine the JSON data returned in the detail of this image, you will see it is exactly the same JSON structure that was returned when we used Fiddler in part 7.

image  image

Let’s pause here for a moment and reflect. If you have made it this far, you have pretty much nailed the hard bit. If you had an issue, fear not as there are a couple of common problems that are usually easy to rectify.

Firstly, if you receive an error HTTP/1.1 400 Bad Request with a message that looks something like “Invalid JSON. A colon character ‘:’ is expected after the property name ‘â’, but none was found.”, just double check the use of quotes (“”) in the URL. Sometimes when you paste strings from your browser or RSS reader, the quotes can get messed up because of autocorrect. Look closely at the URL below and note the quotes are angled:

{“ViewXml”:”<View><Query></Query></View>”}

To resolve this issue, simply replace the angled quotes with a regular boring old quote so they are not angled and the problem will go away. Compare the string below to the one above to see what I mean…

{"ViewXml":"<View><Query></Query></View>"}
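This quote clean-up is easy to script if you hit it often. Below is a minimal Python sketch (the helper name is my own) that swaps the four common “smart” quote characters for plain ASCII ones:

```python
def straighten_quotes(s):
    """Replace curly/angled quotes (often introduced by autocorrect when
    pasting from a browser or RSS reader) with plain ASCII quotes."""
    return (s.replace("\u201c", '"')   # left double quote
             .replace("\u201d", '"')   # right double quote
             .replace("\u2018", "'")   # left single quote
             .replace("\u2019", "'"))  # right single quote

bad = "\u201cViewXml\u201d:\u201d<View><Query></Query></View>\u201d"
good = straighten_quotes(bad)
```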

The second common problem is a HTTP/1.1 403 FORBIDDEN response with the message: “The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again”. If you see this error, your X-RequestDigest may have expired and you need to regenerate it via repeating steps 5 to 9 above. The other possibility is that you did not properly paste the FormDigest into the request header. Double check this as well.

Conclusion

Okay, so that was a rather large dollop of conceptual baggage I handed out in this post. You got introduced to the older method of querying SharePoint lists called CAML, and we have successfully been able to call a REST web service and pass in a CAML XML string and get back data. We learnt about the HTTP POST request and some of the additional HTTP headers that need to be sent, like the Content-length and the X-RequestDigest to make it all work. As an added bonus, we are all Fiddler composer gurus now.

However all we sent across was an empty CAML string. The string <View><Query></Query></View> pretty much says “give me everything and don’t filter”, which is not what we want. So in the next post, we will learn how to create a valid CAML string that filters by the Organisation column. Once we have successfully tested it, we will modify the workflow to use this method instead.

Until then, thanks for reading…

Paul Culmsee

www.hereticsguidebooks.com



Dec 24 2013

Trials or tribulation? Inside SharePoint 2013 workflows–Part 8

Hi and welcome to part 8 of my series of articles around SharePoint 2013 workflows. Hopefully as you progress through this series, as well as picking up a heap of new conceptual baggage, you are getting a better sense of what SharePoint 2013 workflows are capable of, whether they have enough to be seriously considered as a solution and whether they can empower citizen developers.

If you have been following proceedings thus far, we just successfully connected to the Process Owners list via REST web services and we manipulated dictionary variables to get to the data returned by the web service. We just started using the looping functionality of SharePoint 2013 workflow and successfully created a loop to iterate through the process owners list.

Now it’s time to finish the job. Here is the stage that we successfully tested at the end of the last post…

image

Adding conditional logic…

Our next task is to add some conditional logic to the workflow: if a match is found between term GUIDs, stop looping through the rest of the process owners. This makes the workflow more efficient because if the process owners list were big and the matching term GUID was one of the first few entries, we would save a lot of unnecessary loops.

Step 1:

Click in between the second and third workflow actions in the loop. From the ribbon, click the Condition icon and from the list of conditions, choose If any value equals value.

image  image  image

Now back in part 6, in the “Getting the GUID…” section, we added a series of steps to extract the Term GUID from the current item. The variable was called TermGUID. Now we are going to compare TermGUID with each of the GUIDs returned by each iteration of the loop.

Step 2:

Click on the leftmost value hyperlink of the If statement and click the fx button. In the Define Workflow lookup dialog, choose Workflow Variables and Parameters as the Data source and choose the TermGUID variable in the Field from Source dropdown. Click OK

Step 3:

Click on the rightmost value hyperlink of the If statement and click the fx button. In the Define Workflow lookup dialog, choose Workflow Variables and Parameters as the Data source and choose the ProcessOwnerTermGUID variable in the Field from Source dropdown. Click OK

image

Now we have configured the test for when the Organisation matches between a document and the process owners list. The next step is to get the value of the Assigned To column, since that has the username of the process owner for a given organisation. But if you review the web service call that was made to the Process Owners list (http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbytitle(‘Process%20Owners’)/Items?$select=AssignedToId,Organisation), you might notice that I used the column name AssignedToId rather than the actual column name reported in SharePoint, which is Assigned To. So what gives?

It turns out that one of the current limitations of the OData/REST approach is that columns that store People/Group data do not return the person’s name. Instead, they return a numeric ID via a different column name. For example, the Created By and Modified By columns are represented in REST by AuthorId and EditorId respectively.

The implication? We will need to call yet another web service, passing it the value of AssignedToId to get the actual username so we can assign them a task.
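To make the naming convention concrete, here is a tiny Python sketch (the helper and the mapping table are mine, purely illustrative) of how a person or group column surfaces in REST:

```python
# A person/group column with internal name <Name> surfaces in REST as
# "<Name>Id", holding a numeric user ID rather than the person's name.
REST_PERSON_FIELDS = {
    "Assigned To": ("AssignedTo", "AssignedToId"),
    "Created By": ("Author", "AuthorId"),
    "Modified By": ("Editor", "EditorId"),
}

def rest_lookup_field(internal_name):
    """Return the REST field name that carries the numeric user ID."""
    return internal_name + "Id"

print(rest_lookup_field("AssignedTo"))  # AssignedToId
```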

“Oh come on,” I hear you say, “this is messy enough already and we still have to mess with web services?”

I feel your pain, but at least now you have more of an idea of what it takes to be a citizen developer when it comes to SharePoint Designer 2013 workflows! Let’s get on with it…

Finding the user…

Our first step is to grab the value of AssignedToId from the InnerProcessOwnerList dictionary variable. This is similar to what we did to create the ProcessOwnerTermGUID variable in part 7 – we use a Get an Item from a Dictionary action.

Step 1:

Add a Get an Item from a Dictionary action, click the item by name or path hyperlink and click the ellipses to bring up the string builder screen. Type in a left bracket “(“ and then click the Add or Change lookup button. In the Lookup for String dialog, choose Workflow Variables and Parameters as the data source, find the variable called loop and click OK. Finally, type in the following string “)/AssignedToId” and click OK.

image

Step 2:

Back at the workflow action, click on the dictionary hyperlink and choose the InnerProcessOwnerList variable from the list.

Step 3:

Click the item hyperlink and choose to create a new variable to bring up the edit variable screen. Name the variable AssignedToID and set its type to Integer.

image  image

Now that we have our user (well, an integer representing the user in the form of the variable AssignedToID), there is not much point in looping through the remaining process owners, so let’s add a workflow step to make the loop end straight away.

Step 4:

Add a Set Workflow Variable action and click on the workflow variable hyperlink. From the list of variables, choose loop. Now click the value hyperlink and click the fx button to bring up the Lookup for Integer dialog. In the Data source dropdown, choose Workflow Variables and Parameters and in the Field from source dropdown, choose the variable count.

To understand what step 4 does, let’s look at the entire loop. Note that the loop runs repeatedly while the value of the loop variable is less than the value of the count variable. Step 4 sets loop equal to count, so this condition is no longer true and the loop will end.
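If it helps to see the loop’s logic outside of SharePoint Designer, here is a rough Python equivalent (variable names follow the workflow; the sample data is made up) of the scan including the step 4 exit trick:

```python
def find_process_owner(results, term_guid):
    """Sketch of the workflow loop: scan results until the document's
    term GUID matches, then force the loop to end by setting the loop
    counter equal to count (which is exactly what step 4 does)."""
    count = len(results)
    loop = 0
    assigned_to_id = 0  # integer variables default to 0
    while loop < count:
        entry = results[loop]
        if entry["Organisation"]["TermGuid"] == term_guid:
            assigned_to_id = entry["AssignedToId"]
            loop = count       # loop condition now fails: loop ends
        else:
            loop = loop + 1
    return assigned_to_id

owners = [
    {"AssignedToId": 6, "Organisation": {"TermGuid": "aaa"}},
    {"AssignedToId": 7, "Organisation": {"TermGuid": "bbb"}},
    {"AssignedToId": 8, "Organisation": {"TermGuid": "ccc"}},
]
print(find_process_owner(owners, "bbb"))  # 7
```

Note how a result of 0 doubles as the “no owner found” signal, which is the same fact the stage transition tests in step 6 below.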

image

Next we have to grab the user that the AssignedToID variable refers to by calling another web service, as I previously mentioned. This is a little tricky, so it is best done in a new workflow stage. We will also tidy up a few loose ends with this stage.

Step 5:

Add a new workflow stage and name it “Obtain UserID”.

Step 6:

In the Transition to stage section of the Find Matching Process Owner stage, delete the Go to End of Workflow action and replace it with an If any value equals value condition. Set the first value hyperlink to the variable AssignedToID. Click the equals hyperlink and change it to is greater than. Finally, click the last value hyperlink and set it to zero “0”. The logic behind this workflow action is that the default value for integer variables is 0. If a process owner has been found for a document, the value of AssignedToID will be greater than zero. Therefore the workflow can use this fact when it decides whether to proceed to the new stage created in step 5.

image

Step 7:

Add a Go to a stage action underneath the If condition and set it to go to the Obtain Userid stage created in step 5.

Step 8:

Add a Go to a stage action underneath the Else condition and set it to End of Workflow.

image

A new web service…

Right, let’s now turn our attention to the Obtain Userid stage created in step 5 above. By now you should be a REST/JSON guru after being able to successfully connect to and parse the data from the Process Owners list in part 6 and part 7. By comparison, the web service method we are about to call is quite simple. It is called GetUserByID and the URL looks like this:

http://megacorp/iso9001/_api/Web/GetUserById().

The one thing we have to do is pass it an integer that refers to the user – the value of the AssignedToID variable.

To demonstrate how this works, I will pick an ID at random and we can have a look at the data returned by this web service call. Using the Fiddler composer function to return the data in JSON format, we get the following output.

http://megacorp/iso9001/_api/Web/GetUserById(7)

image

A quick scan of the data above suggests that the data we are interested in is LoginName. Based on what we learnt from accessing JSON data to get hold of the term GUID of the Organisation column, we will have to use the Get an Item from a Dictionary action to get at the LoginName entry. The item reference should be d/LoginName, so let’s test it.
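To make the plumbing concrete, here is a small Python sketch (the helper name and the trimmed response are mine; a real GetUserById response carries many more properties) of building the URL and pulling out d/LoginName:

```python
import json

def build_get_user_url(site_url, user_id):
    """Build the GetUserById call, mirroring the string the workflow
    assembles from the current site URL and the AssignedToID variable."""
    return f"{site_url.rstrip('/')}/_api/Web/GetUserById({user_id})"

# A trimmed, made-up response in verbose OData format:
response = json.loads(
    '{"d": {"Id": 7, "LoginName": "i:0#.w|teamsevensigma\\\\terrie",'
    ' "Title": "Teresa Culmsee"}}'
)

# The workflow's "d/LoginName" item path is simply nested access:
login_name = response["d"]["LoginName"]
print(build_get_user_url("http://megacorp/iso9001", 7))
```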

Step 1:

Add a Call HTTP Web Service action and click the this hyperlink. Click the ellipses button to display the string builder dialog. Use the Add or Change Lookup button and the text editor to build the following string:

  • 1) [Workflow context: Current Site URL]
  • 2) “_api/Web/GetUserById(“
  • 3) [%Workflow Variables and Parameters: AssignedToID%]
  • 4) “)”

image

Step 2:

Right click on the Call HTTP Web Service action created in step 1 and choose properties. In the Call HTTP Web Service Properties dialog, set the RequestHeaders dropdown to the RequestHeader dictionary variable. (If you stretch your mind back to part 6, we created the RequestHeader variable to ensure our data comes back in JSON format. We can re-use that here to do the same thing).

image   image

Step 3:

On the Call HTTP Web Service action, click the response hyperlink and create a new variable called UserDetail. Click the ResponseStatusCode hyperlink and choose the existing variable called responseCode. (It first made an appearance in part 6 and we can re-use it here.)

image

Step 4:

Add an If any value equals value condition and configure it to test if the variable responseCode equals OK

image

Step 5:

Add a Get an Item from a Dictionary action, click the item by name or path hyperlink and type in “d/LoginName”. Click the dictionary hyperlink and choose the UserDetail variable created in step 3. Click the item hyperlink and create a new string variable called AssignedToName.

image

At this point it makes sense to log the value of AssignedToName to workflow history to make sure we have set everything up correctly.

Step 6:

Add a Log to history action and set the message to: “The User to assign a task to is: [%Variable: AssignedToName%]”

image

Step 7:

Add a Go to a stage action and set it to end the workflow. The complete configuration for the stage should look like the image below.

image

Now it’s time to test the workflow. In my case, I went back to the documents library and triggered the workflow on the document called Burger Additives Standard which is assigned to the organisation of Megacorp Burgers. Here is the result…

image  image

Woohoo! We have our user! If we check the process owners list for Megacorp Burgers in the left image above and compare it to the workflow history next to it, we can indeed confirm that the reported lookup is right. (In my test environment, the userid for Teresa Culmsee is teamsevensigma\terrie).

Assigning a task…

Now I think I speak for all of us when I say HOLY CRAP WE ARE ALMOST THERE!!! I mean, it has taken 8 posts, but we have one more workflow action to add. Drumroll… the action that we started this series with way back in part 2.

Step 1:

Add an Assign a task action and click on the this user hyperlink. In the Select Task Participant Dialog, click the ellipses next to the Participant textbox. This will bring up the Select User dialog. In the select from existing users or groups list, select Workflow Lookup for a User and click Add. In the Lookup for Person or Group dialog, choose Workflow Variables and Parameters from the Data source dropdown and choose the AssignedToName variable and click OK.

image  image

image  image

Now you will be back at the Select Task Participant dialog. Enter a task title of “Test approval task” and click OK. (For now we are not going to wire up the rest of the task; this is enough to prove that it works. So click OK to go back to the workflow designer and review the action.)

image    image

Looks good, so let’s now publish the workflow and let’s test it by taking it from the top.

Now as a side note, when writing this post, my colleague Chris Tomich insisted that he had to be the process owner for Megacorp’s Iron Man Suits division, and you will see that below. Checking the documents library and using metadata navigation, we see all the documents that are owned by the Iron Man Suits subsidiary of Megacorp. It stands to reason that we would have suit schematics and the backup and recovery process for Jarvis, eh?

image  image

Choosing the Jarvis backup and recovery procedure document, we run our workflow. Checking the status of the workflow, what do we see? Yeah baby! Chris has been assigned a task!

image  image

Conclusion…

So at this point, I can’t help but wonder how many readers are actually happy that it took 8 posts to finally assign a task to a user. But nevertheless, the journey has been a good example of the sort of issues that you encounter when using SharePoint 2013 workflows.

Now in part 7, I alluded to the fact that this method has a flaw. Our Megacorp example has been a simple one with just a few entries in the process owners list. But what if there were, say, 5000 process owners and the one we want is number 4864? The current workflow would have to iterate through the first 4863 entries before it gets to this one. This isn’t overly efficient and might introduce a performance penalty (which plays into the hands of developers who don’t like the idea of scary citizen developers meddling with forces they don’t understand). This raises the question: can we come up with an alternative way to do this?
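To put a number on the flaw, here is a quick Python sketch (the sample data is made up) contrasting the workflow’s linear scan with a keyed lookup, which is the general shape of the alternative:

```python
# 5000 hypothetical process owners, each keyed by a term GUID.
owners = [{"TermGuid": f"guid-{i}", "AssignedToId": i} for i in range(5000)]

# Linear scan, as the loop-based workflow does it:
steps = 0
for entry in owners:
    steps += 1
    if entry["TermGuid"] == "guid-4863":
        break
print(steps)  # 4864 iterations to reach entry number 4864

# Keyed lookup: build an index once, then find any owner in one step.
by_guid = {entry["TermGuid"]: entry["AssignedToId"] for entry in owners}
print(by_guid["guid-4863"])  # 4863
```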

In part 9 and onward from there, we will answer that question.

Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com



Dec 24 2013

Trials or tribulation? Inside SharePoint 2013 workflows–Part 7

Hi and welcome to part 7 of my series of articles around SharePoint 2013 workflows. We have been examining the trials and tribulations (mainly trials so far) of implementing a relatively simple document approval workflow for a mythical organisation called Megacorp Inc. In the last post, I started illustrating the first of two approaches to filtering a list based on a managed metadata column using the Call HTTP Web Service capability. If you have been following along with me, we just successfully called the SharePoint lists web service via REST. The diagram below shows where we are at… Now it is time to work with the data that is returned.

image

Very soon we are going to use the loop function of workflows to go through each process owner and check the GUID of the Organisation field. This is probably best done as a new workflow stage that only runs if the response code from the webservice is 200 (OK). So let’s make some minor changes to the workflow…

Step 1:

Add a new workflow stage and name it “Find Matching Process Owner”

Step 2:

In the Transition to stage section of the new stage, add a Go to a stage action and set the stage as “End of Workflow”

image

Step 3:

In the Transition to stage section of “Get Process Owners” Stage, delete the “Go to End of Workflow” action. In its place, add an “If any value equals value” condition

Step 4:

Click the first value hyperlink of the newly added condition from step 3 and click the fx button. In the Define Workflow lookup dialog, choose Workflow Variables and Parameters as the Data source and choose the responseCode variable in the Field from Source dropdown. Click OK

image  image

Step 5:

Click the second value link of the newly added condition from step 3. Type in the value “OK” and press enter.

image

Step 6:

Under the If Variable: responseCode equals OK condition, add a Go to a stage action. Set the stage as Find Matching Process Owner (the stage created in step 1).

Step 7:

Under the Else condition, add a Go to a stage action. Set the stage as End of Workflow

image

OK, so now we have a new workflow stage to work with called Find Matching Process Owner. This is where we are going to manipulate dictionary variables and in particular, the ProcessOwnersList variable that contains the data returned by the call we made to the REST webservice in part 6. The first thing we will do is count how many items are in the ProcessOwnersList dictionary, otherwise we won’t know when to stop the loop. This is where the Count Items in a Dictionary workflow action comes in. Let’s try it on the ProcessOwnersList variable now.

Step 8:

In the Find Matching Process Owner stage, add a Count Items in a Dictionary action. Click the dictionary hyperlink and choose Variable: ProcessOwnersList from the list of dictionary variables.

image

Now we should check to see if the value of count is what we are expecting. For your reference, here are the 9 entries currently in the Process Owners list…

Megacorp Defense – Paul Culmsee
Megacorp Burgers – Teresa Culmsee
Megacorp Iron Man suits – Chris Tomich
Megacorp Gamma Radiation Experiments – Peter Chow
Megacorp Pharmaceutical – Hendry Khouw
Megacorp Fossil Fuels – Sam Zhou
Perth – Du Le
Sydney – Paul Culmsee
Alaksa – Du Le

Step 9:

Add a Log to Workflow History action. Click the message hyperlink and click the ellipses button. In the string builder dialog, type in “Process Owners count: ”

image

Step 10:

Click the Add or Change Lookup button. In the Lookup for string dialog, choose Workflow Variables and Parameters from the data source dropdown and choose the variable count. Click OK. You should see the following text in the string builder dialog. Click OK. Review the workflow stage and confirm it looks like the second image below.

image  image

Publish the workflow and run it. Take a look at the workflow history and see what we get. Uh-oh… what now? It says we only have 1 item in the dictionary, but we have 9 process owners. What the…?

image

A JSON (and fiddler) interlude…

In part 6, I stressed the point that dictionary variables can contain any type of variable available in the SharePoint 2013 workflow platform, including other dictionary variables! This now becomes important because of the way JSON works. To explain, it is time for me to formally introduce you to Fiddler.

Fiddler is a multipurpose utility that sits between the web browser and web sites. It logs all HTTP(S) traffic and is used to view and debug web traffic. Any web developer worth their salt has Fiddler in their troubleshooting toolkit because of its ability to manipulate data as it travels between browser and server. Best of all, it can be used to compose your own HTTP requests.

So first up, install Fiddler and then start it. In your browser, head over to a web site like Google. Go back to Fiddler and you should see entries logged in the left hand panel. Click on an entry and click the inspectors tab. From here you can see all the gory detail of the conversation between your web browser and the web site.  Neat eh?

Snapshot

If you look in the tabs in the Fiddler window, you will see one labelled Composer. Let’s use it right now… Paste the webservice URL that we used in part 6 into the composer textbox as shown below. Also, paste the string “Accept: application/json;odata=verbose” into the Request headers textbox as shown below. If you recall the HTTP interlude from part 6, this tells SharePoint to bring back the data in JSON format, rather than XML.

image

Click execute and Fiddler will send the request to the server. It will also switch to inspector mode for you to see the request and the response as it happened. If you look closely at the response (by clicking the JSON tab), you will see a structure that looks a bit like the one I have pasted below. This is the same data that is now being stored in the ProcessOwnersList dictionary variable.

image

So why did the workflow report that there was only one item in the dictionary then? The short answer is: because there is only one item in the dictionary! A dictionary called “d”. To understand better, take another look at the JSON structure below. What is the first entry in the JSON data? A section called “d”. If you were to collapse d, all other data would be inside it. Therefore, as far as the dictionary variable is concerned, there truly is only one entry.

- JSON
  + d

Note: In case you are wondering what the deal is with the whole “d” thing, given that it appears somewhat redundant: the answer is that Microsoft’s OData standard stipulates that all data returned is wrapped in an outer “d”. Why, you may ask? Well, you really don’t want to know, but if you must insist, it is for security reasons related to JavaScript (see, I told you that you didn’t want to know!).

So if there is only one entry in the dictionary, then how can we get to the data buried deeper? The answer is that this is one of those “dictionary in a dictionary” situations I described in part 6. The dictionary variable ProcessOwnersList has a single item. That item happens to be another dictionary with multiple items in it. So our first task is to get to the inner dictionary that contains the data we need!
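Here is the same idea in a few lines of Python (using a trimmed, made-up response), showing why counting the outer dictionary gives 1 while d/results holds the real items:

```python
import json

# A trimmed, hypothetical version of the verbose OData response: one
# outer "d" entry wrapping a "results" array of process owners.
raw = """
{"d": {"results": [
  {"AssignedToId": 6, "Organisation": {"TermGuid": "aaa"}},
  {"AssignedToId": 7, "Organisation": {"TermGuid": "bbb"}}
]}}
"""
outer = json.loads(raw)          # the ProcessOwnersList dictionary
print(len(outer))                # 1 -- just the "d" entry!

inner = outer["d"]["results"]    # the "d/results" item path
print(len(inner))                # 2 -- the actual items
```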

Getting to the real count…

Now we will make use of the Get Item from a Dictionary action to get stuff under the d branch. Looking at the fiddler JSON output above, we need to get to the children of the results branch.

Step 1:

In the Find Matching Process Owner stage, add a Get Item from a Dictionary action as the first action in the stage.

image

Step 2:

Click the item by name or path hyperlink and type in “d/results”. Note: this is case sensitive, so it has to match the JSON data shown in the Fiddler screenshots. Click the dictionary hyperlink and choose the ProcessOwnersList dictionary variable as the source data.

image

Step 3:

Click the item hyperlink and choose to create a new variable (the bottom option in the dropdown). Call the variable InnerProcessOwnerList and make sure it is set to type dictionary.

image

Step 4:

Now we need to modify the count items action to count the new variable InnerProcessOwnerList. In the Count Items in a Dictionary action below the action we just created, click Variable: ProcessOwnersList and change it to InnerProcessOwnerList.

image

Right! Let’s retest this workflow by republishing it and running it. Now that’s more like it!!!

image

Building the loop…

Now we get to a powerful new feature of workflows in SharePoint 2013 – the ability to perform loops. What we need to do here is loop through the 9 process owners, and for each, compare the GUID of the Organisation column to the GUID on the document from which the workflow was triggered. If the term matches, we exit the loop and assign a task to the person in the Assigned To column.

Let’s go through the process step by step.

Step 1:

Make sure that your cursor is below the last action in the Find Matching Process Owner stage. Add a Set Workflow Variable action. Click the workflow variable hyperlink and choose to create a new variable. Call the variable loop and make it an integer. Click the value hyperlink and set it to zero “0”.

image  image

We are going to use this variable in the looping section of the workflow to get to each process owner. Its purpose will become clear soon…

Step 2:

In the ribbon, click the Loop button and choose Loop with Condition. This will add a loop step into the stage. The name “1” is not exactly meaningful, so let’s change it. Click the title bar of the loop and change the name to For each process owner…

image  image

Now we will set the condition for how long this loop will run for. Note that we have a variable called count that stores the number of entries in the process owners list and in step 1, we created a variable called loop. We will set the workflow to loop around while the value of loop is less than the value of count.

Step 3:

Click the leftmost value hyperlink. In the LoopCondition Properties dialog, click the fx button for the top value. In the Define Workflow Lookup dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop. Click the equals dropdown and choose Is less than. Finally, click the fx button for the bottom value. In the Define Workflow Lookup dialog, choose Workflow Variables and Parameters as the data source and find the variable called count.

image  image  image   image

image

Step 4:

Inside the newly created loop, add a Get Item from a Dictionary action. Click the item by name or path hyperlink and click the ellipses button to display the string builder. Type in a left bracket “(“ and then click the Add or Change lookup button. In the Lookup for String dialog, choose Workflow Variables and Parameters as the data source, find the variable called loop and click OK. Finally, type in the following string “)/Organisation/TermGuid”.

image

If you look closely at this string, it is referring to the TermGuid in the JSON data. The value of loop (currently 0) will be used to create the string. Hence “(0)/Organisation/TermGuid” will be used to grab the first Organisation GUID, “(1)/Organisation/TermGuid” for the second, and so on…
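As a trivial Python sketch (the function name is mine), the string builder is doing nothing more than this:

```python
def item_path(loop):
    """Build the dictionary item path the string builder assembles:
    "(" + the loop counter + ")/Organisation/TermGuid"."""
    return f"({loop})/Organisation/TermGuid"

print(item_path(0))  # (0)/Organisation/TermGuid
print(item_path(1))  # (1)/Organisation/TermGuid
```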

Step 5:

Click the dictionary hyperlink and choose InnerProcessOwnerList.

Step 6:

Click the item hyperlink and choose to create a new variable. Call the variable ProcessOwnerTermGUID and set it to a string.

image  image

By this stage, we should have the value of the term GUID for the process owner. Let’s log the value to workflow history so we can confirm things are working…

Step 7:

Add a log to workflow history action and click the message hyperlink. Click the fx button and in the Lookup for String dialog, choose Workflow Variables and Parameters as the data source and find the variable called ProcessOwnerTermGUID.

Step 8:

Now we need to increment the value of the loop variable so it can select the next process owner from the InnerProcessOwnerList variable. Add a Do calculation workflow action, click the first value hyperlink and click the fx button. In the Lookup for Number dialog, choose Workflow Variables and Parameters as the data source and find the variable called loop. Click the second value hyperlink and type in the value “1”.

image

Note that this action creates a new workflow variable (called calc1 by default) that stores the value of loop + 1. We will use this variable in the next workflow step.

Step 9:

Add a Set Workflow Variable action. Click the workflow variable hyperlink and choose loop. Click the value hyperlink and click the fx button. In the Lookup for Number dialog, choose Workflow Variables and Parameters as the data source and find the variable called calc1.

image

Right, that should be enough to test. Each iteration of the loop logs the Organisation term GUID for a process owner, then increments the value of loop and does it again. Once the value of loop equals the value of the variable count, the loop finishes. So publish the workflow and let’s see what happens… Brilliant! Below the count of process owners (line 3), we have looped through the process owners list and extracted the GUID for the organisation term!
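For those who prefer code to screenshots, here is a rough Python equivalent (the sample data is made up) of the whole logging loop, including the Do Calculation increment into calc1:

```python
def log_term_guids(results):
    """Mirror the workflow loop: while loop < count, read the item at
    "(loop)/Organisation/TermGuid", log it, then set loop = calc1."""
    logged = []
    count = len(results)          # the Count Items action
    loop = 0                      # the Set Workflow Variable action
    while loop < count:
        guid = results[loop]["Organisation"]["TermGuid"]
        logged.append(guid)       # the Log to History action
        calc1 = loop + 1          # the Do Calculation action
        loop = calc1              # the Set Workflow Variable action
    return logged

results = [{"Organisation": {"TermGuid": g}} for g in ("aaa", "bbb", "ccc")]
print(log_term_guids(results))  # ['aaa', 'bbb', 'ccc']
```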

image

Conclusion…

Well, it has taken us a while to get here, but we now have all of the information we need to be able to assign a task to the process owner. If you look at the log above, the first entry was the GUID for the organisation assigned to the document this workflow was run against. Scanning the list of GUIDs from the process owners list, we can see that the matching GUID is on line 5. So in the next post, we will modify our loop to stop when it has a match, and we will examine what we need to do to assign a task to the appropriate process owner.

Until then thanks for reading…

Paul Culmsee

www.hereticsguidebooks.com


