One solution to PowerApps login problems on Android phones


Tonight I had an issue where I was unable to log into one of my client's PowerApps tenants using the PowerApps app from the app store. I tried with several Android phones and all had the same issue (although iPhones did not). The symptom was that after entering my email address, I was redirected to a loading screen for a short time and then returned to the logon page. While the PowerApps client was not playing ball, I was able to connect to the PowerApps site via the Chrome browser on the same device.

Now my client uses their own ADFS, so I felt the issue might be related to local configuration or a certificate problem. After doing a little bit of reading, I came across this article, which led me to the SSL analyser tool at ssllabs.com. As the article predicted, when I ran the SSL analyser against my client's ADFS URL, my issue was related to certificate paths.

The article states that if you see "extra download" in your certification paths as shown below, Android will have an issue with it. Quoting from the support article: "Android does not support downloading additional certificates from the authorityInformationAccess field of the certificate. This is true across all Android versions and devices, and for the Chrome browser. Any server authentication certificate that's marked for extra download based on the authorityInformationAccess field will trigger this error if the entire certificate chain is not passed from Active Directory Federation Services (AD FS)."
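As an aside, you can also check the chain a server actually presents from any machine with OpenSSL installed (substitute your own ADFS host below – this is just a quick sanity check rather than a replacement for the ssllabs.com report):

openssl s_client -connect adfs.example.com:443 -showcerts

If an intermediate certificate is missing from the output, Android will not go and fetch it for you, and the logon will fail as described above.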

[screenshot]

Now the article talks about a change to your ADFS configuration to address this issue, but I also wanted to make it clear that this can be addressed (at least for PowerApps) on a per-device basis. In my client’s case, the problematic certificate was an Entrust certificate called Entrust Certification Authority – L1K.

All I did to fix it was visit the Entrust root certificates page on my phone and click the "Entrust Certificate Authority L1K" link. The certificate was downloaded, and when I opened it, I was presented with a screen to give it a display name and confirm it was to be used for "VPN and apps".

[screenshot]

Once I did this, I could log into PowerApps again.

 

Hope this helps someone else out there…

 

Paul Culmsee


Tips for using SPD Workflows to talk to 3rd party web services


Hi all

One workflow action that anyone getting into SharePoint Designer 2013 workflow development needs to know is the Call HTTP Web Service action. This action opens up many possibilities for workflow developers and, in some ways, turns SPD workflows into an option to be taken seriously for certain types of development in SharePoint – particularly in Office 365, where your customisation options are more restrictive.

Not so long ago, I wrote a large series of posts on the topic of calling web services within workflows and was able to get around some issues encountered when utilising Managed Metadata columns. Fabian Williams and Andrew Connell have also done some excellent work in this area. More recently I have turned my attention to using 3rd party cloud services with SharePoint to create hybrid scenarios. After all, there are tons of fit-for-purpose solutions for various problems in cloud land, and many have an API that supports REST/JSON. As a result, they are accessible to our workflows, which makes for some cool possibilities.

But if you try this yourself, you find out fairly quickly that there are three universal realities you have to deal with:

  1. Debugging SPD workflows that call web services is a total bitch
  2. SPD Workflows are very fussy when parsing the data returned by web services
  3. Web services themselves can be very fickle

In this brief post, I thought I might expand on each of these problem areas with some pointers and examples of how we got around them. While they may not be applicable or usable for you, they might give you some ideas in your own development and troubleshooting efforts.

Debugging SharePoint Designer Workflows…

The first issue that you are likely to encounter is a by-product of how SharePoint 2013 workflows work. To explain, here is my all-time most dodgy conceptual diagram, which is kind of wrong, but has just enough "rightness" not to get me in too much trouble with hardcore SharePoint nerds.

[diagram]

In this scenario, the SharePoint deployment consists of a web front end server and a middle tier server running Workflow Manager. Each time a workflow step runs, it is executed on the Workflow Manager server – not on the SharePoint server, and most certainly not in the user's browser. The implication is that if you wish to get a debug trace of the web service call made by the Workflow Manager server, you need to capture it on the Workflow Manager server, which necessitates access to it.

Now typically this is not going to happen in production, and it is sure as hell never going to happen on Office 365, so you have to do this in a development environment. There are two approaches I have used to trace HTTP conversations to and from Workflow Manager: the first is to use a network sniffer like Wireshark, and the second is to use an HTTP-level trace tool like Fiddler. The Wireshark approach is oft overlooked by SharePoint peeps, but that's likely because most developers tend not to operate at the TCP/IP layer. Its main pro is that it is fairly easy to set up and capture a trace, but its major disadvantage is that if the remote web service uses HTTPS, the packet data will be encrypted at the point where the trace is operating. This rules it out when talking to most 3rd party APIs out in cloud land.

Therefore the preferred method is to install Fiddler onto the Workflow Manager server and configure it to trace HTTP calls from Workflow Manager. This method is more reliable and easier to work with, but is relatively tricky to set up. Luckily for all of us, Andrew Connell wrote comprehensive and clear instructions for this approach.
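I won't repeat his instructions here, but the general idea is to point the .NET proxy settings on the Workflow Manager server at Fiddler's listening port (8888 by default) so that outbound calls flow through Fiddler. In .config terms, the relevant fragment looks something like the sketch below – treat it as illustrative only, and follow Andrew's post for exactly which config files to edit and the other steps involved:

<system.net>
  <defaultProxy>
    <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="false" />
  </defaultProxy>
</system.net>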

SPD Workflows are very fussy when parsing the data returned by web services

If you have never seen the joys of JSON data, it looks like this…

{"d":{"results":[{"__metadata":{"id":"71deada5-6100-48a5-b2e3-42b97b9052a2","uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)","etag":"\"6\"","type":"SP.Data.DocumentsItem"},"FirstUniqueAncestorSecurableObject":{"__deferred":{"uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)/FirstUniqueAncestorSecurableObject"}},"RoleAssignments"…

Hard to read? Yeah, totally – but it was never meant to be human readable anyway. The point here is that when calling an HTTP web service, SharePoint Designer will parse JSON data like the example above into a dictionary variable for you to work with. In part 7 of my big workflow series I demonstrated how you can parse the data returned to make decisions in your workflow.

But as Fabian has noted, some web services return additional data other than the JSON itself. For example, sometimes an API will return JSON in a JavaScript variable like so:

var Variable = {"d":{"results":[{"__metadata":{"id":"71deada5-6100-48a5-b2e3-42b97b9052a2","uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)","etag":"\"6\"","type":"SP.Data.DocumentsItem"},"FirstUniqueAncestorSecurableObject":{"__deferred":{"uri":"http://megacorp/iso9001/_api/Web/Lists(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(1)/FirstUniqueAncestorSecurableObject"}},"RoleAssignments"…

The problem here is that the parser code used by the Call HTTP Web Service action does not handle this well at all. Fabian had this to say in his research:

What we found out is that if you have anything under the Root of the JSON node other than a JSON Array, for example as in the case of a few, the Version Number [returned as a JSON Object], although it works perfectly in a browser, or JSON Viewer, or Fiddler, it doesn’t make the right call when using SPD2013 or VS2012. If after modifying the data output and removing anything that is NOT a JSON Array from the Root of the Node, it should work as expected.

Microsoft documented an approach to getting around this when demonstrating pulling data from eBay, which comes back in XML format instead of JSON. They used a transformer web service provided by firstamong.com and passed the URL of the eBay web service as a parameter to the transformer web service (i.e. http://www.firstamong.com/json/index.php?q=http://deals.ebay.com/feeds/xml). A similar thing could be done for getting cleaned JSON.

So what to do if you have a web service that includes data you are not interested in? Before you swear too much or resort to Microsoft's approach, check with the web service provider to see if they offer a way to return "raw" JSON data back to you. I have found, with a little digging, that some cloud providers can do this. Usually it is a variation of the URL you call or something you add to the HTTP request header. In short, do whatever you can to get back a simple JSON array and nothing else if you want it easily parsed by SharePoint.
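As an illustration, some providers let you ask for plain JSON via a standard Accept header or a query string switch. In raw HTTP terms, such a request might look like this (the endpoint and parameter here are entirely hypothetical – check your provider's API documentation for the real switch):

GET /api/v1/orders?format=json HTTP/1.1
Host: api.example.com
Accept: application/json
X-Api-Key: <your API key>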

Now speaking of request headers…

Web Services can be fickle…

It is very common to get an HTTP 500 internal server error when calling 3rd party web services. One common reason for this is that SPD workflows add lots of stuff to the HTTP request when calling a web service, because they assume the other end is going to be SharePoint and that some authorisation needs to take place. But 3rd party web services are likely not to care, as they tend to use API keys. In fact, this extra stuff in the header can sometimes cause problems if the remote web server is not expecting it.

Here is an example of a problem that my colleague Hendry Khouw had. After successfully crafting a REST request to a 3rd party service on his workstation, he tried to call the same web service using the Call HTTP Web Service workflow action. But when the web service was called from the workflow, the HTTP response code was 500 (Internal Server Error). Using Andrew Connell's method of Fiddler tracing explained earlier, he captured the request that was returning the HTTP 500 error.

[screenshots]

Hendry then pasted the web service URL directly into Fiddler and captured the following trace that returned a successful HTTP 200 response and the JSON data expected.

[screenshot]

By comparing the request headers of the failed and successful requests, it became clear that Workflow Manager sends OAuth data, as well as various other things in the header, that were not sent when the URL was called manually. Through a little trial and error using Fiddler's composer function, Hendry isolated the problem down to one particular OAuth header: Authorization: Bearer. By using Fiddler composer and removing the Authorization variable (or setting it to a blank value), the request was successful, as shown in the second and third images below:

[screenshots]

Now that we have worked out the offending HTTP header variable that was causing the remote end to spit the dummy, we can craft a custom request header in the workflow to prevent the default value of Bearer being set.

Step 1. Build a dictionary to be included in the RequestHeaders for the web service call. Note the blank value.

[screenshot]

Step 2. Set the RequestHeaders to use the dictionary we’ve just created.

[screenshots]
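In the text format I use for workflow summaries elsewhere, the fix amounts to something like this (a sketch only – the blank Authorization value is the important part):

Build {...} Dictionary (Output to Variable: RequestHeader)
   Name: Authorization | Type: String | Value: (left blank)
then Call <your web service URL> HTTP Web Service with request
   (RequestHeaders set to Variable: RequestHeader)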

Hendry then republished his workflow; the request was successful, and he was able to parse the results into a dictionary variable.

Hope this helps others…

Paul Culmsee and Hendry Khouw

www.sevensigma.com.au


Trials or tribulation? Inside SharePoint 2013 workflows–Part 12

This entry is part 12 of 13 in the series Workflow

Hi all, and welcome to part 12 of my articles about SharePoint 2013 Workflows and whether they are ready for prime time. Along the way we have learnt all about CAML, REST, JSON, calling web services, Fiddler, Dictionary objects and a heap of scenarios that can derail aspiring workflow developers. All this just to assign a task to a user!

Anyways, since it has been such a long journey, I felt it worthwhile to remind you of the goal here. We have a fictitious company called Megacorp trying to develop a solution to controlled documents management. The site structure is as follows:

[screenshot]

The business process we have been working through looks like this:

[diagram]

The big issue that has caused me to have to write 12 articles all boils down to the information architecture decision to use a managed metadata column to store the Organisation hierarchy.

Right now, we are in the middle of implementing an approach of calling a web service to perform step 3 in the above diagram. In part 9 and part 10 of this series, I explained the theory of embedding a CAML query into a REST query and in part 11, we built out most of the workflow. Currently the workflow has 4 stages and we have completed the first three of them.

  • 1) Get the organisation name of the current item
  • 2) Obtain an X-RequestDigest via a web service call
  • 3) Construct the URL to search the Process Owner list and call the web service

The next stage will parse the results of the web service call to get the AssignedToID and then call another web service to get the actual userid of the user. Then we can finally have what we need to assign an approval task. So let’s get into it…

Obtaining the UserID

In the previous post, I showed how we constructed a URL similar to this one:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

This URL uses the CAML-in-REST method of querying the Process Owners list and returns any items where Organisation equals "Megacorp Burgers". The JSON data returned shows the AssignedToID entry with a value of 8. Via the work we did in the last post, we already have this data available to us in a dictionary variable called ProcessOwnerJSON.

The rightmost JSON output below illustrates taking that AssignedToID value and calling another web service to return the username, i.e. http://megacorp/iso9001/_api/Web/GetUserById(8).

[screenshots]

Confused at this point? Then I suggest you go back and re-read parts 8 and 10 in particular for a recap.

So our immediate task is to extract the AssignedToId from the dictionary variable called ProcessOwnerJSON. Now that you are a JSON guru, you should be able to figure out that the query will be d/results(0)/AssignedToId.
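To make that path concrete, here is a heavily trimmed sketch of the JSON shape the query walks (the real response carries much more metadata, as shown earlier):

{"d":{"results":[{"AssignedToId":8, ... }]}}

Reading d/results(0)/AssignedToId therefore means: take the d object, then the results array, then its first element (index 0), then that element's AssignedToId value – 8 in this case.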

Step 1:

Add a Get an Item from a Dictionary action as the first action in the Obtain Userid workflow stage. Click the item by name or path hyperlink and click the ellipsis to bring up the string builder screen. Type in d/results(0)/AssignedToId.

[screenshot]

Step 2:

Click on the dictionary hyperlink and choose the ProcessOwnerJSON variable from the list.

Step 3:

Click the item hyperlink and use the AssignedToID variable.

[screenshot]

That is basically it for this workflow stage, as the rest of it remains unchanged from when we constructed it in part 8. At this point, the Obtain Userid stage should look like this:

[screenshot]

If you look closely, you can see that it calls the GetUserById method and the JSON response is added to the dictionary variable called UserDetail. Then if the HTTP response code is OK (code 200), it will pull out the LoginName from the UserDetail variable and log it to the workflow history before assigning a task.

Phew! Are we there yet? Let’s see if it all works!

Testing the workflow

So now that we have the essential bits of the workflow done, let’s run a test. This time I will use one of the documents owned by Megacorp Iron Man Suits – the Jarvis backup and recovery procedure. The process owner for Megacorp Iron Man suits is Chris Tomich (Chris reviewed this series and insisted he be in charge of Iron Man suits!).

[screenshots]

If we run the workflow against the Jarvis backup and recovery procedure, we should expect a task to be created and assigned to Chris Tomich. Looking at the workflow information below, it worked! HOLY CRAP IT WORKED!!!

[screenshot]

So finally, after eleven and a half posts, we have a working workflow! We have gotten around the issues of using managed metadata columns to filter lists, and we have learnt a heck of a lot about REST/oData, JSON, CAML and various other stuff along the way. So having climbed this managed metadata induced mountain, is there anything left to talk about?

Of course there is! But let’s summarise the workflow in text format rather than death by screenshot

Stage: Get Organisation Name
   Find | in the Current Item: Organisation_0 (Output to Variable:Index)
   then Copy Variable:Index characters from start of Current Item: Organisation_0 (Output to Variable: Organisation)
   then Replace " " with "%20" in Variable: Organisation (Output to Variable: Organisation)
   then Log Variable: Organisation to the workflow history list
   If Variable: Organisation is not empty
      Go to Get X-RequestDigest
   else
      Go to End of Workflow

Stage: Get X-RequestDigest
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Call [%Workflow Context: Current Site URL%]_api/contextinfo HTTP Web Service with request
       (ResponseContent to Variable: ContextInfo
        |ResponseHeaders to responseheaders
        |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/GetContextWebInformation/FormDigestValue from Variable: ContextInfo (Output to Variable: X-RequestDigest )
   If Variable: X-RequestDigest is empty
      Go to End of Workflow
   else
      Go to Prepare and execute process owners web service call

Stage: Prepare and execute process owners web service call
   Build {...} Dictionary (Output to Variable: RequestHeader)
   then Set Variable:URLStart to _api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>
   then Set Variable:URLEnd to </Value></Eq></Where></Query></View>"}
   then Call [%Workflow Context: Current Site URL%][Variable: URLStart][Variable: Organisation][Variable: URLEnd] HTTP Web Service with request
      (ResponseContent to Variable: ProcessOwnerJSON
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   then Log Variable: responseCode to the workflow history list
   If Variable: responseCode equals OK
      Go to Obtain Userid
   else
      Go to End of Workflow

Stage: Obtain Userid
   Get d/results(0)/AssignedToId from Variable: ProcessOwnerJSON (Output to Variable: AssignedToID)
   then Call [%Workflow Context: Current Site URL%]_api/Web/GetUserByID([Variable: AssignedToID]) HTTP Web Service with request
      (ResponseContent to Variable: userDetail 
       |ResponseHeaders to responseheaders
       |ResponseStatusCode to Variable:ResponseCode )
   If Variable: responseCode equals OK
      Get d/LoginName from Variable: UserDetail (Output to Variable: AssignedToName)
      then Log The User to assign a task to is [%Variable: AssignedToName]
      then assign a task to Variable: AssignedToName (Task outcome to Variable:Outcome | Task ID to Variable: TaskID )
   Go to End of Workflow

Tidying up…

Just because we have our workflow working does not mean it is optimally set up. In the above workflow, there are a whole heap of areas where I have not done any error checking. Additionally, the logging I have done is poor and not overly helpful for someone troubleshooting later. So I will finish this post by making the workflow a bit more robust. I will not go through this step by step – instead I will paste the screenshots and summarise what I have done. Feel free to use these ideas and add your own good practices in the comments…

First up, I added a new stage at the start of the workflow for anything relating to initialisation activities. Right now, all it does is check out the current item (recall in part 3 we covered issues related to check in/out) and then set a Boolean workflow variable called EndWorkflow to No. You will see how I use this soon enough. I also added a new stage at the end of the workflow to tidy things up. I called it Clean up Workflow, and its only operation is to check the current item back in.

[screenshots]
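In the same text format as the workflow summary above, the two new stages look roughly like this (a sketch of what the screenshots show – the check-in comment is up to you):

Stage: Workflow Initialisation
   Check out item in Current Item
   then Set Variable: EndWorkflow to No
   Go to Get Organisation Name

Stage: Clean up Workflow
   Check in item in Current Item with comment Workflow complete
   Go to End of Workflow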

In the Get Organisation Name stage, I changed it so that any error condition logs to the history list and then sets the EndWorkflow variable to Yes. Then, in the Transition to stage section, I use the EndWorkflow variable to decide whether to move to the next stage or end the workflow by calling the Clean up Workflow stage that I created earlier. My logic here is that there can be any number of error conditions that we might check for, and it's easier to use a single variable to signify when to abort the workflow.

[screenshot]

In the Get X-RequestDigest stage, I have added additional error checking. I check that the HTTP response code from the contextinfo web service call is indeed 200 (OK), and then if it is, I also check that we successfully extracted the X-RequestDigest from the response. Once again I use the EndWorkflow variable to flag which stage to move to in the transition section.

[screenshot]

In the Prepare and execute process owners web service call stage, I also added more error checking – specifically with the AssignedToID variable. This variable is an integer and its default value is zero (0). If the value is still 0 after the web service call, it means that there was no process owner entry for the Organisation specified. If this happens, we need to handle it…

[screenshot]

Finally, we come to the Obtain Userid stage. Here we are checking both the HTTP code from the GetUserById web service call, as well as the login name that comes back via the AssignedToName variable. We assign the task to the user and then set the workflow status to "Completed workflow". (Remember that we checked out the current item in the Workflow Initialisation stage, so we can now update the workflow status without all that check-out crap that we hit in part 3.)

[screenshot]

Conclusion…

So there we have it. Twelve posts in and we have met the requirements for Megacorp. While there is still a heap of work to do in terms of customising the behaviour of the task itself, I am going to leave that to you!

Additionally, there are a lot of additional things we can do to make these workflows much more robust and easier to manage. To that end, I strongly urge you to check out Fabian Williams' blog and his brilliant set of articles on this topic, which take it much (much) further than I do here. He has written a ton of stuff, and it was his work in particular that inspired me to write this series. He also provided me with counsel and feedback on the series and I can't thank him enough.

Now that we have gotten to where I wanted to go, I'll write one more article to conclude the series – reflecting on what we have covered and its implications for organisations wanting to leverage out-of-the-box SharePoint workflow, as well as implications for all of you citizen developers out there.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 10

This entry is part 10 of 13 in the series Workflow

Hi there and welcome back to my series of articles that puts a real-world viewpoint on SharePoint 2013 workflow capabilities. This series is pitched more at Business Analysts, SharePoint Hackers and generally anyone who might be termed a citizen developer. It shows the highs and lows of out-of-the-box SharePoint Designer workflows, and hopefully helps organisations make an informed decision around whether to use what SharePoint provides or move to the 3rd party landscape.

By now you should be well aware of some of the useful new workflow capabilities such as stages, looping, support for calling web services and parsing the data via dictionary objects. You also now understand the basics of REST/oData and CAML. At the end of the last post, we just learnt that it is possible to embed CAML queries into REST/oData, which gets around the issue of not being able to filter lists via Managed metadata columns. We proved this could be done, but we did not actually try it with the actual CAML query that can filter managed metadata columns. It is now time to rectify this.

Building CAML queries

Now if you are a SharePoint developer worth your salt, you already know CAML, because there are mountains of documentation on this topic on MSDN as well as various blogs. But a useful shortcut for all you non-coders out there is to make use of a free tool called CAMLDesigner 2013. This tool, although unstable at times, is really easy to use, and in this section I will show you how I used it to create the CAML XML we need to filter the Process Owners list via the Organisation column.

After you have downloaded CAMLDesigner and successfully gotten it installed, follow these steps to build your query.

Step 1:

Start CAMLDesigner 2013 and on the home screen, click the Connections menu.

[screenshot]

Step 2:

In the connections screen that slides out from the right, enter http://megacorp/iso9001 into the top textbox, then click the SharePoint 2013 and Web Services buttons. Enter the credentials of a site administrator account and then click the Connect icon at the bottom. If you successfully connect to the site, CAMLDesigner will show the site in the left hand navigation.

[screenshots]

Step 3:

Click the arrow to the left of the Megacorp site and find the Process Owners list. Click it, and all of the fields in the list will be displayed as blue boxes below the These are the fields of the list section.

[screenshot]

Step 4:

Drag the Organisation column to the These are the selected fields section to the right. Then do the same for the Assigned To column. If you look closely at the second image, you will see that the CAML XML is already being built for you below.

[screenshots]

Step 5:

Now click on the Where menu item above the columns. Drag the Organisation column across to the These are the selected fields section. As you can see in the second image below, once dragged across, a textbox appears, along with a blue button showing an operator. Also take note of the CAML XML being built below – you can see that it has added a <Where></Where> section.

[screenshots]

Step 6:

In the textbox for the Organisation column you just dragged, type in one of the Megacorp organisations, e.g. Megacorp Burgers. Note how the XML changes…

[screenshot]

Step 7:

Click the Execute button (the Play icon across the top). The CAML query will be run, and any matching data will be returned. In the example below, you can see that the user Teresa Culmsee is the process owner for Megacorp Burgers.

[screenshots]

Step 8:

Copy the XML from the window to the clipboard. We now have the XML we need to add to the REST web service call. Exit CAMLDesigner 2013.

[screenshot]

Building the REST query…

Armed with your newly minted CAML XML as shown below, we need to return to Fiddler and draft it into the final URL.

<ViewFields>
   <FieldRef Name='Organisation' />
   <FieldRef Name='AssignedTo' />
</ViewFields>
<Where>
   <Eq>
      <FieldRef Name='Organisation' />
      <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>
   </Eq>
</Where>

As a reminder, the URL that we had working in the previous post looked like this:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}

Let’s now munge them together by stripping the carriage returns from the XML and putting it between the <Query> and </Query> sections. This gives us the following large and scary looking URL.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields> <FieldRef Name='Organisation' /> <FieldRef Name='AssignedTo' /> </ViewFields> <Where> <Eq> <FieldRef Name='Organisation' /> <Value Type='TaxonomyFieldType'>Megacorp Burgers</Value> </Eq> </Where></Query></View>"}

Are we done? Unfortunately not. If you paste this into Fiddler composer, Fiddler will get really upset and display a red warning in the URL textbox…

[screenshot]

If, despite Fiddler's warning, you try to execute this request, you will get a curt response from SharePoint in the form of an HTTP/1.1 400 Bad Request response with the message HTTP Error 400. The request is badly formed.

The fact that Fiddler is complaining about this URL before it has even been submitted to SharePoint allows us to work out the issue via trial and error. If you cut out a chunk of the URL, Fiddler is okay with it. For example, this trimmed URL is considered acceptable by Fiddler:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query>

But adding this little bit makes it go red again.

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields> <FieldRef Name='Organisation' />

Any ideas what the issue could be? Well, it turns out that the use of spaces was the issue. I removed all the spaces from the URL above, and where I could not, I percent-encoded them as %20. Thus the above URL turned into the URL below, and Fiddler accepted it:

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation' />

So returning to our original big URL, it now looks like this (and Fiddler is no longer showing me a red angry textbox):

http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><ViewFields><FieldRef%20Name='Organisation'/><FieldRef%20Name='AssignedTo'/></ViewFields><Where><Eq><FieldRef%20Name='Organisation'/><Value%20Type='TaxonomyFieldType'>Megacorp%20Burgers</Value></Eq></Where></Query></View>"}

[screenshot]
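As an aside, if you ever need to generate URLs like this outside of SPD or Fiddler, it is less error-prone to let a library do the percent-encoding for you. Here is a minimal Python sketch using the list and field names from this series (whether your farm accepts a fully percent-encoded @v1 parameter is worth confirming with Fiddler first):

import json
import urllib.parse

# The CAML query produced by CAMLDesigner 2013, with line breaks stripped
view_xml = ("<View><Query><ViewFields><FieldRef Name='Organisation'/>"
            "<FieldRef Name='AssignedTo'/></ViewFields><Where><Eq>"
            "<FieldRef Name='Organisation'/>"
            "<Value Type='TaxonomyFieldType'>Megacorp Burgers</Value>"
            "</Eq></Where></Query></View>")

base = "http://megacorp/iso9001/_api/web/Lists/GetByTitle('Process%20Owners')/GetItems(query=@v1)"

# json.dumps wraps the CAML in the {"ViewXml": "..."} envelope, and quote()
# percent-encodes the spaces, quotes and braces so the URL is legal
url = base + "?@v1=" + urllib.parse.quote(json.dumps({"ViewXml": view_xml}))
print(url)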

So let's see what happens. We click the execute button. Wohoo! It works! Below you can see a single matching entry, and it appears to be the entry from CAMLDesigner 2013. We can't tell for sure, because the Assigned To column is returned as AssignedToId and we have to call another web service to return the actual username. We covered this issue and the web service to call extensively in part 8, but to quickly recap, we need to pass the value of AssignedToId to the http://megacorp/iso9001/_api/Web/GetUserById() web service. In this case, http://megacorp/iso9001/_api/Web/GetUserById(8), because the value of AssignedToId is 8.

The images below illustrate. The first one shows the Process Owner for Megacorp Burgers. Note that the value of AssignedToId is 8. The second image shows what happens when 8 is passed to the GetUserById web service call. Check the Title and LoginName fields.

[screenshots]

Conclusion

Okay, so now we have our web service URLs all sorted. In the next post we are going to modify the existing workflow. Right now it has four stages:

  • Stage 1: Obtain Term GUID (extracts the GUID of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get Process Owners (makes a REST web service call to enumerate the Process Owners List and if successful, moves to stage 3)
  • Stage 3: Find Matching Process Owner (Loops through the process owners and finds the matching organisation from stage 1. For the match, grab the value of AssignedToID and if successful, move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

We will change it to the following stages:

  • Stage 1: Obtain Term Name (extracts the name of the Organisation column from the current workflow item in the Documents library and if successful, moves to stage 2)
  • Stage 2: Get the X-RequestDigest (We will grab the request digest we need to do our HTTP POST to query the Process Owners list. If successful move to stage 3)
  • Stage 3: Get Process Owner (makes the REST web service call to grab the Process Owners for the organisation specified by the Term name from stage 1. Grab the value of AssignedToID and move to stage 4)
  • Stage 4: Obtain UserID (Take the value of AssignedToID and make a REST web service call to return the windows logon name for the user specified by AssignedToID and assign a task to this user)

One final note: after the epic journey we have taken, you might think that doing this in a SharePoint Designer workflow should be a walk in the park. Unfortunately this is not quite the case and, as you will see, there are a couple more hurdles to cross.

Until then, thanks for reading…

Paul Culmsee


www.hereticsguidebooks.com


Trials or tribulation? Inside SharePoint 2013 workflows–Part 3

This entry is part 3 of 13 in the series Workflow

Hi and welcome to part 3 of my series of articles that take a peek under the hood of SharePoint 2013 workflows, while trying to answer the question of whether SharePoint 2013 workflows can enable citizen developers to go forth, solve business problems and catapult organisations to success. In part 1, I introduced you to Megacorp Inc and their need for a controlled documents approval workflow. In part 2, we created a basic SharePoint 2013 workflow that implements the logic outlined in the picture below. The workflow is not yet finished, but we did enough to be able to run it and learn from it, which brings us to this post.

Now I will tell you straight up that our first test of this workflow is not going to work. The entire point of this series of articles is to show you *why* things do not always work and what you need to do about it. As we progress, I hope that you will learn quite a bit about the operation of workflows in SharePoint 2013, as well as developing and troubleshooting them. After all, we all know that the best citizens are informed citizens!

[diagram]

So let’s get on with testing this particular workflow. We published it to the Documents library in part 2, so now we trigger it by selecting one of the files in the library. In this example, I will select one of the documents tagged as from Megacorp GM Foods. So using the filtering feature provided by metadata navigation, we just show the four documents owned by the Megacorp GM Foods division.

[screenshot]

In this example, we will update the document called Burger Additives Standard. The workflow has not yet been set to automatically start, so we will need to manually trigger it ourselves. To do so, click the ellipsis next to the Burger Additives Standard document, and then click the ellipsis in the bottom right of the properties/preview window. This will show a second drop-down menu. Choose Workflows from this menu as shown below.

[screenshot]

Underneath the Start a New Workflow text, you will see the workflow we published in part 2 called Process Owner Approval. Clicking it will start the workflow on this document.

[screenshot]

First sign of trouble…

After starting the workflow, the browser will redirect you back to the Documents library. At this point, we see our first sign of trouble. When a workflow is published on a list or library, a column is added that is used to track workflow progress. In this case, we started our workflow, but there is nothing displayed in the workflow status column and the workflow does not appear to run. Hmm… what gives?

[screenshot]

When a workflow does not behave as you intended, the easiest way to troubleshoot is to use the workflow status page. As it happens, you have already seen this page, because it is the same page where we started the workflow. So once again, we click on the ellipsis next to the burger additives standard document, click the ellipsis in the properties window and choose Workflows from the drop down menu.

Well look at that… it says the workflow is indeed started…

[screenshot]

Clicking on the running workflow, we can see more detail, which I have shown below. This screen also says that the workflow is started, and nothing appears amiss. But then there is the little blue information symbol next to the Internal Status label. Hovering over this icon displays yet more information. This time, we see an error that would stump many – the sort of error that an information worker would have to call up the helpdesk for.

Retrying last request. Next attempt scheduled in less than one minute. Details of last request: HTTP InternalServerError to http://megacorp/iso9001/_vti_bin/client.svc/web/lists/getbyid(guid'a64bb9ec-8b00-407c-a7d9-7e8e6ef3e647')/Items(7) Correlation Id: de95e312-e24b-42c3-9369-5bae68040219 Instance Id: 9ed3a11d-f665-4512-9b17-78850356c846

[screenshots]

Okaaaay, so that error message is about as useful as Windows Vista. It shows an internal server error and a correlation ID. Furthermore, if you wait another minute or so and then refresh the workflow status screen, its internal status is no longer Started, but Cancelled. You can see it for yourself below…

[screenshot]

Once again, clicking the little blue info button gives us more detail. Unfortunately, the detail consists of an even scarier-looking message than the previous one. This one looks nasty enough to freak out some SharePoint admins too. Check it out – it is pretty much useless in terms of conveying any information of value.

RequestorId: 8ad4a017-7e6f-0d0f-35d2-81c56a05b37c. Details: System.ApplicationException: HTTP 500 {"Transfer-Encoding":["chunked"],"X-SharePointHealthScore":["0"],"SPClientServiceRequestDuration":["211"],"SPRequestGuid":["3d7950b2-3d9d-47d9-a5fb-588bf02b9551"],"request-id":["3d7950b2-3d9d-47d9-a5fb-588bf02b9551"],"X-FRAME-OPTIONS":["SAMEORIGIN"],"MicrosoftSharePointTeamServices":["15.0.0.4420"],"X-Content-Type-Options":["nosniff"],"X-MS-InvokeApp":["1; RequireReadOnly"],"Cache-Control":["max-age=0, private"],"Date":["Sun, 17 Nov 2013 22:56:19 GMT"],"Server":["Microsoft-IIS\/8.0"],"X-AspNet-Version":["4.0.30319"],"X-Powered-By":["ASP.NET"]} at Microsoft.Activities.Hosting.Runtime.Subroutine.SubroutineChild.Execute(CodeActivityContext context) at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager) at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

[screenshot]

So what the hell is going on here?

Unfortunately, both of the error messages above are virtually useless for most people, and the only way to dig deeper is to delve into the SharePoint ULS logs. Of course, if you use Office 365 you don't have that luxury, because the ULS logs are not available to you – so your best option for figuring out errors like this is trial and error. For the rest of us living in on-premises or IaaS land, a quick search of the ULS logs for the file name "Burger additives standards" reveals the issue and the root cause. Check out the errors reported below – it should be quite clear…

SharePoint Foundation             General                           8kh7    High        The file "http://megacorp/iso9001/Documents/Burger additives standards.pdf" is not checked out.  You must first check out this document before making changes.

SharePoint Foundation             General                           aix9j    High        SPRequest.AddOrUpdateItem: UserPrincipalName=i:0).w|s-1-5-21-1480876320-1123302732-2276846122-500, AppPrincipalName=I:0I.T|MS.SP.EXT|2CC54B18-3F9D-43A3-BE55-B3A81C045562@B9A66C21-F39D-42DB-B6CD-DE520B6C1C91 ,bstrUrl=http://megacorp/iso9001 ,bstrListName={A64BB9EC-8B00-407C-A7D9-7E8E6EF3E647} , bAdd=False , bSystemUpdate=False , bPreserveItemVersion=False , bPreserveItemUIVersion=False , bUpdateNoVersion=False ,pbstrNewDocId=00000000-0000-0000-0000-000000000000 , bHasNewDocId=False , bstrVersion=8 , bCheckOut=False ,bCheckin=False , bUnRestrictedUpdateInProgress=False , bMigration=False , bPublish=False , bstrFileName=<null>

SharePoint Foundation             CSOM                              ahjq1    High        Exception occured in scope Microsoft.SharePoint.SPListItem.UpdateWithFieldValues. Exception=Microsoft.SharePoint.SPException: The file "http://megacorp/iso9001/Documents/Burger additives standards.pdf" is not checked out.  You must first check out this document before making changes. —> System.Runtime.InteropServices.COMException: The file "http://megacorp/iso9001/Documents/Burger additives standards.pdf" is not checked out.  You must first check out this document before making changes.     at Microsoft.SharePoint.Library.SPRequestInternalClass.AddOrUpdateItem(String bstrUrl, String bstrListName, Boolean bAdd, Boolean bSystemUpdate, Boolean bPreserveItemVersion, Boolean bPreserveItemUIVersion, Boolean bUpdateNoVersion, Int32& plID, String& pbstrGuid, Guid pbstrNewDocId, Boolean bHasNewDo…    3b84d615-e006-4457-811a-0af089963216

So in case it is not clear from the above messages, we have an error stating that the file Burger additives standards is not checked out and that to make changes to the document, we need to check it out first. This raises several questions:

  • 1. Why is the document library configured to require check-out?
  • 2. Why is the workflow trying to change the document anyway? The workflow we created in part 2 performs no action on the Documents library.
  • 3. Why were the error messages so useless (which really hurts in Office 365 scenarios)?
  • 4. How can we fix this problem?

Let’s examine each of these in turn…

1. Why is the document library configured to require check-out?

The first question is really easy to answer. When you create a site using the Document Center site template, the document library versioning settings enable the Require Check Out option as shown below. Therefore no changes can be made to this document unless the user making the change checks it out.

image

2. Why is the workflow trying to change the document anyway?

The second question is also easy to answer, but the answer is somewhat more complicated. When a workflow is associated with a list or content type, it adds a column to track the status of the workflow. In SharePoint 2013, the default behaviour is to update this column with whatever stage of the workflow is currently being executed. Therefore, as soon as the workflow runs, and before it has run any of its actions, it attempts to write its stage to the document that triggered it. So looking at the images below, if things were working we should see the string "Stage 1" in the Process Owner Approval column for the document Burger additives standards. But since the document requires check-out to make a change, SharePoint prevents this from happening by design.

[screenshots]

Some readers who are experienced with SharePoint Designer workflow might be thinking "easy… just use the Check Out Item workflow action before you do anything else." Unfortunately this won't work, because the issue is triggered when the stage info is written to the workflow status column, which happens before any actions run.

3. Why were the error messages so useless (which really hurts in Office365 scenarios)

The next question is why on earth the true error wasn't reported back to the workflow. After all, this was a clearly identified error in SharePoint, yet all we got were error messages that did not state the problem at all. Proper error reporting would have saved the effort of delving deep into the ULS logs, which is not even possible in Office 365.

The answer to this question is a little more complex and relates to how Workflow Manager and SharePoint interact with each other. Without getting into detail, the gist of it is that Workflow Manager uses REST web service calls to do all of its operations on SharePoint content. Each and every workflow action (like Log to History List) uses REST to talk to SharePoint to get the work done. While a detailed discussion of REST would take us too far afield, I have drawn you possibly the dodgiest diagram of this process ever to help you understand it. The main point worth mentioning is that REST embraces the key protocols of the web, so a successful or failed request is conveyed via an HTTP status code. If you have ever experienced your browser giving you a 404 page not found, then you know what I mean when I speak of HTTP status codes.

[diagram]

Now look again at the gory detail of the error that was logged by the workflow. We see the following string amongst all the other stuff: "Details: System.ApplicationException: HTTP 500". So what happened is that when the workflow tried to update the document with its stage, the check-out requirement resulted in SharePoint sending an HTTP error 500 (server error) back to Workflow Manager. Unfortunately for us, it did not send back the underlying cause of the error. Instead, the details of the response logged by the workflow contain all sorts of information about the HTTP request, but no hint of the underlying error. Sucks eh?

4. How can we fix this problem?

The final question is what we can do about this issue. There are two relatively easy ways, but before I get to them, let me show you what happens if you check out the document and then attempt to run the workflow. While you might think this would make the problem go away, instead we get a friendly dialog box telling us to check the document back in before we attempt to start a workflow. Given that I couldn't run the workflow because the item needed to be checked out, I had to laugh.

[screenshot]

Anyways, the two options you have are to disable require check-out on the Documents library, or to change the default behaviour of the workflow so that it does not write the stage back to the item that triggered it. The first one is pretty easy – in the Versioning Settings of the Documents library, we change the Require documents to be checked out before they can be edited? option from Yes to No.

[screenshot]

The problem with the first option is that requiring check-out for controlled documents is likely to be a key requirement, so it is not an option in all circumstances. So the other option is to change the behaviour of the workflow itself so that it does not write its stage information back to the Documents library. Luckily for us, this is actually really easy to do. In SharePoint Designer 2013, there is an option to disable the updating of stage information. In the workflow settings screen, look for the option called Automatically update the workflow status to the current stage name and untick it.

[screenshot]

Now, unticking this box will get us past the error, but the problem is that we no longer have an easy way to track workflows, since the status updates can be used in views on the Documents library to see which workflows are at a particular stage. For a complex workflow, or one that will be running on many items, not being able to see the status of the workflow would make life difficult. One workaround is to add a Check Out Item workflow action to the start of the workflow, and then use the Set Workflow Status action to update the status column on the current item as the workflow progresses. This gives us back the ability to track workflow behaviour in the Documents library, but it means a new version will be added to the document each time the status updates. Another workaround is to log workflow status to some other list altogether (using the Create List Item action) and use that list for tracking and reporting instead.
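In rough text form, the first workaround looks like this (a sketch only – the status strings are whatever makes sense for the views you want to build):

Stage: Stage 1
   Check out item in Current Item
   then Set Workflow Status to Stage 1 in progress
   ...the actual work of the stage...
   then Set Workflow Status to Stage 1 complete
   then Check in item in Current Item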

Conclusion

I think that the separation of SharePoint and Workflow Manager into separate products is ultimately a good thing. It's just a pity that one of the legacies of this change is an issue like the one we covered in this post, where a relatively simple problem was exacerbated by poor reporting of errors between Workflow Manager and SharePoint. I guess this is part of the bigger price we pay – that of increased technical complexity via more moving parts. Hopefully an issue like this can be addressed in a future service pack or update, because it is this sort of stuff that can cause people to lose confidence and jump to a 3rd party solution perhaps prematurely. So if this issue was enough to make you think "pah – let's go third party", I have news for you: as you progress through this series, we are going to deal with more complex issues than this one.

Anyway, the point is that we have identified the cause of this particular issue and gotten past it, so we should be able to continue testing the workflow and marvel at our awesomeness. So in the next article, we will continue with testing our workflow and see what else SharePoint throws at us.

Thanks for reading

Paul Culmsee


www.hereticsguidebooks.com

 


When PerformancePoint Designer won’t start after applying the SP2013 October CU


Hi all

I am reporting this one because the resolution was harder to find than it should be. After applying the October CU for SP2013 and trying to launch PerformancePoint Dashboard Designer, you will receive an error message saying the application cannot be started and to contact the application vendor.

[screenshot]

If you click the button to see the details, you will see the following error:

* Activation of http://blah/_layouts/15/ppsma/1033/designer.application resulted in exception. Following failure messages were detected:
+ Reference in the deployment does not match the identity defined in the application manifest.

Now if you google "Reference in the deployment does not match the identity defined in the application manifest." you will get hits related to Microsoft's ClickOnce technology, which is used to install Dashboard Designer. The issue is that bits of Dashboard Designer were updated by the October cumulative update, but they were not packaged up properly, and obviously this got missed during the testing process for the CU.

Microsoft published an article on November 19 outlining this issue and a fix, but they never specified the error message in the post, so the article remains frustratingly hard to find. The fix is to back up two files on your servers and replace them with the versions available from the article. The files in question are:

  • C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\ppsma\1033\Designer.Application
  • C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\ppsma\1033\DesignerInstall\Designer.exe.manifest

Now watch out with this fix, as some people have noted that the replacement files do not work in all circumstances. Find out more in the source article…

Hope this helps someone. This is not the error you want to get late on a Friday…

 

Thanks for reading

Paul Culmsee

 

 

 

P.S. Full error details below.

SOURCES
Deployment url      : http://blah/_layouts/15/ppsma/1033/designer.application
    Server          : Microsoft-IIS/8.0
    X-Powered-By    : ASP.NET
Application url     : http://blah/_layouts/15/ppsma/1033/DesignerInstall/Designer.exe.manifest
    Server          : Microsoft-IIS/8.0
    X-Powered-By    : ASP.NET

IDENTITIES
Deployment Identity : DashboardDesigner.exe(en-us), Version=15.0.4549.1000, Culture=neutral, PublicKeyToken=f7b232fd7284a56a, processorArchitecture=msil

APPLICATION SUMMARY
* Online only application.
* Trust url parameter is set.

ERROR SUMMARY
Below is a summary of the errors, details of these errors are listed later in the log.
* Activation of http://blah/_layouts/15/ppsma/1033/designer.application resulted in exception. Following failure messages were detected:
+ Reference in the deployment does not match the identity defined in the application manifest.

OPERATION PROGRESS STATUS
* [29/11/2013 4:27:37 PM] : Activation of http://blah/_layouts/15/ppsma/1033/designer.application has started.
* [29/11/2013 4:27:37 PM] : Processing of deployment manifest has successfully completed.
* [29/11/2013 4:27:37 PM] : Installation of the application has started.

ERROR DETAILS
Following errors were detected during this operation.
* [29/11/2013 4:27:37 PM] System.Deployment.Application.InvalidDeploymentException (SubscriptionSemanticValidation)
– Reference in the deployment does not match the identity defined in the application manifest.
– Source: System.Deployment
– Stack trace:
at System.Deployment.Application.DownloadManager.DownloadApplicationManifest(AssemblyManifest deploymentManifest, String targetDir, Uri deploymentUri, IDownloadNotification notification, DownloadOptions options, Uri& appSourceUri, String& appManifestPath)
at System.Deployment.Application.DownloadManager.DownloadApplicationManifest(AssemblyManifest deploymentManifest, String targetDir, Uri deploymentUri, Uri& appSourceUri, String& appManifestPath)
at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp)
at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc)
at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

COMPONENT STORE TRANSACTION DETAILS
No transaction information is available.


Another cause of “The server returned a non-specific error when trying to get the data view from the data source”


Hiya

Here is me in tech/troubleshooting mode, so you business types who read this blog can skip this post :)

The Issue

There are often times when it is very useful to use a SOAP web service call to a SharePoint 2010 list when binding it to a Data View Web Part. Among other things, being able to specify a particular view of a list when calling the GetListItems method is really handy. But today I came across an issue that manifested itself in two different ways. I had a test SharePoint 2010 farm with SP2, and I was working with SharePoint Designer on the server itself. I created two claims-based web applications that use NTLM. One was on port 80 and the other on port 82. Each had a single site collection using the Team Site template, so they are about as stock as you can get.

In each site I then added a single item to the built-in links list that you get with the Team Site template…

[screenshot]

Then using SPD, I created a Data Source in each site to the above list using SOAP as follows:

[screenshot]

  • Service Description Location: http://site/_vti_bin/lists.asmx?WSDL
  • Port: ListsSoap
  • Operation: GetListItems
  • listName: Links
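Under the covers, this data source sends a SOAP request that looks roughly like the following (trimmed to the essentials – optional elements such as viewName, query and rowLimit are omitted):

POST /_vti_bin/lists.asmx HTTP/1.1
Host: site
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://schemas.microsoft.com/sharepoint/soap/GetListItems"

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">
      <listName>Links</listName>
    </GetListItems>
  </soap:Body>
</soap:Envelope>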

 

 

 

 

Okay… fairly straightforward so far. Now when I go to use this data source in a page, look what happens on the port 80 site: I can add the data source, but I get an item count of zero, even though I know there is an item in there.

[screenshots]

Now let's try the same thing on the web app listening on port 82. This time we get a common message that can mean many things – the very annoying "The server returned a non-specific error when trying to get data from the data source."

[screenshot]

For what it's worth, if you create a site that uses classic mode authentication, things will work just fine. So what gives with these two claims authentication web applications?

Looking in the IIS logs for the offending web applications when attempting to access the data source, we get a hint as to what is going on. Below are 4 entries from the IIS logs.

   1:  POST /_vti_bin/lists.asmx - 82 - fe80::416a:c7c4:4b41:d558%10 - 401 0 0 0
   2:  POST /_vti_bin/lists.asmx - 82 - fe80::416a:c7c4:4b41:d558%10 - 401 1 2148074254 0
   3:  POST /_vti_bin/lists.asmx - 82 0#.w|nt+authority\iusr fe80::416a:c7c4:4b41:d558%10 - 401 0 0 296
   4:  POST /_vti_bin/webpartpages.asmx - 82 0#.w|ad\paul fe80::416a:c7c4:4b41:d558%10 Mozilla/4.0+(compatible;+MS+FrontPage+14.0) 500 0 0 328

First up, take a look at line 2 (I will come back to line 1 at the end of this post). It shows an attempt to access lists.asmx that received an error code of 401 1 2148074254. Breaking those up: a response code of 401 means the request was unauthorised, the substatus of 1 means the authorisation problem was an access denied due to user credentials, and the large third number is a Win32 status code – 2148074254 means that no credentials were supplied. This sort of entry is actually very common in IIS logs because of the way Windows authentication works in IIS: a browser will first attempt an anonymous request before reattempting the request using other authentication methods.
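
As an aside, those big Win32 status numbers are much easier to look up once converted to hex – a Python one-liner does it:

# Convert the sc-win32-status field from the IIS log to hex for lookup
print(hex(2148074254))  # -> 0x8009030e, SEC_E_NO_CREDENTIALS (no credentials supplied)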

Take a look at line 3 in particular. It shows the reattempt, this time using some credentials… the NT AUTHORITY\IUSR account. But this new attempt has failed too, as it received a 401 0 0 response. This in turn has caused SharePoint Designer's attempt to add the web part to fail also (note line 4 with an internal server error code of 500).

Now if you are not aware, NT AUTHORITY\IUSR is a built-in account that is used by default in IIS7 when anonymous authentication is switched on. So this begs the question… why would the anonymous account be attempting to access the list web service we have been using? After all, SharePoint by default does not have anonymous access enabled, and I am performing the above steps logged in as an administrative account.

The answer is twofold. First up, go back to the SOAP data source in SPD and have a look at the “Login” tab of the data source properties. You will see that the default setting is “Don't attempt to authenticate”. In this situation, SharePoint has to make use of impersonation to service the request, since it has no credentials to access this data source. Impersonation enables the request to be serviced by a different account – in this case the NT AUTHORITY\IUSR account. Problem is that it has insufficient permissions somewhere.

image

The second part to the answer is an ASP.NET configuration setting in IIS…

The fixes…

So what can we do about it? Well, if you look online, you will see references to changing the aspnet:AllowAnonymousImpersonation option in the web.config for your web application from “true” to “false”:

<appSettings>
  <add key="aspnet:AllowAnonymousImpersonation" value="false" />
</appSettings>

If you set the value to false, IIS alters its behaviour. Instead of using the anonymous account for impersonation, it will use the application pool account for this web application. In this case, creating a DVWP based on a SOAP data source will work fine. The logs now show that the application pool account is impersonated:

   1:  POST /_vti_bin/lists.asmx - 80 - fe80::416a:c7c4:4b41:d558%10 - 401 0 0 15
   2:  POST /_vti_bin/lists.asmx - 80 - fe80::416a:c7c4:4b41:d558%10 - 401 1 2148074254 0
   3:  POST /_vti_bin/lists.asmx - 80 0#.w|ad\sp_webapp-pool fe80::416a:c7c4:4b41:d558%10 - 200 0 0 343
   4:  POST /_vti_bin/webpartpages.asmx - 80 0#.w|ad\paul fe80::416a:c7c4:4b41:d558%10 Mozilla/4.0+(compatible;+MS+FrontPage+14.0) 200 0 0 1312

Notice this time line 3 shows the account ad\sp_webapp-pool being used and, being the app pool, it has full permissions to SharePoint. Accordingly, line 4 now shows a successful call to webpartpages.asmx and SharePoint Designer successfully displays the content.

So is that the answer then? Just change aspnet:AllowAnonymousImpersonation from “true” to “false”? Not necessarily. It is critical that you understand why that entry was added to IIS – don't just go and change it because it gets you out of a pickle.

Microsoft has a KB article describing the setting and why it was put in. Let's say you have a web part that does something super important and is needed for just one site. So instead of deploying it to the GAC, you opt to deploy it to the bin folder of the web application and give it a partial trust level instead. But then when that web part runs, it has more permissions in SharePoint than what you gave it, and your security guys freak out, fearing this is an avenue for a privilege escalation attack. In this scenario, the application pool account is being used as the impersonation account, and in SharePoint land the app pool account usually has significantly more permissions than regular user accounts.

As stated in the above article “This issue occurs because of an error in the ASP.NET 2.0 authentication component. The error causes the partially trusted Web parts to impersonate the application pool account. Therefore, the Web parts have full permission to access the SharePoint site”.

So in changing this setting, you are reintroducing the risk of privilege escalation: any time impersonation is required, the app pool account will be used, which has the potential to make requests and execute code with greater privilege than the currently logged-in user. Another alternative is to add credentials to the secure store for the data source and use them in the login tab. That way, when requests are made to access the web service, these credentials will be used. If you are already using an unattended data access account for, say, PerformancePoint or Excel Services, you might consider that account here as well. Below is an example of what configuring the data connection using a secure store entry would look like.

image

By the way… whatever you do, do not choose to hard-code a user account and password, because data connection details are stored in clear text XML in each site collection.

A couple of final things…

When I was playing around with this, I wondered how classic mode authentication behaved. Below are the logs…

POST /_vti_bin/lists.asmx - 81 - fe80::416a:c7c4:4b41:d558%10 - 401 2 5 0
POST /_vti_bin/lists.asmx - 81 - fe80::416a:c7c4:4b41:d558%10 - 401 1 2148074254 0
POST /_vti_bin/lists.asmx - 81 ad\paul fe80::416a:c7c4:4b41:d558%10 - 200 0 0 406
POST /_vti_bin/webpartpages.asmx - 81 ad\paul fe80::416a:c7c4:4b41:d558%10 Mozilla/4.0+(compatible;+MS+FrontPage+14.0) 200 0 0 1249

Note how this time there was no need for impersonation, because SharePoint was using my Windows credentials and the SharePoint and Windows identities are the same thing. In claims mode they are not – and in fact in claims mode you might have users who do not use Windows authentication at all (such as a forms auth user who has authenticated via the SQL membership provider). In that case they will have no Windows identity and impersonation will need to be used as described above.

Another thing that bugged me was the 401 0 0 error code from the IIS logs. If you have ever attempted to use IIS request tracing to track 401 errors, it will not let you trace 401.0 errors, because a substatus of 0 for a 401 doesn't officially exist. So I decided to enable IIS request tracing for all 400-series errors to see what was causing that 401.0 error. Fairly quickly I found that the culprit was the SPRequestModule, which “provides most of the additional configuration and processing needed for SharePoint pages” (thanks Josh). Whoever wrote that module decided that it was okay to issue the mythical 401.0 errors.

  • 10/20/2013 06:46:33.85     w3wp.exe (0x14AC)                           0x1534    SharePoint Foundation             Claims Authentication             ftc8    Verbose     Access Denied: Authentication is required.
  • ModuleName: SPRequestModule
  • Notification: 2
  • HttpStatus: 401
  • HttpReason: Unauthorized
  • HttpSubStatus: 0
  • ErrorCode: 0
  • ConfigExceptionInfo: Notification AUTHENTICATE_REQUEST
  • ErrorCode: The operation completed successfully. (0x0)

Thanks for reading…

Paul Culmsee

www.cleverworkarounds.com


How to filter on a Managed Metadata column via REST in SharePoint 2013


Pardon the pun, but I just had a ‘clever workaround’ moment with SharePoint's OData/REST implementation when it comes to filtering list items based on taxonomy (managed metadata) columns. Now I do not consider myself a developer, so this article is probably a little verbose for some readers, but it should be helpful to power users or IT pros.

Here is an example term set called FilterDemo. You can see two levels of hierarchy.

image

Take the scenario of a custom list (called TestFilter) with a managed metadata column (called FilterDemo) that links to the above term set. Let’s also assume there are 3 entries in it as follows:

Title    FilterDemo
A        A1
B        B3
C        Category A

Using the wonders of the REST API, I am able to get access to all items in the list via the following URL:

http://site/_api/web/lists/getbytitle('TestFilter')/Items

If you execute that, and IE has “feed reading view” turned off, you will get back lots of scary-looking XML. If you collapse it though, you will see three entry tags in it – one for each item in the TestFilter list.

<?xml version="1.0" encoding="utf-8" ?>
<feed xml:base="http://site/_api/" xmlns="http://www.w3.org/2005/Atom" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:georss="http://www.georss.org/georss" xmlns:gml="http://www.opengis.net/gml">
  <id>a0dd3649-27b9-4d8d-90f8-243e9622b158</id>
  <title />
  <updated>2013-09-23T01:44:35Z</updated>
+ <entry m:etag='"2"'>
+ <entry m:etag='"3"'>
+ <entry m:etag='"1"'>
</feed>

Using more wonders of REST (and OData), I can change the URL to filter my results so that I only get matching items back. For example, here I am filtering on items where the Title field equals “A”:

http://site/_api/web/lists/getbytitle('TestFilter')/Items/?$filter=Title eq 'A'

Now we get back just the one entry matching that criteria…

<?xml version="1.0" encoding="utf-8" ?>
<feed xml:base="http://site/_api/" xmlns="http://www.w3.org/2005/Atom" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:georss="http://www.georss.org/georss" xmlns:gml="http://www.opengis.net/gml">
  <id>9dad763d-743f-4ffb-b26b-e53c0f6e1f7e</id>
  <title />
  <updated>2013-09-23T02:19:02Z</updated>
+ <entry m:etag='"2"'>
</feed>
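
As an aside, you don't need a browser for any of this – any HTTP client will do. Here is a minimal sketch of the same two GET requests, assuming Python with the requests and requests_ntlm libraries (the site URL and credentials are placeholders), and asking for JSON via the Accept header to avoid the scary XML:

# The same two REST GETs outside the browser - a sketch assuming Python
# with 'requests' and 'requests_ntlm'; URL and credentials are placeholders.
import requests
from requests_ntlm import HttpNtlmAuth

SITE = "http://site"  # placeholder
auth = HttpNtlmAuth("AD\\paul", "password")  # placeholder credentials
headers = {"Accept": "application/json;odata=verbose"}  # JSON instead of Atom XML

# All items in the list...
r = requests.get(SITE + "/_api/web/lists/getbytitle('TestFilter')/Items",
                 auth=auth, headers=headers)
print(len(r.json()["d"]["results"]))  # -> 3

# ...and only the items where Title equals 'A'. Passing the filter via
# params means requests URL-encodes the spaces for us.
r = requests.get(SITE + "/_api/web/lists/getbytitle('TestFilter')/Items",
                 params={"$filter": "Title eq 'A'"},
                 auth=auth, headers=headers)
print(len(r.json()["d"]["results"]))  # -> 1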

Okay, so there is nothing earth-shattering about what I just did above, and it's well documented in various places. But look what happens when I try to filter items in the list based on the FilterDemo column, which is managed metadata based…

http://site/_api/web/lists/getbytitle('TestFilter')/Items?$filter=FilterDemo eq 'A1'

Boom! The browser returns an error. If I do the same thing using Fiddler to look at the trace, it reports an HTTP/1.1 400 Bad Request error.

So I start digging and come across articles from Phil Harding and Serge Luca informing me that taxonomy columns are unsupported via REST. I got my hopes up when I came across an Andrew Connell article on filtering lookup fields – since behind the scenes a taxonomy field is actually a lookup field – but the comments section seemed to confirm that this wasn't doable. All seemed lost…

But from reading MSDN's REST articles, I had a vague recollection that CAML queries could be submitted via REST. I knew that using CAML it was indeed possible to filter taxonomy columns. I proved it using CAML Designer 2013, connecting to the TestFilter list and filtering it successfully using the following XML…

<Where>
   <Eq>
      <FieldRef Name='FilterDemo' />
      <Value Type='TaxonomyFieldType'>A1</Value>
   </Eq>
</Where>

So, armed with this knowledge, I came across an MSDN forum thread where a tantalising clue was offered. Christophe Humbert asked whether CAML queries could be done via the REST API, and Erik C. Jordan provided this nugget of wisdom:

I was able to get the following to work:

POST https://<site>/_api/web/Lists/GetByTitle('[list name]')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query>[other CAML query elements]</Query></View>"}

Editor's note: I will be a little verbose at this point in case you are not a developer or overly familiar with REST.

This approach looked like exactly what I needed and I thought it was worth a shot. But since the remedy is an HTTP POST rather than a GET, I couldn't do it with Internet Explorer, so I loaded up Fiddler and used the Composer function. I crafted the following POST with an empty CAML query as a test…

http://site/_api/Web/lists/getByTitle('TestFilter')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query></Query></View>"}

image

And this is the response I got…

HTTP/1.1 411 Length Required

A quick bit of googling and I realised that some HTTP requests require a Content-Length field in the HTTP header. The standard states that “any Content-Length greater than or equal to zero is a valid value”, so I tried a length of zero, as shown below:

image

And this time I get the response:

HTTP/1.1 403 FORBIDDEN (The security validation for this page is invalid and might be corrupted. Please use your web browser’s Back button to try your operation again).

Another quick bit of googling and I discovered that I was missing another required HTTP header in my POST request. This one is called X-RequestDigest and it holds something called the form digest. The form digest improves SharePoint security because it is tied to a particular user and site, and is limited to a certain time frame. You need to request a form digest and then pass it back to SharePoint on subsequent calls. To get hold of the form digest, you have to make another REST call that generates one. This is done by making a POST request with an empty body to http://site/_api/contextinfo and extracting the value of the “d:FormDigestValue” node in the information returned. In Fiddler it looks like the following…

image

image
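
If you prefer scripting to driving Fiddler by hand, the digest step is easy to automate. Here is a sketch under the same assumptions as the earlier snippets (Python with requests/requests_ntlm, placeholder URL and credentials); note that when you ask for JSON, the digest comes back under d.GetContextWebInformation.FormDigestValue rather than as a d:FormDigestValue XML node:

# Sketch: fetch a form digest for use in subsequent POSTs. Same
# assumptions as earlier (requests/requests_ntlm, placeholder URL/creds).
import requests
from requests_ntlm import HttpNtlmAuth

SITE = "http://site"  # placeholder
auth = HttpNtlmAuth("AD\\paul", "password")  # placeholder credentials

# An empty-body POST to _api/contextinfo returns the digest. requests
# sends Content-Length: 0 for us, which keeps the 411 error away.
r = requests.post(SITE + "/_api/contextinfo", auth=auth,
                  headers={"Accept": "application/json;odata=verbose"})
digest = r.json()["d"]["GetContextWebInformation"]["FormDigestValue"]
print(digest)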

If you look at the returned content from calling the _api/contextinfo method above, I have highlighted FormDigestValue. In Fiddler, copy this value into the Request headers section of the composer and retry the CAML request:

image

Now when we execute the request, we get data!

HTTP/1.1 200 OK

If you look at the raw results in Fiddler, you will see a whole bunch of scary XML. If you examine the results using the XML parser built into Fiddler as shown in the image below, you will see very similar output to the original REST request that I started this article with – 3 entries in this list. It worked!

image

So now let’s add our CAML query into the XML and see if we can make it work. Recall that I successfully tested this query via the following CAML…

<Where><Eq><FieldRef Name='FilterDemo' /><Value Type='TaxonomyFieldType'>A1</Value></Eq></Where>

So I constructed the following URL and pasted it into the Fiddler Composer:

http://site/_api/Web/lists/getByTitle('TestFilter')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><Where> <Eq> <FieldRef Name='FilterDemo' /> <Value Type='TaxonomyFieldType'>A1</Value> </Eq> </Where> </Query></View>"}

With great excitement, I clicked “Execute” and received…

HTTP/1.1 400 Bad Request

Ah crap! Unfortunately I could not find a single example of this form of REST query to SharePoint, but I got a hint about the problem from Fiddler itself: it wasn't happy with my request at all, showing it in red.

image

Clearly I was doing something wrong, and being a non-developer, I figured I wasn't encoding things properly. After some trial and error, I worked out that spaces were the issue. Where I could remove them I did, and those I couldn't remove, I encoded like so:

http://site/_api/Web/lists/getByTitle('TestFilter')/GetItems(query=@v1)?@v1={"ViewXml":"<View><Query><Where><Eq><FieldRef%20Name='FilterDemo'/><Value%20Type='TaxonomyFieldType'>A1</Value></Eq></Where></Query></View>"}

At this point Fiddler stopped showing me an angry red colour, so I clicked the Execute button. Woohoo! It works! Below you can see a single matching entry, just like my earlier example where I filtered on the Title column using the $filter parameter.

image

Expanding the XML indeed confirms it has matched term A1. :)

image
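
For completeness, here is a rough end-to-end sketch chaining the digest step and the CAML query together, under the same assumptions as the earlier snippets. One nicety: if you hand the @v1 value to requests as a query string parameter, it takes care of the space-encoding dance for you.

# Rough end-to-end sketch: form digest plus CAML query via REST. Same
# assumptions as before; list and column names match this article's demo.
import requests
from requests_ntlm import HttpNtlmAuth

SITE = "http://site"  # placeholder
auth = HttpNtlmAuth("AD\\paul", "password")  # placeholder credentials
json_hdr = {"Accept": "application/json;odata=verbose"}

# Step 1: grab a form digest (empty-body POST to contextinfo)
r = requests.post(SITE + "/_api/contextinfo", auth=auth, headers=json_hdr)
digest = r.json()["d"]["GetContextWebInformation"]["FormDigestValue"]

# Step 2: POST the CAML query via GetItems(query=@v1). Handing @v1 to
# requests as a query parameter means it is URL-encoded for us.
caml = ("<View><Query><Where><Eq><FieldRef Name='FilterDemo'/>"
        "<Value Type='TaxonomyFieldType'>A1</Value></Eq></Where></Query></View>")
r = requests.post(
    SITE + "/_api/Web/lists/getByTitle('TestFilter')/GetItems(query=@v1)",
    params={"@v1": '{"ViewXml":"%s"}' % caml},
    auth=auth,
    headers={"Accept": "application/json;odata=verbose", "X-RequestDigest": digest},
)
for item in r.json()["d"]["results"]:
    print(item["Title"])  # -> A (the item tagged with term A1)

Being a sketch, this isn't hardened – the form digest expires after a while, for instance, so longer-running scripts need to refresh it.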

Conclusion

I was happy to find a way to use REST to filter a list based on a taxonomy column, and I'm sure this method offers some interesting opportunities in various other scenarios as well.

In my company Seven Sigma, we have a worn-out post-it note with the words “Alpha SharePoint Developer” written on it. This gets stuck to the office of whoever does the coolest coding trick, and I'm happy to report that this little effort netted me the Alpha Developer prize for the first time ever – principally because I then used this approach with SharePoint Designer 2013 workflows and it worked really well. In fact, it worked so well that I have decided that using it with the new capabilities of SPD workflows warrants a blog series of its own.

Until then, I hope that this approach works for you and happy REST’ing!


Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com

www.sevensigma.com.au


Troubleshooting PerformancePoint Dashboard Designer Data Connectivity to SharePoint Lists


image

Here is a quick note with regard to PerformancePoint Dashboard Designer connecting to SharePoint lists utilising per-user identity on a single server. The screenshot above shows what I mean. I have told Dashboard Designer to use a SharePoint list as a data source on a site called “http://myserver” and chosen “Per-user identity” to connect with the credentials of the currently logged-in user, rather than a shared user or something configured in the secure store (which is essentially another shared-user scenario). Per-user identity makes a lot of sense most of the time for SharePoint lists, because SharePoint already controls the permissions to this data. If you have granted users access to the site in question already, the other options are overkill – especially in a single-server scenario where Kerberos doesn't have to be dealt with.

But when you try this for the first time, a common complaint is to see an error message like the one below.

image

This error is likely caused by the Claims to Windows Token Service (C2WTS) not being started. In case you are not aware of how things work behind the scenes: once a user is authenticated to SharePoint, claims authentication is used for inter/intra-farm authentication between SharePoint services. Even when classic authentication is used when a user hits a SharePoint site, SharePoint converts it to a claims identity when it talks to service applications. This is all done for you behind the scenes and is all fine and dandy when SharePoint is talking to other SharePoint components, but if the service application needs to talk to an external data source that does not support claims authentication – say a SQL database – then the Claims to Windows Token Service is used to convert the claims back to a standard Windows token.

Now based on what I just said, you might expect that PerformancePoint would use claims authentication when connecting to a SharePoint list as a data source – after all, we are not leaving the confines of SharePoint, and I just told you that claims authentication is used for inter/intra-farm authentication between SharePoint services – but this is not the case. When PerformancePoint connects to a SharePoint list (and that connection is set to Per-user identity as above), it converts the user's claim back to a Windows token first.

Now the error above is quite common and well documented. The giveaway is when you see event log ID 1137 and, in the details of the log, the message: Monitoring Service was unable to retrieve a Windows identity for “MyDomain\User”. Verify that the web application authentication provider in SharePoint Central Administration is the default Windows Negotiate or Kerberos provider. This is a pretty strong indicator that the C2WTS has not been configured or is not started.

A lesser-known issue is one I came across the other day. In this case, the Claims to Windows Token Service was provisioned, started and a local administrator on the box. In Dashboard Designer, here was the message:

image

In this error, you will see an event log ID 1101 and, in the details of the log, something like this:

PerformancePoint Services could not connect to the specified data source. Verify that either the current user or Unattended Service Account has read permissions to the data source, depending on your security configuration. Also verify that all required connection information is provided and correct.

System.Net.WebException: The remote name could not be resolved: ‘myserver’
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.ListService.GetListCollection()
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.SpListDataSourceProvider.GetCubeNameInfos()

PerformancePoint Services error code 201.

This issue has a misleading message. It suggests that the server name “myserver” could not be resolved, which might have you thinking this is a name resolution issue, but that is not the true root cause. This one is primarily due to a misconfiguration – most likely Active Directory group policy messing with server settings. A better hint to the true cause can be found in the security event log (assuming you have set the server audit policy to audit failures of “privilege use”, which is not enabled by default).

image

The event ID to look for is 4673 and the Task Category is “Sensitive Privilege Use”. A failure will be logged for the user account running the Claims to Windows Token Service:

Level:         Information
Keywords:      Audit Failure
User:          N/A
Computer:      myserver.mydomain.com
Description:
A privileged service was called.

Subject:
Security ID:        mydomain\sp_c2wts
Account Name:        sp_c2wts
Account Domain:        mydomain

Service Request Information:
Privileges:        SeTcbPrivilege

The key bit of the log above is the “Service Request Information” section: “SeTcbPrivilege” means “act as part of the operating system”.

Now the root cause of this issue is clear: the Claims to Windows Token Service account is missing the “Act as part of the operating system” right, which is one of its key requirements. You can find this setting by opening Local Security Policy and choosing Local Policies -> User Rights Assignment. Add the Claims to Windows Token Service user account to this right. If you cannot do so, then it is governed by an Active Directory policy and you will have to go and talk to your Active Directory admin to get it done.

image

Hope this helps someone

Paul Culmsee


P.S. Below are the full event log details. I've only put them here for search engines, so feel free to stop here. :)

Log Name:      Application
Source:        Microsoft-SharePoint Products-PerformancePoint Service
Date:          3/09/2013 10:37:12 PM
Event ID:      1101
Task Category: PerformancePoint Services
Level:         Error
Keywords:
User:          domain\sp_service
Computer:      SPAPP.mydomain.com
Description:
PerformancePoint Services could not connect to the specified data source. Verify that either the current user or Unattended Service Account has read permissions to the data source, depending on your security configuration. Also verify that all required connection information is provided and correct.

System.Net.WebException: The remote name could not be resolved: ‘sharepointaite’
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.ListService.GetListCollection()
at Microsoft.PerformancePoint.Scorecards.DataSourceProviders.SpListDataSourceProvider.GetCubeNameInfos()

PerformancePoint Services error code 201.


Log Name:      Application
Source:        Microsoft-SharePoint Products-PerformancePoint Service
Date:          3/09/2013 10:18:39 PM
Event ID:      1137
Task Category: PerformancePoint Services
Level:         Error
Keywords:
User:          domain\sp_service
Computer:      SPAPP.mydomain.com
Description:
The following data source cannot be used because PerformancePoint Services is not configured correctly.

Data source location: http://sharepoijntsite/Shared Documents/8_.000
Data source name: New Data Source

Monitoring Service was unable to retrieve a Windows identity for “domain\user”.  Verify that the web application authentication provider in SharePoint Central Administration is the default windows Negotiate or Kerberos provider.  If the user does not have a valid active directory account the data source will need to be configured to use the unattended service account for the user to access this data.

Exception details:
System.InvalidOperationException: Could not retrieve a valid Windows identity. —> System.ServiceModel.EndpointNotFoundException: The message could not be dispatched because the service at the endpoint address ‘net.pipe://localhost/s4u/022694f3-9fbd-422b-b4b2-312e25dae2a2’ is unavailable for the protocol of the address.

Server stack trace:
at System.ServiceModel.Channels.ConnectionUpgradeHelper.DecodeFramingFault(ClientFramingDecoder decoder, IConnection connection, Uri via, String contentType, TimeoutHelper& timeoutHelper)
at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.SendPreamble(IConnection connection, ArraySegment`1 preamble, TimeoutHelper& timeoutHelper)
at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.DuplexConnectionPoolHelper.AcceptPooledConnection(IConnection connection, TimeoutHelper& timeoutHelper)
at System.ServiceModel.Channels.ConnectionPoolHelper.EstablishConnection(TimeSpan timeout)
at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout)
at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade)
at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at Microsoft.IdentityModel.WindowsTokenService.S4UClient.IS4UService_dup.UpnLogon(String upn, Int32 pid)
at Microsoft.IdentityModel.WindowsTokenService.S4UClient.CallService(Func`2 contractOperation)
at Microsoft.SharePoint.SPSecurityContext.GetWindowsIdentity()
— End of inner exception stack trace —
at Microsoft.SharePoint.SPSecurityContext.GetWindowsIdentity()
at Microsoft.PerformancePoint.Scorecards.ServerCommon.ConnectionContextHelper.SetContext(ConnectionContext connectionContext, ICredentialProvider credentials)

Log Name:      Security
Source:        Microsoft-Windows-Security-Auditing
Date:          4/09/2013 8:53:04 AM
Event ID:      4673
Task Category: Sensitive Privilege Use
Level:         Information
Keywords:      Audit Failure
User:          N/A
Computer:      SPAPP.mydomain.com
Description:
A privileged service was called.

Subject:
Security ID:        domain\sp_c2wts
Account Name:        sp_c2wts
Account Domain:        DOMAIN
Logon ID:        0xFCE1

Service:
Server:    NT Local Security Authority / Authentication Service
Service Name:    LsaRegisterLogonProcess()

Process:
Process ID:    0x224
Process Name:    C:\Windows\System32\lsass.exe

Service Request Information:
Privileges:        SeTcbPrivilege
