How’s the weather? Using a public API with PowerApps (part 2)


Introduction

Hi again

This is the second half of a two-part post on using the OpenWeatherMap API in PowerApps. The business scenario is around performing inspections. In the first post I gave the example of a park ranger or plant operator, both conducting inspections where weather conditions can impact the level of danger or the result of the inspection. In such a scenario it makes sense to capture weather conditions when a photo is taken.

PowerApps has the ability to capture location information such as latitude and longitude, and public weather APIs generally allow you to get weather conditions for a given location. So the purpose of these posts is to show you how you can not only capture this data in PowerApps, but then send it to SharePoint in the form of metadata via Flow.

In Part 1, we got the painful stuff out of the way – that is, getting PowerApps to talk to the OpenWeather web service via a custom connector. Hopefully if you got through that part, you now have a much better understanding of the purpose of the OpenAPI specification and can see how it could be used to get PowerApps to consume many other web services. Now we are going to actually build an app that takes photos and captures weather data.

App prerequisites…

Now to get through this post, we are going to leverage a proof of concept app I built in a separate post. That app was also an inspection scenario, allowing a user to take a bunch of photos, which were then sent to SharePoint via Flow with correctly named files. If you have not read that post, I suggest you do so now, because I am assuming you have that particular app set up and ready to go.

Go on… it's cool, I will wait for you…

Seriously now, don't come back until you can do what I show below. On the left is PowerApps, taking a couple of photos, and on the right are the photos now in a SharePoint document library.

image  image

Now if you have performed the tasks in the aforementioned article, not only do you have a PowerApp that can take photos, you'll also have a connection to Flow ready to go (yeah, the pun was intended).

First up, let's recap two key parts of the original app.

1. Photo and file name…

When the camera was clicked, a photo was taken and a file name was generated for it. The code is below:

Collect(PictureList,{
        Photo: Camera1.Photo,
        ID: Concatenate(AuditNumber.Text,"-",Text(Today(),"[$-en-US]dd:mm:yy"),"-",Text(CountRows(PictureList)+1),".jpg")
} )

This code created an in-memory table (a collection named PictureList) that, when a few photos are taken, looks like this:

image

2. Saving to Flow

The other part of the original app was saving the contents of the above collection to Flow. The Submit button has the following code…

UpdateContext( { FinalData : Concat(PictureList, ID & "|" & Photo & "#") } );
UploadPhotostoAuditLib.Run(FinalData)

The first line takes the PictureList collection above and munges it into a giant string called FinalData. Each row is delimited by a hash “#” and each column delimited by a pipe “|”. The second line then calls the Flow and passes this data to it.

Both of these functions are about to change…

Getting the weather…

The first task is to get the weather. In part 1 we already created the custom connector to the service. Now it is time to use it in our app by adding it as a data source. From the View menu, choose Data Sources. You should see your existing data source that connects to Flow.

image  image

Click Add data source and then New connection. If you got everything right in part 1, you should see a data source called OpenWeather. Click on it, and you will be asked to enter an API key. You should have this key from part 1 (and you should understand exactly why you were asked for it at this point), so go ahead, add it here and click the Create button. If all goes to plan, you will now see OpenWeather added as a data source.

image  image  image  image

Now that we are connected to the API, let's test it by modifying the code that takes a photo. Instead of just capturing the photo and generating a file name, let's also grab the latitude and longitude from PowerApps, call the API and collect the current temperature.

First here is the code and then I will break it down…

image

UpdateContext( { Weather: OpenAPI.GetWeather(Location.Latitude,Location.Longitude,"metric") } );
Collect(PictureList,
{
     Photo: Camera1.Photo,
     ID: Concatenate(AuditNumber.Text,"-",Text(Today(),"[$-en-US]dd:mm:yy"),"-",Text(CountRows(PictureList)+1),".jpg"),
     Latitude: Location.Latitude,
     Longitude: Location.Longitude,
     Temp: Weather.main.temp
} )

 

The first line is where the weather API is called: OpenAPI.GetWeather(Location.Latitude,Location.Longitude,”metric”) . The parameters Location.Latitude and Location.Longitude come straight from PowerApps. I want my temperature in Celsius so I pass in the string “metric” as the 3rd parameter.

My API call is then wrapped into an UpdateContext() function, which enables us to save the result of the API call into a variable I have called Weather.

Now if you test the app by taking photos, you will notice a couple of things. First up, under variables, you will now see Weather listed. Drill down into it and you will see that a complex data structure is returned by the API. In the example below I drilled down to Weather->Main to find a table that includes temperature.

image

image  image

The second line of code (actually I broke it across multiple lines for readability) is the Collect function which, as its name suggests, creates collections. A collection is essentially an in-memory data table and the first parameter of Collect() is simply the name of the collection. In our example it is called PictureList. The second parameter is a record to insert into the collection. A record is a comma delimited set of fields and values inside curly braces, e.g. { Title: "Hi", Description: "Greetings" }. In our example, we are building a table consisting of:

  • Photo
  • File name for Photo
  • Latitude
  • Longitude
  • Temperature

The last field is the most interesting, because we are getting the temperature from the Weather variable. As this variable is a complex data type, we have to be quite specific about the value we want, i.e. Weather.main.temp.

Here is what the PictureList collection looks like now. If you have understood the above code, you should be able to extend it to grab other interesting weather details like wind speed and direction.

image
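For example, to also capture wind details, you could extend the record along these lines (a sketch only – the exact property names depend on what your custom connector's response schema exposes, but OpenWeatherMap returns wind speed and direction under wind):

UpdateContext( { Weather: OpenAPI.GetWeather(Location.Latitude,Location.Longitude,"metric") } );
Collect(PictureList,
{
     Photo: Camera1.Photo,
     ID: Concatenate(AuditNumber.Text,"-",Text(Today(),"[$-en-US]dd:mm:yy"),"-",Text(CountRows(PictureList)+1),".jpg"),
     Latitude: Location.Latitude,
     Longitude: Location.Longitude,
     Temp: Weather.main.temp,
     WindSpeed: Weather.wind.speed,
     WindDirection: Weather.wind.deg
} )

If you do extend it, remember to carry the extra columns through to the FinalData string, the Flow and the SharePoint library in the sections that follow.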

Getting ready for Flow…

Okay, so now let's look at the code behind the Submit button. The change made here is to include the additional columns from PictureList in my variable called FinalData. If this is not clear then I suggest you read this post or even Mikael Svenson's work where I got the idea…

image

UpdateContext( { FinalData : Concat(PictureList, ID & "|" & Photo & "|" & Latitude & "|" & Longitude & "|" & Temp & "#") } );
UploadPhotosToAuditLib.Run(FinalData)

So in case it is not clear, the first line munges each row and column from PictureList into a giant string called FinalData. Each row is delimited by a hash “#” and each column delimited by a pipe “|”. The second line then calls the Flow and passes it FinalData.
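To make the format concrete, a single row of FinalData now looks something like this (the latitude, longitude and temperature values here are made up purely for illustration):

114A3-21:06:17-1.jpg|data:image/jpeg;base64,/9j/4AAQ… (snip) …|-31.95|115.86|17.5#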

At this point, save your changes to PowerApps and publish, as you are done here. Let's now make some changes to the SharePoint document library where the photos are currently being uploaded to. We will add columns for Temperature, Latitude and Longitude. I am going to assume you know enough of SharePoint to do this, so I will just paste a screenshot of the end result…

image  image

Right! Now it is time to turn our attention to Flow. The intent here is to not only upload the photos to the document library, but update metadata with the location and temperature data. Sounds easy enough right? Well remember how I said that we got rid of most of the painful stuff in part 1?

I lied…

Going with the Flow…

Now with Flow, it is easy to die from screenshot hell, so I am going to use some brevity in this section. If you played along with my pre-requisite post, you already have a flow that more or less looks like this:

  1. A PowerApps Trigger
  2. A Compose action that splits the data on the hash: @split(triggerBody()['ProcessPhotos_Inputs'],'#')
  3. An Apply to each statement with a condition inside @not(empty(item()))
  4. A Compose action that grabs the file name: @split(item(),'|')[0]
  5. A Compose action that grabs the file contents and converts it to binary: @dataUriToBinary(split(item(),'|')[1])
  6. A SharePoint Create File action that uploads the file to a document library

The image below illustrates the basic layout.

image

Our task is to now modify this workflow to:

  1. Handle the additional data sent from PowerApps (temperature, latitude and longitude)
  2. Update SharePoint metadata columns on the uploaded photos with this new data.

As I write these lines, Flow has very poor support for doing this. It has been acknowledged on UserVoice and I know the team are working on improvements. So the method I am going to use here is essentially a hack and I actually feel a bit dirty even suggesting it. But I do so for a couple of reasons. Firstly, it helps you understand some of the deeper capabilities of Flow and secondly, I hope this real-world scenario is reviewed by the Flow team to help them understand the implications of their design decisions and priorities.

So what is the issue? Basically the flow actions in SharePoint have some severe limitations, namely:

  • The Create File action provides no way to update library metadata when uploading a file
  • The Create Item action provides access to metadata but only works on lists
  • The Update Item action works on document libraries, but requires the item ID of the document to do so. Since Create File does not provide it, we have no reference to the newly created file
  • The Get Items function allows you to search a list/library for content, but cannot match on File Name (actually it can! I have documented a much better method here!)

So my temporary “clever” method is to:

  1. Use Create File action to upload a file
  2. Use the Get Items action to bring me back the metadata for the most recently created file in the library
  3. Grab the ID from step 2
  4. Use the Update Item action to set the metadata on the recently uploaded image.

Ugh! This method is crude, I fear what happens if a lot of flow runs or file activity were happening in this library, and I really hope the next section is soon redundant…

Okay so let’s get started. First up let’s make use of some new Flow functionality and use Scopes to make this easier. Expand the condition block and find the Compose action that extracts the file name. If you dutifully followed my pre-req article it will be called “Get File Name”. Above this, add a Scope and rename it to “Get File and Metadata”. Drag the “Get File Name” and “Get File Body” actions to it as shown below.

image  image  image

Now let's sort out the location and temperature data. Under "Get File Body", add a new Compose action and rename it to "Get Latitude". In the compose function add the following:

  • @split(item(),'|')[2]

Under "Get Latitude", add a new Compose action and rename it to "Get Longitude". In the compose function add the following:

  • @split(item(),'|')[3]

Under "Get Longitude", add a new Compose action and rename it to "Get Temperature". In the compose function add the following:

  • @split(item(),'|')[4]

This will result in the following:

image  image

Now click on the Get File and Metadata scope and collapse it to hide the detail of metadata extraction (now you know what a scope is for).

image

So now we have our metadata, we need to apply it to SharePoint. Under the “Create File” action, add a new scope and call it “Get Item ID”. This scope is where the crazy starts…

Inside the scope, add a SharePoint – Get Items action. Enter the URL of your site collection and the name (not URL) of your document library. In the Order By field, type in Created desc and set the Maximum Get Count to 1. Basically this action is calling a SharePoint list web service, and "Created desc" says "order the results by Created Date in descending order (newest first)".

Actually, what you should do is set the Filter Query to FileLeafRef eq '[FileName]' as described in this later post!
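For the curious, behind the scenes this action is roughly equivalent to a SharePoint REST query along these lines (the site and library names are illustrative):

https://yourtenant.sharepoint.com/sites/yoursite/_api/web/lists/getbytitle('Audit Photos')/items?$orderby=Created desc&$top=1

…or, if you use the Filter Query approach from the note above:

https://yourtenant.sharepoint.com/sites/yoursite/_api/web/lists/getbytitle('Audit Photos')/items?$filter=FileLeafRef eq 'photo.jpg'

(In a real request the spaces and quotes would be URL-encoded, but this shows what Order By, Top Count and Filter Query translate to.)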

Now note the plural in the action: "Get Items". That means that by design, it assumes more than 1 result will be returned. The implication is that the data will come back as an array. In JSON land this looks like the following:

[ { "Name": "Value" }, { "Name": "Value2" }, { "Name": "Value3" } ]

and so on…

Also note that there is no option in this action to choose which fields we want to bring back, so this action will bring back a big, ugly JSON array from SharePoint containing lots of information.

Both of these caveats mean we now have to do some data manipulation. For a start, we have to get rid of the array, as many Flow actions cannot handle arrays as input. Also, we are only interested in the item ID for the newly uploaded photo. All of the other stuff is just noise. So we will add 3 more flow actions that:

  1. clear out all data apart from the ID
  2. turn it from an array back to a regular JSON element
  3. extract the ID from the JSON.

For step 1, under the “Get items” action just added, add a new Data Operations – Select action. We are going to use this to select just the ID field and delete the rest. In the From textbox, choose the Value variable returned by the Get Items action. In the Map field, enter a key called “ID” and set the value to be the ID variable from the “Get Items” action.

image

For step 2, under the “Select” action, add a Data Operations – Join action. This action allows you to munge an array into a string using a delimiter – much like what we did in PowerApps to send data to Flow. Set the From text box to be the output of the Select action. The “Join with” delimiter can actually be anything, as the array will always have 1 item. Why? In the Get Items action above, we set the Maximum Get Count to 1. We will always get back a single item array.

image

The net effect of this step will be the removal of the array. I.e., from:

[ { "ID": 48 } ]

to

{ "ID": 48 }

For step 3, under the “Join” action, add a Data Operations – Parse JSON action. This action will process the JSON file and each element found will be available to subsequent actions. The easiest way to understand this action is to just do it and see the effect. First, set the Content textbox to the output from the Join action.

image

Now we need to tell this action which elements that we are interested in. We already know that we only have 1 variable called ID because of the Select action we set up earlier that has stripped everything else out. So to tell this action we are expecting an ID, click the “use sample payload…” link and paste some sample JSON in our expected format…

{

    "ID": 48

}

If all goes to plan, a Schema has been generated based on that sample data that will now allow us to grab the actual ID value.

image  image
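For reference, the generated schema for our one-field sample should look something along these lines (the exact type Flow infers may differ slightly):

{
    "type": "object",
    "properties": {
        "ID": {
            "type": "integer"
        }
    }
}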

Okay, so we are done with the Get Item ID scope, so collapse it as follows…

image

Finally, under the “Get Item ID” scope, add a SharePoint – Update Item action. Add the URL of your site collection and then specify the document library where the photos were uploaded to. If this has worked, the additional metadata columns should now be visible as shown in the red box below. Now set the specific item we want to update by setting the ID parameter to the ID variable from the Parse JSON step.

image

Now assign the metadata to the library columns. Set Latitude to the output variable from the Get Latitude step, Longitude to the output variable from the Get Longitude step and Temperature to the output variable from the Get Temperature step as shown below.

image

Now save your flow and cross all fingers and toes…

Testing and conclusion!

Return to PowerApps (in the browser or on your mobile device – not the PowerApps studio app). Enter an audit number and take some photos… Woohoo! Here they are in the library along with metadata. Looks like I need to put on a jacket when I step outside.

image   image

So taking a step back, we have managed to achieve quite a lot here.

  1. We have wired up a public web service to PowerApps
  2. We have used PowerApps built-in location data to load weather data each time a photo has been taken
  3. We have used Flow to push this data into SharePoint and included the location and weather data as parameters.

Now on reflection there are a couple of massive issues that really prevent this from living up to its Citizen Developer potential.

  1. I had to use a 3rd party service to generate my OpenAPI file and it was not 100%. Why can’t Microsoft provide this service?
  2. Flow’s poor support for common SharePoint scenarios meant I had to use (non) clever workarounds to achieve my objectives.

Both of these issues were resolvable, which is a good thing, but I think the solutions take this out of the realm of most citizen developers. I had to learn too much JSON, too much Swagger/OpenAPI and delve deep into Flow.

In saying all that, I think Flow is the most immature piece of the puzzle at this stage. The lack of decent SharePoint support is definitely something that, if I were a program manager, I would have pushed up the release schedule to address. It currently feels like the developer of the SharePoint actions has never used SharePoint on a day to day basis, and dutifully implemented the web services without lived experience of the typical usage scenarios.

For other citizen developers, I’d certainly be interested in how you went with your own version and variations of this example, so please let me know how you go.

And for Microsoft PowerApps/Flow product team members who (I hope) are reading this, I think you are building something great here. I hope this material is useful to you as well.

 

Thanks for reading

 

Paul Culmsee

www.hereticsguidebooks.com


A Filename Generation Example for PowerApps with Flow


Hiya

A client recently asked me to make a PowerApps proof of concept audit app for safety inspections. The gist was that a user would enter an audit number into the app, take a bunch of photos and make some notes. The photos needed to be named using the convention: <Audit Number>-<Date>-<Sequence Number>. E.g.

  • 114A3-13:04:17-3.jpg is Audit number 114A3, photo taken on 13th of April and it was the 3rd photo taken.
  • 114A6-14:04:17-7.jpg is Audit number 114A6, photo taken on 14th of April and it was the 7th photo taken.

As I type these lines, it is still not possible to save pictures directly into SharePoint from PowerApps, so we are going to build on the method that Mikael Svenson documented. But first let’s look at this file name requirement above. While it seems straightforward enough, if you are new to PowerApps this might take a while to figure out. So let’s quickly build a proof of concept app and show how this all can be achieved.

In this post I will build the app and then we will use Microsoft Flow to post the images to a SharePoint document library. I’ll spend a bit of time on the flow side of things because there are a few quirks to be aware of.

First up, create a blank app and then add a text box, camera, gallery and two buttons to the form… Rename the text box control to "AuditNumber" and arrange them similarly to what I have done below…

image

First up, let’s disable the submit button until an audit number is entered… Without an audit number we cannot generate a filename. To do this, set the Disabled property of the button labelled “Submit” to AuditNumber.Text=””. This means that while the audit number text box is blank, this formula will return “true” and disable the button.

image

image  image

Now let’s set things up so that when a photo is taken, we save the photo into a Collection. A collection is essentially an in-memory data table and I find that once you get your head around collections, then it opens up various approaches to creating effective apps.

On the camera control (assuming it is named Camera1), set the OnSelect property to "Collect(PictureList,{ Photo: Camera1.Photo, ID: CountRows(PictureList)+1 } )"

image

Now if you are a non-developer, you might think that is a bit ugly, so let's break it down.

  • The collect function creates/populates a collection – and the first parameter is simply the name of the collection. You could call this anything you like, but I chose to call mine PictureList.
  • The second parameter is a record to insert into the collection. This consists of a comma delimited set of fields and values inside curly braces. eg: { Photo: Camera1.Photo, ID: CountRows(PictureList)+1 }

Now that whole curly brace thing is a bit ugly so let’s break it down further. First up here is an image straight from Microsoft’s own documentation. You can see a record is a row of fields – pretty standard stuff if you are familiar with databases or SharePoint list items.

In PowerApps formulas, a record can be specified using curly braces. So { Album: “Thriller”, Price: 7.99 } is a record with 2 fields, Album and Price.

Now take a look at the record I used: { Photo: Camera1.Photo, ID: CountRows(PictureList)+1 } . This shows two fields, Photo and ID. The value for the Photo field is Camera1.Photo, which holds the latest photo taken by the camera. The ID field is a unique identifier for the photo. I generated this by calling the CountRows function, passing it the PictureList collection and then adding 1 to the result.

So here’s what happens when this app starts:

  • The PictureList collection is empty
  • A user clicks the camera control. A photo is taken and stored in the Camera1.Photo property.
  • I then count the number of records in the PictureList collection. As it is currently empty, it returns 0. We add 1 to it
  • The photo and an ID value of 1 is added to the PictureList collection
  • A user clicks on the camera control again. A photo is taken and stored in the Camera1.Photo property.
  • I then count the number of records in the PictureList collection. As it currently has 1 record from the earlier steps, it returns 1. We add 1 to it
  • The photo and an ID value of 2 is added to the PictureList collection

… and so on. So let’s test this now…

Press play to test your app and take a couple of photos. It may not look like anything is happening, but don’t worry. Just exit and go to the File Menu and choose “Collections”. If everything has gone to plan, you will now see a collection called PictureList with the Photo and ID columns. Yay!

image

So next let’s bind this collection to the Gallery so we can see photos taken. This part is easy. On the Gallery you added to the page, set the Items property to PictureList (or whatever you named your collection). Depending on the type of gallery you added, you should see data from the PictureList collection. In my case, I changed the gallery layout to “Image and Title”. The ID field was bound to the title field and now you can see each photo and its corresponding ID.

image  image  image

Now let's provide the ability to clear the photo gallery. On the button labelled "Clear", set the OnSelect property to "Clear(PictureList)". The Clear command does exactly what it suggests: clears out all items from a collection. Now that we have the gallery bound, try it out. You can take photos and then clear them to your heart's content.

Now the point of this exercise is to generate a nice filename. One way is by modifying the record definition we were using: eg

From this:

  • Collect(PictureList,{ Photo: Camera1.Photo, ID: CountRows(PictureList)+1 })

To this crazy-ass looking formula:

  • Collect(PictureList,{ Photo: Camera1.Photo, ID: Concatenate(AuditNumber.Text,"-",Text(Today(),"[$-en-US]dd:mm:yy"),"-",Text(CountRows(PictureList)+1),".jpg") } )

To explain it, first check out the result in the gallery. Note my unique identifier has been replaced with the file-name I needed to generate.

image

So this formula basically uses the Concatenate function to build the filename in the format we need. Concatenate takes a list of strings as parameters and munges them together. In this case, it takes:

  • The audit number from the textbox – AuditNumber.Text
  • A hyphen – "-"
  • Today's date in dd:mm:yy format, converted to text – Text(Today(),"[$-en-US]dd:mm:yy")
  • Another hyphen – "-"
  • The row count of the PictureList collection plus 1, converted to text – Text(CountRows(PictureList)+1)
  • The file type – ".jpg"

The net result is our unique filename for each photo.
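To make that concrete: with "114A3" typed into the audit number box, a date of 13 April 2017 and two photos already in PictureList, the formula evaluates to "114A3-13:04:17-3.jpg" – exactly the convention described at the start of this post.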

Now we have one final step. We are going to send the entire collection of photos and their respective file names to flow to put into a SharePoint Library. The method we will use will take the PictureList collection, and save it as one giant string. We will send the string to Flow, and have Flow then pull out each photo/filename pair. Mikael describes this in detail in his post, so I will just show the code here, which we will add to the onSelect property of the submit button.

UpdateContext( { FinalData: Concat(PictureList, ID & "|" & Photo & "#") } )

image

So what is this formula doing? Well, working from the inside out, the first thing is the use of the Concat function. This function does something very useful. It works across all the records of a PowerApps collection and returns a single string. So Concat(PictureList, ID & "|" & Photo & "#") takes the file name we generated (ID), joins it to a pipe symbol "|", joins that to the photo, and then adds a hash "#". The UpdateContext function enables us to save the result of the Concat into a variable called FinalData.

If we test the app by taking some photos and clicking the submit button, we can see what the FinalData variable looks like via the Variables menu, as shown below. Our table is now one giant text string that looks like:

“filename 1 | encoded image data 1 # filename 2 | encoded image data 2 # filename x | encoded image data x “

image

Now a quick caveat: avoid using the PowerApps desktop client to do this. You should use the PowerApps web creator due to an image encoding bug.

Anyway, let’s now move to Flow to finish this off. To do this, click on the Submit button on your app and choose Flows from the Action menu and on the resulting side panel choose to Create a new flow.

image  image

A new browser tab will open, sign you into Microsoft Flow, and be nice enough to create a new flow that is designed to be triggered from PowerApps. Rename the flow to something more meaningful like "Upload Photos to Audit Lib" and then click the New Step button, and add a new Action. In the search bar, type in "Data Operations" and in the list of actions, choose the "Compose" action…

image  image

Okay so at this point I should a) explain what the hell we are going to do and b) credit Mikael Svenson for first coming up with this method. In relation to the first point, PowerApps is going to send flow a giant string of photos and filenames (remember the FinalData variable – that’s what flow will receive). If you recall with the FinalData, each photo/filename pair is delimited by a hash “#” and the file name is delimited by a pipe “|”. So we need to take what PowerApps sends, and turn it back into rows. Then for each row, we grab the file name, and the file content and upload it to our friendly neighbourhood SharePoint library.

Sounds easy right?

Our first step is to use the compose action we just added to split the data from PowerApps back into photos. This particular workflow action does something really neat. It executes workflow definition language formulas. What are these? Well workflow definition language is actually a part of another Microsoft product called Azure Logic Apps. Flow leverages Azure Logic Apps, which means this language is available to us. Sounds complex? Well yeah, it does at first but when you think about it this is not new. For example, MS Teams, Groups and Planner use SharePoint behind the scenes.

Anyway, the point is that there are several workflow definition language functions we can use, namely:

  • Split() – Takes a string and splits it up into smaller strings based on a delimiter
  • Item() – This is used to return an element of an array. E.g. if we use the split command above, item() will refer to each smaller string
  • dataUriToBinary() – This takes an encoded image and turns it back into binary form.
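To give you a preview of where we are heading, here is how those three functions will combine across the next few steps (each is entered into its own Compose action, wrapped in quotes for reasons explained below):

  • @split(triggerBody()['ProcessPhotos_Inputs'], '#') – splits the giant string sent from PowerApps into an array of filename/photo pairs
  • @split(item(),'|')[0] – grabs the file name from the current pair
  • @dataUriToBinary(split(item(),'|')[1]) – grabs the encoded photo from the current pair and converts it back to binary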

Okay enough talk! Let’s fill in this compose action. First up (and this is important), rename the action to something more meaningful, such as “ProcessPhotos”. After renaming, click on the text box for the Compose action and a side panel will open, showing a PowerApps label and a box matching the PowerApps logo called Body. Click the Body button and the compose textbox should have a value called ProcessPhotos_Inputs as shown in the 3 images below…

image

image

image

So what have we just done? Essentially we told the Compose action to ask PowerApps to supply data into a variable called ProcessPhotos_Inputs. In fact, let's test this before going any further by saving the flow.

Switch back to PowerApps, click the Submit button and select the OnSelect property we used earlier. You should still see the function where we created the FinalData variable. Copy the existing function to the clipboard, as you're about to lose everything you typed in. Now click the Flow once and it should say that it's adding to PowerApps.

At this point, the function you so painstakingly put together has been replaced by a reference to the Flow. Delete what's been added and paste the original function back. Add a semicolon to the end of the function (which tells PowerApps that another command is coming) and then press SHIFT+ENTER to insert a newline. Now type in the name of your Flow and it should be found via auto-complete. Click the matching flow with .Run on the end. Add an opening bracket to the end of the flow and it will ask for an input parameter. If you have done it right, that parameter will be called ProcessPhotos_Inputs. Note that it matches the parameter from the image above. The parameter we are passing is the FinalData variable we constructed earlier.

image   image

image
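For reference, once the original code is pasted back and the flow call appended, the Submit button's OnSelect ends up looking something like this (assuming your flow was named "Upload Photos to Audit Lib", which surfaces as UploadPhotostoAuditLib in the formula bar):

UpdateContext( { FinalData: Concat(PictureList, ID & "|" & Photo & "#") } );
UploadPhotostoAuditLib.Run(FinalData)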

Okay so basically we have wired up PowerApps to flow and confirmed that Flow is asking PowerApps for the correct parameter. So let’s now get back to Flow and finish the first action. If you closed the Flow tab in your browser, you can start a new flow editing session from within PowerApps. Just click the ellipsis next to your flow and click Edit.

image

Right! So after that interlude, let's use the first function of interest – split(). In the text box for your compose function, modify it to look like the string below. Be sure to put the whole thing in quotes, because Flow is going to try and be smart and in doing so make things really counter-intuitive and hard to use. Don't be surprised if your ProcessPhotos_Inputs box disappears. Just add it back in again via the "dynamic content" button.

image

In fact, save and close the flow at this point and then edit it again. You will now see what the real function looks like. Note how ProcessPhotos_Inputs has magically changed to {@triggerBody()['ProcessPhotos_Inputs']}.

image

Unfortunately this sort of magic will not actually work… there are a few too many curly braces and excessive use of “@” symbols. So replace the function with the following (include quotes):

  • "@split(triggerBody()['ProcessPhotos_Inputs'], '#')"

Like I said… Flow is trying to be smart, only it's not. If you have done things right, the action looks like the screen below. Double check this, because if you do not have the quotes right, it will get messy. In fact if you save the flow and re-open it, the quotes will disappear. This is actually a good sign…

image

After saving the flow, closing and re-opening… look Ma, no quotes!

image

Now for the record, according to the documentation, the triggerBody() function is a reference function you use to "reference outputs from other actions in the logic app or values passed in when the logic app was created. For example, you can reference the data from one step to use it in another". This makes sense – as ProcessPhotos_Inputs is being passed from the PowerApps trigger step to this compose function we are building.

In any event, let’s add the next workflow step. The output of the ProcessPhotos step should be an array of photos, so now we need to process each photo and upload it to SharePoint. To do this, click on “New step”, and from the more button, choose “Add an apply to each” action. In the “Select an output from previous steps” action, click “Add dynamic content” and choose the output from the ProcessPhotos step as shown in the sequence of images below.

image   image

image

Next click “Add a condition” and choose the option “Edit in advanced mode”. Replace any existing function with :

  • @not(empty(item()))

image

image

The item() function is specifically designed for a "repeating action" such as our "apply to each" event. It returns the item that is in the array for this iteration of the action. We are then using the empty() function to check if this item has any data in it, negated via the not() function, so we only take action if the item has data. In case you are wondering why I need to test this, the data that comes from PowerApps has an extra blank row and this effectively filters it out.

The resulting screen should look like this:

image

At this point we should have a file name and file body pair, delimited by a pipe symbol. So let’s add an action to get the file name first. On the “If Yes” condition, click “Add an action” and choose another Compose Action. Rename the action to “Get File Name” and enter the function:

  • "@split(item(),'|')[0]"

image  image

The square brackets are designed to return a value from an array with a specific index. Since our split() function will return an array of two values, [0] tells Flow to grab the first value.

Now let’s add an action to get the file body. Click “Add an action” and choose another Compose Action. Rename the action to “Get File Body” and enter the function:

  • "@dataUriToBinary(split(item(),'|')[1])"

image

Looking at the above function, the split() side of things should be easy to understand. We are now grabbing the second item in the array. Since that item is the image in an encoded format, we are then passing it to the dataUriToBinary() to turn it back into an image again. Neat eh?

Now that we have our file name and file body, let’s add the final action. This time, we will choose the “Create File” action from the SharePoint connector. Rename the action to “Upload to Library” and enter the URL of the destination SharePoint site. If you have permission to that site, you can then specify which library to upload the photo to by browsing the Folder Path.

image  image

Now come the final steps. For the File Name, click the "Add dynamic content" button and add the Output from the Get File Name step (now you know why we renamed it earlier). Do the same for the File Content textbox by choosing the output from the Get File Body step.

image

Right! We are ready to test. Sorry about screenshot hell in the Flow section, but I am writing this on the assumption you are new to it, so I hope the extra detail helped. To test, use PowerApps on your phone or via the web browser, as the PowerApps desktop tool has a bug at the time of writing that will prevent the photos being sent to Flow in the right format.

Now it is time to test.

So check your document library and see if the photos are there! In the example below I took a photo of my two books for blatant advertising purposes. The 3rd image is the document library showing the submitted photos!

image   image

image

Finally, here are some tips for troubleshooting along the way…

First up, if you are having trouble with your flows, one lame but effective way to debug is to add a "Send an email" action from the Office365 Outlook Connector to your flow. This is particularly handy for encoded images as they can be large, and often in the flow management tools you will see a "value too big to display" error. This method is crude, and there are probably better approaches, but it is effective. In the image below you can see how I have added the action below my ProcessPhotos action, prior to splitting it into an array.

image   image

Another thing that has happened to me (especially when I rename flow actions) is a broken trigger connection between Flow and PowerApps. Consider this error…

image  image

If your flow fails and you see an error like

"Unable to process template language expressions in action [one of your actions] inputs at line '1' and column '1603': 'The template language expression 'json(decodeBase64(triggerOutputs().headers['X-MS-APIM-Tokens']))['$connections']['shared_office365']['connectionId']' cannot be evaluated because property 'shared_office365' doesn't exist, available properties are ''. Please see https://aka.ms/logicexpressions for usage details.'"

…the connection between PowerApps and Flow has gotten screwed up.

To resolve this, disconnect your flow from PowerApps and reconnect it. Your flow will be shown as a data source which can easily be removed as shown below.

image

Once removed I suggest you go back to your submit button and copy your function to the clipboard. This is because adding the flow back will wipe the contents of the button. With your button selected, click the Flows button from the Action menu and then click on the Flow to re-associate it to PowerApps. The third image shows the onSelect property being cleared and replaced with the flow instantiation. This is where you can re-paste your original code (4th image)…

image

image

image

image

Finally, another issue you may come across is where no pictures come through, and instead you are left with an oddly named file, which (if you look closely) looks like one of your compose actions in Flow. You can see an example of this in the image below.

The root cause of this is similar to what I described earlier in the article where I reminded you to put all of your compose actions in quotes. I don’t have a definitive explanation for why this makes a difference and to be honest, I don’t care. The fix is easy: just make sure all of your compose actions are within quotes, and you should be cooking with gas.

Phew! Another supposedly quick blog post that got big because I decided to explain things for the newbie. Hopefully this was of use to you and I look forward to your feedback.

Paul Culmsee

www.hereticsguidebooks.com


A Clever-workaround for Saving Photos to SharePoint from PowerApps


At the time of writing, a common request for PowerApps is to be able to upload photos to SharePoint. It makes perfect sense, especially now that it's really easy to make a PowerApp that is bound to a SharePoint list. Sadly, although Microsoft have long acknowledged the need in the PowerUsers forum, a solution has not been forthcoming.

<update>I now use a Flow-based method for photos that I documented in this video. I only use this method in complex scenarios</update>

I have looked at the various workarounds, such as using the OneDrive connector or a custom web API, but these for me were fiddly. Thanks to ideas from John Liu, I’ve come up with a method that is more flexible and less fiddly to implement, provided you are okay with a bit of PowerShell, and (hopefully) with PnP PowerShell. One advantage to the method is that it handles an entire gallery of photos in a single transaction, rather than just a single photo at a time.

Now in the old days I would have meticulously planned out a multi-part series of posts related to a topic like this, because I have to pull together quite a lot of conceptual threads into a single solution. But since the pace of change in the world of Office365 is so rapid, my solution may be out of date by the time I publish it. So instead I offer a single summary post of my solution and leave the rest to you to figure out. Sorry followers, it's just too hard to do epic multi-part articles these days – times have changed.

What you need

  1. An understanding of JSON and basic idea of web services
  2. An Azure subscription
  3. Access to Azure functions
  4. The PnP PowerShell cmdlets
  5. A Swagger file (don’t worry if this makes no sense now)
  6. To be signed up to PowerApps

 How we are going to solve this…

In a nutshell, we will create an Azure function that uses PowerShell to receive photos from PowerApps and upload them to a SharePoint library. Here is my conceptual diagram that I spent hours and hours drawing…

Snapshot

To do this we will need to do a few things.

  1. Customise PowerApps to store photo data in the way we need
  2. Create and configure our Azure function
  3. Write and test the PowerShell code to upload to SharePoint
  4. Create a Swagger file so that PowerApps can talk to our Azure function
  5. Create a custom PowerApps connection/datasource use the Azure function
  6. Test successfully and bask in the glory of your awesomeness

Step 1: Customise PowerApps to store photo data in the way we need

Let’s set up a basic proof of concept PowerApp. We will add a camera control to take photos, a picture gallery to view the photos and a button to submit the photos to SharePoint. I’ll use the PowerApps desktop client rather than the web page for this task and create a blank app using the Phone Layout.

image

From the Insert menu, choose a Camera control from the Media dropdown to add it to the screen. Leave it up near the top…

image

From the Insert Menu, add a Gallery control. For my demo I will use the vertical gallery. Move it down below the camera control so it looks like the second image below.

image  image

From the Insert Menu, add two buttons below the gallery. Set the text property on one to “Submit” and the other to “Clear”. I realise the resulting layout will not win any design awards but just go with it. Use the picture below to guide you.

image    image

Now let's wire up some magic. Firstly, we will set it up so clicking on the camera control will take a photo and save it to PowerApps storage. To do this we will use the Collect function. Assuming your Camera control is called "Camera1", select it, and set the OnSelect property to:

Collect(PictureList,Camera1.Photo)

image

Now when a photo is taken, it is added to an in-memory PowerApps data-source called PictureList. To see this in action, preview the PowerApp and click the camera control a couple of times. Exit the preview and choose "Collections" from the left hand menu. You will now see the PictureList collection with the photos you just took, stored in a field called Url. (The reason it is called Url and not "Photo" will become clear later.)

image

Now let’s wire up the clear button to clear out this collection. Choose the button labelled “Clear” and set its OnSelect property to:

Clear(PictureList)

image

If you preview the app and click this button, you will see that the collection is now empty of pictures.

The next step is to wire up the Gallery to the PictureList collection so that you can see the photos being taken. To do this, select the gallery control and set the Items property to PictureList as shown below. Preview this and you should be able to take a set of photos, see them added to the gallery and be able to clear the gallery via the button.

image

image

Now we get to a task that will not necessarily make sense until later. We need to massage the PictureList collection to get it into the right format to send to SharePoint. For example, each photo needs a filename, and in a real-world scenario, we would likely further customise the gallery to capture additional information about each photo. For this post I will not do this, but I want to show you how you can manipulate data structures in PowerApps. To do this, we are going to now wire up some logic to the “Submit” button. First I will give you the code before I explain it.

Clear(SubmitData);
ForAll(PictureList,Collect(SubmitData, { filename: "a file.jpg", filebody: Url }))

image

In PowerApps, it is common to add multiple statements to controls, separated by a semicolon. Thus, the first line above initialises a new Collection I have called “SubmitData”. If this collection already had data in it, the Clear function will wipe it out. The second line uses two functions, ForAll and the previously introduced Collect. ForAll([collection],[formula]) will iterate through [collection] and perform tasks specified in [formula]. In our case we are adding records to the SubmitData collection. Each record consists of two fields and is in JSON format – hence the curly braces. The first field is called filename and the second is called filebody. In my example the filename is a fixed string, but filebody grabs the Url field from the current item in PictureList.

To see the effect, run the app, click submit and then re-examine the collections. Now you will see two collections listed – the original one that captures the photos from the camera (PictureList), and the one called SubmitData that now has a field for filename and a field called filebody with the photo. I realise that setting a static filename called "a file.jpg" is not particularly useful to anybody, and I will address this a little later; the point is we now have the data in the format we need.
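If you want something a little less static right now, one option (a sketch only – the filename-generation post in this series shows a fuller naming convention) is to name each photo as it is taken rather than at submit time. For example, the camera's OnSelect could become:

Collect(PictureList, { PhotoName: Concatenate("photo-", Text(CountRows(PictureList)+1), ".jpg"), Url: Camera1.Photo })

…and the Submit button's ForAll would then simply copy that column:

Clear(SubmitData);
ForAll(PictureList,Collect(SubmitData, { filename: PhotoName, filebody: Url }))

Note that if you change the shape of PictureList like this, make sure the gallery still binds its image to the Url column.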

image

By the way, behind the scenes, PowerApps stores the photo in the Data URI scheme. This is essentially a Base64 encoded version of the image with a descriptor at the start that is included in HTML. When you think about it, this makes sense in some situations because it reduces the number of HTTP round trips between browser and server. For example here is a small image encoded and embedded direct in HTML using the technique.

<img src="data:image/png;base64,iVBORw0KGgoAAA
ANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4
//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU
5ErkJggg==" alt="Red dot" />

The implication of this format is when PowerApps talks to our Azure function, it will send this sort of JSON…

[ {  "filename": "boo.jpg",
"filebody": "data:image/jpeg;base64,/9j/4AAQSkZJR [ snip heaps of Base64 ] T//Z" },
{  "filename": "boo3.jpg",
"filebody": "data:image/jpeg;base64,/9j/TwBAQMEBA [ snip heaps of Base64 ] m22I" } ]

In the next section we will set up an Azure Function and write the code to handle the above format, so save the app in its current state and give it a nice name. We are done with PowerApps for now…

2. Create and Configure Azure Function

Next we are going to create an Azure function. This is the bit that is likely new knowledge for many readers. You can read all about them on their Azure page, but my quick explanation is that they allow you to take a script or small piece of code and turn it into a fully fledged webservice. As you will soon see, this is very useful indeed (as well as very cost effective).

Now like many IT Pros, I am a PowerShell hacker and I have been using the PowerShell PnP libraries for all sorts of administrative purposes for quite some time. In fact if you are administering an Office365 tenant and you are not using PnP, then I can honestly say you are missing out on some amazing time-saving tools and you owe it to yourself to skill-up in this area. Of course, I realise that many readers will not be familiar with PowerShell, let alone PnP, and I expect some readers have not done much coding at all. Luckily the code we are going to use is just a few lines and I think I can sufficiently explain it.

But we are getting ahead of ourselves; let's create the Azure function and then revisit PowerShell. Assuming you have an Azure subscription, visit functions.azure.com and log in. If not, sign up for the free account and then create a function app to host the function. I called mine MyFunctionsDemo but yours will have to be something different. This will take a minute or two to complete and you will be redirected to the Azure functions portal.

image   image

Once the web application to host your functions is created, click the + next to the Functions button to create a new function. PowerShell is still in preview, so you have to click the option to create a custom function. On the next screen, in the Language dropdown, choose PowerShell.

image  image

Our function is going to be triggered from PowerApps when a user clicks the submit button. PowerApps will make a HTTP request so this is a HttpTrigger scenario. Click the HttpTrigger-PowerShell template, give it a name (I called mine PhotoSendSP) and click the Create button. If all goes to plan you will be presented with a screen with some basic PowerShell code… essentially a “Hello World” web service.

image   image

Let’s test this newly minted Azure function before we customise it. If you look to the right of the screen above, you will see a “Test” vertical label. Click it and you will be presented with a screen that allows you to craft some data to send to your shiny new function. You can see that the test is going to be a HTTP POST by default. As you can see below, there is a basic JSON entry with a single name/value pair “name”: “Azure”. Change the Azure string to something else and then click the Run button. The result will be displayed below the JSON as shown below.

image   image

Now let's take a quick look at the PowerShell code provided to you by default. Only lines 2, 3 and 11 matter for our purposes. What lines 2 and 3 show is that all of the details posted to this webservice are stored in a variable called $req. Line 2 reads this and converts it from JSON into an object stored in a variable called $requestbody. Line 3 then asks $requestbody for the value of "name", which, if you look in the screenshots above, is what you set in the test. Line 11 then writes the output to a variable called $res, which is the response back to the caller of this webservice. In this case you can see it returns "Hello " with $name appended to it.

image
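In case the screenshot is hard to read, the default HttpTrigger-PowerShell template at the time looked roughly like the sketch below (reconstructed from memory, so treat it as indicative rather than exact – the key point is that line 2 reads the request, line 3 pulls out the name value and the last line writes the response):

# POST method: $req
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$name = $requestBody.name

# GET method: each querystring parameter is its own variable
if ($req_query_name)
{
    $name = $req_query_name
}

Out-File -Encoding Ascii -FilePath $res -inputObject "Hello $name"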
Now that we have seen the PowerShell code, let's update it with code to receive data from PowerApps and send it to SharePoint.

3. Write and test the PowerShell code that uploads to SharePoint

If you recall with PowerApps, the data we are sending to SharePoint is one or more photos. The data will look like this…

[ {  "filename": "boo.jpg",
"filebody": "data:image/jpeg;base64,/9j/4AAQSkZJR [ snip heaps of Base64 ] T//Z" },
{  "filename": "boo3.jpg",
"filebody": "data:image/jpeg;base64,/9j/TwBDAQMEBAUEBQ  m22I" } ]

In addition, for the purposes of keeping things simple, I am going to hard code various things like the document library to save the files to and not worry about exception handling. Below is my sample code with annotations at the end…

1.  Import-Module "D:\home\site\wwwroot\modules\SharePointPnPPowerShellOnline\2.15.1705.0\SharePointPnPPowerShellOnline.psd1" -Global
2.  $requestBody = Get-Content $req -Raw | ConvertFrom-Json
3.  $username = "paul@tenant.onmicrosoft.com"
4.  $password = $env:PW;
5.  $siteUrl = "https://tenant.sharepoint.com"
6.  $secpasswd = ConvertTo-SecureString $password -AsPlainText -Force
7.  $creds = New-Object System.Management.Automation.PSCredential ($username, $secpasswd)
8.  Connect-PnPOnline -url $siteUrl -Credentials $creds
9.  $ctx = get-pnpcontext
10. $doclib = $ctx.Web.Lists.GetByTitle("Documents")
11. ForEach ($item in $requestbody)
12. {
13.    $filename = $item.filename
14.    $rawfiledata = $item.filebody
15.    $rawfiledata = $rawfiledata -replace 'data:image/jpeg;base64,', ''
16.    $bytes = [System.Convert]::FromBase64String($rawfiledata)
17.    # uses comma notation related to .net reflection http://piers7.blogspot.com.au/2010/03/3-powershell-array-gotchas.html
18.    $memoryStream = New-Object System.IO.MemoryStream (,$bytes)
19.    $FileCreationInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
20.    $FileCreationInfo.Overwrite = $true
21.    $FileCreationInfo.ContentStream = $memoryStream
22.    $FileCreationInfo.URL = $filename
23.    $Upload = $doclib.RootFolder.Files.Add($FileCreationInfo)
24.    $ctx.Load($Upload)
25.    $ctx.ExecuteQuery()
26. }

 

  • Line 1 loads the PnP PowerShell module. Without this, commands like Connect-PnPOnline and Get-PnPContext will not work. I’ll show how this is done after explaining the rest of the code.
  • Lines 3-7 are all about connecting to my SharePoint online tenant. Line 4 contains a variable called $env:PW. The idea here is to avoid passwords being stored in the code in clear text. The password is instead pulled from an environment variable that I will show later.
  • Lines 8-10 connect to the site collection and then get a reference to the default document library within it.
  • Line 11 loops over the data sent from PowerApps, processing each image/filename pair.
  • Lines 13-18 grab the file name and file data, converting the file data into a MemoryStream, which is a way to represent a file in memory.
  • Lines 19-25 then upload the in-memory image to the document library in SharePoint, based on the filename provided. (Note for any PnP gurus wondering why I did not use Add-PnPFile: that cmdlet did not handle the MemoryStream properly and the uploaded images were always broken.)

So now that we have seen the code, lets sort out some final configuration to make this all work. A lot of the next section I learnt from John Liu and watching the excellent Office Patterns and Practices Special Interest Group webinar he recently did with my all-time SharePoint hero, Vesa Juvonen.

Installing PnP PowerShell Components

First up, none of this will work without the PnP PowerShell module deployed to the Azure Function App. The easiest way to do this is to install the PnP PowerShell cmdlets locally and then copy the entire installation up to the Azure function environment. John Liu explains this in the aforementioned webinar but in summary, the easiest way to do this is to use the Kudu tool that comes bolted onto Azure functions. You can find it by clicking the Azure function name (“MyFunctionDemo” in my case) and choosing the “Platform Features” menu. From here you will find Kudu hiding under the Development Tools section. When the Kudu tab loads, click the Debug console menu and create a CMD or PowerShell console (it doesn’t matter which)

image  image  image

We are going to use this console to copy up the PnP PowerShell components. You can ignore the debug console and focus on the top half of the screen. This is showing you the top level folder structure for the Azure function application. Click on the site and then wwwroot folders. This is the folder where all of your functions are stored (you will see a folder matching the name of the function we made in step 2). What we will do is install the PnP modules at this level, so they can be used for other functions that you are sure to develop.

image  image

So click the + icon to create a folder here and call it Modules. From here, drag and drop the PnP PowerShell install location to this folder. In my case C:\Program Files\WindowsPowerShell\Modules\SharePointPnPPowerShellOnline\2.15.1705.0. I copied the SharePointPnPPowerShellOnline\2.15.1705.0 folder and all of its content here as I want to be able to maintain multiple versions of PnP as I develop functions over time.

image  image

Now that you have done this, the first line of the PowerShell script will make sense. Make sure you update the version number in the Import-Module command to the version of PnP you uploaded.

Import-Module "D:\home\site\wwwroot\modules\SharePointPnPPowerShellOnline\2.15.1705.0\SharePointPnPPowerShellOnline.psd1"

Handling passwords

The next thing we have to do is address the issue of passwords. This is where the $env:PW comes in on line 4 of my code. You see, when you set up an Azure Functions application, you can create your own settings that can drive the behaviour of your functions. In this case, we have made an environment variable called PW in which we will store the password to access this site collection. This hides clear text passwords from code, but unfortunately it is a security by obscurity scenario, since anyone with access to the Azure function can review the environment variable and retrieve it. If I get time, I will revisit this via App Only Authentication and see how I go. I suspect though that this problem will get "properly" solved when Azure functions support using the Azure Key Vault.

In any event, you will find this under the Application Settings link in the Platform Features tab. Scroll down until you find the “App Settings” section and add your password as shown in the second image below.

image  image

Testing it out…

Right! At this point, we have all the plumbing done. Let’s test it to see how it goes. First we need to create a JSON file in the required format that I explained earlier (an array of filename and filebody pairs). I crafted these by hand in Notepad as they are pretty simple. To remind you, the format was:

[ {  "filename": "boo.jpg",
"filebody": "data:image/jpeg;base64,/9j/4AAQSkZJR [ snip heaps of Base64 ] T//Z" },
{  "filename": "boo3.jpg",
"filebody": "data:image/jpeg;base64,/9j/TwBDAQMEB [ snip heaps of Base64 ] m22I" } ]

To generate the filebody elements, I used the covers of my two books (they are awesome – buy them!) and called them HG2BP and HG2M respectively. To create the Base64-encoded images in the Data URI scheme, I went to https://www.base64-image.de and generated the encoded versions. If you are lazy and want to use a pre-prepared file, just download the one I used for testing.
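
If you would rather not paste images into a website, a couple of lines of PowerShell will produce the same Data URI string from a local image (the paths are just examples):

$bytes   = [System.IO.File]::ReadAllBytes("C:\Temp\HG2BP-cover.jpg")                 # example source image
$dataUri = "data:image/jpeg;base64," + [System.Convert]::ToBase64String($bytes)      # the Data URI scheme string
$dataUri | Out-File "C:\Temp\HG2BP-datauri.txt"                                      # paste this value into the filebody element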

h2bph2m image

To test it, all we need to do is click the right-hand Test link and paste the JSON into it. Click the Run button and hope for the best! As you can see in my example below, the web service returned a 200 status, which means everything is hunky dory, and the logs show the script executing successfully.

image

I checked my document library and the photos are there… woohoo!!

image

So basically we have a lot of the bits in place. We have proven that our Azure function can take a JSON file with encoded images, process that file and save the images to a SharePoint document library. You might be thinking that all we need to do now is wire up PowerApps to this function. Yeah, so did I, but little did I realise the pain I was about to endure…

4. Create a Swagger file so that PowerApps can talk to our Azure function

Now we come to the most painful part of this whole saga. We need to describe our Azure function using a standard called Swagger (or OpenAPI). This provides important metadata so that PowerApps can make the function easy for users to consume. This will make sense soon enough, but first we have to create it, which is a royal pain in the ass. I found the online documentation for Swagger to be lacking and it took me a while to understand enough of the format to get it working.

So first up, to make things simpler, let’s reduce some of the complexity. Our Azure function is a simple HTTP POST. We have not defined any other type of request, so let’s make this formal, as it will generate a much less ugly Swagger definition. Expand your function and choose the “Integrate” option. On the next screen, under Triggers, you will find a drop-down labelled “Allowed HTTP methods”. Change the default value to “Selected methods” and then untick all HTTP methods except for POST. Click Save.

image  image  image

Now click back to your function app, and choose “API Definition” from the top menu. This will take you to the screen where you create/define your Swagger file.

image

On the initial screen, set your API definition source to function if asked, and you should see a screen that looks somewhat like this…

image

Click the Generate API definition template button as suggested by the comment in the code box in the middle. This will generate a Swagger file, and the right side of the screen will use it to render a summary of your API. You can see the URL of your Azure app, some information about an API key (which we will deal with later) and, below that, the PhotoSendSP function exposed as a web service (/api/PhotoSendSP).

image

Now at this point you are probably thinking “okay, so it’s unfamiliar, but this is pretty easy”, and you would be right. Where things got nightmarish for me was working out how to understand and customise the Swagger file, as the generated template is incomplete. All it has done is define our function (note the paths section in the above screenshot – can you see all those empty square brackets? That’s what you now need to fill in).

For the sake of brevity, I am not going to describe the ins and outs of this format (and I don’t fully know it yet anyway!). What I can tell you is that getting this right is a painful and time-consuming combination of trial and error, reading the swagger spec and testing in PowerApps. Let’s hope my hints here save you some frustration.

The first step is to create a definition for the format of data that our Azure function accepts as input. If you look closely above, you will see a section called definitions. Paste the following into that section so it looks like the screenshot below. Note: if you see any symbol apart from a benign warning message next to the “Photos:” line, then you do not have it right!

definitions:
   Photos:
      type: object
      required:
         - filename
         - filebody
      properties:
         filename:
            type: string
            example: image.jpg
         filebody:
            type: string

image

So what we have defined here is an object called Photos which consists of two required properties, filename and filebody. Both are strings, and filename also has an example to illustrate what is expected. Depending on how this Swagger file is consumed by another application that supports Swagger, one can imagine that example showing up in online help or IntelliSense when our function is being called.

Now let’s make use of this definition. Paste this into the parameters and description sections. Note the “schema:” section. Here we have told the Swagger file that our function is expecting an array of objects based on the Photos definition that we created earlier.

parameters:
  - name: photocollection
    in: body
    description: "The encoded files"
    required: true
    schema:
       type: array
       items:
          $ref: '#/definitions/Photos'
description: "A collection of photos and filenames"

image

Finally, let’s finish off by defining that our Azure function consumes and produces its data in JSON format. Although our sample code does not return anything to PowerApps, you can imagine situations where we might do something like send back a JSON array of the SharePoint URLs of each uploaded photo.

produces:
- application/json
consumes:
- application/json

image

Okay, so we are all set. Now to be clear, there is a lot more to Swagger, especially if you wanted to call our function from Flow, but for now this is enough. Click the Save button, and then click the “Export to PowerApps + Flow” button. You will be presented with a new panel that explains the process we are about to follow. Feel free to read it, but the key step is to download the Swagger file we just created.

image

Okay, so if you have made it this far, you have a swagger JSON file and you are in the home stretch. Let’s now head back to PowerApps!

5. Create a custom PowerApps connection/datasource to use the Azure function

Back in PowerApps, we need to make a new custom connection to our Azure function. Click the Connections menu item and you will be redirected to web.powerapps.com. Click “Manage Custom Connections” and then click “Create a custom connector”. This will take you to a wizard.

image   image

image

The first step is to upload the Swagger file you created in step 4 and, before you do anything else, rename your connector to something short and sweet, as this will make it easier to find when displayed in the PowerApps list of connections.

image

Scroll to the bottom of the page and click the “Continue” button. Now you are presented with some security options for your connector. For context, the default setting for the PowerShell Azure function template we chose in step 2 was to use an API key, so you can leave all of the defaults here, although I like to add the more meaningful label “API Key” (this will make sense soon). Click Continue.

image

PowerApps has now processed the Swagger file and found the PhotoSendSP POST action we defined. It has also pulled some of the data from the Swagger file to prepopulate some fields. Note that this screen has some UI problems – you need to hover your cursor over the forms in the middle of the screen to see the scrollbar, so there is more to edit than what you initially see…

image

For now, do not enter anything into the summary field and scroll down to look at some of the other settings. The Visibility setting does not really matter for PowerApps, but remember that other online services can call our function. This visibility stuff relates more to Microsoft Flow, so you can ignore it for now. Scroll further down and you will see how our swagger schema has been processed by PowerApps. You can explore this but I suggest leaving it well alone. Below I have used all my MSPaint skills to make a montage to show how this relates to your Swagger file…

image

Finally, click Create Connector to wire it up. If all has gone to plan, you will see something resembling the following in the list of custom connectors in PowerApps. If you get this far, congratulations! You are almost there!

image

6. Test successfully and bask in the glory of your awesomeness

Now that you have a custom connection, let’s try it out. Open your PowerApp that you created in step 1. From the Content menu, click Data sources, click Add Datasource and then click New Connection. Scroll through the list of connections until you find the one you created in step 5. Click on it and you will be prompted for an API key (now you see why I added that friendly label during step 5).

image  image  image

So where do you find this key? Well, it turns out that it was automagically generated for you when you first created your function! Go back to the Azure functions portal and find your function. From the function sub-menus, click Manage and you will be presented with a “Function Keys” section with a default key listed. Copy this to the clipboard, paste it into the PowerApps API Key text box and click the Create button. Your datasource that connects to your Azure function is now configured in PowerApps!! (yay!)
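
As an aside, that same key also lets you exercise the function from outside PowerApps, which is handy for troubleshooting the connection later. Something along these lines should work (the URL matches my demo function app, while the key and file path are placeholders):

$uri  = "https://myfunctionsdemo.azurewebsites.net/api/PhotoSendSP"
$json = Get-Content "C:\Temp\testphotos.json" -Raw                                   # same JSON format as the earlier test
Invoke-RestMethod -Uri $uri -Method Post -Body $json -ContentType "application/json" -Headers @{ "x-functions-key" = "<your default function key>" }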

image

image  image

Now way back in step 1 (gosh it seems like such a long time ago), we created a button labelled Submit, with the following formula.

Clear(SubmitData);
ForAll(PictureList,Collect(SubmitData, { filename: "a file.jpg", filebody: Url }))

The problem we are going to have is that all files submitted to the web service will be called “a file.jpg”, as I hardcoded the filename parameter for simplicity. Now if I fully developed this app, I would add a textbox to the gallery so that each photo has to be named prior to being submitted to SharePoint. I am not going to do that here as this post is already too long, so instead I will use a trick I saw here to build a filename from today’s date plus two random characters. I know it is not truly unique, but it will suffice for our demo.

Here is the formula in all its ugliness.

Clear(SubmitData);
ForAll(PictureList, Collect(SubmitData, { filename: Concatenate(Text( Now(), DateTimeFormat.LongDate ), Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1), Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1), ".jpg"), filebody: Url } ) )

Yeah I know… let me break it down for you…

  • Text( Now(), DateTimeFormat.LongDate ) produces a string containing today’s date
  • Mid("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1 + RoundDown(Rand() * 36, 0), 1) selects a random letter or number from the string
  • Concatenate takes the above date, the two random characters and adds “.jpg” to the end.

PowerApps does allow for line breaks in code, so it looks less ugly…

image

Let’s quickly test this before the final step of sending it off to SharePoint. Preview the app, take some photos and then examine the collections. As you can see, the SubmitData collection now has unique filenames assigned. (By the way, yes I took these in a car and no, I was not driving at the time! 🙂)

image

We are now ready for the final step. We need to call the Azure function! Smile

Go back to your submit button and add the following code to the end of the existing code, taking into account the name you gave to your connection/datasource. You should find that PowerApps uses intellisense to help you add the line because it has a lot of metadata from the swagger file.

'myfunctionsdemo.azurewebsites.net'.apiPhotoSendSPpost(SubmitData)

image

Important! Before we go any further, I strongly suggest you use the PowerApps web-based authoring environment and not the desktop application. I have seen a problem where the desktop application does not encode images using the Data URI format, whereas the web-based tool (and the PowerApps clients on Android and iOS) work fine.

So log into PowerApps on the web, open your app, preview it and cross your fingers! (If you want to be clever, go back to your function in Azure and keep an eye on the logs Smile)

image

As the above screen shows, things are looking good. Let’s check the SharePoint document library that the script uploads to… YEAH BABY!! We have our photos!!!

image

Conclusion

Now you finally get to bask in your awesomeness. If you survived all this plumbing and have gotten it working, then congratulations! You are well on your way to becoming a PowerApps, PowerShell, PnP (and Flow) guru. This means you can now enhance your apps in all sorts of ways. For the non-developers who got this far, you can truly call yourself a citizen developer and design all sorts of innovative solutions via these techniques.

Even though a lot of this stuff is fiddly (especially that god-awful Swagger crap), once you have gone through it a couple of times and understand the intent of the components we have used, this is actually quite an easy solution to put together. I also found the Azure function side of things in particular really easy to debug and see what was going on.

In terms of where we could take this, there are several avenues that I can immediately think of, but the possibilities are endless.

  • We could update the PowerShell code so it takes the destination document library as a parameter. We would need to generate a new Swagger definition and then update our datasource in PowerApps, but that is not too hard a task once you have the basics working.
  • Using the same method, we could design a much more sophisticated form and capture lots of useful metadata with the pictures, and get them into SharePoint as metadata on the picture.
  • We could add in a lot more error handling into the script and return much better detail to PowerApps, such as detailed failure information.
  • Similarly, we could return detailed information to PowerApps to make the app richer. For example, we could generate unique filenames in our PowerShell function rather than PowerApps and return those names (and the URL to the image) in the reply to the HTTP Post, which would enable PowerApps to display images inline.
  • We could also take advantage of recent PowerApps enhancements and use local caching, i.e. when an internet connection is available, call the API; when it is not, save to local storage and call the API once a connection is available.
  • We could not only upload images, but update lists and any other combination of SharePoint functions supported by PnP.

I hope that this post has helped you to better understand how these components hang together and I look forward to your feedback and how you have adopted/adapted and enhanced the ideas presented here.

 

Thanks for reading

 

Paul Culmsee

www.hereticsguidebooks.com

 


So what is this newfangled apps model anyway and why do I care? (part 3)

This entry is part 3 of 3 in the series Apps Model

Hiya

This is the third post in a series of articles aimed at helping strategic or business-focused users understand the SharePoint 2013 and Office365 “apps model”, and what it means for the future of SharePoint. In part 1 of this series, I outlined the opportunities and challenges that Microsoft are currently trying to address. They were:

  • Changing perceptions to cloud technologies and increased adoption; which enables…
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange in the form of Google Apps; as well as…
  • An increasing number of smaller cloud-based “point solution” players who chip away at SharePoint features with cheaper and easier to use offerings; while suffering from…
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace; while dealing with…
  • Customers unable to handle the ever increasing complexity of SharePoint, leading to delaying upgrades for years

These reasons prompted Microsoft to take a strongly cloud-driven strategy, and they really have been transforming their business to deliver it. They have transitioned from a software provider to an application hosting provider and, in terms of SharePoint, the “apps” model is now the future of customisation.

Now “apps” is a multifaceted topic and the word has unfortunately been overused. So in part two we started to unpack the apps model by channelling the kids’ TV show Playschool to show the idea of SharePoint customisations being hosted on separate servers, while presenting a seamless experience for users.

If you have not read parts 1 and 2, I seriously suggest you do. To whet your appetite, here are some pretty diagrams to highlight what you missed out on!

image image  image

The main point I made was the notion that custom SharePoint components ran on separate, non SharePoint servers and were embedded into SharePoint via Iframes. These remote apps then communicate securely with SharePoint (eg read or write data from lists) via web services.

I concluded part 2 by showing the benefits of the apps model from Microsoft’s perspective. Among other things, this model of developing custom SharePoint solutions can be supported on Office365 and on-premises. For on-premises customers, SharePoint servers remain pristine and free of the muck and clutter of 3rd party code, making service packs and cumulative updates much less complex and costly. It also enables Microsoft to offer an app store, where 3rd party vendors can maintain cloud based services that can be embedded and consumed by on-premises and online SharePoint installs. If you go back to the 5 key strategic threats I started this post with, it addresses each one nicely.

So Microsoft’s intent is good, and there is nothing wrong with good intentions… or is there?

image

Digging deeper

So where does one start with unpacking the apps model? Let’s make this a little less of a dry read by channelling The Big Bang Theory to find out. First up, Penny wants to know where app data is stored, given that the app runs on a different server to SharePoint as shown below. (If you think about it, from the app’s perspective, SharePoint is the remote server.)

image

Sheldon’s answer? (Of course Sheldon invented the apps model, right?)…

image

Er… come again? Let’s see if Leonard can give a clearer explanation…

image

Leonard’s explanation is a little better. Ultimately the app developer has the choice over where app data is stored. For example, let’s say someone writes a survey app for SharePoint and it renders on the home page via an iframe. When users fill in this poll, the results could be stored on the server that hosts the app and not SharePoint at all (which in SharePoint terms is referred to as the remote web). Alternatively, the app could store the survey results in a list on a SharePoint site which the app is being invoked from (this is called the host web). Yet another alternative is something called the App Web, which I  will return to in a moment.

First lets look at the pros and cons of the first two options.

Option A: Store App Data in SharePoint Lists

If the app developer chooses this approach, the app reads/writes to SharePoint lists in the site where the app is deployed (henceforth called the host web). In this approach, when a site administrator chooses to add an app to the site, the app specifies the level of access it requires for the site, and asks the site admin to authorise this access. In the image below, you can see that this Kodak app is requesting the ability to edit or delete items in document libraries and lists on this site, as well as access user profile information. If the site administrator clicks “Trust it”, the app now has the access it needs.

image

The pro to this approach is that all app data resides in regular old SharePoint, where it is searchable and can take advantage of all of the goodies that lists and libraries give you like versioning, information management policies and workflow. Additionally, multiple apps can access these lists, so this allows for the development of componentised solutions that work with a single authoritative data source.

The potential cons (or implications to be aware of) to the approach are:

  • SharePoint lists and libraries are not always appropriate data stores for some types of data. Most people are well aware by now that a SharePoint list is most definitely not a relational database and it has performance issues when misused (among other things). SharePoint also has in-built thresholds that kick in when lists get big (list queries that generate a result set of 5000 or more items will fail by default). Microsoft state in their SharePoint 2013 and SharePoint Online solution packs documentation: “If your business needs require you to work with large data sets and query result sets, this approach won’t work”.
  • If the app has been deployed to many sites and site collections (and uses lists on each), then things can get painful if a new version of the app requires a new or modified set of columns on the list in the host web.
  • If you delete the app from the host web, the list data remains on the site, as the lists will likely not be deleted. Sometimes this is a good thing, as the data might be important or used in other ways, but if the app developer is storing configuration data here, it would leave orphaned data in the site.

Also think about what happens when you have an on-premises SharePoint server but the app is hosted by a 3rd party outside your firewall. How is the remote app even able to get to your SharePoint box in the first place? You are likely going to need to talk to your network/security people, because some funky firewall/reverse proxy infrastructure will be required to allow that to happen. Additionally, some organisations might be uncomfortable that an app from a 3rd party on the interweb can read and write data inside an internal SharePoint server at all.

So what alternatives are there here?

Option B: Store data in the remote app

The other option for the app developer is to store the app data on the same server the app is running from (called the remote web). In this approach, no data is stored in SharePoint at all. The pro for this option is that it alleviates two of the issues from option A above: developers can use any data storage system they want (e.g. SQL Server, a GIS system or a graph database) and you do not have any of those pesky firewall issues with the app connecting back to your SharePoint server. The app renders in its iframe and does its thing.

Unfortunately this also has some cons.

  • There are potential data sovereignty issues. Where is the remote app hosted? Are you sure you trust the 3rd party app provider with your data? Consider that a 3rd party might host an app for many organisations. Do they have adequate precautions for keeping your data isolated and secure (i.e. is your stuff stored in a separate database to everyone else’s)? Are they adequately backing up that data consistent with your internal standards? If you uninstall the app, is the data also uninstalled at the app provider end?
  • This data is very likely not searchable or easily usable by SharePoint for other purposes as it is not necessarily directly accessible by SharePoint.
  • Chances are that the app will have to talk to SharePoint in some way, so you really don’t get out of dealing with your network/security people to make it all work if the remote web is outside your firewall.
  • Also consider data integrity. If, say, you needed to restore SharePoint because of a data loss or data corruption issue, what does this mean for the data stored on the remote app? Will things get out of sync?

These (and other questions) bring us to the next option that is a bit of a mind-bending middle ground.

Option C: Store data in the App Web

Now here is where things start to mess with your head quite a bit. Most people can understand the idea behind options A and B, but what the heck is this thing known as an app web? The short answer is that it is a special SharePoint sub-site that is used for certain apps. It usually gets created when a site administrator adds an app to a site and, importantly, removed when a site administrator removes the app. If you consider the diagrams below, we can see our mythical SharePoint 2013 homepage with 3 apps on it, all running on separate servers as explained in part 2. If we assume each of these apps has deployed an app web on our SharePoint site, there are now three SharePoint sub-sites created underneath it as shown below. These are the app webs.

image

Now app webs are not ordinary subsites in the way you might know them. For a start, if you tried to access them from your SharePoint host web, you would not be able to at all. Through some trickery, SharePoint puts these sites on a completely different URL than the site where the apps were installed. For example, if you had a site called http://myintranet/sites/web1 and you added a survey app, a subsite exists but it would absolutely not be http://myintranet/sites/web1/survey or anything like that. It would instead look something like:

http://app-afb952fd75de4a.apps.mycompany/sites/web1/surveyapp/

Now being a business audience reading this, trust me that there are good reasons for this apparent weirdness related to security which I will speak to in the next post.

But if this makes sense so far, then in some ways, one can argue that an app web is a weird cross between option A and B in that it is a subsite on your SharePoint farm, yet SharePoint treats it as if it is a remote data store. This means the app web is a special isolated storage area for app developers to put stuff like data, configuration, CSS, JavaScript and whatever other functionality their app needs to do its thing.

This approach has some advantages:

  • The app web is technically a SharePoint site, so app developers can create whatever structure they need (think lists, libraries and columns) to store their stuff like images, CSS, JavaScript and other goodies. This allows for much more flexible data structures than writing to the odd list in a host web (option A above).
  • The app web lives on your SharePoint server, so it means some app components (and data) can be stored here instead of on a remote web on a server far away from you. When you back up your content databases, the app web is backed up like everything else (less data integrity risk than option B).
  • The app web facilitates clean install/uninstall of an app, since the app web is removed if an app is removed. In other words, no more orphaned data lying around

But if you think all of those points through, you might see several important implications:

  • If the app developer decided to store critical app data in the app web, when the app is uninstalled that data is lost (or put better, developers have to write special uninstall code to copy the data somewhere else which means yet more code)
  • Just like option A, SharePoint lists and libraries are not always appropriate data stores for some types of data (remember the list item threshold I mentioned in option A? It applies here too).
  • Apps cannot share app webs between them. In other words, apps cannot reach in and access data from other app webs, so you can’t easily use the information stored in app webs with other apps and applications. In fact, if you want to do this you pretty much have to choose option A: store the shared data in the host web and have both apps access it there.
  • You may end up with many, many app webs. If you take my example above, there are 3 app webs to handle 3 apps. What if this was a project site template and your organisation has hundreds of projects? That means you could have thousands of app webs, all with potentially interesting data that is trapped in mini information silos.
  • SharePoint search cannot index the app webs
  • A critical but often overlooked one is that developers can’t update the library/list metadata schema in an app web (think columns, content types, libraries, etc) without updating and redeploying the app. As you will see in a future post, this gets real ugly real fast!
  • SharePoint App Webs are created with special templates which block SharePoint Designer (that’s probably a good thing given the purpose of an app web)

Conclusion

So if you are a non-developer reading this post, consider that none of the above options is, on its own, likely to give you a complete solution. For each option, there seem to be just as many advantages as flaws. The reality is that many apps will use at least two, or even all three options. Things like images, CSS and JavaScript might be loaded from the app web, some critical reusable SharePoint content from the host web, with the remote web handling any heavy-duty data manipulation.

If you think that through, that means as SharePoint administrators and governance teams, you will likely end up dealing with all of the cons of each of the options. Imagine asking a developer to conceptually draw an app that uses each of the options and consider how many “moving parts” there are to it all. Then when you add the fact that most organisations still have a legacy of full trust solutions to deal with, you can start to see how complex this will be to manage.

Now this really is just the tip of the iceberg. In the next post I am going to talk a little about how all this stuff is wired up from an authentication and security standpoint. I am also going to focus on the application lifecycle management implications of this model. If you think about the picture I have painted here, with all of the potential moving parts, how do you think an upgrade to an app would fare?

thanks for reading

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 2)

This entry is part 2 of 3 in the series Apps Model

Hiya

This is the second post in some articles aimed at demystifying the SharePoint “apps” model for the strategic or business focused user. In case you are not aware, Microsoft have gotten a serious case of “app fever” in recent times, introducing the terminology not only into SharePoint, but Office as well. While there are very good reasons for this happening, Microsoft used the “app” terminology in multiple ways, therefore making their message rather confusing. As a result, Microsoft have not communicated their intent particularly well and customers often fail to understand why they make the changes that they do.

Things are definitely getting better, but I nevertheless see a lot of confusion around the topic. So in part 1 of this series, I explained the reasons Microsoft have adopted the strategy that they have done. To recap, they are trying to respond to five major disruptive forces that challenge their market position:

  • Changing perceptions to cloud technologies and increased adoption; which enables…
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange in the form of Google Apps; as well as…
  • An increasing number of smaller cloud-based “point solution” players who chip away at SharePoint features with cheaper and easier to use offerings; while suffering from…
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace; while dealing with…
  • Customers unable to handle the ever increasing complexity of SharePoint, leading to delaying upgrades for years

Microsoft’s answer to this has been to go all-in with cloud, as this is the only way to beat the cloud providers at their own game, while reducing the complexity burden on their customers. This of course comes in the form of Office365, OneDrive and an ever increasing set of cloud-oriented tools like Delve and Project Online.

But in SharePoint land, this has turned traditional development upside down. More than a decade of customisation “best practices” are no longer best – in fact they are no longer usable in many circumstances. The main reason is that the most common method of customisation normally applied to SharePoint (full-trust server-side code) is not permitted in the cloud. Microsoft couldn’t risk untrusted, 3rd party custom code on their servers. What happens if one client’s dodgy code affects everybody else sharing the service? This would threaten performance, uptime and Microsoft’s ability to upgrade their service over time.

So things had to change. Microsoft’s small army of product architects commandeered a whiteboard and started architecting innovative solutions to deal with these challenges and the apps model is the result. So let’s examine some core bits to the apps model by channelling a much loved children’s TV show.

There is a bear in there…

Now at this point I have to warn the developers or tech people reading this post. I am going to give a simplified version of the apps model intended for a decision-making audience. I will omit many details I don’t deem necessary to make my key points. You have been warned…

image

Any parent of small children in most countries might be familiar with Playschool – a show for toddlers that has been around for eons. It is well known for its theme song starting with the line “There is a bear in there…and a chair as well”. When trying to come up with a suitable way to explain the SharePoint apps model, using Playschool as a metaphor turned out to work brilliantly. You see in each episode of Playschool, there was a segment where viewers were taken through the “magic window” to faraway lands. In the show, the presenter would pick one of the three windows and we would zoom into it, resulting in a transition to another segment. In our case, we have to pick the square window for two reasons. Firstly, a good many apps are in effect windows to somewhere else. Secondly, and much more importantly, it perfectly matches the new Microsoft corporate logo. Perfect metaphor or what eh? 🙂

Like the Playschool magic window, browsers have a similar capability to let you visit strange and magical lands… Not only is there a bear in there and a chair as well, but there are plenty of other things like YouTube videos and Yammer discussions. I have drawn this conceptually below. Note the black window in the SharePoint team site on the left, which can be filled with YouTube or Yammer.

image

You have no doubt visited web sites that have embedded content like YouTube videos or SlideShare slides in them (this blog site has lots of embedded ads that make me no money!). Essentially, it is possible for browsers to include content from different sites together into a single “page” experience. Users see it all as one page, even though the content can come from all over the place. This is really useful, because it means you can leverage the capability of other sites to enhance the functionality of your own sites.

This, my friends, is one of the core tenets of the current SharePoint 2013 apps model. Instead of running on the SharePoint server, many apps now run separately from SharePoint, embedded in SharePoint pages so that they look like they are part of SharePoint. In the example mock-up below, we have a SharePoint team site. In it, we have a remote web site that displays some pretty dashboard data. By loading that remote content into our magical square window, it now appears to be part of SharePoint.

image       image

Going back to Microsoft’s core pain points, this helps things a lot. For a start, it means no custom code has to be installed onto the SharePoint server. Instead, SharePoint simply embeds the external content on the page. In the leftmost image above, you can see the SharePoint server (labelled as “Your server”), rendering a page with a placeholder in it. It then retrieves content from a remote server (labelled “my server”) and displays it in the placeholder to render the complete page (the rightmost image above).

So what feat of Microsoft innovation and general awesomeness enabled this to happen?

Everybody meet “Mr IFrame”.

Inline Frames (IFrames) are windows cut into your webpage that allow your visitor to view content on another site without reloading the entire page. The concept was first implemented in Microsoft Internet Explorer way back in 1997. Yep – you heard right… 1997. So IFrames are not a new concept at all – in fact they are positively ancient when you count time in internet years. For this reason, when developers find this out, their reaction is usually something like this…

image

But there is more than meets the eye…

Now if the apps model was just IFrames alone, then you might wonder what the big deal is with apps. In fact, IFrames have been used this way in SharePoint for years via the Page Viewer Web Part. Companies with SharePoint deployments have long embedded stuff like Twitter, YouTube or Facebook widgets via IFrames.

So of course, there is more to it…

Let’s revisit the “your server” and “my server” diagram used above and consider a question: what if these remote applications displayed inside an IFrame could interact with SharePoint? In other words, what if the remote application running on my remote server were able to connect to your SharePoint server and read/write data? In the diagram below I have illustrated the idea. The top half of the diagram represents a SharePoint server that could be on-premises or an Office365 tenant. On the left is a Products list that is somewhere inside this SharePoint server. At the bottom is my application, running on my server, that creates a pretty dashboard. What if my remote application queried the SharePoint products list to create the dashboard? Now we have an application that, while not running inside SharePoint, can nevertheless utilise live data from SharePoint to create a seamless experience for users.

image

If we now add 3 iframes to a page, the implications should start to become more clear. We can build hybrid solutions leveraging the best of what SharePoint can do, whilst leveraging the best of what other platforms can do. To the user, these are still SharePoint sites, but the reality is that we are now viewing a page that has been delivered by various different platforms. Each can interact with SharePoint data in different ways to deliver a seamless experience. Because these remote apps are not SharePoint at all, developers can write any application they want to, using the platform and tools of their choice. But to the user it is still a SharePoint page… neat huh? I’m sure the Microsoft product team thought that this was a brilliant conceptual masterpiece when they dreamt it up.

image

A beautiful model…

I don’t know if you have ever watched developers come up with APIs, but it tends to involve a lot of excitement around a whiteboard as they revel in the glory of their elegant solution designs. So let’s quickly re-examine the benefits of this remotely hosted app approach from Microsoft’s perspective and see how we are going so far…

image

First and foremost, we now have a SharePoint customisation approach that can be fully supported in Office365. Microsoft don’t have to put 3rd party code on their online servers, yet can still support extensibility. Now they are much more evenly matched with Google, while at the same time reducing their tech support costs for SharePoint, because they have isolated 3rd party code out of SharePoint. If any problems are encountered with a remote app, SharePoint will keep humming along and Microsoft can now legitimately tell clients “no really, it is not SharePoint causing your issue – go see your friendly neighbourhood app developer”.

More importantly, apps can also be used in on-premises SharePoint deployments, meaning both Microsoft and their customers now have pristine SharePoint servers free of the muck and clutter of 3rd party code. Therefore service packs and cumulative updates should no longer strike fear into admins. Microsoft also now nails Google’s ass because Google has no real concept of on-premises at all in the way Microsoft does. Thus when hybrid scenarios come up in conversation, Microsoft has a much stronger story to tell.

But there is a more important implication than all of that. Microsoft can now do the app store thing. Vendors can maintain cloud based services that can be embedded and consumed by on-premises and online SharePoint installs. This means 3rd parties can tap into the customer ecosystem with a captive marketplace and customers can browse the store to examine what options are out there to extend SharePoint functionality. In theory, this should enable hundreds of vendors to do some slight modifications to their existing web based applications and incorporate them into the SharePoint ecosystem.

image

But reality is not what’s on the whiteboard…

At this point, I hope I have painted a pretty good picture of the advantages offered by this new paradigm, and you can probably appreciate the Microsoft nerds completely falling in love with this conceptual model of future SharePoint customisations. The Microsoft strategy dudes probably loved it too, because it elegantly dealt with all of the challenges they were seeing. Unfortunately though, as with most conceptual models, reality is a very different beast from the convenient fiction of models.

So in the next post, we are going to dig a little deeper. For example, how can a remote app even have permissions to talk to SharePoint in the first place? Do you really want code running in some untrusted 3rd party server to be fiddling with data in your SharePoint lists and libraries? How does that even work anyway in an on-premises scenario when a cloud hosted app has to access data behind your firewall?

Fear not though – the Microsoft guys thought of this (and more) when they were drawing their apps model concept on the big whiteboard. So in the next post, we are going to look at what it takes to bring this conceptual masterpiece into reality.

 

Thanks for reading

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 1)

This entry is part 1 of 3 in the series Apps Model

Hiya

I’ve been meaning to write about the topic of the apps model of SharePoint 2013 for a while now, because it is a topic I am both fascinated and slightly repulsed by. While lots of really excellent material is out there on the apps model (not to mention a few good rants by the usual suspects as well), it is understandably written by developers tasked with making sense of it all, so they can put the model into practice delivering solutions for their clients or organisations.

I spent considerable time reading and researching the various articles and videos on this topic produced by Microsoft and the broader SharePoint community and made a large IBIS map of it. As I slowly got my head around it all, subtle but significant implications began to emerge for me. The more I got to know this topic, the more I realised that the opportunities and risks of the apps model hold many important insights and lessons for how organisations should be strategically thinking about SharePoint. In other words, it is not so much about the apps model itself, but what the apps model represents for organisations who have invested in the SharePoint platform.

So these posts are squarely aimed at the business camp. Therefore I am going to skip all sorts of things that I don’t deem necessary to make my points. Developers in particular may find it frustrating that I gloss over certain things, give not quite technically correct explanations and focus on seemingly trivial matters. But like I said, you are not my audience anyway.

So let’s see if we can work out what motivated Microsoft to head in this direction and make such a significant change. As always, context is everything…

As it once was…

I want you to picture Microsoft in 2011. SharePoint 2010 has come out to positive reviews and well and truly cemented itself in the market. It adorns the right place in multiple Gartner magic quadrants, demand for talent is outstripping supply and many organisations are busy embarking on costly projects to migrate from their legacy SharePoint 2007 and 2003 deployments, on the basis that this version has fixed all the problems and that they will definitely get it right this time around. As a result, SharePoint is selling like hotcakes and is about to crack the 2 billion dollar revenue barrier for Microsoft. Consultants are also doing well at this time, since someone has to help organisations get all of that upgrade work done. Life is good… allegedly.

But even before the release of SharePoint 2010, winds of change were starting to blow. In fact, way back in 2008, at my first ever talk at a SharePoint conference, I showed the Microsoft pie chart of buzzwords and asked the crowd what other buzzwords were missing. The answer that I anticipated and received was of course “cloud”, which was good because I had created a new version of the pie and invited Microsoft to license it from me. Unfortunately no-one called.

image

Winds of change…

While my cloud picture was aimed at a few cheap laughs at the time, it holds an important lesson. Early in the release cycle of SharePoint 2007, cloud was already beginning to influence people’s thinking. Quickly, services traditionally hosted within organisations began to appear online, requiring a swipe of the credit card each month from the opex budget, which made CFOs happy. A good example is Dropbox, which was founded in 2008 and by 2010 had won over the hearts and minds of many people who were still using FTP. Point solutions such as Salesforce appeared, which further demonstrated how the competitive landscape was starting to heat up. These smaller, more nimble organisations were competing successfully on the basis of simplicity and a focus on doing one thing well, while taking implementation complexity away.

Now while these developments were on Microsoft’s radar, there was really only one company that seriously scared them. That was Google via their Google Docs product. Here was a company just as big and powerful as Microsoft, playing in Microsoft’s patch using Microsoft’s own approach of chasing the enterprise by bundling products and services together. This emerged as a credible alternative to not only SharePoint, but to Office and Exchange as well.

Some of you might be thinking that Apple was just as big a threat to Microsoft as Google. But Microsoft viewed Apple through the eyes of envy, as opposed to a straight-out threat. Apple created new markets that Microsoft wanted a piece of, but Apple’s threat to their core enterprise offerings remained limited. Nevertheless, Microsoft’s strong case of crimson green Apple envy did have a strategic element. Firstly, someone at Microsoft decided to read the disruptive innovation book and decided that disruptive was obviously cool. Secondly, they saw the power of the app store and how quickly it enabled a developer ecosystem and community to emerge, which created barriers for competitors wanting to enter the market later.

Meanwhile, deeper in the bowels of Microsoft, two parallel problems were emerging. Firstly, it was taking an eternity to work through an increasingly large backlog of tech support calls for SharePoint. Clients would call, complaining of slow performance, broken deployments after updates, unhandled exceptions and so on. More often than not though, these issues were not caused by the base SharePoint platform, but by a combination of SharePoint and custom code that leaked memory, chewed CPU or just plain broke. Troubleshooting and isolating the root cause was very difficult, which led to the second problem: some of Microsoft’s biggest enterprise customers were postponing or not bothering with upgrades to SharePoint 2010. They deemed it too complex, costly and not worth the trouble. Others were simply too scared to mess with what they had.

A perfect storm of threats

image  image

So to sum up the situation, Microsoft were (and still are) dealing with five major forces:

  • Changing perceptions to cloud technologies (and the opex pricing that comes with it)
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange
  • An increasing number of smaller point solution players who chip away at SharePoint features with cheaper and easier to use offerings
  • A serious case of Apple envy and in particularly the rise of the app and the app marketplace
  • Customers unable to contend with the ever increasing complexity of SharePoint and putting off upgrades

So what would you do if you were Microsoft? What would your strategy be to thrive in this paradigm?

Now Microsoft is a big organisation, which affords it the luxury of engaging expensive management consultants, change managers and corporate coaches. It doesn’t take an MBA to realise that just a couple of these factors alone combine as a threat to the future of SharePoint, but lots of strategic workshops were no doubt had, with associated whiteboard diagrams, post-it notes, dodgy team building games and more than one SWOT analysis to confirm that the strategic threats they faced were a clear and present danger. The conclusion drawn? Microsoft had to put cloud at the centrepiece of their strategy. After all, how else can you bring the fight to the annoying cloud upstarts and stave off the serious Google threat, all the while reducing the complexity burden on their customers?

A new weapon and new challenges…

In 2011, Microsoft debuted Office365 as the first salvo in their quest to mitigate these threats and take on their competitors at their own game. It combined Exchange, Lync and SharePoint 2010, packaging them up in a per-user, per-month approach. The value proposition for some of Microsoft’s customers was pretty compelling: up-front capital costs reduced significantly, they could now get the benefits of better scalability and bigger limits on things like mailboxes, and procurement and deployment was pretty easy compared to doing it in-house. Given the heritage of SharePoint, Exchange and Lync, Microsoft was suddenly competitive enough to put Google firmly on the back foot. My own business dumped Gmail and took up Office365 at this time, and has used it since.

Of course, there were many problems that had to be solved. Microsoft was now switching from a software provider to a service provider, which necessitated new thinking, processes, skills and infrastructure internally. But outside of Microsoft there were bigger implications. The change from software provider to service provider did not go down well with many Microsoft partners who had performed that role previously. It also freaked out a lot of sysadmins who suddenly realised their job of maintaining servers was changing. (Many are still in denial to this day.) But more importantly, there was a big implication for development and customisation of SharePoint. This all happened mid-way through the life-cycle of SharePoint 2010, and that version was not fully architected for cloud. First and foremost, Microsoft were not about to transfer the problem of dodgy 3rd party customisations onto their servers. Recall that they were getting heaps of support calls that were not core SharePoint issues, but were caused by custom code written by 3rd parties. Imagine that in a cloud scenario where multiple clients share the same servers. One client’s dodgy code could have a detrimental effect on everybody else, affecting SLAs, while not solving the core problem Microsoft were having of wearing the cost and blame via tech support calls for problems not of their doing.

So with Office365, Microsoft had little choice but to disallow the dominant approach to SharePoint customisation that had been used for a long time. Developers could no longer use the methods they had come to know and love if a client wanted to use Office 365. This meant that the consultancies who employed them would have to change their approach too, and customers would have to adjust their expectations as well. Office365 was now a much more restricted environment than the freedom of on-premises.

Is it any wonder, then, that one of Microsoft’s big focus areas for SharePoint 2013 was to come up with a way to redress the balance? They needed a customisation model that allowed a consistent approach, whether you chose to keep SharePoint on-premises or move off to the cloudy world of Office 365. As you will see in the next post, that is not a simple challenge. The magnitude of change required was going to be significant and some pain was going to have to happen all around.

Coming next…

So with that background set, I will spend time in the next post explaining the apps model in non-technical terms and how it helps to address all of the above issues. This is actually quite a challenge, but with the right dodgy metaphors, it’s definitely possible. 🙂 Then we will take a more critical viewpoint of the apps model, and finally, see what this whole saga tells us about how we should be thinking about SharePoint in the future…

thanks for reading

Paul Culmsee


The cloud isn’t the problem–Part 6: The pros and cons of patriotism

This entry is part 6 of 6 in the series Cloud

Hi all and welcome to my 6th post on the weird and wonderful world of cloud computing. The recurring theme in this series has been to point out that the technological aspects of cloud computing have never really been the key issue. Instead, I feel it is everything else around the technology, ranging from immature processes, through to the effects of the industry shakeout and consolidation, through to the adaptive change required for certain IT roles. To that end, in the last post, we had fun at the expense of server huggers and the typical defence mechanisms they use to scare the rest of the organisation into fitting into their happy-place world of in-house managed infrastructure. In that post I noted that you can usually tell an IT FUD defence because risk-averse IT will almost always try to use their killer argument up-front to bury the discussion. For many server huggers or risk-averse IT, the killer defence is the US Patriot Act issue.

Now just in case you have never been hit with the “…ah but what about the Patriot Act?” line and have no idea what the Patriot Act is all about, let me give you a nice metaphor. It is basically a legislative version of the “Men in Black” movies. Why Men in Black? Because in those movies, Will Smith and Tommy Lee Jones had the ability to erase the memories of anyone who witnessed any extra-terrestrial activity with that silvery little pen-like device. With the Patriot Act, US law enforcement now has a similar instrument. Best of all, theirs doesn’t need batteries – it is all done on paper.

image

In short, the Patriot Act provides a means for U.S. law enforcement agencies to seek a court order allowing access to the personal records of anyone without their knowledge, provided that it is in relation to an anti-terrorism investigation. This act applies to pretty much any organisation that has any kind of presence in the USA, and the rationale behind introducing it was to make it much easier for agencies to conduct terrorism investigations and better co-ordinate their efforts. After all, in the reflection and lessons learnt from the 9/11 tragedy, the need for better inter-agency co-ordination was a recurring theme.

The implication of this act for cloud computing should be fairly clear. Imagine our friendly MIBs Will Smith (Agent J) and Tommy Lee Jones (Agent K) bursting into Google’s headquarters, all guns blazing, forcing them to hand over their customers’ data. Then, when Google staff start asking too many questions, they zap them with the memory eraser gizmo. (Cue Tommy Lee Jones stating “You never saw us and you never handed over any data to us.”)

Scary huh? It’s the sort of scenario that warms the heart of the most paranoid server hugger, because surely no-one in their right mind could mount a credible counter-argument to that sort of risk to the confidentiality and integrity of an organisation’s sensitive data.

But at the end of the day, cloud computing is here to stay and will no doubt grow. Therefore we need to unpack this issue and see what lies behind the rhetoric on both sides of the debate. Thus, I decided to look into the Patriot Act a bit further to understand it better. Of course, it should be clear that I am not a lawyer, and these are just my own opinions from my research and synthesis of various articles, discussion papers and interviews. My personal conclusion is that all the hoo-hah about the Patriot Act is overblown. Yet in stating this, I also have to state that we are more or less screwed anyway (and always were). As you will see later in this post, there are great counter-arguments that pretty much dismantle any anti-cloud arguments that are FUD based, but be warned – in using these arguments, you will demonstrate just how much bigger this thing is than cloud computing and get a sense of the broader scale of the risk.

So what is the weapon?

The first thing we have to do is understand some specifics about the Patriot Act’s memory-erasing device. Within the vast scope of the act, the two areas of greatest concern in relation to data are the National Security Letter and the Section 215 Order. Both provide authorities access to certain types of data and I need to briefly explain them:

A National Security Letter (NSL) is a type of subpoena that permits certain law enforcement agencies to compel organisations or individuals to provide certain types of information like financial and credit records, and telephone and ISP records (Internet searches, activity logs, etc). Now NSLs existed prior to the Patriot Act, but the act loosened some of the controls that previously existed. Prior to the act, the information being sought had to be directly related to a foreign power or the agent of a foreign power – thereby protecting US citizens. Now, all agencies have to do is assert that the data being sought is relevant in some way to any international terrorism or foreign espionage investigation.

Want to see what an NSL looks like? Check out this redacted one from Wikipedia.

A Section 215 Order is similar to an NSL in that it is an instrument that law enforcement agencies can use to obtain data. It is also similar to NSLs in that it existed prior to the Patriot Act – except back then it was called a FISA Order, named after the Foreign Intelligence Surveillance Act that enacted it. The type of data available under a Section 215 Order is more expansive than what you can eke out of an NSL, but a Section 215 Order does require a judge to let you get hold of it (i.e. there is some judicial oversight). In this case, the FBI obtains a 215 order from the Foreign Intelligence Surveillance Court, which reviews the application. What the Patriot Act did differently to the FISA Order was to broaden the definition of what information could be sought. Under the Patriot Act, a Section 215 Order can relate to “any tangible things (including books, records, papers, documents, and other items).” If these are believed to be relevant to an authorised investigation they are fair game. The act also eased the requirements for obtaining such an order. Previously, the FBI had to present “specific articulable facts” that provided evidence that the subject of an investigation was a “foreign power or the agent of a foreign power.” From my reading, there is now no requirement for evidence and the reviewing judge therefore has little discretion. If the application meets the requirements of Section 215, they will likely issue the order.

So now that we understand the two weapons that are being wielded, let’s walk through the key concerns being raised.

Concern 1: Impacted cloud providers can’t guarantee that sensitive client data won’t be turned over to the US government

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

This concern stems from the “loosening” of previous controls on both NSLs and Section 215 Orders. NSLs, for example, require no probable cause or judicial oversight at all, meaning that the FBI can issue these of its own volition. Now it is important to note that it could do this before the Patriot Act came into being too, but back then the parameters for usage were much stricter. Section 215 Orders, on the other hand, do have judicial oversight, but that oversight has also been watered down. Additionally, the breadth of information that can be collected is now greater. Add to that the fact that both NSLs and Section 215 Orders almost always include a compulsory non-disclosure or “gag” order, preventing notification to the data owner that this has even happened.

This concern is not only valid, but it has happened and continues to happen. Microsoft has already stated that it cannot guarantee customers would be informed of Patriot Act requests and, furthermore, it has also disclosed that it has complied with Patriot Act requests. Amazon and Google are in the same boat. Google has also disclosed that it has handed data stored in European datacenters back to U.S. law enforcement.

Now some of you – particularly if you live or work in Europe – might be wondering how this could happen, given the European Union’s strict privacy laws. Why is it that these companies have complied with the US authorities regardless of those laws?

That’s where the gag orders come in – which brings us onto the second concern.

Concern 2: The reach of the act goes beyond US borders and bypasses foreign legislation on data protection for affected providers

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

The example of Google – a US company – handing over data in its EU datacentres to US authorities highlights that the Patriot Act is more pervasive than one might think. In terms of who the act applies to, a terrific article by Alex C. Lakatos puts it really well:

Furthermore, an entity that is subject to US jurisdiction and is served with a valid subpoena must produce any documents within its “possession, custody, or control.” That means that an entity that is subject to US jurisdiction must produce not only materials located within the United States, but any data or materials it maintains in its branches or offices anywhere in the world. The entity even may be required to produce data stored at a non-US subsidiary.

Think about that last point – “non-US subsidiary”. This gives you a hint as to how pervasive this is. So in terms of jurisdiction and whether an organisation can be compelled to hand over data and be subject to a gag order, the list is expansive. Consider these three categories:

  • – US-based company? Absolutely: that alone takes out Apple, Amazon, Dell, EMC (and RSA), Facebook, Google, HP, IBM, Symantec, LinkedIn, Salesforce.com, McAfee, Adobe, Dropbox and Rackspace
  • – Subsidiary of a US company (incorporated anywhere else in the world)? It seems so.
  • – Non-US company that has any form of US presence? It also seems so. Now we are talking about Samsung, Sony, Nokia, RIM and countless others.

The crux of the bypassing argument is the gag order provisions. If the US company, subsidiary or regional office of a non-US company receives the order, they may be forbidden from disclosing anything about it to the rest of the organisation.

Concern 3: Potential for abuse of Patriot Act powers by authorities

CleverWorkArounds short answer:

Yes this is true and it has happened already.

CleverWorkArounds long answer:

Since the Patriot Act came into force, there has been a marked increase in the FBI’s use of National Security Letters. According to this New York Times article, there were 143,000 requests between 2003 and 2005. Furthermore, according to a report from the Justice Department’s Inspector General in March 2007, as reported by CNN, the FBI was guilty of “serious misuse” of the power to secretly obtain private information under the Patriot Act. I quote:

The audit found the letters were issued without proper authority, cited incorrect statutes or obtained information they weren’t supposed to. As many as 22% of national security letters were not recorded, the audit said. “We concluded that many of the problems we identified constituted serious misuse of the FBI’s national security letter authorities,” Inspector General Glenn A. Fine said in the report.

The Liberty and Security Coalition went into further detail on this. In a 2009 article, they list some of the specific examples of FBI abuses:

  • – FBI issued NSLs when it had not opened the investigation that is a predicate for issuing an NSL;
  • – FBI used “exigent letters” not authorized by law to quickly obtain information without ever issuing the NSL that it promised to issue to cover the request;
  • – FBI used NSLs to obtain personal information about people two or three steps removed from the subject of the investigation;
  • – FBI has used a single NSL to obtain records about thousands of individuals; and
  • – FBI retains almost indefinitely the information it obtains with an NSL, even after it determines that the subject of the NSL is not suspected of any crime and is not of any continuing intelligence interest, and it makes the information widely available to thousands of people in law enforcement and intelligence agencies.

Concern 4: Impacted cloud providers cannot guarantee continuity of service during investigations

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

An oft-overlooked side effect of all of this is that other organisations can be adversely affected. One aspect of cloud computing scalability that we talked about in part 1 is that of multitenancy. Now consider a raid on a datacenter. If cloud services are shared between many tenants, innocent tenants who had nothing whatsoever to do with the investigation can potentially be taken offline. Furthermore, the hosting provider may be gagged from explaining to these affected parties what is going on. Ouch!

An example of this happening was reported in the New York Times in mid 2011 and concerned Curbed Network, a New York blog publisher. Curbed, along with some other companies, had their service disrupted after an F.B.I. raid on their cloud provider’s datacenter. They were taken down for 24 hours because the F.B.I.’s raid on the hosting provider seized three enclosures which, unfortunately enough, included the gear they ran on.

Ouch! Is there any coming back?

As I write this post, I wonder how many readers are surprised and dismayed by my four risk areas. The little security guy in me says that if you are, then that’s good – it means I have made you more aware than you were previously. I also wonder whether some readers by now are thinking to themselves that their paranoid server huggers are right.

To decide this, let’s now examine some of the counter-arguments to the Patriot Act issue.

Rebuttal 1: This is nothing new – Patriot Act is just amendments to pre-existing laws

One common rebuttal is that the Patriot Act legislation did not fundamentally alter the right of the government to access data. This line of argument was presented in August 2011 by Microsoft legal counsel Jeff Bullwinkel on Microsoft Australia’s GovTech blog. After all, it was reasoned, the areas frequently cited for concern (NSLs and Section 215/FISA orders) were already there to begin with. Quoting from the article:

In fact, U.S. courts have long held that a company with a presence in the United States is obligated to respond to a valid demand by the U.S. government for information – regardless of the physical location of the information – so long as the company retains custody or control over the data. The seminal court decision in this area is United States v. Bank of Nova Scotia, 740 F.2d 817 (11th Cir. 1984) (requiring a U.S. branch of a Canadian bank to produce documents held in the Cayman Islands for use in U.S. criminal proceedings)

So while the Patriot Act might have made it easier in some cases for the U.S. government to gain access to certain end-user data, the right was always there. Again quoting from Bullwinkel:

The Patriot Act, for example, enabled the U.S. government to use a single search warrant obtained from a federal judge to order disclosure of data held by communications providers in multiple states within the U.S., instead of having to seek separate search warrants (from separate judges) for providers that are located in different states. This streamlined the process for U.S. government searches in certain cases, but it did not change the underlying right of the government to access the data under applicable laws and prior court decisions.

Rebuttal 2: Section 215 orders are not often used and there are significant limitations on the data you can get using an NSL

Interestingly, it appears that the more powerful Section 215 orders have not been used that often in practice. The best article to read to understand the detail is one by Alex Lakatos. According to him, fewer than 100 applications for Section 215 orders were made in 2010. He says:

In 2010, the US government made only 96 applications to the Foreign Intelligence Surveillance Courts for FISA Orders granting access to business records. There are several reasons why the FBI may be reluctant to use FISA Orders: public outcry; internal FBI politics necessary to obtain approval to seek FISA Orders; and, the availability of other, less controversial mechanisms, with greater due process protections, to seek data that the FBI wants to access. As a result, this Patriot Act tool poses little risk for cloud users.

So while Section 215 orders seem little used, NSLs seem to be a dime a dozen – which I suppose is understandable, since you don’t have to deal with a pesky judge and all that annoying due process. But the downside of NSLs from a law enforcement point of view is that the sort of data accessible via an NSL is somewhat limited. Again quoting from Lakatos (with emphasis mine):

While the use of NSLs is not uncommon, the types of data that US authorities can gather from cloud service providers via an NSL is limited. In particular, the FBI cannot properly insist via a NSL that Internet service providers share the content of communications or other underlying data. Rather […] the statutory provisions authorizing NSLs allow the FBI to obtain “envelope” information from Internet service providers. Indeed, the information that is specifically listed in the relevant statute is limited to a customer’s name, address, and length of service.

The key point is that the FBI has no right to content via an NSL. This fact may not stop the FBI from having a try at getting that data anyway, but it seems that savvy service providers are starting to wise up to exactly what information an NSL applies to. This final quote from the Lakatos article summarises the point nicely and, at the same time, offers cloud providers a strategy to mitigate the risk to their customers.

The FBI often seeks more, such as who sent and received emails and what websites customers visited. But, more recently, many service providers receiving NSLs have limited the information they give to customers’ names, addresses, length of service and phone billing records. “Beginning in late 2009, certain electronic communications service providers no longer honored” more expansive requests, FBI officials wrote in August 2011, in response to questions from the Senate Judiciary Committee. Although cloud users should expect their service providers that have a US presence to comply with US law, users also can reasonably ask that their cloud service providers limit what they share in response to an NSL to the minimum required by law. If cloud service providers do so, then their customers’ data should typically face only minimal exposure due to NSLs.

Rebuttal 3: Too much focus on cloud data – there are other significant areas of concern

This one, for me, is a perverse slam-dunk counter-argument that puts the FUD defence of a server hugger back in its box. The reason it is perverse is that it opens up a debate which, for some server huggers, may mean they are already exposed to the very risks they are raising. You see, the thing to always bear in mind is that the Patriot Act applies to data, not just the cloud. This means that data, in any shape or form, is susceptible in some circumstances if a service provider exercises some degree of control over it. When you consider all the applicable companies that I listed earlier in the discussion, like IBM, Accenture, McAfee, EMC, RIM and Apple, you then start to think about the other services where this notion of “control” might come into play.

What about if you have outsourced your IT services and management to IBM, HP or Accenture? Are they running your datacentres? Are your executives using Blackberry services? Are you using an outsourced email spam and virus scanning filter supplied by a security firm like McAfee? Using federated instant messaging? Performing B2B transactions with a US based company?

When you start to think about all of the other potential touch-points where control over data is exercised by a service provider, things start to look quite disturbing. We previously established that pretty much any organisation with a US interest (whether US owned or not) falls under Patriot Act jurisdiction and may be gagged from disclosing anything. So sure… cloud applications are a potential risk, but it may well be that any one of these companies providing services regarded as “non-cloud” might receive an NSL or Section 215 order with a gag provision, ordering them to hand over some data in their control. In the case of an outsourced IT provider, how can you be sure that the data is not straight out of your very own datacenter?

Rebuttal 4: Most other countries have similar laws

It also turns out that many other jurisdictions have similar types of laws. Canada, the UK, most countries in the EU, Japan and Australia are some good examples. If you want to dig into this, examine Clive Gringa’s article on the UK’s Regulation of Investigatory Powers Act 2000 (RIPA) and an article published by the global law firm Linklaters (a SharePoint site incidentally), on the legislation of several EU countries.

In the UK, RIPA governs the prevention and detection of acts of terrorism, serious crime and “other national security interests”. It is available to security services, police forces and authorities who investigate and detect these offences. The act regulates interception of the content of communications as well as envelope information (who, where and when). France has a bunch of acts which I won’t bore you too much with, but after 9/11 it instituted Act 2001-1062 of 15 November 2001, which strengthens the powers of French law enforcement agencies. Now agencies can order anyone to provide them with data relevant to an inquiry and, furthermore, the data may relate to a person other than the one subject to the disclosure order.

The Linklaters article covers Spain and Belgium too and the laws are similar in intent and power. They specifically cite a case study in Belgium where the shoe was very much on the other foot. US company Yahoo was fined for not co-operating with Belgian authorities.

The court considered that Yahoo! was an electronic communication services provider (ESP) within the meaning of the Belgian Code of Criminal Procedure and that the obligation to cooperate with the public prosecutor applied to all ESPs which operate or are found to operate on Belgian territory, regardless of whether or not they are actually established in Belgium

I could go on citing countries and legal cases but I think the point is clear enough. Smile

Rebuttal 5: Many countries co-operate with US law enforcement under treaties

So if the previous rebuttal – that other countries have similar regimes in place – is not convincing enough, consider this one. Let’s assume that data is hosted by a major cloud services provider with absolutely zero presence in, or contacts with, the United States. There is still a possibility that this information may be accessible to the U.S. government if needed in connection with a criminal case. The means by which this can happen is via international treaties relating to legal assistance. These are called Mutual Legal Assistance Treaties (MLATs).

As an example, the US and Australia have had a longstanding bilateral arrangement. This provides for law enforcement cooperation between the two countries and, under this arrangement, either government can potentially gain access to data located within the territory of the other. To give you an idea of what such a treaty might look like, consider the scope of the Australia–US one. The scope of assistance is wide – note in particular the items about providing documents and records and executing searches and seizures:

  • (a) taking the testimony or statements of persons;
  • (b) providing documents, records, and other articles of evidence;
  • (c) serving documents;
  • (d) locating or identifying persons;
  • (e) transferring persons in custody for testimony or other purposes;
  • (f) executing requests for searches and seizures and for restitution;
  • (g) immobilizing instrumentalities and proceeds of crime;
  • (h) assisting in proceedings related to forfeiture or confiscation; and
  • (i) any other form of assistance not prohibited by the laws of the Requested State.

For what it’s worth, if you are interested in the boundaries and limitations of the treaty, it states that the “Central Authority of the Requested State may deny assistance if”:

  • (a) the request relates to a political offense;
  • (b) the request relates to an offense under military law which would not be an offense under ordinary criminal law; or
  • (c) the execution of the request would prejudice the security or essential interests of the Requested State.

Interesting huh? Even if you host in a completely independent country, better check the treaties they have in place with other countries.

Rebuttal 6: Other countries are adjusting their laws to reduce the impact

The final rebuttal to the whole Patriot Act argument that I will cover is that things are moving fast and countries are moving to mitigate the issue regardless of the points and counterpoints that I have presented here. Once again I will refer to an article from Alex Lakatos, who provides a good example. Lakatos writes that the EU may re-write their laws to ensure that it would be illegal for the US to invoke the Patriot Act in certain circumstances.

It is anticipated, however, that at the World Economic Forum in January 2012, the European Commission will announce legislation to repeal the existing EU data protection directive and replace it with a more robust framework. The new legislation might, among other things, replace EU/US Safe Harbor regulations with a new approach that would make it illegal for the US government to invoke the Patriot Act on a cloud-based or data processing company in efforts to acquire data held in the European Union. The Member States’ data protection agency with authority over the company’s European headquarters would have to agree to the data transfer.

Now Lakatos cautions that this change may take a while before it actually turns into law, but it is nevertheless something that should be monitored by cloud providers and cloud consumers alike.

Conclusion

So what do you think? Are you enlightened and empowered or confused and jaded? Smile

I think the Patriot Act issue is obviously a complex one that is not well served by arguments based on fear, uncertainty and doubt. The risks are real and there are precedents that demonstrate those risks. Scarily, it doesn’t take much digging to realise that those risks are more widespread than one might initially consider. Thus, if you are going to play the Patriot Act card for FUD reasons, or if you are making a genuine effort to mitigate the risks, you need to look at all of the touch-points where a service provider might exercise a degree of control. They may not be where you think they are.

In saying all of this, I think this examination highlights some strategies that can be employed by cloud providers and cloud consumers alike. Firstly, if I were a cloud provider, I would state my policy about how much data will be handed over when confronted by an NSL (since that instrument has clear limitations). Many providers may already do this, so to turn it around to the customer, it is incumbent on cloud consumers to confirm with their providers where they stand. I don’t know that there is much value in asking a cloud provider whether they are exempt from the reach of the Patriot Act. Maybe it’s better to assume they are affected and instead ask them how they intend to mitigate their customers’ downstream risks.

Another obvious strategy for organisations is to encrypt data before it is stored on cloud infrastructure. While that is likely not going to be an option in a software as a service model like Office 365, it is certainly an option in the infrastructure and platform as a service models like Amazon and Azure. That would reduce the impact of a Section 215 Order being executed, as the cloud provider is unlikely to have the ability to decrypt the data.
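To make that idea a little more concrete, here is a minimal PowerShell sketch of client-side encryption using the .NET AES classes. The file paths are illustrative only, and the hard part – generating, storing and rotating the key somewhere the provider cannot reach – is deliberately out of scope here:

# Encrypt a file locally before it is uploaded to IaaS storage.
# Paths are illustrative; the key must be generated and kept on-premises.
$key = New-Object byte[] 32                                       # 256-bit AES key
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($key)

$aes = [System.Security.Cryptography.Aes]::Create()
$aes.Key = $key
$aes.GenerateIV()

$plain  = [System.IO.File]::ReadAllBytes("C:\Data\sensitive.docx")
$cipher = $aes.CreateEncryptor().TransformFinalBlock($plain, 0, $plain.Length)

# The IV is not secret and is stored alongside the ciphertext; only the encrypted file leaves the building.
[System.IO.File]::WriteAllBytes("C:\Data\sensitive.docx.enc", [byte[]]($aes.IV + $cipher))

Because decryption happens back on-premises after download, an order served on the provider should only ever yield ciphertext.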

Finally (and to sound like a broken record), a little information management governance would not go astray here. Organisations need to understand what data is appropriate for what range of cloud services. This is security 101 folks and if you are prudent in this area, cloud shouldn’t necessarily be big and scary.

Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com

www.sevensigma.com.au

P.S. Do not for a second think this article is exhaustive, as this stuff moves fast. So always do your research and do not rely on an article on some guy’s blog that may be out of date before you know it.


The cloud is not the problem-Part 4: Industry shakeout and playing with the big kids…

This entry is part 4 of 6 in the series Cloud

Hi all

Welcome to the fourth post about the adaptive change that cloud computing is going to have on practitioners, paradigms and organisations. The previous two posts took a look at the dodgier side of two of the industry’s biggest players, Microsoft and Amazon. While I have highlighted some dumb issues with both, I nevertheless have to acknowledge their resourcing, scalability and ability to execute. On that point of ability to execute, in this post we are going to broaden our view to the cloud industry more generally and the inevitable consolidation that is taking place, and will continue to take place.

Now to set the scene, a lot of people know that in the early twentieth century, there were a lot of US car manufacturers. I wonder if you can take a guess at the number of defunct car manufacturers there have been before and after that time.

…Fifty?

…One Hundred?

Not even close…

What if I told you that there were over 1700!

Here is another interesting stat. The table below shows, decade by decade, how many manufacturers went bankrupt or ceased operations, together with the average number of years those companies had been in operation.

Decade   Defunct manufacturers   Avg years in operation
1870s    4                       5
1880s    2                       1
1890s    5                       1
1900s    88                      3
1910s    660                     3
1920s    610                     4
1930s    276                     5
1940s    42                      7
1950s    13                      14
1960s    33                      10
1970s    11                      19
1980s    5                       37
1990s    5                       16
2000s    3                       49
2010s    5                       42

Now, you would expect that the bulk of closures would be Depression-era, but note that the Depression did not start until the late 1920s, and during the boom times that preceded it, 660 manufacturers went to the wall – a worse result!

The pattern of consolidation

image

What I think the above table shows is the classic pattern of industry consolidation after an initial phase of innovation and expansion, where over time, the many are gobbled by the few. As the number of players consolidate, those who remain grow bigger, with more resources and economies of scale. This in turn creates barriers to entry for new participants. Accordingly, the rate of attrition slows down, but that is more due to the fact that there are fewer players in the industry. Those that are left continue to fight their battles, but now those battles take longer. Nevertheless, as time goes on, the number of players consolidate further.

If we applied a cloud/web hosting paradigm to the above table, I would equate the dotcom bust of 2000 with the depression era of the 1920s and 1930s. I actually think that with cloud computing, we are in the 1960s and onwards right now. The largest of the large players have now made big bets on the cloud and have entered the market in a big, big way. For more than a decade, other companies hosted Microsoft technology, with Microsoft showing little interest beyond selling licenses via them. Now Microsoft themselves are also the hosting provider. Does that mean most of the hosting providers will suffer the fate of Netscape? Or will they manage to survive the dance with Goliath like Citrix or VMWare have?

For those who are not Microsoft or Amazon…

image

Imagine you have been hosting SharePoint solutions for a number of years. Depending on your size, you probably own racks or a cage in someone else’s data centre, or you own a small data centre yourself. You have some high-end VMWare gear to underpin your hosting offerings and you do both managed SharePoint (i.e. offer a basic site collection subscription with no custom stuff – ala Office 365) and dedicated virtual machines for those who want more control (ala Amazon). You have dutifully paid your service provider licensing to Microsoft, and have IT engineers on staff, some SharePoint specialists, a helpdesk and some dodgy sales guys – all standard stuff, and life is good. You had a crack at implementing SharePoint multi-tenancy, but found it all a bit too fiddly and complex.

Then Amazon comes along and shakes things up with their IaaS offerings. They are cost competitive, have more data centres in more regions, higher capacity, more fault tolerance, a wider variety of services and can scale more than you can. Their ability to execute in terms of offering new services is impossible to keep up with. In short, they slowly but relentlessly take a chunk of the market and continue to grow. So you naturally counter by pushing the legitimate line that you specialise in SharePoint, and that as a result customers are in much safer hands with you than with Amazon when investing in such a complex tool as SharePoint.

But suddenly the game changes again. The very vendor whose product you have been hosting now bundles SharePoint with Exchange and Lync and offers Active Directory integration (yeah, yeah, I know there was BPOS, but no-one actually heard of that). Suddenly the argument that you are a safer option than Amazon is shot down by the fact that Microsoft themselves now offer what you do. So whose hands are safer? The small hosting provider with limited resources, or the multinational with billions of dollars in the bank who develops the product? Furthermore, given Microsoft’s advantage in being able to mobilise knowledge resources with deep product knowledge, they have a richer managed service offering than you can provide (i.e. they offer multi-tenancy :).

This puts you in a bit of a bind, as you are getting assailed at both ends. Amazon trumps your capabilities at the IaaS end and is encroaching on your space, and Microsoft is assailing the SaaS end. How does a small fish survive in a pond with the big ones? In my opinion, the mid-tier SharePoint cloud providers will have to reinvent themselves.

The adaptive change…

So for the mid-tier SharePoint cloud provider grappling with the fact that their play area is reduced because of the big kids encroaching, there is only one option. They have to be really, really good in areas the big kids are not good at. In SharePoint terms, this means they have to go to places many don’t really want to go: they need to bolster their support offerings and move up the SharePoint stack.

You see, traditionally a SharePoint hosting provider tends to take one of two approaches. They provide a managed service where the customer cannot mess with it too much (i.e. site collection admin access only). For those who need more than that, they will offer a virtual machine and wipe their hands of any maintenance or governance, beyond ensuring that the infrastructure is fast and backed up. Until now, cloud providers could get away with this, and the reason they take this approach should be obvious to anyone who has implemented SharePoint. If you don’t maintain operational governance controls, things can rapidly get out of hand. Who wants to deal with all that “people crap”? Besides, that’s a different skill set from the typical skills required to run and maintain cloud services at the infrastructure layer.

So some cloud providers will kick and scream about this, and delude themselves into thinking that hosting and cloud services are their core business. For those who think this, I have news for you. The big boys think these are their core business too, and they are going to do it better than you. This is now commodity stuff, and a by-product of commoditisation is that many SharePoint consultancies are now cloud providers anyway! They sign up to Microsoft or Amazon and are able to provide a highly scalable SharePoint cloud service with all the value-added services further up the SharePoint stack. In short, they combine their SharePoint expertise with Microsoft/Amazon’s scale.

Now on the issue of support, Amazon has no specific SharePoint skills and never will. They are first and foremost a compelling IaaS offering. Microsoft’s support? … go and re-read part 2 if you want to see that. It seems that no matter how big the multinational, level 1 tech support is always level 1 tech support.

So what strategies can a mid-tier provider take to stay competitive in this rapidly commoditising space? I think one is to go premium and go niche.

  • Provide brilliant support. If I call you, day or night, I expect to speak to a SharePoint person straight away. I want to get to know them on a first name basis and I do not want to fight the defence mechanism of the support hierarchy.
  • Partner with SharePoint consultancies or acquire consulting resources. The latter allows you to do some vertical integration yourself and broaden your market and offerings. A potential KPI for any SharePoint cloud provider should be that no support person ever says “sorry that’s outside the scope of what we offer.”
  • Develop skills in the tools and systems that surround SharePoint or invest in SharePoint areas where skills are lacking. Examples include Project Server, PerformancePoint, integration with GIS, Records management and ERP systems. Not only will you develop competencies that few others have, but you can target particular vertical market segments who use these tools.
  • (Controversial?) Dump your infrastructure and use Amazon in conjunction with another IaaS provider. You just can’t compete with their scale and price point. If you use them you will likely save costs; when combined with a second provider you can play the resiliency card; and best of all … you can offer VPC 🙂

Conclusion

In the last two posts we looked at some of the areas where both Microsoft and Amazon sometimes struggle to come to grips with the SharePoint cloud paradigm. In this post, we looked at the other cloud providers, who have to come to grips with competing against these two giants – giants who are clearly looking to eke out as much value as they can from the cloud pie. Whether or not you agree with my suggested strategy (Rackspace appears to), the pattern of the auto industry serves as an interesting parallel to the cloud computing marketplace. Is the relentless consolidation a good thing? Probably not in the long term (we will tackle that issue in the last post in this series). In the next post, we are going to shift our focus away from the cloud providers themselves, and turn our gaze to internal IT departments – who until now have had it pretty good. As you will see, a big chunk of the irrational side of cloud computing comes from this area.

 

Thanks for reading

Paul Culmsee

www.sevensigma.com.au


The cloud isn’t the problem–Part 2: When complex technology meets process…

This entry is part 2 of 6 in the series Cloud

Hi all

Welcome to my second post that delves into the irrational world of cloud computing. In the first post, I described my first foray into the world of web hosting, which started way back in 2000. Back then I was more naive than I am now (although when it comes to predicting the future I am as naive as anybody else.) I concluded Part 1 by asserting that cloud computing is an adaptive change. We are going to explore the effects of this and the challenges it poses in the next few posts.

Adaptive change occurs in a number of areas, including the companies providing a cloud application – especially if on-premise has been the basis of their existence previously. To that end, I’d like to tell you an Office 365 fail story and then see what lessons we can draw from it.

Office 365 and Software as a Service…

For those who have ignored the hype, Office 365 is known in cloud speak as “Software as a Service” (SaaS). Basically one gets SharePoint, Exchange mail, web versions of Office applications and Lync all bundled up together. In Office 365, SharePoint is not run on-premise at all; instead it is all run from Microsoft servers in a subscription arrangement. Once a month you pay Microsoft for the number of users using the service and the world is a happy place.

Office 365, like many SaaS models, keeps much of the complexity of managing SharePoint in the hands of Microsoft. A few years back, Office 365 would have been described in hosting terms as a managed service. Like all managed services, one sacrifices a certain level of control by outsourcing the accompanying complexity. You do not manage or control the underlying cloud infrastructure including network, servers, operating systems, SharePoint farm settings or storage. Furthermore, limited custom code will run on Office 365, because developers do not have back-end access. Only sandbox solutions are available, and even then, there are some additional limitations when compared to on-premise sandbox solutions. You have limited control of SharePoint service applications too, so the best way to think about Office 365 is that your administrative control extends to the site collection level (this is not actually true but suffices for this series.)

One key reason it’s hard to get feature parity between on-premise and SaaS equivalents is that many SaaS architectures are based around the concept of multi-tenancy. If you have heard this word bandied about in SharePoint land, it is because multi-tenancy is supported in SharePoint 2010. But the concept extends to the majority of SaaS providers. To understand it, imagine a swanky office building in the up-market part of town. It has a bunch of companies that rent office space and are therefore tenants. No tenant can afford an entire building, so they all lease office space from the building and enjoy certain economies of scale like a great location, good parking, security and so on. This does have a trade-off though. The tenants have to abide by certain restrictions. An individual tenant can’t just go and paint the building green because it matches their branding. Since the building is a shared resource, it is unlikely the other tenants would approve.

Multi-tenancy allows the SaaS vendor to support multiple customers with a single platform. The advantage of this model is economies of scale, but the trade-off is the aforementioned loss of customisation flexibility. SaaS vendors will talk this up by telling you that SaaS applications can be updated more frequently than on-premise software, since there is less customisation complexity from each individual customer. While that’s true, it nevertheless means a loss of control or choice in areas like data security, integration with on-premise systems, latency and the flexibility of the application to accommodate change as an organisation grows.

A small example of the restricting effect of multi-tenancy is when you upload a PDF into a SharePoint document library in Office 365. You cannot open the PDF in the browser; instead you are prompted to save it locally. This is because of a well-known issue with a security feature that was added to IE8. In the on-premise SharePoint world, you can modify the behaviour by changing the “Browser File Handling” option in the settings of the affected web application. But with Office 365 you have to live with it (or use a less-than-elegant workaround), because you do not have any access at the web application level to change the behaviour. Changing it would affect every tenant serviced by that web application.
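To make the contrast concrete, here is a minimal sketch of the on-premises fix, run from the SharePoint 2010 Management Shell (the web application URL is illustrative only):

# Relax Browser File Handling on an on-premises web application so PDFs open in the browser.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webApp = Get-SPWebApplication "http://intranet.example.local"
$webApp.BrowserFileHandling = "Permissive"   # the default is typically "Strict"
$webApp.Update()

Note that this relaxes the behaviour for every site – and, in a hosted scenario, every tenant – on that web application, which is exactly why a multi-tenant service like Office 365 does not expose the setting.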

Minor annoyances aside, if you are a small organisation or you need to mobilise quickly on a project with a geographically dispersed team, Office 365 is a very sweet offering. It is powerful and integrated, and while not fully featured compared to on-premise SharePoint, it is nonetheless impressive. One can move very quickly and be ready to go within one or two business days – that is, if you don’t make a typo…

How a typo caused the world to cave in…

A while back, I was part of a geographically dispersed, multi-organisation team that needed a collaborative portal for around a year. Given that the project team was distributed across different organisations and various parts of Australia, that one of the key stakeholders had suggested SharePoint, and that Office 365 behaved much better than Google Apps behind the overly paranoid proxy servers of the participating organisations, Office 365 seemed ideal and we resolved to use it. I signed up to a Microsoft Office 365 E3 service.

Now when I say sign up, Microsoft uses Telstra in Australia as their Office 365 partner, so I was directed to Telstra’s sign-up site. My first hint of trouble to come was when I was asked to re-enter my email address in the signup field. Through some JavaScript wizardry no doubt, I was unable to copy/paste my email address into the confirmation field. They actually made me re-type it. “Hmm” I thought, “they must really be interested in data validation. At least it reduces the chance that people paste the wrong information into a critical field.” I also noted that there was some nice JavaScript that suggested the strength of the password chosen as it was typed.

But that’s where the fun ended. Soon after entering the necessary details and the obligatory payment details, I was asked to enter a mysterious thing listed only as an Organization Level Attribute and, more specifically, the “Microsoft Online Services Company Identifier.” Checking the question mark icon told me that it is “used to create your Microsoft Online Services account identity.”

image

I wondered if this was the domain name for the site, as there was no descriptive indicator as to the significance of this code. For all I knew, it could be a Microsoft admin code or accounting code. Nevertheless I assumed it was the domain name because I just had a feeling it was. So I entered my online identity and away I went. I got a friendly email message to say things were in motion and I waited my obligatory hour or so for things to provision.

The inbox sound chimed and I received two emails. One told me I now had a “Telstra t-Suite account” and the other was entitled “Registration confirmation from Microsoft online.” I was thanked for purchasing and the email stated that “the services are managed via Microsoft Online Portal (MOP), a separate portal to the Telstra T-Suite Management Console.” I had no idea what the Telstra T-Suite Management Console was at this point, but I was invited to log into the Microsoft Online Portal with a supplied username and password.

At this point I swore… I could see by my username that I had made a typo in the Microsoft Online Services Company Identifier. Username: admin@SampleProjject.onmicrosoft.com – which means I typed in “SampleProjject” instead of “SampleProject” (Aargh!)

The saga begins…

Swearing at my dyslexic typing, I logged a support call to Telstra in the faint hope that I could change this before it was too late… Below is the anonymised mail I sent:

“Hiya

In relation to the order below I accidentally set SampleProjject as the identifier when it should be SampleProject. Can this be rectified before things are commissioned?

Thanks

Paul”

Another hour passed by and my inbox chimed again with a completely unsurprising reply to my query.

“Hi Paul , sorry but company identifier can not be changed because it is used to identify the account in Office 365 database.”

Cursing once again at my own lack of checking, I could not help but shake my head: while I was forced to type in my email address twice (and with cut and paste disabled) when I signed up to Office 365, I was given no opportunity to verify the Microsoft Online Services Company Identifier (henceforth known as the MOSI) before giving the final go-ahead. Surely this identifier is just as important as the email address? Therefore, why not ask for it to be entered twice, or visually make it clear what the purpose of this identifier is? Then dumb users like me would get a second chance before opening the hellgate, unleashing forces that can never be contained.

At the end of the day though, the fault was mine so while I think Telstra could do better with their validation and conveying the significance of the MOSI, I caused the issue.

Forces are unleashed…

So I logged into Telstra’s t-suite system and tried to locate my helpdesk call entry. The t-suite site, although not SharePoint, has a bit of a web part feel about it – only like when you have fixed the height of a web part far too small. It turned out that their site didn’t handle IE9 well. If you look closely, the “my helpdesk cases” and “my service access” panes are collapsed to the point that I couldn’t actually see anything. So I tried Chrome and was able to operate the portal like a normal person would. My teeth gnashed once more…

image  image

Finally, being able to take action, I opened my support request and asked the following:

*** NOTES created by Paul Culmsee
Can I cancel this account and re provision? A typo was made when the MOSI was entered.  The domain name is incorrect for the site.

A few emails went back and forth and I received confirmation that the account was cancelled. I then returned to the Office 365 site and re-applied for an E3 service. This time I triple-checked my spelling of the MOSI and clicked “proceed.” I received an email that thanked me for my application and said that I should receive a provisioning notification within an hour or so.

So I wait…

and wait…

and wait…

and wait…

24 hours went by and I received no notification of the E3 service being provisioned. I logged into Telstra’s t-suite and logged a new call, asking when things would be provisioned. Here is what I asked…

Hi there, I have had no notification of this being provisioned from Microsoft. Surely this should be done by now?

In typical level 1 helpdesk fashion, the guy on the other end did not actually read what I wrote. He clearly missed the word “no”.

Hi Paul,

that’s affirmative. Your T-Suite order has been provisioned. As per the instructions in the welcome email you can follow the links to log in to portal.

Contact me on 1800TSUITE Option 2.3 to discuss it further. I’ll keep this case open for a week.

*sigh* – this sort of bad level 1 email support actually does a lot of damage to the reputation of the organisation, so I mailed back…

But I received no welcome email from Microsoft with the online password details… I have no means to log into the portal

This inane exchange cost me half a day, so I took Telstra’s friendly advice and contacted them “on 1800TSUITE Option 2.3 to discuss it further.” I got a pretty good tech who realised there was indeed a problem. He told me he would look into it and I thanked him for his time. Some time later he called back and advised me that something was messed up in the provisioning process, and that the easiest thing to do was for him to delete my most recent E3 application and for me to sign up from scratch using a totally different email address and a totally new MOSI. Somehow, either Telstra’s or Microsoft’s systems had associated my email address and MOSI with the original, failed attempt to sign up (the one with the typo), and it was causing the provisioning process to throw an exception somewhere along the line.

On hearing this, I could imagine some giant PowerShell provisioning script with dodgy exception handling getting halfway through and then dying on them. So I was happy to follow the tech’s advice and went through the entire Office 365 sign-up process from the very beginning again (for the third time). This time I used a fresh email address and quadruple-checked all of the fields before I provisioned. Eureka! This time things worked as planned. I received all the right confirmation emails and was able to sign into the Microsoft online portal. From there I created user accounts, provisioned a SharePoint site collection and we were ready to rock and roll. Although the entire saga ended up taking 5 business days from start to finish, I had my portal and the project team got down to business.

Now for what it’s worth, it should be noted that if you are an integrator or are in the business of managing multiple Office 365 services, Telstra requires a different email address to be used for each Office 365 service you purchase. One cannot have an alias like provision@myoffice365supportprovider as the general account used to provision multiple E1-E4 services. Each needs its own t-suite account with a different email address.

Plunged into darkness…

Things hummed along for a couple of months with no hiccups. We received an invoice for the service by email and then, a couple of days later, received a mail to confirm that the invoice had in fact been automatically paid via credit card. For our purposes, Office 365 was a really terrific solution and the project team really liked it and were getting a lot of value out of it.

I then had to travel overseas and, while I was gone, the project team were suddenly unable to log in to the portal. They would receive a “subscription expired” message when attempting to log in. Now this was pretty serious, as the project team was coming up to an important deadline and now no-one could log in. We checked the VISA records and it seemed that the latest invoice had not been deducted from the account, as there was still a balance owing. Since I was overseas, one of my colleagues immediately called up Telstra support (it was now after hours in Perth), was stuck in a queue for an hour and then ended up speaking to two support people. After all of the fuss with the provisioning issues around the MOSI and my typo, it seemed that Telstra support didn’t actually know what a MOSI was in any event. This is what my colleague said:

I was asked for an account number straight away both times, and I explained that I didn’t have one, but I did have the invoice number in question, and that this was a Microsoft Office 365 subscription. They were still unable to locate the account or invoice. I then gave them the MOSI, thinking this would help. Unfortunately, they both had no idea what I was talking about! I explained that users were unable to login to the site with a ‘subscription expired’ error message. I also explained the fact that the VISA had not been processed for this period (although it was fine in the last period).

Both support staff could not access the Office 365 subscription information (even after I gave them our company name). Because I called after hours, t-suite department was not available. The two staff I talked to could not access the account, so could not pull up any of the relevant details. It turns out that after business hours, Telstra redirect t-suite support to the mobile and phones department. The first support person passed me onto technical but the transfer was rerouted to the original call menu – so I went through the whole thing again, press x for this, press x for that, etc. The second time round, I explained it all over again. The tech assured me that it couldn’t be a billing issue and that Telstra generally would not suspend an account because of a few days late payment. If that was the case, prior to suspension, Telstra would send out an email to notify customers of overdue payment. I told him that no such email had been sent. He then said that it would most likely be a technical problem and would have to be dealt with the next day as the T-suite department would not be available til next morning between office hours 9-5pm EST.

I hung up frustrated, no closer to solving the problem after two hours on the phone.

My colleague then got up early and called Telstra at 6am the next day (9am EST is 3 hours ahead of Perth time). She explained the situation to the Telstra t-suite support person all over again. Here again are the words of my colleague:

The first person who took my call (who I will call “girl one”) couldn’t give me an answer and said she’d get someone to call back, and in the meantime she’d check with another department for me. She put me on hold and during this time the call was re-routed back to the original menu when you first call. I thought that instead of waiting for a call that I may not receive soon as this was an emergency, I went through the menu again. This time I got “girl two” and explained the whole thing *again*. I got her to double check that the E3 subscription was set to automatically deduct from the VISA supplied – yes, it was. She noticed that it said 0 licenses available. She told me that she was not sure what that was all about, so would log a call with Microsoft. Girl two advised me that it could take any time between an hour to a few days for a response from Microsoft.

I then got a call from Telstra (girl three) on the cell-phone just after I finished with girl two. This was the person who girl one promised would call back. I told her what I’d gone through with all support staff so far, and that “girl two” was going to log a call with Microsoft. Girl three, like girl two, noticed the 0 licenses available. She wasn’t sure that it was because there were none to begin with or that there were no more available. I stated that the site had been working fine till yesterday. I explained that no one could access the site and that they all got the same message. Same as girl two, girl three advised that she would also log a call with Microsoft. Again, I was told that it could take up to several days before I could get a reply.

Half an hour later, we received an email from Telstra t-suite support. It stated the following:

Case Number: xxxxxxxx-xxxxxxx

Case Subject: subscription has expired for all users

I checked your account info and invoices. The invoice xxxxx paid for 01 Oct to 01 Nov was for company ID SampleProjject not SampleProject. Please call billing department to change it for you.

With this email, we now knew that the core problem here was related to billing in some way. As far as we had been told, Telstra had deleted the original two failed Office 365 subscriptions, but apparently not from their billing systems. The bill was paid against a phantom E3 service – the deleted one called “SampleProjject”. Accordingly the live service had expired and users were locked out of the system.

As instructed in the above email, my colleague called up t-suite billing (there was a phone number on the invoice). In her words:

Once again, the support person asked for the account number to which I said I didn’t have one. I offered him the invoice number and the MOSI, thinking someone’s got to know what it was since it was ‘used to identify the account in Office 365 database.’ He stated he could not ‘pull up an account with the MOSI’ and said something to the effect that he didn’t know what the MOSI ‘was all about’. He asked what company registered the service and I gave him our details. He immediately saw several ‘accounts’ in the billing system related to our company. He noted that the production E3 was a trial subscription and the trial had now expired and he surmised that the problem was most likely due to that fact. I queried why this was the case when the payment subscription was set to automatically deduct from the supplied visa account. He told me that as going from trial to production was a sales thing, I would have to speak to t-suite sales department. He also added that we were lucky because there was a risk that the mistakenly expired E3 service could have been deleted from Office 365.

I called up sales and finally, they were able to correct the problem.

So after a long, stressful and chaotic evening and morning, Armageddon was averted and the portal users were able to log in again.

Reflections…

This whole story started from something seemingly innocuous – a typo I made in a poorly described text box (the MOSI). From it came a chain of events that could have resulted in a production E3 service being mistakenly deleted. There were multiple failures at various levels (including my bad typing, which set this whole thing off). Nevertheless, the first thing that becomes obvious is that this was a high-risk issue that had utterly nothing to do with the Office 365 service itself. As I said, feedback from the project team has been overwhelmingly positive for Office 365. There was no bug and no extended outage caused by any technical factor. Instead, it was the lack of resilience in the systems and processes that surround the Office 365 service. At the end of the day, we almost got nailed because of a billing screw-up, exacerbated by some poor technical support outcomes. Witness the number of people and departments my colleague had to go through to get a straight answer, as well as the two times she was redirected back to the main phone menu system when she was supposed to be transferred.

Now I don't blame any of the tech support staff (okay, except the first guy who did not read my initial query). I think the tech support staff themselves were equally hamstrung by immature processes and poor integration of systems. What was truly scary about this issue was that it snuck up on us from left field. We thought the issue was resolved once the service was finally provisioned (third time lucky), and we had email receipts for paid invoices. Yet this near-fatal flaw was there all along, only manifesting some three months later when the evaluation period expired.

I think there are a number of specific aspects to this story that Microsoft needs to reflect on. I have summarised these below:

  • Why is the registration process to sign up to Office 365 via Telstra such a complete fail of the "Don't Make Me Think" test?
  • Why is the significance of the MOSI not made more clear when you first enter it (given you have to enter your email address twice)?
  • Why did no-one at all in Telstra support have the faintest idea what a MOSI is?
  • When you entrust your data and service to a cloud provider, how confident do you feel when tech support completely misinterprets your query and answers the opposite question?
  • How do you think customers with a critical issue feel when the company that sits between you and Microsoft tells you that it will take "between an hour and a few days for a response from Microsoft"? Vote of confidence?
  • How do you think customers with a critical issue feel when the company that sits between you and Microsoft redirects tech support calls to their cell-phone division after hours?
  • How do you think customers with a critical issue feel when the company that sits between you and Microsoft has to pass you from department to department to solve an issue and, along the way, re-routes you back to the main support line?
  • We were advised to delete our E3 accounts and start all over again. Why did Telstra not also delete the service from their billing systems? Presumably they are not integrated, given that, from a billing perspective, the old E3 service was still there.

Now I hope I don't sound bitter and twisted about this experience. In fact, it reinforced what most in IT strategy already know: it's not about the technology. I still like what Office 365 offers and I will continue to use and recommend it under the right circumstances. This experience was simply a sobering reality check that all the cool features amount to naught when they can be undone by dodgy underlying support structures. I hope that Microsoft and Telstra read this and learn from it too. From a customer perspective, having to work through Telstra as a proxy for Microsoft feels like an additional layer of defence on behalf of Microsoft. Is all of this duplication really necessary? Why can't Australian customers work directly with Microsoft the way US customers can?

Moving on…

No cloud provider is immune to these sorts of stories – and for that matter, no on-premise provider is immune either. So for the Amazon fanboys out there who want to take this post as evidence to dump on Microsoft, I have some news for you too. In the next post in this series, I am going to tell you an Amazon EC2 story that, while not an issue that resulted in an outage, nevertheless represents some very short-sighted, dumbass policies. As a result, we have literally been forced to hand our business to another cloud provider.

Until then, thanks for reading and happy clouding 🙂

Paul Culmsee

www.sevensigma.com.au
