Incredible! Download a PowerApp that determines your IQ from a selfie…


Hiya

Some of you might have seen my post where I explained how my daughter, Ashlee (18), won a competition from Microsoft to develop a fidget spinner app. Well, if you think that was impressive, check this out…

image

Inspired by the positive feedback from the community, Ashlee decided to up her game and investigated some of the various online services that PowerApps and Flow can connect to. The net result is an app with capabilities that are quite extraordinary. By connecting to Microsoft Cognitive Services in Azure, Ashlee has created an app that will determine your IQ from your facial structure. In other words, you download this app, take a selfie, and via cloud-based machine learning that has processed millions of photos, you can find out your IQ without doing a traditional IQ test!

Even better, Ashlee has kindly donated it to the community as a learning tool, so you can not only learn about these ground-breaking techniques, but you can download/share it with your colleagues too.

So far this app has been used in various global organisations including Microsoft. It has also been demonstrated to rapturous applause at the recent Digital Workplace Conference in Sydney.

 

So DO NOT MISS OUT on this, especially when using this app is a simple set of easy-to-follow steps!

 

Ready… Steady…

 

Wait… I forgot to mention, this app is optimised for phones, and utilises Microsoft speech recognition, so be sure to turn up the volume so you can hear the instructions…

 

Right, are you ready?

 

Oh… don’t forget… you need an Office365 subscription for this so make sure PowerApps is enabled on your tenant…

 

Okay, so do the following right now!

 

1. Download and install PowerApps Studio onto your PC.

2. Download Ashlee’s Guess your IQ MSAPP (rename the file extension from .zip to .msapp) and open it in PowerApps Studio

3. In PowerApps Studio, save a copy of the app to your tenant (“the cloud”) via the File menu

image

4. Now install PowerApps on your iOS or Android phone (you will find it in the app store and it's free)

5. Run PowerApps on your phone and sign in with the same user account you used in step 3

6. Select the app called Guess Your IQ. It might take a little while to load, but don’t worry – there is a lot of awesome functionality.

7. Enter your name when prompted…

image

8. Now use your phone camera to take a selfie…

image

9. Click the “Find my IQ” button and watch the magic of machine learning…

image  image

10. Seriously, you WILL NOT BELIEVE what happens next!… but you have to try it for yourself!

 

Make sure you let me know what your IQ is! 🙂

 

Paul Culmsee

www.hereticsguidebooks.com


How’s the weather? Using a public API with PowerApps (part 1)

This entry is part 2 of 3 in the series OpenAPI

Introduction

Okay citizen developers, listen up! This post and its forthcoming second part will teach you how to get PowerApps to connect to an online web service to add an extra dimension of awesomeness to your app. It also covers what I think is the most off-putting aspect of PowerApps work in general – dealing with the horribleness of Swagger/OpenAPI. Hopefully by the end you will find my approach useful in making it a bit less horrible. If you have not come across OpenAPI, then you probably did not read a previous post of mine where I whined about it. That's okay, because that post is not a prerequisite for this article, BUT another post is required pre-reading.

In a previous post, I created a sample app that demonstrated an inspection scenario where a user would be taking photos, which were then sent to SharePoint via Flow, with nice file names. In this small series of posts we are going to build on this example, so I suggest starting there first and getting to the point where you can save photos to SharePoint via Flow as shown below:

image  image

In this post, I will focus on getting PowerApps talking to an external web service…

Now a common inspection scenario would be to capture current weather information each time a photo is taken. A good example would be a park ranger inspecting sensitive environmental sites, or a plant operator who needs to conduct and document a safety inspection of equipment before it is used. In both scenarios, weather conditions such as rain or wind might have a material impact on the result of the inspection, so it makes sense to capture location and weather data along with the photo.

Now location data is easy, as PowerApps has the in-built ability to capture GPS data. All we need to do is pass that GPS data to a weather web service to get the local conditions like temperature, humidity and wind speed/direction. At first I thought I could do it all out of the box because Microsoft provides a built-in connection to MSN Weather API as shown below, but unfortunately you can only pass it location data in the form of a city name, not a latitude/longitude.

image

It did not take long for me to find an alternative. The nice folks at OpenWeatherMap offer an API that does accept geographic co-ordinates as input. Even better, they have a free option, provided you stay within certain limits. To use the service, you need to sign up and generate an API key so they can identify you, which is added to the API call. To access weather data for a location, the URL looks like the following:

http://api.openweathermap.org/data/2.5/weather?lat=[latitude]&lon=[longitude]&appid=[API Key]&units=metric

Of course, the "units" parameter at the end of the URL can be changed in case you live in one of the 3 remaining countries holding out against metric (looking at you, USA! 🙂)
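For example, a call for my home town of Perth (with a placeholder where your own API key goes) would look something like this:

http://api.openweathermap.org/data/2.5/weather?lat=-31.9529&lon=115.8573&appid=[your API key]&units=metric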

The data that is returned by this API is in JSON format. For example here is what comes back for my home town of Perth (lat -31.9529 & lon 115.8573)

image

I have pasted the data in a clearer format below. Note, it pays to get used to the JSON format, as it underpins a lot of modern web applications.

 

{"coord":{
   "lon":115.86,
   "lat":-31.95},
 "weather":[{
   "id":804,
   "main":"Clouds",
   "description":"overcast clouds",
   "icon":"04n"}],
 "base":"stations",
 "main":{
   "temp":5.37,
   "pressure":1023,
   "humidity":100,
   "temp_min":5,
   "temp_max":6},
 "visibility":7000,
 "wind":{
   "speed":1.5,
   "deg":50},
 "clouds":{
   "all":90},
 "dt":1499297700,
 "sys":{
   "type":1,
   "id":8189,
   "message":0.0089,
   "country":"AU",
   "sunrise":1499296640,
   "sunset":1499333125},
"id":2066756,
"name":"Maylands",
"cod":200}

As you can see above this is a bunch of name/value pairs. But now the challenge is to get this API/data into PowerApps so we can use it. Below is an image showing what it looks like when things are wired up into PowerApps. You might be thinking, where did this “OpenWeather.getWeather” function come from? How does it know to ask for latitude, longitude and units of measure?

image
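For reference, the formula behind that screenshot looks something like the line below. This is just a sketch – the exact connector and function names depend on choices you will make later in this post, and the appid parameter is supplied separately when the connection is created.

OpenWeather.getWeather(Location.Latitude, Location.Longitude, "metric")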

The answer my friends is OpenAPI/Swagger. Strap yourselves in fellow citizen developers, as we need to go on a fun-filled ride.

Generating an initial OpenAPI file

Basically, the OpenAPI specification (formerly known as Swagger) is a way to describe web services. Just as we create columns in document libraries so we have metadata to describe our documents, OpenAPI is essentially "metadata for web services". Unfortunately the comparison ends there, because unlike documents in a SharePoint library, describing web services is an entirely different beast. I found the OpenAPI format so ugly and hard to get my head around that it took me an entire day just to use it the first time. In short, the learning curve is steep, and although with persistence you will eventually get there, I found the whole thing to be poorly documented with not enough good examples.

Now it is not my intent to describe the format of OpenAPI. Luckily for us, there are websites that make the process of creating the metadata we need much easier. To this end, we are going to use a service from Apigee called openApiGen that takes 90% of the pain away.
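That said, purely to give you a feel for the shape of the thing, here is a heavily trimmed, hand-written sketch of what an OpenAPI (Swagger 2.0) description of our weather call might look like. It is not the exact file the tool will generate, just the general structure of one:

{
  "swagger": "2.0",
  "info": { "title": "OpenWeather", "version": "1.0.0" },
  "host": "api.openweathermap.org",
  "basePath": "/",
  "schemes": [ "http" ],
  "paths": {
    "/data/2.5/weather": {
      "get": {
        "operationId": "getWeather",
        "parameters": [
          { "name": "lat", "in": "query", "type": "string", "required": true },
          { "name": "lon", "in": "query", "type": "string", "required": true },
          { "name": "units", "in": "query", "type": "string" },
          { "name": "appid", "in": "query", "type": "string", "required": true }
        ],
        "responses": {
          "200": { "description": "Successful response" }
        }
      }
    }
  }
}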

So go ahead and sign up to OpenWeather and get yourself an API key. Confirm that it works in the browser as I showed earlier and then head on over to http://specgen.apistudio.io and follow these simple steps:

Step 1: Run your API

Paste your working API call from earlier into the text box and click the Send button. Check that you get a successful response and that the JSON is in the expected format, as shown below:

image

Step 2: Enter API Program Information

This is where you describe your API in general. Note that PowerApps uses this info, so keep the title short and sweet and rename it from the default. I called mine OpenWeather.

image

Step 3: Update API Call Information

In this step, you have the option to perform more fine-grained tuning of your API, such as the function name that will be called from PowerApps. Like step 2, PowerApps uses this info, so keep your title short and sweet. The main thing is to set the Operation Id, as this corresponds directly to PowerApps as I show below (I called mine getWeather)

image

Step 4: Update header info

In this step, you provide information about any API parameters passed in the HTTP headers. In this case the OpenWeather API does not use headers, so there is nothing to verify/edit here…

image

Step 5: Update Query Parameters

In this step, you need to provide additional metadata about the parameters passed to the query in step 1. As you can see below, the 4 parameters we used have been found. Go ahead and add descriptions for the lat, lon and units parameters, but ignore appid. We actually will not be using appid in the same way as the other three, so no edits need to be made (we will take care of appid later).

image

Step 6: API Path Info

This step is designed to handle APIs where the URL itself needs to vary (e.g. /api/{somedynamicvalue}/weather). In our case the URL is static, so there is nothing to do here.

image

Step 7: Save your goodness!

At this point, you have generated an OpenAPI file! Now that wasn’t too bad was it? Click the bright orange Download button to grab a copy of it.

image

Tweaking your initial OpenAPI file

At this point you might think that you are done, but not so… While that site does a great job of generating your file, the result is not guaranteed to work, and in fact if we try this in PowerApps now, it will have an issue. For the sake of learning, let's go through the process anyway so you know how to deal with quirks…

So let's turn our attention to PowerApps. Choose Connections from the left navigation and then click "Manage custom connectors". On the following screen, click Create Custom Connector.

image  image

On the next screen, upload your newly minted OpenAPI file. Note the connector name, as it will be based on the API Program Title you specified in step 2 of the previous section.

image

Scroll down and, optionally, add a custom icon. I grabbed the OpenWeather icon from their site and added it here. Click the Continue button at the bottom of the screen.

image

The next screen is really important to get right. PowerApps needs to know what authentication method should be used when talking to the OpenWeather API. As we learnt earlier, it is an API Key, which is one of the options available. On choosing this option, you will be asked to provide some additional detail. As it states, “users will be required to provide the API Key when creating a connection”. This is a good thing because it allows you to distribute your PowerApp to others without requiring you to hardcode your own API key.

image

The important thing to get right here is the Parameter name and Parameter location. You will note that I entered "appid" for Parameter name, which corresponds to the identically named parameter in the URL for the web service, i.e.

http://api.openweathermap.org/data/2.5/weather?lat=[latitude]&lon=[longitude]&appid=[API Key]&units=metric

The Parameter location simply tells PowerApps that the API key is in the URL query, and not in the HTTP headers.
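For the curious, in the underlying OpenAPI file this setting corresponds to a securityDefinitions section that looks roughly like the following (a sketch only – the connector UI takes care of this for you):

"securityDefinitions": {
  "apiKey": {
    "type": "apiKey",
    "name": "appid",
    "in": "query"
  }
}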

The next screen is a busy one, principally because it is where you get to tweak your API definition from what is specified in the OpenAPI file. First up, on the top half of the screen in the General section, verify that the Operation ID is correct as per step 3 of the previous section. Note that PowerApps will warn you if the function name does not start with an uppercase letter, so make that change if you are really concerned about it.

image  image

The more important bit is further down the screen. Scroll down to the Request section and delete the appid parameter altogether. Since we have already told PowerApps to ask for it when you add a connection to OpenWeather, we should not be asking for it inside the PowerApps function call…

image  image

Finally, complete the operation by clicking Create connector.

image

PowerApps will chug away and eventually you will see an error. Unfortunately for us, whoever designed this part of PowerApps didn't think things through, because there is limited space for the error message and the actual meaningful part of the error is truncated. To work around this piece of poor design, move your cursor over the error message to see the full text. Unfortunately the poor design does not stop there, as this hover-over is fleeting and disappears, requiring you to hover off and back on again to see it.

For the record, it will say:

Your custom connector has been successfully created, but there was an issue converting it to WADL for PowerApps: An error occurred while converting OpenAPI file to WADL file. Error: ‘Required property “type” is not present or not a string at JSON path paths[‘/data/2.5/weather’].get.responses.200.schema.properties.weather.items’

image

So we are going to have to tweak our OpenAPI file to fix this. If you do not have a text editor that supports JSON, I suggest you download and install Notepad++ to make this easier…

Fixing the OpenAPI file…

Now delving into the guts of the OpenAPI specification is not my idea of fun, nor is teaching you JSON. So let's just assume you are not familiar with JSON and review the message to see if it helps us:

Error: ‘Required property “type” is not present or not a string at JSON path paths[‘/data/2.5/weather’].get.responses.200.schema.properties.weather.items’

Looking at the OpenAPI file, I see a pattern. First, I can see how it corresponds with the OpenWeather web service API output. For example, the response from calling the API in the browser produces output that starts with a "coord" parameter and then a "weather" parameter.

image

When I look in the OpenAPI file, I can see those same parameters under the “properties” declaration. Check out the image below… given the property section is below the “Responses” and “200” sections, it is easy to see now that this section of the OpenAPI file is describing the metadata for a successful response from OpenWeather (i.e. HTTP code 200). Hmm… suddenly this ugly file is looking slightly less ugly…

image

The image above shows the section I need to focus on according to the error message. The image below is a more detailed view of that section with some annotations (and can you tell I just bought a laptop with a pen? 🙂). I noticed that every property has a type declaration (highlighted), except for the property called "items" (marked in red). It seems to be anomalous in that no "type" is designated for "items".

image

So the next question is: if a type is missing, what type should it be? Looking elsewhere in the OpenAPI file, it appeared the missing type was "object", based on the fact that the items property itself was made up of sub-properties. So I inserted "type": "object" under the declaration of the items property, as shown in the following change:

image
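In text form, the relevant fragment ends up looking something like the snippet below. This is a sketch based on the shape of the OpenWeather response rather than the exact generated file – the key point is the added "type": "object" on the items property:

"weather": {
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "id": { "type": "integer" },
      "main": { "type": "string" },
      "description": { "type": "string" },
      "icon": { "type": "string" }
    }
  }
}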

I saved the file and returned to PowerApps. I deleted the problematic connector and then repeated the steps in PowerApps to create a custom connection. Aha! We have lift-off!!!

image

Okay, so at this point we have things wired up. I hope this gives you the confidence to wire up other web services to PowerApps.

In the next post, we will make use of this newly connected API in PowerApps!

 

 

Until then, thanks for reading

 

Paul Culmsee

www.hereticsguidebooks.com


A Filename Generation Example for PowerApps with Flow

This entry is part 1 of 3 in the series OpenAPI

Hiya

A client recently asked me to make a PowerApps proof-of-concept audit app for safety inspections. The gist was that a user would enter an audit number into the app, take a bunch of photos and make some notes. The photos needed to be named using the convention <Audit Number>-<Date>-<Sequence Number>, e.g.

  • 114A3-13:04:17-3.jpg is Audit number 114A3, photo taken on 13th of April and it was the 3rd photo taken.
  • 114A6-14:04:17-7.jpg is Audit number 114A6, photo taken on 14th of April and it was the 7th photo taken.

As I type these lines, it is still not possible to save pictures directly into SharePoint from PowerApps, so we are going to build on the method that Mikael Svenson documented. But first let’s look at this file name requirement above. While it seems straightforward enough, if you are new to PowerApps this might take a while to figure out. So let’s quickly build a proof of concept app and show how this all can be achieved.

In this post I will build the app and then we will use Microsoft Flow to post the images to a SharePoint document library. I’ll spend a bit of time on the flow side of things because there are a few quirks to be aware of.

First up create a blank app and then add a text box, camera, gallery and two buttons to the form… Rename the text box control to “Auditnumber” and arrange them similar to what I have done below…

image

First up, let's disable the submit button until an audit number is entered… Without an audit number we cannot generate a filename. To do this, set the Disabled property of the button labelled "Submit" to AuditNumber.Text="". This means that while the audit number text box is blank, this formula will return true and disable the button.

image

image  image

Now let’s set things up so that when a photo is taken, we save the photo into a Collection. A collection is essentially an in-memory data table and I find that once you get your head around collections, then it opens up various approaches to creating effective apps.

On the camera control (assuming it is named Camera1), set the OnSelect property to Collect(PictureList, { Photo: Camera1.Photo, ID: CountRows(PictureList)+1 })

image

Now if you are a non-developer, you might think that is a bit ugly, so let's break it down.

  • The collect function creates/populates a collection – and the first parameter is simply the name of the collection. You could call this anything you like, but I chose to call mine PictureList.
  • The second parameter is a record to insert into the collection. This consists of a comma delimited set of fields and values inside curly braces. eg: { Photo: Camera1.Photo, ID: CountRows(PictureList)+1 }

Now that whole curly brace thing is a bit ugly so let’s break it down further. First up here is an image straight from Microsoft’s own documentation. You can see a record is a row of fields – pretty standard stuff if you are familiar with databases or SharePoint list items.

In PowerApps formulas, a record can be specified using curly braces. So { Album: "Thriller", Price: 7.99 } is a record with 2 fields, Album and Price.

Now take a look at the record I used: { Photo: Camera1.Photo, ID: CountRows(PictureList)+1 }. This shows two fields, Photo and ID. The value for the Photo field is Camera1.Photo, which holds the latest photo taken by the camera. The ID field is a unique identifier for the photo. I generated this by calling the CountRows function, passing it the PictureList collection and then adding 1 to the result.

So here’s what happens when this app starts:

  • The PictureList collection is empty
  • A user clicks the camera control. A photo is taken and stored in the Camera1.Photo property.
  • I then count the number of records in the PictureList collection. As it is currently empty, it returns 0. We add 1 to it.
  • The photo and an ID value of 1 are added to the PictureList collection.
  • A user clicks the camera control again. A photo is taken and stored in the Camera1.Photo property.
  • I then count the number of records in the PictureList collection. As it currently has 1 record from steps 1-4, it returns 1. We add 1 to it.
  • The photo and an ID value of 2 are added to the PictureList collection.

… and so on. So let’s test this now…

Press play to test your app and take a couple of photos. It may not look like anything is happening, but don’t worry. Just exit and go to the File Menu and choose “Collections”. If everything has gone to plan, you will now see a collection called PictureList with the Photo and ID columns. Yay!

image

So next let’s bind this collection to the Gallery so we can see photos taken. This part is easy. On the Gallery you added to the page, set the Items property to PictureList (or whatever you named your collection). Depending on the type of gallery you added, you should see data from the PictureList collection. In my case, I changed the gallery layout to “Image and Title”. The ID field was bound to the title field and now you can see each photo and its corresponding ID.

image  image  image

Now let's provide the ability to clear the photo gallery. On the button labelled "Clear", set the OnSelect property to Clear(PictureList). The Clear command does exactly what it suggests: it clears out all items from a collection. Now that we have the gallery bound, try it out. You can take photos and then clear them to your heart's content.

Now the point of this exercise is to generate a nice filename. One way is to modify the record definition we were using, e.g.

From this:

  • Collect(PictureList,{ Photo: Camera1.Photo, ID: CountRows(PictureList)+1 })

To this crazy-ass looking formula:

  • Collect(PictureList, { Photo: Camera1.Photo, ID: Concatenate(AuditNumber.Text, "-", Text(Today(), "[$-en-US]dd:mm:yy"), "-", Text(CountRows(PictureList)+1), ".jpg") })

To explain it, first check out the result in the gallery. Note my unique identifier has been replaced with the file-name I needed to generate.

image

So this formula basically uses the Concatenate function to build the filename in the format we need. Concatenate takes a list of strings as parameters and munges them together. In this case, it takes:

  • The audit number from the textbox – AuditNumber.Text
  • A hyphen – "-"
  • Today's date in dd:mm:yy format, converted to text – Text(Today(), "[$-en-US]dd:mm:yy")
  • Another hyphen – "-"
  • The row count of the PictureList collection, plus 1, converted to text – Text(CountRows(PictureList)+1)
  • The file type – ".jpg"

The net result is our unique filename for each photo.

Now we have one final step. We are going to send the entire collection of photos and their respective file names to Flow to put into a SharePoint library. The method we will use will take the PictureList collection and save it as one giant string. We will send the string to Flow, and have Flow then pull out each photo/filename pair. Mikael describes this in detail in his post, so I will just show the code here, which we will add to the OnSelect property of the submit button.

UpdateContext( { FinalData: Concat(PictureList, ID & "|" & Photo & "#") } )

image

So what is this formula doing? Working from the inside out, the first thing to note is the use of the Concat function. This function does something very useful: it works across all the records of a PowerApps collection and returns a single string. So Concat(PictureList, ID & "|" & Photo & "#") takes the file name we generated (ID), joins it to a pipe symbol "|", joins that to the photo, and then adds a hash "#". The UpdateContext function enables us to save the result of the Concat into a variable called FinalData.

If we test the app by taking some photos and clicking the submit button, we can see what the FinalData variable looks like via the Variables menu as shown below. Our table is now one giant text string that looks like:

“filename 1 | encoded image data 1 # filename 2 | encoded image data 2 # filename x | encoded image data x “

image

Now a quick caveat: avoid using the PowerApps desktop client to do this. You should use the PowerApps web creator, due to an image encoding bug.

Anyway, let’s now move to Flow to finish this off. To do this, click on the Submit button on your app and choose Flows from the Action menu and on the resulting side panel choose to Create a new flow.

image  image

A new browser tab will open, sign you into Microsoft Flow, and be nice enough to create a new flow that is designed to be triggered from PowerApps. Rename the flow to something more meaningful like "Upload Photos to Audit Lib", then click the New Step button and add a new Action. In the search bar, type in "Data Operations" and in the list of actions, choose the "Compose" action…

image  image

Okay so at this point I should a) explain what the hell we are going to do and b) credit Mikael Svenson for first coming up with this method. In relation to the first point, PowerApps is going to send flow a giant string of photos and filenames (remember the FinalData variable – that’s what flow will receive). If you recall with the FinalData, each photo/filename pair is delimited by a hash “#” and the file name is delimited by a pipe “|”. So we need to take what PowerApps sends, and turn it back into rows. Then for each row, we grab the file name, and the file content and upload it to our friendly neighbourhood SharePoint library.

Sounds easy right?

Our first step is to use the compose action we just added to split the data from PowerApps back into photos. This particular workflow action does something really neat. It executes workflow definition language formulas. What are these? Well workflow definition language is actually a part of another Microsoft product called Azure Logic Apps. Flow leverages Azure Logic Apps, which means this language is available to us. Sounds complex? Well yeah, it does at first but when you think about it this is not new. For example, MS Teams, Groups and Planner use SharePoint behind the scenes.

Anyway, the point is that there are several workflow definition language functions we can use, namely:

  • Split() – Takes a string and splits it up into smaller strings based on a delimiter (see the small worked example after this list)
  • Item() – This is used to return an element of an array. E.g. if we use the split command above, item() will refer to each smaller string
  • dataUriToBinary() – This takes an encoded image and turns it back into binary form.
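To make that concrete, here is a small worked example (with the encoded image data shortened for readability). Suppose the string sent from PowerApps is:

"photo1.jpg|<encoded image data>#photo2.jpg|<encoded image data>#"

Splitting that string on "#" returns an array of three elements – "photo1.jpg|<encoded image data>", "photo2.jpg|<encoded image data>" and an empty string, because the string ends with a trailing "#". That empty final element is why we add a check for blank items a little later.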

Okay enough talk! Let’s fill in this compose action. First up (and this is important), rename the action to something more meaningful, such as “ProcessPhotos”. After renaming, click on the text box for the Compose action and a side panel will open, showing a PowerApps label and a box matching the PowerApps logo called Body. Click the Body button and the compose textbox should have a value called ProcessPhotos_Inputs as shown in the 3 images below…

image

image

image

So what have we just done? Essentially we told the Compose method to ask PowerApps to supply data into a variable called ProcessPhotos_Inputs. In fact, let’s test this before going any further by saving the flow.

Switch back to PowerApps, click the Submit button and select the OnSelect property we used earlier. You should still see the function where we created the FinalData variable. Copy the existing function to the clipboard, as you're about to lose everything you typed in. Now click the Flow once and it should say that it's adding to PowerApps.

At this point, the function you so painstakingly put together has been replaced by a reference to the Flow. Delete what's been added and paste the original function back. Add a semicolon to the end of the function (which tells PowerApps that another command is coming) and then press SHIFT+ENTER to insert a newline. Now type in the name of your Flow and it should be found via auto-complete. Click the matching flow with .Run on the end. Add an opening parenthesis to the end of the flow and it will ask for an input parameter. If you have done it right, that parameter will be called ProcessPhotos_Inputs. Note that it matches the parameter from the image above. The parameter we are passing is the FinalData variable we constructed earlier.

image   image

image
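Putting it all together, the finished OnSelect formula on the submit button ends up looking something like the line below. Note that the flow name PowerApps generates will vary – UploadPhotostoAuditLib here is just an assumption based on the flow title I used:

UpdateContext( { FinalData: Concat(PictureList, ID & "|" & Photo & "#") } ); UploadPhotostoAuditLib.Run(FinalData)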

Okay so basically we have wired up PowerApps to flow and confirmed that Flow is asking PowerApps for the correct parameter. So let’s now get back to Flow and finish the first action. If you closed the Flow tab in your browser, you can start a new flow editing session from within PowerApps. Just click the ellipsis next to your flow and click Edit.

image

Right! So after that interlude, let's use the first function of interest – split(). In the text box for your compose function, modify it to look like the string below. Be sure to put the whole thing in quotes, because Flow is going to try and be smart and, in doing so, make things really counter-intuitive and hard to use. Don't be surprised if your ProcessPhotos_Inputs box disappears. Just add it back in again via the "dynamic content" button.

image

In fact, save and close the flow at this point and then edit it again. You will now see what the real function looks like. Note how ProcessPhotos_Inputs has magically changed to {@triggerBody()['ProcessPhotos_Inputs']}.

image

Unfortunately this sort of magic will not actually work… there are a few too many curly braces and excessive use of “@” symbols. So replace the function with the following (include quotes):

  • "@split(triggerBody()['ProcessPhotos_Inputs'], '#')"

Like I said… Flow is trying to be smart, only it's not 🙂. If you have done things right, the action looks like the screen below. Double check this, because if you do not have the quotes right it will get messy. In fact, if you save the flow and re-open it the quotes will disappear. This is actually a good sign…

image

After saving the flow, closing and re-opening… look Ma, no quotes!

image

Now for the record, according to the documentation, the triggerBody() function is a reference function you use to "reference outputs from other actions in the logic app or values passed in when the logic app was created. For example, you can reference the data from one step to use it in another". This makes sense – ProcessPhotos_Inputs is being passed from the PowerApps trigger step to this compose function we are building.

In any event, let’s add the next workflow step. The output of the ProcessPhotos step should be an array of photos, so now we need to process each photo and upload it to SharePoint. To do this, click on “New step”, and from the more button, choose “Add an apply to each” action. In the “Select an output from previous steps” action, click “Add dynamic content” and choose the output from the ProcessPhotos step as shown in the sequence of images below.

image   image

image

Next click “Add a condition” and choose the option “Edit in advanced mode”. Replace any existing function with :

  • @not(empty(item()))

image

image

The item() function is specifically designed for a "repeating action" such as our "apply to each" event. It returns the item that is in the array for this iteration of the action. We then use the empty() function to check whether this item has any data in it, wrapped in the not() function, so that we only take action if the item has data. In case you are wondering why I need to test this, the data that comes from PowerApps has an extra blank row (remember the trailing "#" on FinalData) and this condition effectively filters it out.

The resulting screen should look like this:

image

At this point we should have a file name and file body pair, delimited by a pipe symbol. So let’s add an action to get the file name first. On the “If Yes” condition, click “Add an action” and choose another Compose Action. Rename the action to “Get File Name” and enter the function:

  • "@split(item(),'|')[0]"

image  image

The square brackets are designed to return a value from an array with a specific index. Since our split() function will return an array of two values, [0] tells Flow to grab the first value.

Now let’s add an action to get the file body. Click “Add an action” and choose another Compose Action. Rename the action to “Get File Body” and enter the function:

  • "@dataUriToBinary(split(item(),'|')[1])"

image

Looking at the above function, the split() side of things should be easy to understand. We are now grabbing the second item in the array. Since that item is the image in an encoded format, we are then passing it to the dataUriToBinary() to turn it back into an image again. Neat eh?

Now that we have our file name and file body, let’s add the final action. This time, we will choose the “Create File” action from the SharePoint connector. Rename the action to “Upload to Library” and enter the URL of the destination SharePoint site. If you have permission to that site, you can then specify which library to upload the photo to by browsing the Folder Path.

image  image

Now comes the final steps. For the File Name, click the “Add dynamic content” button and add the Output from the Get File Name step (now you know why we renamed it earlier). Do the same for the File Content textbox by choosing the output from the Get File Body step.

image

Right! We are ready to test. Sorry about screenshot hell in the Flow section, but I am writing this on the assumption you are new to it, so I hope the extra detail helped. To test, use PowerApps on your phone or via the web browser, as the PowerApps desktop tool has a bug at the time of writing that will prevent the photos being sent to Flow in the right format.

Now it is time to test.

So check your document library and see if the photos are there! In the example below I took a photo of my two books for blatant advertising purposes 🙂. The 3rd image is the document library showing the submitted photos!

image   image

image

Finally, here are some tips for troubleshooting along the way…

First up, if you are having trouble with your flows, one lame but effective way to debug is to add a "Send an email" action from the Office365 Outlook Connector to your flow. This is particularly handy for encoded images, as they can be large and often in the flow management tools you will see a "value too big to display" error. This method is crude, and there are probably better approaches, but it is effective. In the image below you can see how I have added the action below my ProcessPhotos action, prior to splitting it into an array.

image   image

Another thing that has happened to me (especially when I rename flow actions) is a disconnect in the trigger connection between Flow and PowerApps. Consider this error…

image  image

If your flow fails and you see an error like

“Unable to process template language expressions in action [one of your actions] inputs at line ‘1’ and column ‘1603’: ‘The template language expression ‘json(decodeBase64(triggerOutputs().headers[‘X-MS-APIM-Tokens’]))[‘$connections’][‘shared_office365’][‘connectionId’]’ cannot be evaluated because property ‘shared_office365′ doesn’t exist, available properties are ”. Please see https://aka.ms/logicexpressions for usage details.’

.. the connection between PowerApps and Flow has gotten screwed up.

To resolve this, disconnect your flow from PowerApps and reconnect it. Your flow will be shown as a data source which can easily be removed as shown below.

image

Once removed I suggest you go back to your submit button and copy your function to the clipboard. This is because adding the flow back will wipe the contents of the button. With your button selected, click the Flows button from the Action menu and then click on the Flow to re-associate it to PowerApps. The third image shows the onSelect property being cleared and replaced with the flow instantiation. This is where you can re-paste your original code (4th image)…

image

image

image

image

Finally, another issue you may come across is where no pictures come through, and instead you are left with an oddly named file, which (if you look closely) looks like one of your compose actions in Flow. You can see an example of this in the image below.

The root cause of this is similar to what I described earlier in the article where I reminded you to put all of your compose actions in quotes. I don’t have a definitive explanation for why this makes a difference and to be honest, I don’t care. The fix is easy: just make sure all of your compose actions are within quotes, and you should be cooking with gas.

Phew! Another supposedly quick blog post that got big because I decided to explain things for the newbie. Hopefully this was of use to you and I look forward to your feedback.

Paul Culmsee

www.hereticsguidebooks.com


So what is this newfangled apps model anyway and why do I care? (part 3)

This entry is part 3 of 3 in the series Apps Model

Hiya

This is the third post in a series of articles aimed at helping strategic or business-focused users understand the SharePoint 2013 and Office365 "apps model", and what it means for the future of SharePoint. In part 1 of this series, I outlined the opportunities and challenges that Microsoft are currently trying to address. They were:

  • Changing perceptions to cloud technologies and increased adoption; which enables…
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange in the form of Google Apps; as well as…
  • An increasing number of smaller cloud-based “point solution” players who chip away at SharePoint features with cheaper and easier to use offerings; while suffering from…
  • A serious case of Apple envy and in particular the rise of the app and the app marketplace; while dealing with…
  • Customers unable to handle the ever increasing complexity of SharePoint, leading to delaying upgrades for years

These reasons prompted Microsoft to take a strongly cloud-driven strategy, and they have been transforming their business to deliver it. They have transitioned from a software provider to an application hosting provider, and in terms of SharePoint, the "apps" model is now the future of customisation.

Now "apps" is a multifaceted topic and the word has unfortunately been overused. So in part two we started to unpack the apps model by channelling the kids' TV show Playschool to show the idea of SharePoint customisations being hosted on separate servers, but presenting a seamless experience for users.

If you have not read parts 1 and 2, I seriously suggest you do. To whet your appetite, here are some pretty diagrams to highlight what you missed out on!

image image  image

The main point I made was the notion that custom SharePoint components ran on separate, non SharePoint servers and were embedded into SharePoint via Iframes. These remote apps then communicate securely with SharePoint (eg read or write data from lists) via web services.

I concluded part 2 by showing the benefits of the apps model from Microsoft’s perspective. Among other things, this model of developing custom SharePoint solutions can be supported on Office365 and on-premises. For on-premises customers, SharePoint servers remain pristine and free of the muck and clutter of 3rd party code, making service packs and cumulative updates much less complex and costly. It also enables Microsoft to offer an app store, where 3rd party vendors can maintain cloud based services that can be embedded and consumed by on-premises and online SharePoint installs. If you go back to the 5 key strategic threats I started this post with, it addresses each one nicely.

So Microsoft’s intent is good, and there is nothing wrong with good intentions… or is there?

image

Digging deeper

So where does one start with unpacking the apps model? Let's make this a little less of a dry read by channelling Big Bang Theory to find out. First up, Penny wants to know where app data is stored, given that the app runs on a different server to SharePoint as shown below. (If you think about it, from the app's perspective, SharePoint is the remote server.)

image

Sheldon's answer? (Of course Sheldon invented the apps model, right?)…

image

Er… come again? Let’s see if Leonard can give a clearer explanation…

image

Leonard's explanation is a little better. Ultimately the app developer has the choice over where app data is stored. For example, let's say someone writes a survey app for SharePoint and it renders on the home page via an iframe. When users fill in this poll, the results could be stored on the server that hosts the app and not in SharePoint at all (which in SharePoint terms is referred to as the remote web). Alternatively, the app could store the survey results in a list on the SharePoint site from which the app is being invoked (this is called the host web). Yet another alternative is something called the App Web, which I will return to in a moment.

First let's look at the pros and cons of the first two options.

Option A: Store App Data in SharePoint Lists

If the app developer chooses this approach, the app reads/writes to SharePoint lists in the site where the app is deployed (henceforth called the host web). In this approach, when a site administrator chooses to add an app to the site, the app has to specify the level of access it requires for the site, and it asks the site admin to authorise this access. In the image below, you can see that this Kodak app is requesting the ability to edit or delete items in document libraries and lists on this site, as well as access user profile information. If the site administrator clicks "Trust it", the app now has the access it needs.

image

The pro to this approach is that all app data resides in regular old SharePoint, where it is searchable and can take advantage of all of the goodies that lists and libraries give you like versioning, information management policies and workflow. Additionally, multiple apps can access these lists, so this allows for the development of componentised solutions that work with a single authoritative data source.

The potential cons (or implications to be aware of) to the approach are:

  • SharePoint lists and libraries are not always appropriate data stores for some types of data. Most people are well aware now that a SharePoint list is most definitely not a relational database and it has performance issues when misused (among other things). SharePoint also has in-built thresholds that kick in when lists get big (list queries that generate a result set of 5000 or more items will fail by default). Microsoft state in their SharePoint 2013 and SharePoint Online solution packs documentation: "If your business needs require you to work with large data sets and query result sets, this approach won't work".
  • If the app has been deployed to many sites and site collections (and uses lists on each), then things can get painful if a new version of the app requires a new or modified set of columns on the list in the host web.
  • If you delete the app from the host web, the list data remains on the site as the lists will likely not be deleted. Sometimes this is a good thing, as the data might be important or used in other ways, but if the app developer is storing configuration data here, it would leave orphaned data in the site.

Also think about what happens when you have an on-premises SharePoint server but the app is hosted by a 3rd party outside your firewall. How is the remote app even able to get to your SharePoint box in the first place? To enable this to happen, you are likely going to need to talk to your network/security people because you are going to need some funky firewall/reverse proxy infrastructure to allow that to happen. Additionally, some organisations might be uncomfortable that an app from a 3rd party on the interweb can have the ability to read and write data inside an internal SharePoint server anyway.

So what alternatives are there here?

Option B: Store data in the remote app

The other option for the app developer is to store the app data on the same server the app is running from (called the remote web). In this approach, no data is stored in SharePoint at all. The pro for this option is that it alleviates two of the issues from option A above: developers can use any data storage system they want (e.g. SQL Server, a GIS system or a graph database) and you do not have any of those pesky firewall issues with the app connecting back to your SharePoint server. The app renders in its iframe and does its thing.

Unfortunately this also has some cons.

  • There are potential data sovereignty issues. Where is the remote app hosted? Are you sure you trust the 3rd party app provider with your data? Consider that a 3rd party might host an app for many organisations. Do they have adequate precautions for keeping your data isolated and secure (ie Is your stuff stored in a separate database to everyone else?) Are they adequately backing up that data consistent with your internal standards? If you uninstall the app, is the data also uninstalled at the app provider end?
  • This data is very likely not searchable or easily usable by SharePoint for other purposes as it is not necessarily directly accessible by SharePoint.
  • Chances are that the app will have to talk to SharePoint in some way, so you really don’t get out of dealing with your network/security people to make it all work if the remote web is outside your firewall.
  • Also consider data integrity. If, say, you needed to restore SharePoint because of a data loss or data corruption issue, what does this mean for the data stored on the remote app? Will things get out of sync?

These (and other questions) bring us to the next option that is a bit of a mind-bending middle ground.

Option C: Store data in the App Web

Now here is where things start to mess with your head quite a bit. Most people can understand the idea behind options A and B, but what the heck is this thing known as an app web? The short answer: it is a special SharePoint sub-site that is used for certain apps. It usually gets created when a site administrator adds an app to a site and, importantly, removed when a site administrator removes the app. If you consider the diagrams below, we can see our mythical SharePoint 2013 homepage with 3 apps on it, all running on separate servers as explained in part 2. If we assume each of these apps has deployed an app web on our SharePoint site, there are now three SharePoint sub-sites created underneath it as shown below. These are the app webs.

image

Now app webs are no ordinary subsites in the way you might know them now. For a start, if you tried to access them from your SharePoint host web, you would not be able to at all. Through some trickery, SharePoint puts these sites on a completely different URL than the site where the apps were installed. For example, if you had a site called http://myintranet/sites/web1 and you added a survey app, a subsite exists but it would absolutely not be http://myintranet/sites/web1/survey or anything like that. It would instead look something like:

http://app-afb952fd75de4a.apps.mycompany/sites/web1/surveyapp/

Now being a business audience reading this, trust me that there are good reasons for this apparent weirdness related to security which I will speak to in the next post.

But if this makes sense so far, then in some ways, one can argue that an app web is a weird cross between option A and B in that it is a subsite on your SharePoint farm, yet SharePoint treats it as if it is a remote data store. This means the app web is a special isolated storage area for app developers to put stuff like data, configuration, CSS, JavaScript and whatever other functionality their app needs to do its thing.

This approach has some advantages:

  • The app web is technically a SharePoint site, so app developers can create whatever structure they need (think lists, libraries and columns) to store their stuff like images, CSS, JavaScript and other goodies. This allows for much more flexible data structures than writing to the odd list in a host web (option A above).
  • The app web lives on your SharePoint server, so it means some app components (and data) can be stored here instead of on a remote web on a server far away from you. When you back up your content database, the app web is backed up like everything else (less data integrity risk than option B).
  • The app web facilitates clean install/uninstall of an app, since the app web is removed if an app is removed. In other words, no more orphaned data lying around.

But if you think all of those points through, you might see several important implications:

  • If the app developer decided to store critical app data in the app web, that data is lost when the app is uninstalled (or put better, developers have to write special uninstall code to copy the data somewhere else, which means yet more code)
  • Just like option A, SharePoint lists and libraries are not always appropriate data stores for some types of data (remember the list item threshold I mentioned in option A? It applies here too)
  • Apps cannot share app webs between them. In other words, apps cannot reach in and access data from other app webs. Therefore you can't easily use the information stored in app webs with other apps and applications. In fact, if you want to do this you pretty much have to choose option A: store the shared data the apps need in the host web and have both apps access that data
  • You may end up with many, many app webs. If you take my example above, there are 3 app webs to handle 3 apps. What if this was a project site template and your organisation has hundreds of projects? That means you have thousands of app webs, all with potentially interesting data that is trapped in mini information silos
  • SharePoint search cannot index the app webs
  • A critical but often overlooked one is that developers can't update the library/list metadata schema in an app web (think columns, content types, libraries, etc.) without updating and redeploying the app. As you will see in a future post, this gets real ugly real fast!
  • SharePoint app webs are created with special templates which block SharePoint Designer (that's probably a good thing given the purpose of an app web)

Conclusion

So if you are a non-developer reading this post, consider that none of the above options are, on their own, likely to give you a solution. For each option, there seem to be just as many flaws as advantages. The reality is that many apps will use at least two, or even all three options. Things like images, CSS and JavaScript might be loaded from the app web, some critical reusable SharePoint content from the host web, and the remote web used for some heavy-duty data manipulation.

If you think that through, that means as SharePoint administrators and governance teams, you will likely end up dealing with all of the cons of each of the options. Imagine asking a developer to conceptually draw an app that uses each of the options and consider how many “moving parts” there are to it all. Then when you add the fact that most organisations still have a legacy of full trust solutions to deal with, you can start to see how complex this will be to manage.

Now this really is just the tip of the iceberg. In the next post I am going to talk a little about how all this stuff is wired up from an authentication and security standpoint. I am also going to focus on the application lifecycle management implications of this model. If you think about the picture I have painted here with all of the potential moving parts, how do you think an upgrade to an app would fare?

thanks for reading

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 2)

This entry is part 2 of 3 in the series Apps Model

Hiya

This is the second post in some articles aimed at demystifying the SharePoint “apps” model for the strategic or business focused user. In case you are not aware, Microsoft have gotten a serious case of “app fever” in recent times, introducing the terminology not only into SharePoint, but Office as well. While there are very good reasons for this happening, Microsoft used the “app” terminology in multiple ways, therefore making their message rather confusing. As a result, Microsoft have not communicated their intent particularly well and customers often fail to understand why they make the changes that they do.

Things are definitely getting better, but I nevertheless see a lot of confusion around the topic. So in part 1 of this series, I explained the reasons Microsoft have adopted the strategy that they have. To recap, they are trying to respond to five major disruptive forces that challenge their market position:

  • Changing perceptions to cloud technologies and increased adoption; which enables…
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange in the form of Google Apps; as well as…
  • An increasing number of smaller cloud-based “point solution” players who chip away at SharePoint features with cheaper and easier to use offerings; while suffering from…
  • A serious case of Apple envy and in particular the rise of the app and the app marketplace; while dealing with…
  • Customers unable to handle the ever increasing complexity of SharePoint, leading to delaying upgrades for years

Microsoft's answer to this has been to go all-in with cloud, as this is the only way to beat the cloud providers at their own game, while reducing the complexity burden on their customers. This of course comes in the form of Office365, OneDrive and an ever-increasing set of cloud-oriented tools like Delve and Project Online.

But in SharePoint land, this has turned traditional development upside down. More than a decade of customisation "best practices" are no longer best – in fact they are no longer usable in many circumstances. The main reason is that the most common method of customisation normally applied to SharePoint (full-trust server-side code) is not permitted in the cloud. Microsoft couldn't risk untrusted, 3rd party custom code on their servers. What happens if one client's dodgy code affects everybody else sharing the service? This would threaten performance, uptime and Microsoft's ability to upgrade their service over time.

So things had to change. Microsoft’s small army of product architects commandeered a whiteboard and started architecting innovative solutions to deal with these challenges and the apps model is the result. So let’s examine some core bits to the apps model by channelling a much loved children’s TV show.

There is a bear in there…

Now at this point I have to warn the developers or tech people reading this post: I am going to give a simplified version of the apps model intended for a decision-making audience. I will omit many details I don't deem necessary to make my key points. You have been warned…

image

Any parent of small children in most countries might be familiar with Playschool – a show for toddlers that has been around for eons. It is well known for its theme song starting with the line “There is a bear in there…and a chair as well”. When trying to come up with a suitable way to explain the SharePoint apps model, using Playschool as a metaphor turned out to work brilliantly. You see in each episode of Playschool, there was a segment where viewers were taken through the “magic window” to faraway lands. In the show, the presenter would pick one of the three windows and we would zoom into it, resulting in a transition to another segment. In our case, we have to pick the square window for two reasons. Firstly, a good many apps are in effect windows to somewhere else. Secondly, and much more importantly, it perfectly matches the new Microsoft corporate logo. Perfect metaphor or what eh? 🙂

Like the Playschool magic window, browsers have a similar capability to enable you to visit strange and magical lands… Not only is there a bear in there and a chair as well, but there are plenty of other things like YouTube videos and Yammer discussions. I have drawn this conceptually below. Note the black window in the SharePoint team site on the left, which can be filled with YouTube or Yammer.

image

You have no doubt visited web sites that have embedded content like YouTube videos or SlideShare slides in them (this blog site has lots of embedded ads that make me no money!). Essentially, it is possible for browsers to include content from different sites together into a single “page” experience. Users see it all as one page, even though content can come from all over the place. This is really useful, because it means you can leverage the capability of other sites to enhance the functionality of your own sites.

This, my friends, is one of the core tenets of the current SharePoint 2013 apps model. Instead of running on the SharePoint server, many apps now run separately from SharePoint, embedded in SharePoint pages so that they look like they are part of SharePoint. In the example mock-up below, we have a SharePoint team site. In it, we have a remote web site that displays some pretty dashboard data. By loading that remote content into our magical square window, it now appears to be part of SharePoint.

image       image

Going back to Microsoft’s core pain points, this helps things a lot. For a start, it means no custom code has to be installed onto the SharePoint server. Instead, SharePoint simply embeds the external content on the page. In the leftmost image above, you can see the SharePoint server (labelled as “Your server”), rendering a page with a placeholder in it. It then retrieves content from a remote server (labelled “my server”) and displays it in the placeholder to render the complete page (the rightmost image above).

So what feat of Microsoft innovation and general awesomeness enabled this to happen?

Everybody meet “Mr IFrame”.

Inline Frames (IFrames) are windows cut into your webpage that allow your visitor to view content on another site without reloading the entire page. The concept was first implemented in Microsoft Internet Explorer way back in 1997. Yep – you heard right… 1997. So IFrames are not a new concept at all – in fact it’s positively ancient when you count time in internet years. For this reason, when developers find this out, their reaction is usually something like this…

image
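
To make the mechanics of the magic window a little more concrete, here is a minimal TypeScript-flavoured sketch of what an IFrame-based embed boils down to. It is illustrative only – the placeholder id and URL are made up, and a real SharePoint app part generates this sort of markup for you – but it shows just how little is involved in pulling remote content into a host page.

// A minimal sketch of the "magic window": embed a remote app inside a host page.
// The placeholder id and URL below are hypothetical - a real app part would point
// at whatever remote web application renders the content.
function embedRemoteApp(placeholderId: string, appUrl: string): void {
  const placeholder = document.getElementById(placeholderId);
  if (!placeholder) {
    throw new Error(`No placeholder element with id '${placeholderId}'`);
  }
  const frame = document.createElement("iframe");
  frame.src = appUrl;             // served by "my server", not "your server"
  frame.width = "100%";
  frame.height = "400";
  frame.style.border = "none";
  placeholder.appendChild(frame); // the remote content now appears inline on the page
}

// Usage: fill a placeholder div on the page with a remote dashboard
embedRemoteApp("dashboard-placeholder", "https://my-server.example.com/dashboard");

That really is all an IFrame is – a rectangle on the page whose contents come from somewhere else entirely.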

But there is more than meets the eye…

Now if the apps model was just IFrames alone, then you might wonder what the big deal is with apps. In fact IFrames have been used this way in SharePoint for years via the Page Viewer Web Part. Companies with SharePoint deployments have long embedded things like Twitter, YouTube or Facebook widgets via IFrames.

So of course, there is more to it…

Let’s revisit the “your server” and “my server” diagram used above and consider a question: what if these remote applications displayed inside an IFrame could interact with SharePoint? In other words, what if the remote application running on my remote server were able to connect to your SharePoint server and read/write data? In the diagram below I have illustrated the idea. The top half of the diagram represents a SharePoint server that could be on-premises or an Office365 tenant. On the left is a Products list that sits somewhere inside this SharePoint server. At the bottom is my application, running on my server, that creates a pretty dashboard. What if my remote application queried the SharePoint Products list to create the dashboard? Now we have an application that, while not running inside SharePoint, can nevertheless utilise live data from SharePoint to create a seamless experience for users.

image
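
To give a feel for what such an interaction might look like, here is a rough sketch of a remote app reading the Products list over the SharePoint REST API. The site URL, the UnitsSold field and the access token are assumptions for illustration – in particular, how the app actually obtains permission and a token is exactly the question dealt with later.

// A rough sketch (not production code) of a remote dashboard app reading the
// "Products" list via the SharePoint REST API. The site URL and the "UnitsSold"
// field are made up, and acquiring the access token is assumed to have happened already.
interface Product {
  Title: string;
  UnitsSold: number;
}

async function getProducts(siteUrl: string, accessToken: string): Promise<Product[]> {
  const endpoint =
    `${siteUrl}/_api/web/lists/getbytitle('Products')/items?$select=Title,UnitsSold`;
  const response = await fetch(endpoint, {
    headers: {
      "Accept": "application/json;odata=verbose",
      "Authorization": `Bearer ${accessToken}`,
    },
  });
  if (!response.ok) {
    throw new Error(`SharePoint returned ${response.status}`);
  }
  const data = await response.json();
  return data.d.results as Product[]; // list items used to build the dashboard
}

The point is not the specific call, but that the code doing the reading lives and runs on my server, while the data it renders lives in yours.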

If we now add three IFrames to a page, the implications should start to become clearer. We can build hybrid solutions that combine the best of what SharePoint can do with the best of what other platforms can do. To the user, these are still SharePoint sites, but the reality is that we are now viewing a page that has been delivered by various different platforms. Each can interact with SharePoint data in different ways to deliver a seamless experience. Because these remote apps are not SharePoint at all, developers can write any application they want, using the platform and tools of their choice. But to the user it is still a SharePoint page… neat huh? I’m sure the Microsoft product team thought that this was a brilliant conceptual masterpiece when they dreamt it up.

image

A beautiful model…

I don’t know if you have ever watched developers come up with APIs, but there tends to be a lot of excitement around a whiteboard as they revel in the glory of their elegant solution designs. So let’s quickly re-examine the benefits of this remotely hosted app approach from Microsoft’s perspective and see how we are going so far…

image

First and foremost, we now have a SharePoint customisation approach that can be fully supported in Office365. Microsoft don’t have to put code on their online servers, yet can still support extensibility. Now they are much more evenly matched with Google, while at the same time reducing their SharePoint tech support costs because 3rd party code has been isolated from SharePoint. If any problems are encountered with a remote app, SharePoint will keep humming along and Microsoft can now legitimately tell clients “no really, it is not SharePoint causing your issue – go see your friendly neighbourhood app developer”.

More importantly, apps can also be used in on-premises SharePoint deployments, meaning both Microsoft and their customers now have pristine SharePoint servers free of the muck and clutter of 3rd party code. Therefore service packs and cumulative updates should no longer strike fear into admins. Microsoft also now nails Google’s ass because Google has no real concept of on-premises at all in the way Microsoft does. Thus when hybrid scenarios come up in conversation, Microsoft has a much stronger story to tell.

But there is a more important implication than all of that. Microsoft can now do the app store thing. Vendors can maintain cloud based services that can be embedded and consumed by on-premises and online SharePoint installs. This means 3rd parties can tap into the customer ecosystem with a captive marketplace, and customers can browse the store to examine what options are out there to extend SharePoint functionality. In theory, this should enable hundreds of vendors to make some slight modifications to their existing web based applications and incorporate them into the SharePoint ecosystem.

image

But reality is not what’s on the whiteboard…

At this point, I hope I have painted a pretty good picture of the advantages offered by this new paradigm, and you can probably appreciate the Microsoft nerds completely falling in love with this conceptual model of future SharePoint customisations. The Microsoft strategy dudes probably loved it too because it elegantly dealt with all of the challenges they were seeing. Unfortunately though, as with most conceptual models, reality is a very different beast from the convenient fiction on the whiteboard.

So in the next post, we are going to dig a little deeper. For example, how can a remote app even have permissions to talk to SharePoint in the first place? Do you really want code running in some untrusted 3rd party server to be fiddling with data in your SharePoint lists and libraries? How does that even work anyway in an on-premises scenario when a cloud hosted app has to access data behind your firewall?

Fear not though – the Microsoft guys thought of this (and more) when they were drawing their apps model concept on the big whiteboard. So in the next post, we are going to look at what it takes to bring this conceptual masterpiece into reality.

 

Thanks for reading

Paul Culmsee


So what is this newfangled apps model anyway and why do I care? (part 1)

This entry is part 1 of 3 in the series Apps Model

Hiya

I’ve been meaning to write about the topic of the apps model of SharePoint 2013 for a while now, because it is a topic I am both fascinated and slightly repulsed by. While lots of really excellent material is out there on the apps model (not to mention a few good rants by the usual suspects as well), it is understandably written by developers tasked with making sense of it all, so they can put the model into practice delivering solutions for their clients or organisations.

I spent considerable time reading and researching the various articles and videos on this topic produced by Microsoft and the broader SharePoint community and made a large IBIS map of it. As I slowly got my head around it all, subtle but significant implications began to emerge for me. The more I got to know this topic, the more I realised that the opportunities and risks of the apps model hold many important insights and lessons for how organisations should be strategically thinking about SharePoint. In other words, it is not so much about the apps model itself, but what the apps model represents for organisations who have invested in the SharePoint platform.

So these posts are squarely aimed at the business camp. Therefore I am going to skip all sorts of things that I don’t deem necessary to make my points. Developers in particular may find it frustrating that I gloss over certain things, give not quite technically correct explanations and focus on seemingly trivial matters. But like I said, you are not my audience anyway.

So let’s see if we can work out what motivated Microsoft to head in this direction and make such a significant change. As always, context is everything…

As it once was…

I want you to picture Microsoft in 2011. SharePoint 2010 has come out to positive reviews and well and truly cemented itself in the market. It adorns the right place in multiple Gartner magic quadrants, demand for talent is outstripping supply and many organisations are busy embarking on costly projects to migrate from their legacy SharePoint 2007 and 2003 deployments, on the basis that this version has fixed all the problems and that they will definitely get it right this time around. As a result, SharePoint is selling like hotcakes and is about to crack the 2 billion dollar revenue barrier for Microsoft. Consultants are also doing well at this time, since someone has to help organisations get all of that upgrade work done. Life is good… allegedly.

But even before the release of SharePoint 2010, winds of change were starting to blow. In fact, way back in 2008, at my first ever talk at a SharePoint conference, I showed the Microsoft pie chart of buzzwords and asked the crowd what other buzzwords were missing. The answer that I anticipated and received was of course “cloud”, which was good because I had created a new version of the pie and invited Microsoft to license it from me. Unfortunately no-one called.

image

Winds of change…

While my cloud picture was aimed at a few cheap laughs at the time, it holds an important lesson. Early in the release cycle of SharePoint 2007, cloud was already beginning to influence people’s thinking. Quickly, services traditionally hosted within organisations began to appear online, requiring a swipe of the credit card each month from the opex budget, which made CFOs happy. A good example is Dropbox, which was founded in 2008. By 2010, it had won over the hearts and minds of many people who were using FTP. Point solutions such as Salesforce appeared, which further demonstrated how the competitive landscape was starting to heat up. These smaller, more nimble organisations were competing successfully on the basis of simplicity and focus on doing one thing well, while taking implementation complexity away.

Now while these developments were on Microsoft’s radar, there was really only one company that seriously scared them. That was Google via their Google Docs product. Here was a company just as big and powerful as Microsoft, playing in Microsoft’s patch using Microsoft’s own approach of chasing the enterprise by bundling products and services together. This emerged as a credible alternative to not only SharePoint, but to Office and Exchange as well.

Some of you might be thinking that Apple was just as big a threat to Microsoft as Google. But Microsoft viewed Apple through the eyes of envy, as opposed to a straight out threat. Apple created new markets that Microsoft wanted a piece of, but Apple’s threat to their core enterprise offerings remained limited. Nevertheless, Microsoft’s strong case of crimson green Apple envy did have a strategic element. Firstly, someone at Microsoft read the disruptive innovation book and decided that disruptive was obviously cool. Secondly, they saw the power of the app store and how quickly it enabled a developer ecosystem and community to emerge, which created barriers for the competition wanting to enter the market later.

Meanwhile, deeper in the bowels of Microsoft, two parallel problems were emerging. Firstly, it was taking an eternity to work through an increasingly large backlog of tech support calls for SharePoint. Clients would call, complaining of slow performance, broken deployments after updates, unhandled exceptions and so on. More often than not though, these issues were not caused by the base SharePoint platform, but by a combination of SharePoint and custom code that leaked memory, chewed CPU or just plain broke. Troubleshooting and isolating the root cause was very difficult, which led to the second problem. Some of Microsoft’s biggest enterprise customers were postponing or not bothering with upgrades to SharePoint 2010. They deemed it too complex, costly and not worth the trouble. Others were simply too scared to mess with what they had.

A perfect storm of threats

image  image

So to sum up the situation, Microsoft were (and still are) dealing with five major forces:

  • Changing perceptions of cloud technologies (and the opex pricing that comes with it)
  • The big scary bogeyman known as Google with a viable alternative to SharePoint, Office and Exchange
  • An increasing number of smaller point solution players who chip away at SharePoint features with cheaper and easier to use offerings
  • A serious case of Apple envy and in particular the rise of the app and the app marketplace
  • Customers unable to contend with the ever increasing complexity of SharePoint and putting off upgrades

So what would you do if you were Microsoft? What would your strategy be to thrive in this paradigm?

Now Microsoft is a big organisation, which affords it the luxury of engaging expensive management consultants, change managers and corporate coaches. Despite the fact that it doesn’t take an MBA to realise that just a couple of these factors alone combine as a threat to the future of SharePoint, lots of strategic workshops were no doubt had, with associated whiteboard diagrams, post-it notes, dodgy team building games and more than one SWOT analysis to confirm the strategic threats they faced were a clear and present danger. The conclusion drawn? Microsoft had to put cloud at the centrepiece of their strategy. After all, how else can you bring the fight to the annoying cloud upstarts and stave off the serious Google threat, all the while reducing the complexity burden on your customers?

A new weapon and new challenges…

In 2011, Microsoft debuted Office365 as the first salvo in their quest to mitigate threats and take on their competitors at their own game. It combined Exchange, Lync and SharePoint 2010 – packaging them up in a per-user, per-month approach. The value proposition for some of Microsoft’s customers was pretty compelling: up-front capital costs reduced significantly, they could now get the benefits of better scalability and bigger limits on things like mailboxes, while procurement and deployment was pretty easy compared to doing it in-house. Given the heritage of SharePoint, Exchange and Lync, Microsoft was suddenly competitive enough to put Google firmly on the back foot. My own business dumped Gmail and took up Office365 at this time, and has used it since.

Of course, there were many problems that had to be solved. Microsoft was now switching from a software provider to a service provider, which necessitated new thinking, processes, skills and infrastructure internally. But outside of Microsoft there were bigger implications. The change from software provider to service provider did not go down well with many Microsoft partners who had performed that role previously. It also freaked out a lot of sysadmins who suddenly realised their job of maintaining servers was changing. (Many are still in denial to this day). But more importantly, there was a big implication for development and customisation of SharePoint. This all happened mid-way through the life-cycle of SharePoint 2010, and that version was not fully architected for cloud. First and foremost, Microsoft were not about to transfer the problem of dodgy 3rd party customisations onto their servers. Recall that they were getting heaps of support calls that were not core SharePoint issues, but caused by custom code written by 3rd parties. Imagine that in a cloud scenario where multiple clients share the same servers. One client’s dodgy code could have a detrimental effect on everybody else, affecting SLAs, while not solving the core problem Microsoft were having of wearing the cost and blame via tech support calls for problems not of their doing.

So with Office365, Microsoft had little choice but to disallow the dominant approach to SharePoint customisation that had been used for a long time. Developers could no longer use the methods they had come to know and love if a client wanted to use Office 365. This meant that the consultancies who employed them would have to change their approach too, and customers would have to adjust their expectations as well. Office365 was now a much more restricted environment than the freedom of on-premises.

Is it any wonder then, that one of Microsoft’s big focus areas for SharePoint 2013 was to come up with a way to redress the balance? They needed a customisation model that allowed a consistent approach, whether you chose to keep SharePoint on-premises or move off to the cloudy world of Office 365. As you will see in the next post, that is not a simple challenge. The magnitude of change required was going to be significant and some pain was going to have to happen all around.

Coming next…

So with that background set, I will spend time in the next post explaining the apps model in non technical terms, explaining how it helps to address all of the above issues. This is actually quite a challenge, but with the right dodgy metaphors, it’s definitely possible. 🙂 Then we will take a more critical viewpoint of the apps model, and finally, see what this whole saga tells us about how we should be thinking about SharePoint in the future…

Thanks for reading

Paul Culmsee


Help me visualise the pros and cons of hybrid SharePoint 2013…


Like it or not, there is a tectonic shift going on in the IT industry right now, driven primarily by the availability of a huge variety of services hosted in the cloud. Over the last few years, organisations have increasingly procured services that are not hosted locally, much to the chagrin of many a server-hugging IT guy who, understandably, sees various risks with entrusting your fate to someone else.

We all know that Microsoft had a big focus on trying to reach feature parity between on-premises SharePoint 2013 and Office365. In other words, with cloud computing as a centrepiece of their strategy, Microsoft’s SharePoint 2013 aim was for stuff that works on premises but also works on Office 365 without too much modification. While SharePoint 2013 made significant inroads into meeting this goal (apps model developers might beg to differ), the big theme to really emerge was that feature parity was a relatively small part of the puzzle. What has happened since the release of SharePoint 2013 is that many organisations are much more interested in hybrid scenarios. That is, utilising on-premises SharePoint along with cloud hosted SharePoint and its associated capabilities like OneDrive and Office Web Applications.

So while it is great that SharePoint online can do the same things as on-prem, it all amounts to naught if they cannot integrate well together. Without decent integration, we are left with a lot of manual work to maintain what is effectively two separate SharePoint farms and we all know what excessive manual maintenance brings over time…

Microsoft to their credit have been quick to recognise that hybrid is where the real action is at, and have been addressing this emerging need with a ton of published material, as well as adding new hybrid functionality with service packs and related updates. But if you have read the material, you can attest that there is a lot of it and it spans many topic areas (authentication alone is a complex area in itself). In fact, the sheer volume and pace of material released by Microsoft show that hybrid is a huge and very complex topic, which begs a really critical question…

Where are we now at with hybrid? Is it a solid enough value proposition for organisations?

This is a question that a) I might be able to help you answer and b) you can probably help me answer…

Visualising complex topics…

A few months back, I started issue mapping all of the material I could get my hands on related to hybrid SharePoint deployments. If you are not aware, Issue mapping is a way of visualising rationale and I find it a brilliant personal learning tool. It allows me to read complex articles and boil them down to the core questions, answers, pros and cons of the various topics. The maps are easy to read for others, and they allow me to make my critical thinking visible. As a result, clients also like these maps because they provide a single integrated place where they can explore topics in an engaging, visual way, instead of working their way through complex whitepapers.

If you wish to jump straight in and have a look around, click here to access my map on Hybrid SharePoint 2013 deployments. You will need to sign in using a Facebook or Gmail ID to do so. But be sure to come back and read the rest of this post, as I need your help…

But for the rest of you, if you are wondering what my hybrid SharePoint map looks like without jumping straight in, check out the screenshots below. The tool I am using is called Glyma (pronounced “glimmer”), which allows these maps to be developed and consumed using SharePoint itself. First up, we have a very simple map, showing the topic we are discussing.

image

If you click the plus sign next to the “Hybrid SharePoint deployments” idea node, you can see that I have mapped all of the various hybrid pros and cons I have come across in my readings and discussions. Given that hybrid SharePoint is a complex topic, there are lots of pros and cons, as shown in the partial image below…

image

Many of the pros and cons can be expanded further, which delves deeper into the topics. A single click expands one node level, and a double click expands the entire branch. To illustrate, consider the image below. One of the cons is around the many search related caveats with hybrid that can easily trip people up. I have expanded the con node and the sub question below it. Also notice that one of the idea nodes has an attachment icon. I will get to that in a moment…

image

As I mentioned above, one of the idea nodes, titled “SPO search sometimes has delays on how long it takes for new content to appear in the index”, has an attachment icon as well as more nodes below it. Let’s click that attachment icon and expand that node. It turns out that I picked this up when I read Chris O’Brien’s excellent article, and so I have attached his original article to that node. Now you can read the full detail of his article for yourself, as well as understand how his article fits into a broader context.

image

It is not just written content either. If I move further up the map, you will see some nodes have videos tagged to them. When Microsoft released the videos from the 2014 Vegas conference, I found all sorts of interesting nuggets of information that were not in the whitepapers. Below is an example of how I tagged one of the Vegas videos to one of my nodes.

image

 

A call to action…

SharePoint hybrid is a very complex topic and right now, has a lot of material scattered around the place. This map allows people, both technical and non technical, to grasp the issue in a more strategic, bigger picture way, while still providing the necessary detail to aid implementation.

I continually update this map as I learn more about this topic from various sources, and that is where you come in. If you have had to work around a curly issue, or if you have had a massive win with a hybrid deployment, get in touch and let me know about it. It can be a reference to an article, a Skype conversation or anything – the Glyma system can accommodate many sources of information.

More importantly, would you like to help me curate the map on this topic? After all, things move fast and the SharePoint community rarely stands still. So if you are up to speed on this topic or have expertise to share, get in touch with me. I can give you access to this map to help with its ongoing development. With the right meeting of the minds, this map could turn into an incredibly valuable information resource for a great many people.

So get in touch if you want to put your expertise out there…

 

Thanks for reading

 

 

Paul Culmsee


Confession of a (post) SharePoint architect… “Thou shalt NOT”

This entry is part 10 of 10 in the series confessions

Hi all

*sigh*

This post comes to you during my reality check of returning to work after the bliss of 1 month of vacation in New Zealand. After walking on a glacier, racing around in jetboats and relaxing in volcanic hot springs, the thought of writing SharePoint blog posts isn’t exactly filling me with excitement right now. But nevertheless I am soldiering on, because as Ruven Gotz frequently tells his conference attendees – I do it because I love you all.

Now this is article ten (blimey!) in a series of posts about my insights from being a cross between a SharePoint architect and a facilitator/sensemaker. In case this is your first time reading this series, I highly recommend that you go back to the beginning, as we have covered a lot of ground to get here. Inspired by the late, great Russell Ackoff, I used his notion of f-laws – sometimes inconvenient truths about what I think is critical for successful SharePoint delivery. At this point in proceedings, we have covered 6 f-laws across 9 articles.

The next f-law we are going to cover is a bit of a mouthful. Are you ready?

F-Law 7: The degree of governance strictness is inversely proportional to the understanding of the chaos it’s supposed to prevent

So to explain this f-law, here is a question I often ask clients and conference attendees alike…

What is the opposite of governance?

The answer that most people give to this question is “Chaos”. So what am I implying? Essentially that the stricter you are in terms of managing what you deem to be chaos, the less you actually understand the root causes of chaos in the first place.

Ouch! Really?

To explain, let’s revisit f-law 5, since this is not the first time the theme of chaos has come up in this series. If you recall, f-law 5 stated that confidence is the feeling you have until you understand the problem. In that article, I drew the two diagrams below, both of them representing the divergence and convergence process that comes with most projects. The pink box labelled chaos illustrated that before a group can converge to a lasting solution, they have to cross the ‘peak’ of divergence. This is normally a period of some stress and uncertainty – even on quite straightforward projects. But commonly in SharePoint, things can get quite chaotic, with lots of divergence and very little convergence, as shown by the rightmost diagram.

image  image

“Thou shalt not…”

There are clearly forces at play here… forces that push against convergence and manifest in things like scope creep, unreconciled stakeholder viewpoints and the stress of seeing the best laid plans messed with. The size and shape of the pink ‘chaos’ box reflects the strength of those underlying forces.

To manage this, many (if not most) SharePoint practitioners take a “thou shalt not” approach to SharePoint delivery in an attempt to head things off before they even happen. After the dissection in f-law 6 of how IT people channel Neo and focus on dial tone issues, it is understandable why this approach is taken. Common examples of this sort of thinking are “Thou shalt not use SharePoint Designer” or “Thou shalt use metadata and not folders” or “Thou shalt use the standard site template no matter what.”

These sorts of commandments may be completely appropriate, but there is one really important thing to make sure you consider. If having no governance indeed results in chaos, then it stands to reason that we need to understand the underlying divergent forces behind chaos to mitigate it and better govern it. In other words, we need to look inside the pink box labelled chaos and see what the forces are that push against convergence. So let’s modify the diagrams above and take a look inside the pink box.

image

For me, there are four forces that govern the amount of chaos in SharePoint projects, namely:

  1. Pace of Change
  2. Problem Wickedness
  3. Technical Complexity
  4. Social Complexity

Let’s examine each one in turn…

Pace of change

image

Remember the saying “The only certainties in life are death and taxes”? Outside of that, the future is always unpredictable. In between SharePoint 2003 and SharePoint 2007, the wave of web 2.0 and social networking broke, forever changing how we collaborate and work with information online. In between SharePoint 2010 and SharePoint 2013, the wave of cloud computing broke, which is slowly but surely changing the way organisations view their IT assets (both systems and people). The implications of this are huge and Microsoft have to align their product to tap into these opportunities. Net result? We all have a heap of new learning to do.

If you read the last post, you might recall that pace of change was a recurring theme when people answered the question about what is hard with SharePoint. But let’s look beyond SharePoint for a second… change happens in many forms and at many scales. At a project level, it may mean a key team member leaves the organisation suddenly. At an organisational level, there might be a merger or departmental restructure. At a global level, events like the Global Financial Crisis forced organisations to change strategic focus very quickly indeed.

The point is that change breeds innovation yet it is relentless and brings about fatigue. Continual learning and relearning is required and even the best laid plans will inevitably be subject to changing circumstances. The key is not to fight change but accept that it will happen and work with it. In terms of SharePoint, this is best addressed by an iterative delivery model that has a high degree of key stakeholder involvement, recognises the learning nature of SharePoint and fosters meaningful collaboration.

Problem “Wickedness”

image

Some problems are notoriously hard to solve because they evoke a lot of diverse, often conflicting viewpoints, and it can be difficult even agreeing on what the core problem actually is. F-law 1 examined how we sometimes fixate on the means of governance when the end goal of SharePoint is uncertain. Over-reliance on definitions is the result. F-law 2 looked at how users’ understanding of a problem changes over time and f-law 4 looked at the folly in chasing platitudes. The underlying cause for all of these f-laws is often the very nature of the problem you are trying to solve.

You might have heard me talk about wicked problems or read about them in my blog or my book. In short, some problems are exceptionally tough to solve. Just trying to explain the problem can be hard, and analysis-paralysis is common because it seems that each time the problem is examined, a new facet appears which seemingly changes the whole concept of the problem. This phenomenon was named a wicked problem by Horst Rittel in 1970. One of the most extreme examples right now of a wicked problem in the USA would be the gun control debate, since depending on your values and ideology, you would describe the underlying problem in different ways and therefore the potential solutions offered are equally varied and contentious.

While there are actually many different management gurus who have also come up with alternate names for this sort of complexity (“mess”, “adaptive problem”, “soft systems”), the term wicked problem has become widely used to describe these types of problems. I suspect this is because Rittel listed a bunch of symptoms that suggest when your problem has elements of wickedness. Here are some of the commonly cited ones:

  1. There is no definitive formulation of a wicked problem (defining wicked problems is itself a wicked problem).
  2. The problem is not understood until after the formulation of a solution.
  3. There is no immediate and no ultimate test of a solution to a wicked problem.
  4. Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial and error, every attempt counts significantly.
  5. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
  6. Every wicked problem is essentially unique.
  7. Every wicked problem can be considered to be a symptom of another problem.
  8. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.
  9. Stakeholders have radically different world views and different frames for understanding the problem.
  10. The constraints that the problem is subject to and the resources needed to solve it change over time.
  11. The problem is never solved definitively

Can you tick off some of those symptoms with SharePoint? I’ll bet you can… I’ll also bet that for other IT projects (say MS Exchange deployments) these symptoms are far less pronounced.

Guess what the implication is of wicked problems. They tend to resist the command-and-control approach of delivery and require meaningful collaboration to get them done. That is kind of funny when you think about it since SharePoint is touted as a collaboration tool yet falls victim to wicked elements. That suggests that there was not enough collaboration to deliver the collaboration platform!

Technical Complexity

image

Technical complexity involves difficulties in fact finding, technical information and the systematic identification and analysis of options and their likely consequences. It is an understatement to say that SharePoint is full of technical complexity. In fact, it is one of the most complex products that Microsoft has ever produced (and that’s before you get to dependencies like SQL Server, IIS, FIM, federated authentication and the myriad of other things you need to know).

A typical characteristic of technical complexity is information overload in fact finding. There is far too much information to make sense of and as a result, no one person has the cognitive capacity to understand it all. Thus, stakeholders have to rely on each other and, on occasions, rely on outside experts to collaboratively work towards a solution. Technical complexity requires a lot of cognitive load to manage and it is easy to get caught up in the minute detail and lose the all important bigger picture.

Problems also arise when different technical experts come to opposite conclusions. This gives rise to the last and most insidious divergent force underlying SharePoint chaos and the governance that goes with it – social complexity.

Social Complexity

image

The first three forces tend to create a perfect storm of complexity, since we have a situation where there is a lot of uncertainty. Many people hate this because unknown unknowns create fear if sufficient trust does not exist between all key parties. Developing trust is made all the harder because we are all a product of our experiences, with different values, cultural beliefs, personality styles and biases, so reconciling different world views on various issues can be difficult. Then you have the issue that many organisations have a blame culture and people position themselves to avoid it. This, my friends, is social complexity, and I know that all of you live this sort of stuff every day.

Yet under the right circumstances, groups can be remarkably intelligent and are often smarter than the smartest people in the group. Groups do not need to be dominated by exceptionally intelligent people in order to be smart – in fact diversity of group makeup is a much more important factor than individual IQ. Without this diversity, groups are less likely to arrive at a good answer to a given problem because they are likely to fall into groupthink. Groupthink is when highly cohesive groups make unsound decisions due to group pressures, ignoring possible alternatives. Every management team that only wants to hear the good news is likely to have fallen foul of groupthink. The same applies to dismissing all SharePoint governance except for the dial tone stuff.

However, there is an inherent paradox here:

  • The more parties involved in a collaboration, the more socially complex;
  • The more different these parties are, the more diverse, the more socially complex; and
  • This creates tension, resentment and lack of communication and a strong desire to go back to business as usual

This paradox between diversity and harmony is the toughest aspect of the four forces to tackle.

Key takeaways…

Way back in the very first post I stated that a key job of a SharePoint architect is to architect the conditions by which SharePoint is delivered. By this I mean that the architect has to grease the gears of collaboration between stakeholders and provide an environment that has the safety and structure for people to raise their issues, speak their truths and not get penalised for improving their understanding of the problem.

To enable this to happen, we need to tackle all four of the forces behind chaos that we have covered here. In short, if you focus governance efforts only on one of the forces you will simply inflame the others. Accordingly, the final diagram illustrates the key takeaway from f-law 7. Since the four forces behind chaos push out and create divergence, our governance efforts need to push back. But the important thing to note is the direction of my arrows. It is not necessarily appropriate to provide a direct counterforce via the “thou shalt not” type of all-or-nothing approach. Governance always has to steer towards the solution. The end always drives the means and not the other way around! Arbitrarily imposing such restrictions is often done without due consideration of the end in mind and therefore gets in the way of the steering process. This is why my arrows point inwards, but always push toward the solution on the right.

 

image

In this context, it can be seen why envisioning, stakeholder and goal alignment is so critical. Without it, it’s too easy for governance to become a self-fulfilling prophecy. So when you look at your own projects, draw my divergence/convergence diagram and estimate how big the pink box is. If you sense that there is divergence, then look at where your gaps are and make sure you create the conditions that help mitigate all of the forces and not just one of them.

Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com


The cloud isn’t the problem–Part 6: The pros and cons of patriotism

This entry is part 6 of 6 in the series Cloud

Hi all and welcome to my 6th post on the weird and wonderful world of cloud computing. The recurring theme in this series has been to point out that the technological aspects of cloud computing have never really been the key issue. Instead, I feel it is everything else around the technology, ranging from immature process, through to the effects of the industry shakeout and consolidation, through to the adaptive change required for certain IT roles. To that end, in the last post, we had fun at the expense of server huggers and the typical defence mechanisms they use to scare the rest of the organization into fitting into their happy-place world of in-house managed infrastructure. In that post I noted that you can tell an IT FUD defence because risk averse IT will almost always try to use their killer argument up-front to bury the discussion. For many server huggers or risk averse IT, the killer defence is the US Patriot Act issue.

Now just in case you have never been hit with the “…ah but what about the Patriot Act?” line and have no idea what the Patriot Act is all about, let me give you a nice metaphor. It is basically a legislative version of the “Men in Black” movies. Why Men in Black? Because in those movies, Will Smith and Tommy Lee Jones had the ability to erase the memories of anyone who witnessed any extra-terrestrial activity with that silvery little pen-like device. With the Patriot Act, US law enforcement now has a similar instrument. Best of all, theirs doesn’t need batteries – it is all done on paper.

image

In short, the Patriot Act provides a means for U.S. law enforcement agencies to seek a court order allowing access to the personal records of anyone without their knowledge, provided that it is in relation to an anti-terrorism investigation. This act applies to pretty much any organisation that has any kind of presence in the USA, and the rationale behind introducing it was to make it much easier for agencies to conduct terrorism investigations and better co-ordinate their efforts. After all, in the reflection and lessons learnt from the 9/11 tragedy, the need for better inter-agency co-ordination was a recurring theme.

The implication of this act for cloud computing should be fairly clear. Imagine our friendly MIBs Will Smith (Agent J) and Tommy Lee Jones (Agent K) bursting into Google’s headquarters, all guns blazing, forcing them to hand over their customers’ data. Then when Google staff start asking too many questions, they zap them with the memory eraser gizmo. (Cue Tommy Lee Jones stating “You never saw us and you never handed over any data to us.”)

Scary huh? It’s the sort of scenario that warms the heart of the most paranoid server hugger, because surely no-one in their right mind could mount a credible counter-argument to that sort of risk to the confidentiality and integrity of an organisation’s sensitive data.

But at the end of the day, cloud computing is here to stay and will no doubt grow. Therefore we need to unpack this issue and see what lies behind the rhetoric on both sides of the debate. Thus, I decided to look into the Patriot Act a bit further to understand it better. Of course, it should be clear here that I am not a lawyer, and these are just my own opinions from my research and synthesis of various articles, discussion papers and interviews. My personal conclusion is that all the hoo-hah about the Patriot Act is overblown. Yet in stating this, I have to also state that we are more or less screwed anyway (and always were). As you will see later in this post, there are great counter arguments that pretty much dismantle any anti-cloud arguments that are FUD based, but be warned – in using these arguments, you will demonstrate just how much bigger this thing is than cloud computing and get a sense of the broader scale of the risk.

So what is the weapon?

The first thing we have to do is understand some specifics about the Patriot Act’s memory erasing device. Within the vast scope of the act, the two areas of greatest concern in relation to data are the National Security Letter and the Section 215 order. Both provide authorities access to certain types of data, and I need to briefly explain them:

A National Security Letter (NSL) is a type of subpoena that permits certain law enforcement agencies to compel organisations or individuals to provide certain types of information, like financial and credit records or telephone and ISP records (Internet searches, activity logs, etc). Now NSLs existed prior to the Patriot Act, but the act loosened some of the controls that previously existed. Prior to the act, the information being sought had to be directly related to a foreign power or the agent of a foreign power – thereby protecting US citizens. Now, all agencies have to do is assert that the data being sought is relevant in some way to any international terrorism or foreign espionage investigation.

Want to see what an NSL looks like? Check this redacted one from Wikipedia.

A Section 215 Order is similar to an NSL in that it is an instrument that law enforcement agencies can use to obtain data. It is also similar to NSLs in that it existed prior to the Patriot Act – except back then it was called a FISA Order, named after the Foreign Intelligence Surveillance Act that enacted it. The type of data available under a Section 215 Order is more expansive than what you can eke out of an NSL, but a Section 215 Order does require a judge to let you get hold of it (i.e. there is some judicial oversight). In this case, the FBI obtains a 215 order from the Foreign Intelligence Surveillance Court, which reviews the application. What the Patriot Act did differently from the FISA Order was to broaden the definition of what information could be sought. Under the Patriot Act, a Section 215 Order can relate to “any tangible things (including books, records, papers, documents, and other items).” If these are believed to be relevant to an authorised investigation they are fair game. The act also eased the requirements for obtaining such an order. Previously, the FBI had to present “specific articulable facts” that provided evidence that the subject of an investigation was a “foreign power or the agent of a foreign power.” From my reading, there is now no requirement for evidence and the reviewing judge therefore has little discretion. If the application meets the requirements of Section 215, they will likely issue the order.

So now that we understand the two weapons that are being wielded, let’s walk through the key concerns being raised.

Concern 1: Impacted cloud providers can’t guarantee that sensitive client data won’t be turned over to the US government

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

This concern stems from the “loosening” of previous controls on both NSLs and Section 215 Orders. NSLs, for example, require no probable cause or judicial oversight at all, meaning that the FBI can issue these of its own volition. Now it is important to note that they could do this before the Patriot Act came into being too, but back then the parameters for usage were much stricter. Additionally, the breadth of information that can be collected is now greater. Add to that the fact that both NSLs and Section 215 Orders almost always include a compulsory non-disclosure or “gag” order, preventing notification to the data owner that this has even happened.

This concern is not only valid, but it has happened and continues to happen. Microsoft has already stated that it cannot guarantee customers would be informed of Patriot Act requests and, furthermore, they have also disclosed that they have complied with Patriot Act requests. Amazon and Google are in the same boat. Google has also disclosed that it has handed data stored in European datacenters back to U.S. law enforcement.

Now some of you – particularly if you live or work in Europe – might be wondering how this could happen, given the European Union’s strict privacy laws. Why is it that these companies have complied with the US authorities regardless of those laws?

That’s where the gag orders come in – which brings us onto the second concern.

Concern 2: The reach of the act goes beyond US borders and bypasses foreign legislation on data protection for affected providers

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

The example of Google – a US company – handing over data in its EU datacentres to US authorities highlights that the Patriot Act is more pervasive than one might think. In terms of who the act applies to, a terrific article put out by Alex C. Lakatos put it really well when he said:

Furthermore, an entity that is subject to US jurisdiction and is served with a valid subpoena must produce any documents within its “possession, custody, or control.” That means that an entity that is subject to US jurisdiction must produce not only materials located within the United States, but any data or materials it maintains in its branches or offices anywhere in the world. The entity even may be required to produce data stored at a non-US subsidiary.

Think about that last point – “non-US subsidiary”. This gives you a hint of how pervasive this is. So in terms of jurisdiction and whether an organisation can be compelled to hand over data and be subject to a gag order, the list is expansive. Consider these three categories:

  • US based company? Absolutely. That alone takes out Apple, Amazon, Dell, EMC (and RSA), Facebook, Google, HP, IBM, Symantec, LinkedIn, Salesforce.com, McAfee, Adobe, Dropbox and Rackspace.
  • Subsidiary company of a US company (incorporated anywhere else in the world)? It seems so.
  • Non US company that has any form of US presence? It also seems so. Now we are talking about Samsung, Sony, Nokia, RIM and countless others.

The crux of this bypassing argument is the gag order provisions. If the US company, subsidiary or regional office of a non US company receives the order, they may be forbidden from disclosing anything about it to the rest of the organisation.

Concern 3: Potential for abuse of Patriot Act powers by authorities

CleverWorkArounds short answer:

Yes this is true and it has happened already.

CleverWorkArounds long answer:

Since the Patriot Act came into force, there has been a marked increase in the FBI’s use of National Security Letters. According to this New York Times article, there were 143,000 requests between 2003 and 2005. Furthermore, according to a report from the Justice Department’s Inspector General in March 2007, as reported by CNN, the FBI was guilty of “serious misuse” of the power to secretly obtain private information under the Patriot Act. I quote:

The audit found the letters were issued without proper authority, cited incorrect statutes or obtained information they weren’t supposed to. As many as 22% of national security letters were not recorded, the audit said. “We concluded that many of the problems we identified constituted serious misuse of the FBI’s national security letter authorities,” Inspector General Glenn A. Fine said in the report.

The Liberty and Security Coalition went into further detail on this. In a 2009 article, they list some of the specific examples of FBI abuses:

  • FBI issued NSLs when it had not opened the investigation that is a predicate for issuing an NSL;
  • FBI used “exigent letters” not authorized by law to quickly obtain information without ever issuing the NSL that it promised to issue to cover the request;
  • FBI used NSLs to obtain personal information about people two or three steps removed from the subject of the investigation;
  • FBI has used a single NSL to obtain records about thousands of individuals; and
  • FBI retains almost indefinitely the information it obtains with an NSL, even after it determines that the subject of the NSL is not suspected of any crime and is not of any continuing intelligence interest, and it makes the information widely available to thousands of people in law enforcement and intelligence agencies.

Concern 4: Impacted cloud providers cannot guarantee continuity of service during investigations

CleverWorkArounds short answer:

Yes this is dead-set true and it has happened already.

CleverWorkArounds long answer:

An oft-overlooked side effect of all of this is that other organisations can be adversely affected. One aspect of cloud computing scalability that we talked about in part 1 is that of multitenancy. Now consider a raid on a datacenter. If cloud services are shared between many tenants, innocent tenants who had nothing whatsoever to do with the investigation can potentially be taken offline. Furthermore, the hosting provider may be gagged from explaining to these affected parties what is going on. Ouch!

An example of this happening was reported in the New York Times in mid 2011 and concerned Curbed Network, a New York blog publisher. Curbed, along with some other companies, had their service disrupted after an F.B.I. raid on their cloud provider’s datacenter. They were taken down for 24 hours because the F.B.I.’s raid on the hosting provider seized three enclosures which, unfortunately enough, included the gear they ran on.

Ouch! Is there any coming back?

As I write this post, I wonder how many readers are surprised and dismayed by my four risk areas. The little security guy in me says: if you are, then that’s good! It means I have made you more aware than you were previously, which is a good thing. I also wonder if some readers by now are thinking to themselves that their paranoid server huggers are right.

To decide this, let’s now examine some of the counter-arguments to the Patriot Act issue.

Rebuttal 1: This is nothing new – Patriot Act is just amendments to pre-existing laws

One common rebuttal is that the Patriot Act legislation did not fundamentally alter the right of the government to access data. This line of argument was presented in August 2011 by Microsoft legal counsel Jeff Bullwinkel in Microsoft Australia’s GovTech blog. After all, it was reasoned, the areas frequently cited for concern (NSLs and Section 215/FISA orders) were already there to begin with. Quoting from the article:

In fact, U.S. courts have long held that a company with a presence in the United States is obligated to respond to a valid demand by the U.S. government for information – regardless of the physical location of the information – so long as the company retains custody or control over the data. The seminal court decision in this area is United States v. Bank of Nova Scotia, 740 F.2d 817 (11th Cir. 1984) (requiring a U.S. branch of a Canadian bank to produce documents held in the Cayman Islands for use in U.S. criminal proceedings)

So while the Patriot Act might have made it easier in some cases for the U.S. government to gain access to certain end-user data, the right was always there. Again quoting from Bullwinkel:

The Patriot Act, for example, enabled the U.S. government to use a single search warrant obtained from a federal judge to order disclosure of data held by communications providers in multiple states within the U.S., instead of having to seek separate search warrants (from separate judges) for providers that are located in different states. This streamlined the process for U.S. government searches in certain cases, but it did not change the underlying right of the government to access the data under applicable laws and prior court decisions.

Rebuttal 2: Section 215 orders are not often used and there are significant limitations on the data you can get using an NSL.

Interestingly, it appears that the more powerful section 215 orders have not been used that often in practice. The best article to read to understand the detail is one by Alex Lakatos. According to him, fewer than 100 applications for section 215 orders were made in 2010. He says:

In 2010, the US government made only 96 applications to the Foreign Intelligence Surveillance Courts for FISA Orders granting access to business records. There are several reasons why the FBI may be reluctant to use FISA Orders: public outcry; internal FBI politics necessary to obtain approval to seek FISA Orders; and, the availability of other, less controversial mechanisms, with greater due process protections, to seek data that the FBI wants to access. As a result, this Patriot Act tool poses little risk for cloud users.

So while section 215 orders seem to be used sparingly, NSLs appear to be a dime a dozen – which I suppose is understandable since you don’t have to deal with a pesky judge and all that annoying due process. But the downside of NSLs from a law enforcement point of view is that the sort of data accessible via an NSL is somewhat limited. Again quoting from Lakatos (with emphasis mine):

While the use of NSLs is not uncommon, the types of data that US authorities can gather from cloud service providers via an NSL is limited. In particular, the FBI cannot properly insist via a NSL that Internet service providers share the content of communications or other underlying data. Rather […] the statutory provisions authorizing NSLs allow the FBI to obtain “envelope” information from Internet service providers. Indeed, the information that is specifically listed in the relevant statute is limited to a customer’s name, address, and length of service.

The key point is that the FBI has no right to content via an NSL. This fact may not stop the FBI from having a try at getting that data anyway, but it seems that savvy service providers are starting to wise up to exactly what information an NSL applies to. This final quote from the Lakatos article summarises the point nicely and, at the same time, offers cloud providers a strategy to mitigate the risk to their customers.

The FBI often seeks more, such as who sent and received emails and what websites customers visited. But, more recently, many service providers receiving NSLs have limited the information they give to customers’ names, addresses, length of service and phone billing records. “Beginning in late 2009, certain electronic communications service providers no longer honored” more expansive requests, FBI officials wrote in August 2011, in response to questions from the Senate Judiciary Committee. Although cloud users should expect their service providers that have a US presence to comply with US law, users also can reasonably ask that their cloud service providers limit what they share in response to an NSL to the minimum required by law. If cloud service providers do so, then their customers’ data should typically face only minimal exposure due to NSLs.

Rebuttal 3: Too much focus on cloud data – there are other significant areas of concern

This one, for me, is a perverse slam-dunk counter-argument that puts the FUD defence of a server hugger back in its box. The reason it is perverse is that it widens the debate in a way that may mean some server huggers are already exposed to the very risks they are raising. You see, the thing to always bear in mind is that the Patriot Act applies to data, not just the cloud. This means that data, in any shape or form, is susceptible in some circumstances if a service provider exercises some degree of control over it. When you consider all the applicable companies that I listed earlier in the discussion, like IBM, Accenture, McAfee, EMC, RIM and Apple, you start to think about the other services where this notion of “control” might come into play.

What about if you have outsourced your IT services and management to IBM, HP or Accenture? Are they running your datacentres? Are your executives using Blackberry services? Are you using an outsourced email spam and virus scanning filter supplied by a security firm like McAfee? Using federated instant messaging? Performing B2B transactions with a US based company?

When you start to think about all of the other potential touch-points where control over data is exercised by a service provider, things start to look quite disturbing. We previously established that pretty much any organisation with a US interest (whether US owned or not) falls under Patriot Act jurisdiction and may be gagged from disclosing anything. So sure… cloud applications are a potential risk, but it may well be that any one of these companies providing services regarded as “non cloud” might receive an NSL or section 215 order with a gag provision, ordering them to hand over data in their control. In the case of an outsourced IT provider, how can you be sure that the data is not coming straight out of your very own datacenter?

Rebuttal 4: Most other countries have similar laws

It also turns out that many other jurisdictions have similar types of laws. Canada, the UK, most countries in the EU, Japan and Australia are good examples. If you want to dig into this, examine Clive Gringa’s article on the UK’s Regulation of Investigatory Powers Act 2000 (RIPA) and an article published by the global law firm Linklaters (a SharePoint site, incidentally) on the legislation of several EU countries.

In the UK, RIPA governs the prevention and detection of acts of terrorism, serious crime and “other national security interests”. It is available to security services, police forces and authorities who investigate and detect these offences. The act regulates interception of the content of communications as well as envelope information (who, where and when). France has a bunch of acts which I won’t bore you too much with, but after 9/11 it instituted Act 2001-1062 of 15 November 2001, which strengthened the powers of French law enforcement agencies. Agencies can now order anyone to provide them with data relevant to an inquiry and, furthermore, the data may relate to a person other than the one subject to the disclosure order.

The Linklaters article covers Spain and Belgium too, and the laws are similar in intent and power. It specifically cites a case study in Belgium where the shoe was very much on the other foot: US company Yahoo was fined for not co-operating with Belgian authorities.

The court considered that Yahoo! was an electronic communication services provider (ESP) within the meaning of the Belgian Code of Criminal Procedure and that the obligation to cooperate with the public prosecutor applied to all ESPs which operate or are found to operate on Belgian territory, regardless of whether or not they are actually established in Belgium

I could go on citing countries and legal cases but I think the point is clear enough. Smile

Rebuttal 5: Many countries co-operate with US law enforcement under treaties

So if the previous rebuttal – that other countries have similar regimes in place – is not convincing enough, consider this one. Let’s assume that data is hosted by a major cloud services provider with absolutely zero presence in, or contacts with, the United States. There is still a possibility that this information may be accessible to the U.S. government if needed in connection with a criminal case. The means by which this can happen is via international treaties relating to legal assistance, called Mutual Legal Assistance Treaties (MLATs).

As an example, the US and Australia have had a longstanding bilateral arrangement. This provides for law enforcement cooperation between the two countries and, under this arrangement, either government can potentially gain access to data located within the territory of the other. To give you an idea of what such a treaty might look like, consider the scope of the Australia–US one. The scope of assistance is wide – note in particular the items relating to providing records and executing searches and seizures:

  • (a) taking the testimony or statements of persons;
  • (b) providing documents, records, and other articles of evidence;
  • (c) serving documents;
  • (d) locating or identifying persons;
  • (e) transferring persons in custody for testimony or other purposes;
  • (f) executing requests for searches and seizures and for restitution;
  • (g) immobilizing instrumentalities and proceeds of crime;
  • (h) assisting in proceedings related to forfeiture or confiscation; and
  • (i) any other form of assistance not prohibited by the laws of the Requested State.

For what it’s worth, if you are interested in the boundaries and limitations of the treaty, it states that the “Central Authority of the Requested State may deny assistance if”:

  • (a) the request relates to a political offense;
  • (b) the request relates to an offense under military law which would not be an offense under ordinary criminal law; or
  • (c) the execution of the request would prejudice the security or essential interests of the Requested State.

Interesting, huh? Even if you host in a completely independent country, you had better check the treaties it has in place with other countries.

Rebuttal 6: Other countries are adjusting their laws to reduce the impact

The final rebuttal to the whole Patriot Act argument that I will cover is that things are moving fast, and countries are acting to mitigate the issue regardless of the points and counterpoints I have presented here. Once again I will refer to an article from Alex Lakatos, who provides a good example. Lakatos writes that the EU may rewrite its laws to make it illegal for the US to invoke the Patriot Act in certain circumstances.

It is anticipated, however, that at the World Economic Forum in January 2012, the European Commission will announce legislation to repeal the existing EU data protection directive and replace it with a more robust framework. The new legislation might, among other things, replace EU/US Safe Harbor regulations with a new approach that would make it illegal for the US government to invoke the Patriot Act on a cloud-based or data processing company in efforts to acquire data held in the European Union. The Member States’ data protection agency with authority over the company’s European headquarters would have to agree to the data transfer.

Now, Lakatos cautions that this change may take a while before it actually turns into law, but it is nevertheless something that should be monitored by cloud providers and cloud consumers alike.

Conclusion

So what do you think? Are you enlightened and empowered or confused and jaded? Smile

I think that the Patriot Act issue is obviously a complex one that is not well served by arguments based on fear, uncertainty and doubt. The risks are real and there are precedents that demonstrate those risks. Scarily, it doesn’t take much digging to realise that those risks are more widespread than one might initially consider. Thus, whether you are going to play the Patriot Act card for FUD reasons or you are making a genuine effort to mitigate the risks, you need to look at all of the touch points where a service provider might exercise a degree of control. They may not be where you think they are.

In saying all of this, I think this examination highlights some strategies that can be employed by cloud providers and cloud consumers alike. Firstly, if I were a cloud provider, I would state my policy on how much data will be handed over when confronted by an NSL (since an NSL has clear limitations). Many providers may already do this, so to turn it around to the customer, it is incumbent on cloud consumers to confirm with their providers where they stand. I don’t know if there is much value in asking a cloud provider whether they are exempt from the reach of the Patriot Act. Maybe it’s better to assume they are affected and instead ask them how they intend to mitigate their customers’ downstream risks.

Another obvious strategy for organisations is to encrypt data before it is stored on cloud infrastructure. While that is likely not an option in a software-as-a-service model like Office 365, it is certainly an option in infrastructure- and platform-as-a-service models like Amazon and Azure. That would reduce the impact of a Section 215 order being executed, as the cloud provider is unlikely to have the ability to decrypt the data.
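To make this concrete, here is a minimal sketch of the “encrypt it before it leaves the building” idea. It uses Python’s cryptography package purely for illustration (that choice, along with the file and key names, is my assumption, not something tied to any particular provider); any envelope-encryption library or vendor SDK would do. The point is simply that the key never leaves your own environment, so anything the provider could hand over under a court order is ciphertext only.

```python
# Minimal sketch: encrypt a file locally before uploading it to IaaS/PaaS storage.
# Assumes the third-party 'cryptography' package is installed (pip install cryptography).
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_for_cloud(source: Path, destination: Path, key_file: Path) -> None:
    """Encrypt 'source' into 'destination' using a key that stays on-premises."""
    if key_file.exists():
        key = key_file.read_bytes()
    else:
        key = Fernet.generate_key()       # generated and stored locally, never uploaded
        key_file.write_bytes(key)
    ciphertext = Fernet(key).encrypt(source.read_bytes())
    destination.write_bytes(ciphertext)   # only this encrypted blob goes to the cloud

def decrypt_from_cloud(source: Path, destination: Path, key_file: Path) -> None:
    """Reverse the process after downloading the ciphertext back from the provider."""
    plaintext = Fernet(key_file.read_bytes()).decrypt(source.read_bytes())
    destination.write_bytes(plaintext)

if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    encrypt_for_cloud(Path("payroll.xlsx"), Path("payroll.xlsx.enc"), Path("onpremises.key"))
```

Key management is the hard part of this approach, of course – lose the key and you lose the data – but the principle stands: if only ciphertext sits with the provider, a seized enclosure or a disclosure order yields very little.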

Finally (and to sound like a broken record), a little information management governance would not go astray here. Organisations need to understand what data is appropriate for what range of cloud services. This is security 101, folks, and if you are prudent in this area, the cloud shouldn’t necessarily be big and scary.

Thanks for reading

Paul Culmsee

www.hereticsguidebooks.com

www.sevensigma.com.au

P.S. Do not for a second think this article is exhaustive, as this stuff moves fast. Always do your own research and do not rely on an article on some guy’s blog that may be out of date before you know it.


The cloud isn’t the problem–Part 5: Server huggers and a crisis of identity

This entry is part 5 of 6 in the series Cloud

Hi all

Welcome to my fifth post that delves into the irrational world of cloud computing. After examining the not-so-obvious aspects of Microsoft, Amazon and the industry more broadly, it’s time to shift focus a little. Now, the appeal of the cloud really depends on your perspective. To me, there are three basic motivations for getting in on the act…

  1. I can make a buck
  2. I can save a buck
  3. I can save a buck (and while I am at it, escape my pain-in-the-ass IT department)

If you haven’t guessed it, this post will examine #3 and look at what the cloud means for the perennial issue of the disconnect between IT departments and the business. I recently read an article over at CIO magazine that coined the term “Server Huggers” for the phenomenon I am about to describe. So to set the flavour for this discussion, let me tell you about the biggest secret in organisational life…

We all have an identity crisis (so get over it).

In organisations, there are roles that I would call transactional (i.e. governed by process and clear KPIs) and those that are knowledge-based (governed by gut feel and insight). Whilst most roles actually entail both of these elements, most of us in SharePoint land are in the latter camp. In fact, we spend a lot of time in meeting rooms “strategizing” the solutions that our more transactionally focused colleagues will be using to meet their KPIs. Beyond SharePoint, this also applies to Business Analysts, Information Architects, Enterprise Architects, Project Managers and pretty much anyone with the word “senior”, “architect”, “analyst” or “strategic” in their job title.

But there is a big, fat elephant in the “strategizing room” of certain knowledge-worker roles that is at the root of some irrational organisational behaviour. Many of us are suffering a role-based identity crisis. To explain this, let’s pick a straw-man example of one of the most conflicted roles of all right now: Information Architects.

One challenge with the craft of IA is the pace of change, since IA today looks very different from its library and taxonomic roots. Undoubtedly, it will look very different ten years from now too, as it gets assailed by various other roles and perspectives, each believing their version of rightness is more right. Consider this slightly abridged quote from Joshua Porter:

Worse, the term “information architecture” has over time come to encompass, as suggested by its principal promoters, nearly every facet of not just web design, but Design itself. Nowhere is this more apparent than in the latest update of Rosenfeld and Morville’s O’Reilly title, where the definition has become so expansive that there is now little left that isn’t information architecture […] In addition, the authors can’t seem to make up their minds about what IA actually is […] (a similar affliction pervades the SIGIA mailing list, which has become infamous for never-ending definition battles.) This is not just academic waffling, but evidence of a term too broadly defined. Many disciplines often reach out beyond their initial borders, after catching on and gaining converts, but IA is going to the extreme. One technologist and designer I know even referred to this ever-growing set of definitions as the “IA land-grab”, referring to the tendency that all things Design are being redefined as IA.

You can tell rather easily when a role is suffering an identity crisis: it is when people in the role start to muse that the title no longer reflects what they do and call for new roles that better reflect the environment they find themselves in. Evidence for this exists further on in Porter’s post. Check out the line I have marked in bold below:

In addition, this shift is already happening to information architects, who, recognizing that information is only a byproduct of activity, increasingly adopt a different job title. Most are moving toward something in the realm of “user experience”, which is probably a good thing because it has the rigor of focusing on the user’s actual experience. Also, this as an inevitable move, given that most IAs are concerned about designing great things. IA Scott Weisbrod, sees this happening too: “People who once identified themselves as Information Architects are now looking for more meaningful expressions to describe what they do – whether it’s interaction architect or experience designer.”

So while I used Information Architects as an example of how the pace of change causes an identity crisis, the advent of the cloud doesn’t actually cause too many IAs (or whatever they choose to call themselves) to lose much sleep. But there are other knowledge-worker roles that have not felt the effects of change in the same way as their IA cousins. In fact, for the better part of twenty years one group has actually benefited greatly from the pace of change. Only now is the ground under their feet starting to shift, and the resulting behaviours are starting to reflect the emergence of an identity crisis that some would say is long overdue.

IT Departments and the cloud

At a SharePoint Saturday in 2011, I was on a panel and we were asked by an attendee what effect Office 365 and other cloud-based solutions might have on a traditional IT infrastructure role. The person asking was an infrastructure guy, and his question was essentially about how his role might change as cloud solutions become more and more mainstream. Of course, none of the SharePoint nerds on the panel wanted to touch that question with a bargepole, and all heads turned to me since apparently I am “the business guy”. My reply was that he was sensing a change – the commoditisation of certain aspects of IT roles. Did that mean he was going to lose his job? Unlikely, but nevertheless, when change is upon us, many of us tend to place more value on what we will lose than on what we will gain. Our defence mechanisms kick in.

But let’s take this a little further: the average tech guy comes in two main personas. The first is the tech cowboy who documents nothing, half-completes projects then loses interest, is oblivious to how far in over their head they are, and generally gives IT a bad name. They usually have a lot of intellectual intelligence (IQ), but not so much emotional intelligence (EQ). Ben Curry once referred to this group as “dumb smart guys.” The second persona is the conspiracy theorist who has had to clean up after such a cowboy. This person usually has more skills and knowledge than the first guy, writes documentation and generally keeps things running well. Unfortunately, they can also give IT a bad name. This is because, after having to pick up the pieces of something not of their doing, they tend to develop a mother-hen reflex based on a pathological fear of being paged at 9pm to come in and recover something they had no part in causing. The aforementioned cowboys rarely last the distance, and therefore, over time, IT departments begin to act as risk minimisers rather than business enablers.

Now IT departments will never see it this way, of course, instead believing that they enable the business through their risk minimisation. Having spent 20 years as a paranoid, conspiracy-theorist, security-type IT guy, I totally get why this happens, as I was the living embodiment of this attitude for a long time. Technology is getting insanely complex while users’ innate ability to do something really risky and dumb is increasing. Obviously, such risk needs to be managed and, accordingly, a common characteristic of such an IT department is the word “no” to pretty much any request that involves introducing something new (banning iPads or espousing the evils of DropBox are the best examples I can think of right now). When I wrote about this issue in the context of SharePoint user adoption back in 2008, I had this to say:

The mother hen reflex should be understood and not ridiculed, as it is often the user’s past actions that has created the reflex. But once ingrained, the reflex can start to stifle productivity in many different ways. For example, for an employee not being able to operate at full efficiency because they are waiting 2 days for a helpdesk request to be actioned is simply not smart business. Worse still, a vicious circle emerges. Frustrated with a lack of response, the user will take matters into their own hands to improve their efficiency. But this simply plays into the hands of the mother hen reflex and for IT this reinforces the reason why such controls are needed. You just can’t trust those dog-gone users! More controls required!

The long-term legacy of increasing technical complexity and risk is that IT departments become slow-moving and find it difficult to react to the pace of change. Witness the number of organisations still running parts of their business on Office 2003, IE6 and Windows XP. The rest of the organisation starts to resent using old tools and the imposition of process and structure for no tangible gain. The IT department develops a reputation for being difficult to deal with and for taking ages to get anything done. This disconnect begins to fester and, little by little, both IT and “the business” develop a rose-tinged view of themselves (which is known as groupthink) and a misguided perception of the other.

At the end of the day though, irrespective of logic or who has the moral high ground in the debate, an IT department with a poor reputation will eventually lose. This is because IT is no longer seen as a business enabler, but as a cost centre. Just as they did with the IT outsourcing fad over the last decade, organisational decision makers will read CIO magazine articles about server huggers and look longingly to the cloud as applications become more sophisticated and more and more traditional vendors move into the space, thus legitimising it. IT will be viewed, however unfairly, as a burden where the cost is not worth the value realised. All the while, to conservative IT, the cloud represents some of their worst fears realised. Risk! Risk! Risk! Then the vicious circle of the mother-hen reflex will continue, because rogue cloud applications will be commissioned without IT knowledge or approval. Now we are back to the bad old days of the rogue MS Access or SharePoint deployments that drove the call for control-based governance in the first place!

<nerd interlude>

Now, to the nerds reading this post who find it incredibly frustrating that their organisation will happily pump money into some cloud-based flight of fancy but whines when you want to upgrade the network: take note of this paragraph, as it is really (really) important! I will tell you the simple reason why people are more willing to spend money on fluffy marketing than on IT. In the eyes of a manager who needs to make a profit, sponsoring a conference or making the reception area look nice is seen as revenue generating. Those who sign the cheques do not like to spend capital on stuff unless they can see that it directly contributes to revenue generation! Accordingly, a bunch of servers (and, for that matter, a server room) are often not considered expenditure that generates revenue but are instead considered overhead! Overhead is something that any smart organisation strives to reduce to remain competitive. The moral of the story? Stop arguing cloud vs. internal on what direct costs are incurred, because people will not care! You would do much better to demonstrate to your decision makers that IT is not an overhead. Depending on how strong your mother-hen reflex is and how long it has been in place, that might be an uphill battle.

</nerd interlude>

Defence mechanisms…

Just as they did for the poor old Information Architect, the rules of the game are changing for IT when it comes to cloud solutions. I am not sure how it will play out, but I am already starting to see the defence mechanisms kicking in. A CIO interviewed in the “Server Huggers” article I referred to earlier (Scott Martin) was hugely pro-cloud. He suggested that many CIOs see cloud solutions as a threat to the empire they have built:

I feel like a lot of CIOs are in the process of a kind of empire building.  IT empire builders believe that maintaining in-house services helps justify their importance to the company. Those kinds of things are really irrational and not in the best interest of the company […] there are CEO’s who don’t know anything about technology, so their trusted advisor is the guy trying to protect his job.

A client of mine in Sydney told me he asked his IT department about the use of hosted SharePoint for a multi-organisational project, and the reply was a giant “hell no,” based primarily on fear, uncertainty and doubt. With IT, such FUD is always cloaked in areas of quite genuine risk. There *are* many core questions that we must ask cloud vendors when taking the plunge, because not to do so would be remiss (I will end this post with some of those questions). But the key issue is whether the real underlying reason behind those questions is to shut down the debate or to genuinely understand the risks and implications of moving to the cloud.

How can you tell an IT department is likely using a FUD defence? Actually, it is pretty easy, because conservative IT is very predictable – they will likely try to hit you with what they think is their slam-dunk counter-argument first up. Therefore, they will attempt to bury the discussion with the US Patriot Act issue. I’ve come across this issue myself, and Mark Miller at FPWeb mentioned to me that it comes up all the time when they talk to clients about SharePoint hosting. (I am going to cover the Patriot Act issue in the next post because it warrants a dedicated one.)

If the Patriot Act argument fails to dent unbridled cloud enthusiasm, the next layer of defence is to highlight cloud-based security (identity, authentication and compliance) as well as downtime risk, citing examples such as the September outage of Office 365, Salesforce.com’s well-publicised outages, and the Amazon outage that took out Twitter, Reddit, Foursquare, Turntable.fm, Netflix and many, many others. The fact that many IT departments do not actually have the level of governance and assurance over their own systems that they aspire to will be conveniently overlooked.

Failing that, the last line of defence is to call into question the commercial viability of cloud providers. We talked about the issues facing the smaller players in the last post, but it is not just them. What if the provider decides to change direction and discontinue a service? Google will likely be cited, since it has a habit of axing cloud-based services that don’t reach critical mass (the most recent casualty is Google Health, being retired as I write this). The risk of a cloud provider going out of business or withdrawing a service is much more serious than the failure of a software supplier: at least when the software is on-premise, you still have the application running and can use it.

Every FUD defence is based on truth…

Now, as I stated above, all of the concerns listed are genuine things to consider before embarking on a cloud strategy. Prudent business managers and CIOs must weigh the pros and cons of a cloud offering before rushing into a deployment that may not be appropriate for their organisation. Equally, though, it’s important to be able to see through a FUD defence when it’s presented. The easiest way to do this is to do some of your own investigation first.

To that end, you can save yourself a heap of time by checking out the work of Richard Harbridge. Richard did a terrific cloud talk at the recent Share 2011 conference. You can view his slide deck here, and I recommend going through slides 48–81 in particular. He has provided a comprehensive summary of considerations and questions to ask. Among other things, he offered a list of questions that any organisation should be asking providers of cloud services. I have listed some of them below and encourage you to check out his slide deck, as it covers way more than what I have covered here.

Security / Storage / Identity & Access

  • Who will have access to my data?
  • Do I have full ownership of my data?
  • What type of employee/contractor screening do you do before you hire them?
  • How do you detect if an application is being attacked (hacked), and how is that reported to me and my employees?
  • How do you govern administrator access to the service?
  • What firewalls and anti-virus technology are in place?
  • What controls do you have in place to ensure the safety of my data while it is stored in your environment?
  • What happens to my data if I cancel my service?
  • Can I archive environments?
  • Will my data be replicated to any other datacenters around the world (if yes, which ones)?
  • Do you offer single sign-on for your services?
  • Do you offer Active Directory integration?
  • Do all of my users have to rely solely on web-based tools?
  • Can users work offline?
  • Do you offer a way for me to run your application locally, and how quickly can I revert to the local installation?
  • Do you offer on-premise, web-based, or mixed environments?

Reliability & Support / Performance

  • What is your Disaster Recovery and Business Continuity strategy?
  • What is the retention period and recovery granularity?
  • Is your Cloud Computing service compliant with [insert compliance regime here]?
  • What measures do you provide to assist compliance and minimise legal risk?
  • What types of support do you offer?
  • How do you ensure we are not affected by upgrades to the service?
  • What are your SLAs, and how do you compensate when they are not met? (see the availability sketch after this list)
  • How fast is the local network?
  • What is the storage architecture?
  • How many locations do you have and how are they connected?
  • Have you published any benchmark scores for your infrastructure?
  • What happens when there is over-subscription?
  • How can I ensure CPU and memory are guaranteed?
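On the SLA question above, it is worth translating availability percentages into the downtime they actually permit before you accept a provider’s answer, because “three nines” sounds much better than it is. Here is a quick back-of-the-envelope sketch (the percentages are examples only, not any particular provider’s figures):

```python
# Convert an availability SLA into the downtime per year it still allows.
HOURS_PER_YEAR = 24 * 365

def allowed_downtime_hours(availability_percent: float) -> float:
    """Hours of outage per year that remain within the stated SLA."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

for sla in (99.0, 99.9, 99.99):  # example figures only
    print(f"{sla}% uptime still allows ~{allowed_downtime_hours(sla):.1f} hours of downtime per year")

# 99.0% -> ~87.6 hours, 99.9% -> ~8.8 hours, 99.99% -> ~0.9 hours per year
```

It also puts the provider’s compensation scheme in perspective: a few hours of service credits may not come close to the business cost of an outage.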
 

Conclusion and looking forward…

For some organisations, the lure of cloud solutions is very seductive. From a financial perspective, it saves a lot of capital expenditure. From a time perspective, it can be deployed very quickly, and from a maintenance perspective, it takes the burden away from IT. It sounds like a winner when put that way. But the real issue is that the changing cloud paradigm potentially impacts the wellbeing of some IT professionals and IT departments because it calls into question certain patterns and practices within established roles. It also represents a loss of control and, as I said earlier, people often place a higher value on what they will lose than on what they will gain.

Irrespective of this, whether you are a new-age, cloud-loving CIO or a server hugger, any decision to move to the cloud should be about real business outcomes. Don’t blindly accept what the sales guy tells you. Understand the risks as well as the benefits. Leverage the work Richard has done and ask the cloud providers the hard questions. Look for real-world stories (like my second and third articles in this series) that illustrate where the services have let people down.

For some, cloud will be very successful. For others, the gap between expectations and reality will come with a thud.

Thanks for reading

Paul Culmsee

www.sevensigma.com.au

www.hereticsguidebooks.com
