Controlling your Azure Functions Code

With any coding project it is important to understand how to manage your code assets.

This is good practice to undertake when your project is small enough so it is easy, and it becomes vital when the volume of code increases and you have many people working on it.

App Services in Microsoft Azure have a long history of version control support through a number of popular source control systems, such as Visual Studio Team Services and GitHub, and a number of file-based solutions, such as Dropbox and OneDrive.

Azure Functions were introduced at //build/ in 2016 as the new kid on the block in App Services, and whilst they have recently moved from Web + Mobile to Compute in the portal, they are still part of the core App Service model in Microsoft Azure.

Because of this, they have a lot of the same capabilities around source control, and just as importantly continuous deployment, as the underlying platform service.

For this post, I thought it would be useful to check out the structure of a Function App and how you can easily set up continuous deployment.

I will use GitHub for source control and GitHub Desktop to manage my local repository, and I will also cover the use of a file-based solution, which in this case will be Dropbox.

First I need to create a new Function App and put some functions in it. Strictly speaking, you don’t need to create the functions first, as you can deploy directly and immediately from source control without having the function stubs, but it’s useful to first understand the folder structure of a Function App as that is important when you set things up.

A Function App sits in the same file structure as a normal App Service app, with a number of folders off the root. You can use a Debug Console in Kudu to have a browse around the full structure.

KuduRoot

The real work goes on under the site folder, and it’s under here that you’ll find wwwroot, which is the main folder that contains all the functions themselves.

KuduWWW

The important part about the wwwroot folder is that the contents are the root for any source control you set up and not the wwwroot folder itself.

I have already created a couple of functions to show some general structure: one with a timer trigger written in C#, and one with a timer trigger written in Node.js.

Each folder contains the function itself, plus possibly a project.json file for C# functions to allow NuGet package management, or a package.json file for Node.js functions for npm package management.

NuGet packages are restored and stored off the main root of the Function App, whilst npm packages are restored and stored under the specific function to which they apply.
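To put that into context, the layout under wwwroot for a Function App with those two functions would look something like this (the function folder names here are just examples; they will be whatever you called your functions):

D:\home\site\wwwroot
├── host.json
├── TimerTriggerCSharp1
│   ├── function.json
│   ├── project.json
│   └── run.csx
└── TimerTriggerNode1
    ├── function.json
    ├── index.js
    └── package.json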

So now we know the structure, how do we set up source control? Let’s start by doing this for GitHub.

From within Kudu you can easily download a zip file that contains the contents of a folder, so we grab that and unzip it to a local folder.

WWWDownload

Using GitHub Desktop we create a new local repository and then put the contents of the zipped wwwroot folder in it.

This is the important bit: the repository needs to point to the files and folders that are within the wwwroot folder, not the wwwroot folder itself.

LocalGit

We now sync this up to our GitHub account and we’re ready to set things up in the Function App.

RemoteGit

Within the Function App we go to App Service Settings, and choose Deployment Options.

DeploymentOptions

We need to hook up our GitHub account, which is just a simple authentication step, and then select the repo we’re interested in.

GitHubDeploy

And that’s really all there is to it.

If we make a change to the local repository and then push, we can see the code is changed accordingly. You’ll also notice that the code view in the browser shows that the file is read only because it is being updated in source control.

UpdatedCode

You get a bunch of options for looking at what has been happening with deployments and a chance to redeploy if necessary.

Redeploy

So what about file-based deployments? If we copy our code up to Dropbox, again remembering that we need to use the folder structure UNDER wwwroot, and not wwwroot itself, then the process is largely the same.

First we unhook the GitHub deployment, and then attach our deployment process to Dropbox. Again we do the authorisation dance, and choose the folder containing our code. It is important to note that the folder you’re looking for is under [DropboxRoot]\Apps\Azure\[FunctionAppName], so that is where you need to put your wwwroot folder contents.

If we open our code from Dropbox directly, then when we do a Save, we would expect the code to be pushed directly to our Function App. However, I’ve found that you have to manually initiate a sync from the Deployment options blade.

UpdatedCodeDropbox

So, again pretty simple to set up and configure.

Now obviously there is no error checking here, or deployment slots like you have with Web Apps, so some caution should be exercised.

Does it work the other way? If we change our code in the online editor does our source code repository get updated?

Well, no it doesn’t, I’m afraid, because once you’re hooked up and deploying you get the read-only message.

So there you have it: the file structure of a Function App is pretty straightforward, and taking control of your code is simple enough if you follow the basic rule about which folder level you point to. Just to be 100% sure, that is the wwwroot folder contents, NOT the wwwroot folder itself.

Happy functioning!

IoT Hub from the Command Line

If you’re like me, in other words old enough to have spent large periods of time working at a command prompt, then you’ll always be looking for command line tools to get things done.

If you’re like me and spending lots of time doing things with IoT Hub, then wouldn’t it be nice to have a command line tool for that.

I spend time in the Azure Portal and in tools like Device Explorer managing devices and checking connectivity, but I was looking for a CLI for these tasks when I came across iothub-explorer.

iothub-explorer is a Node.js application, so before you can have a play you’ll need to head on over to the Node.js website and download Node.js if you don’t have it installed.

You use the node package manager (npm) to install the command line interface tool.

npm install -g iothub-explorer@latest

This will install it globally so you’ll be able to have a good play from your command prompt of choice; I like to use PowerShell.

If you want to examine the dependencies, just issue the following from a command prompt.

npm list -g --depth=1

This will list all the things that are also installed as part of iothub-explorer.

iothub-explorer@1.0.8
  ├── azure-event-hubs@0.0.2
  ├── azure-iot-common@1.0.8
  ├── azure-iot-device@1.0.8
  ├── azure-iothub@1.0.10
  ├── bluebird@3.4.1
  ├── colors-tmpl@1.0.0
  ├── nopt@3.0.6
  ├── prettyjson@1.1.3
  └── uuid@2.0.2

If you run iothub-explorer on its own or with the help command line argument you get a list of supported commands.

iothub-commands

So as you can see, you can do an awful lot with this tool.

The easiest way of having a play from a PowerShell command line is to store your connection string in a variable.

$conn = "[YOUR CONNECTION STRING]"

You can then start a new session with your IoT Hub by simply logging in to it.

iothub-explorer login $conn

By default this gives you a session lasting one hour. If you need a longer period, the command has a --duration argument that takes the number of seconds you need.
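For example, to stay logged in for a couple of hours (7200 seconds), something like this should do it:

iothub-explorer login $conn --duration=7200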

Once you’re logged in you can interact with your IoT Hub in lots of the same ways you can from Device Explorer, except this time from the command line, which is pretty cool, and of course scriptable!

If you want to take a look at the devices you currently have registered, just get a list. Since you previously logged in you don’t need to provide the connection string, so think of all the keystrokes you’ll save.

iothub-explorer list

This returns all the information about each device, but again there is a handy command line argument to limit the level of information: just use --display and comma-separate the properties you want. The argument is used in a few places, so I’d suggest issuing the command without it first, taking a look at what there is, and then just grabbing what you need in future.

iothub-explorer list --display="deviceId, connectionState"

If you just want a single device, use “get” instead of “list” and pass the deviceId.
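For example, using one of the deviceIds from the list output, something like:

iothub-explorer get [deviceId]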

You can also create devices (for instance by including some JSON that defines your device) and retrieve a SAS token for a device.
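Something along these lines should do the trick (check iothub-explorer help for the exact syntax of your version):

iothub-explorer create [deviceId]
iothub-explorer sas-token [deviceId]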

If you’re really interested in checking how IoT Hub works then the most interesting commands allow monitoring of events sent by a device, and also sending cloud-to-device messages directly. To top even that though, you can also monitor the feedback queue and check for acknowledgement messages from the device which is very cool.

So how do we do that?

First let’s take a look at monitoring events from devices. You’ll need a console application to send messages to your device, that’s just boilerplate code and I’ve covered that to a degree before so I’ll not repeat that here. Once you’ve hooked that up and are sending messages, monitoring from the command line is simple.

iothub-explorer $conn monitor-events [deviceId]

You’ll notice we need to pass the connection string this time, this is one command that still requires it, but I assume that will change in a later version.

Once issued, the command just waits and displays messages as they arrive from the device.

iothub-monitor.png

For simulating messages sent to a device from the cloud you’ll need another console application that is pretending to be a device in a receive loop. Again that’s boilerplate that I’ve covered before.

You’ll need the console application running because the whole point is to check that messages are received and an acknowledgement is sent. For this one you need 2 PowerShell command windows and don’t forget to set the connection string in the new window.

In one window, issue the following and it’ll sit there listening to the feedback queue on the IoT Hub.

iothub-explorer retrieve

You can add an argument, --messages=n, where n is the number of messages it’ll wait to receive before stopping; otherwise it’ll just wait until you quit.

In the other window, you need to send a message to the cloud endpoint for the device, which will then be received by the console application. You’ll want to request an acknowledgement; after all, that is what we’re trying to show here!

For ease, I’ve created a message variable containing some simple JSON that I’m sending.
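Something along these lines is enough (the payload itself is arbitrary, just a placeholder):

$msg = '{"text":"sprinkler on"}'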

iothub-explorer send [deviceId] $msg --ack=full

So in this we’ve requested a full acknowledgement.

iothub-send.png

You’ll notice we get a message ID, this is what we use to see the feedback message.

iothub-receive.png

We can see we have the same ID, so we can correlate these two events, which is pretty amazing.

You can see that with a simple command line tool we are able to interact with IoT Hub and monitor events from devices, and both send messages to a device from the cloud and receive feedback when that message has been received by the device.

I love iothub-explorer, I can see me using this all the time.

I hope you enjoy using it as much as I do!

Fun with Azure Functions and the Emotion API

I was speaking recently at the Perth MS Cloud Computing User Group about Azure Functions.

I wanted to do a demo-heavy presentation for a change, including some live coding, and one of the scenarios I was keen to cover was the Cortana Intelligence Suite.

For the demo I thought it would be good to create a function that used a blob trigger, sent the uploaded object to be analysed by Cortana Intelligence, and stored the results in a DocumentDB collection.

The main reason for this was to demonstrate not only basic functionality but also how to reference external assemblies and use package restore.

The thing I like about Azure Functions is that this sort of scenario is actually really trivial to wire up.

Azure Function Setup

I used a Dynamic App Service plan, which is a great way of getting scale on a pay-as-you-go basis.

Once you’ve created a Function App it is straightforward to create a basic function. As mentioned, I went for a blob trigger in C# as I wanted to process uploaded images.

A basic blob trigger function looks like the following.

using System;
public static void Run(string myBlob, TraceWriter log)
{
    log.Info($"C# Blob trigger function processed: {myBlob}");
}

This is not really very useful as we want to bring in an object that contains all the information about our uploaded blob. For this we change our input from a string to an ICloudBlob.

We need to set up our output. I won’t go through how to do that here as the documentation covers it very well. We want to send documents to DocumentDB once we’ve set up our connection, so for that we need to use IAsyncCollector as our output and change the method to be async.

So we get:

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static async Task Run(ICloudBlob myBlob, IAsyncCollector<object> outputDocument, TraceWriter log)
{
    log.Info($"C# Blob trigger function processed: {myBlob}");
}

Good, so now we have a method signature that is useful!

Cortana Intelligence Suite

I decided to use the Emotion API for my testing; it’s a great API that tells you what the percentages of emotions are in the faces of people in photos.

You can test the API yourself: you just need to sign up and get an API key, then provide a URI for your image, which in our case is the URI of our uploaded blob, and voila!

I wanted to check my acting abilities by taking some photos of expressions I was making and check whether I should make the move to Hollywood!

To add this to your Resource Group, search for “Cognitive Services API” in the Azure Portal, click Create as normal, and then choose the API type you’re interested in. As mentioned, I chose the Emotion API.

Azure Function Next Steps

We now have the basics for our function, with a DocumentDB endpoint added and the function signature set up correctly. If we run it as it is it will work, but it won’t be terribly useful.

Once we’ve got our Emotion API key we need to store that in our Application Settings. To do that, click the “Function app settings” above the code and click the “Go to App Service Settings” button. Go to “Application settings” in the resultant Settings blade, and add a new App setting.

We need to use some assemblies that are included in the Functions deployment but that need to be referenced. To do that we use “#r” to bring them in.

#r "Microsoft.WindowsAzure.Storage"
#r "System.Web"
#r "System.Runtime"
#r "System.Threading.Tasks"
#r "System.IO"

We also need to bring in the assemblies required by the Emotion API. These are not included in the standard Functions set of assemblies, so we need to do a package restore for those.

For this we create a project.json file:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.ProjectOxford.Emotion": "1.0.251"
      }
    }
  }
}

If you’re familiar with App Services, you can do this in a couple of ways:

  • Go to Visual Studio Online from the App Service Tools menu and create the file in place in the main folder of the function
  • Create in the editor of your choice, and upload to the main folder using the Debug console in Kudu

Once you upload the file, you’ll see the package restore happening in the log streaming window below your code.

The final part of the puzzle is to complete the code required to take the uploaded blob, send the information to the Emotion API, and then create a document and send it to the output.

The final code should look like the following:

#r "Microsoft.WindowsAzure.Storage"
#r "System.Web"
#r "System.Runtime"
#r "System.Threading.Tasks"
#r "System.IO"
#r "Newtonsoft.Json"

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;
using System.Web.Configuration;
using Microsoft.ProjectOxford.Common;
using Microsoft.ProjectOxford.Emotion;
using Microsoft.ProjectOxford.Emotion.Contract;

public static async Task Run(ICloudBlob myBlob, IAsyncCollector<object> outputDocument, TraceWriter log)
{
    log.Info($"C# Blob trigger function processed: {myBlob}");
    
    var apiKey = WebConfigurationManager.AppSettings["EMOTION_API_KEY"];

    EmotionServiceClient emotionServiceClient = new EmotionServiceClient(apiKey);
    Emotion[] emotions = await emotionServiceClient.RecognizeAsync(myBlob.Uri.ToString());

    var photo = new PhotoResult
    {
        Uri = myBlob.Uri.ToString(),
        Name = myBlob.Name,
        NoMatches = emotions.Length,
        ProcessTime = DateTime.UtcNow,
        Results = emotions
    };

    await outputDocument.AddAsync(photo);
} 

public class PhotoResult
{
    public string Uri;
    public string Name;
    public int NoMatches;
    public DateTime ProcessTime;
    public Emotion[] Results;
}

So looking at this, we grab our key from our App Settings, create an EmotionServiceClient and then call RecognizeAsync, passing in the URI of the uploaded blob.

The API returns an array that contains details about faces that have been identified (FaceRectangle) and matching emotions (Scores). It’s an array because the API actually matches all faces it finds in the image and scores each one.
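If you want a quick look at what comes back before it gets serialised, a rough sketch like the following could be dropped into the function to log each face and a few of its scores (FaceRectangle and Scores are the properties whose values you can see in the JSON results below):

foreach (var emotion in emotions)
{
    // Each Emotion holds the rectangle of a detected face plus a set of scores for that face
    var face = emotion.FaceRectangle;
    log.Info($"Face at ({face.Left},{face.Top}), size {face.Width}x{face.Height}: " +
        $"Happiness={emotion.Scores.Happiness}, Sadness={emotion.Scores.Sadness}, " +
        $"Anger={emotion.Scores.Anger}, Surprise={emotion.Scores.Surprise}");
}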

I created a simple class as I wanted to store some additional information, and this is serialised when we send it through to DocumentDB. The magic happens in the final line where we call AddAsync and pass in our newly created object.

And that’s it, we’ve created a nice simple mechanism for checking the emotions of people in photos that are uploaded.

Testing my Acting

So first I uploaded the following picture:

Happy

It’s my happy face and the API concurs.

"Scores": {
  "Anger": 4.162134e-10,
  "Contempt": 1.47855415e-13,
  "Disgust": 3.8750686e-10,
  "Fear": 5.99909066e-13,
  "Happiness": 1,
  "Neutral": 4.58070127e-12,
  "Sadness": 3.19319154e-10,
  "Surprise": 1.40573334e-11
}

Next I tried for surprise.

Surprise

This returned the following scores:

"Scores": {
  "Anger": 0.0241219755,
  "Contempt": 0.00002152401,
  "Disgust": 0.00158446026,
  "Fear": 0.0162605,
  "Happiness": 0.000183772121,
  "Neutral": 0.00540139666,
  "Sadness": 0.000415531016,
  "Surprise": 0.95201087
}

Not bad, a good amount of surprise with a small amount of Anger and Fear thrown in.

Next up was sadness:

Sad

I tried to think about a puppy that had lost its favourite toy for this one!

"Scores": {
  "Anger": 0.00167506724,
  "Contempt": 0.00182426057,
  "Disgust": 0.00735369138,
  "Fear": 0.0000129641885,
  "Happiness": 1.29625617e-8,
  "Neutral": 0.002562766,
  "Sadness": 0.9865707,
  "Surprise": 4.980104e-7
}

Again, a good result, I’m clearly an empathetic guy.

Finally I wanted to try anger.

Angry

For this I wanted to think about how angry I was that someone would take a toy away from a puppy.

"Scores": {
  "Anger": 0.4741387,
  "Contempt": 0.0000462272656,
  "Disgust": 0.0145976059,
  "Fear": 0.0000404144557,
  "Happiness": 0.5108594,
  "Neutral": 0.0000221199662,
  "Sadness": 0.0002525145,
  "Surprise": 0.0000430380351
}

Only 47% angry; I guess I’m just not an angry type of guy. In fact somehow I’m a bit more happy than angry in this one, which I can only attribute to the upturning of my mouth.

Anyway, clearly I’ve missed my calling a career in films awaits, although I think I’ll have to stick to comedy!

Conclusion

I wanted to test out the Cortana Intelligence Suite APIs and I wanted to do a more complex scenario using Azure Functions.

I chose this complexity because I also wanted to demonstrate how to use assembly references, package restore, and non-standard inputs and outputs.

For this I chose a blob trigger that called the Emotion API and stored the result in DocumentDB.

This is a task that I can see having some real-world applications, including gauging the reaction of people in any number of situations, since the API works out values for all the faces it finds in a picture.

So now go and have a play with Azure Functions and with the range of APIs in the Cortana Intelligence Suite; there is some great and directly applicable stuff in there.

Azure IoT Hub End-to-End

I recently had the pleasure of talking again at the Perth MS Cloud Computing User Group about the Microsoft Azure IoT Hub.

For the presentation I wanted to make sure I spoke about the Internet of Things as a subject, as a technology and in particular I wanted to provide a simple end-to-end demonstration.

Having not had many opportunities to play with actual devices before (I’ve mostly demonstrated IoT Hub using console apps), I thought it would be a good opportunity to have a look at a device.

For this I got myself a Raspberry Pi 2, and the GrovePi+ Starter Kit from Dexter Industries.

I went this route for a couple of reasons. First, the Raspberry Pi 2 runs Windows 10 IoT Core, and second, the GrovePi+ has a reasonably well-supported C# managed library for IoT Core for the sensors, available both on GitHub and NuGet.

I figured that whilst I wanted to play around I also wanted to be able to build out an end-to-end demonstration quickly.

So, armed with my device and an Azure subscription I came up with the following demonstration architecture.

Demonstration Architecture

The purpose behind the demonstration is to simulate a building management solution whereby devices send temperature readings, and alerts are sent to and managed by a device that is acting as a sprinkler management system.

iothub-arch

This is a pretty straightforward simulation using a console application to represent the temperature of rooms. The console application that simulates this temperature device can be triggered to simulate a fire, and then triggered again to reduce the temperature.

The telemetry is sent to IoT Hub where two Azure Stream Analytics jobs do the required work.

  1. One job is a pass-through query that simply takes the input telemetry from IoT Hub using a specific consumer group and moves it into an Azure Storage Table. The purpose of this job is to archive all the data and allow for easy visualisation, for instance using Power BI.
  2. The second job looks for the maximum temperature within a 10 second tumbling window and sends the result to an Azure Service Bus Queue.

Once on the queue an Azure WebJob does the backend work of reading the information, checking for conditions and acting if certain conditions are met. In this way it acts as the IoT Business Logic layer for my backend IoT application. This business logic looks for certain temperature events and sends a message to IoT Hub when they are reached.

I then have my Raspberry Pi 2 and GrovePi+ listening for messages and acting accordingly.

Device Setup

In order to use my Raspberry Pi 2 for my project I first needed to install Windows 10 IoT Core. You can use the Windows 10 IoT Core Dashboard for this, but I found the easiest way was to download the IoT Core release image for Raspberry Pi 2, install it and use the tools provided. Both of these can be obtained from the Windows IoT Downloads and Tools page.

Once installed it is a simple task to flash your SD card for your Pi.

iothub-imagehelp

Once this is finished, the Pi has booted and you’ve connected an Ethernet cable, you can use the IoT Core Watcher application to get the IP address and then browse to the device using the browser of your choice.

iothub-corewatch

iothub-browser

From here, if you have a hardware-compatible WiFi dongle you can set up WiFi access, and change the password and device name if you wish.

The IP address is important so make a note of that.

I installed my GrovePi+ and the sensors I wanted and was then ready to code.

My GrovePi+ was set up with the RGB LCD screen, a buzzer, and a button that I use to reset the device.

iothub-pisetup

Visual Studio 2015 Setup

To make Visual Studio 2015 play well with your device, you need to make sure you install the Windows IoT Core templates and the Universal Windows (UWP) templates.

When you create a Blank App (Universal Windows) you need to make sure you set it up to target the correct architecture and remote device. This is where you’ll need that IP address you made a note of earlier.

iothub-vs

IoT Hub Setup

Once you’ve visited the Azure Portal and created an IoT Hub, you can use the managed SDK to create and register devices. This is great if you’re bootstrapping device registration, but if you just want to create some basic devices and do some testing it is overkill.

If you download Device Explorer you can manage devices, manage SAS tokens, and receive and send data to registered devices all within the confines of a simple Windows application. This is the lowest barrier to entry for setup.

For my purposes, I wanted to create devices for 5 rooms plus one for my sprinkler management system that would be receiving alerts.

iothub-devexp

Room Temperature Device Setup

To simulate a temperature sensor I created a simple console application that listened for key inputs.

When started, the application would simulate a specific room and send an event every second representing a temperature of 23 degrees plus a random value between 0.0 and 1.0.

Pressing the Up key would increase the temperature until it was above 400.0 and then pressing the Down key would reduce the temperature until it was below 50.0.

deviceClient = DeviceClient.CreateFromConnectionString(room.connectionString);

var rnd = new Random();
var root = 23.0;
var temperature = 0.0;

// Normal operation: send the ambient temperature until the Up arrow is pressed
do
{
   while (!Console.KeyAvailable)
   {
      SendToIoTHub(new DeviceData { id = room.id, date = DateTime.UtcNow, temperature = root + rnd.NextDouble() });
      Thread.Sleep(1000);
   }
} while (Console.ReadKey(true).Key != ConsoleKey.UpArrow);

// Fire: ramp the temperature up until the Down arrow is pressed
do
{
   var seed = 2.0;
   while (!Console.KeyAvailable)
   {
      temperature = root + seed + rnd.NextDouble();
      SendToIoTHub(new DeviceData { id = room.id, date = DateTime.UtcNow, temperature = temperature });
      Thread.Sleep(1000);
      if (seed < 400.0) seed *= 2.5; 
   } 
} while (Console.ReadKey(true).Key != ConsoleKey.DownArrow); 

// Sprinklers on: bring the temperature back down until the Left arrow is pressed
do
{ 
   var seed = 30.0; 
   while (!Console.KeyAvailable) 
   { 
      var rndVal = rnd.NextDouble(); 
      temperature = temperature - seed + (rndVal > 0.5 ? rndVal : -rndVal);
      SendToIoTHub(new DeviceData { id = room.id, date = DateTime.UtcNow, temperature = temperature });
      Thread.Sleep(1000);
      if (temperature < 50.0) seed = 0.0;
    }
} while (Console.ReadKey(true).Key != ConsoleKey.LeftArrow);

This allows the simulation of, for instance, a fire, and then the action of the sprinkler system putting out the fire and reducing the temperature.
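For completeness, the DeviceData type being serialised is nothing more than a small POCO; a minimal sketch (the property names just need to match the fields used above and in the Stream Analytics queries later) would be:

public class DeviceData
{
    public string id { get; set; }
    public DateTime date { get; set; }
    public double temperature { get; set; }
}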

Whilst this is simple code, the code that does the actual interaction with IoT Hub is even simpler.

static async void SendToIoTHub(object data)
{
   var messageString = JsonConvert.SerializeObject(data);
   var message = new Message(Encoding.UTF8.GetBytes(messageString));

   await deviceClient.SendEventAsync(message);
   Console.WriteLine("{0} > Sending: {1}", DateTime.Now, messageString);
}

Since the console application is representing a device and not a backend application process, DeviceClient is used to send the data.

Azure Stream Analytics Setup

To represent the pass-through and maximum temperature queries, two Azure Stream Analytics jobs were created with very basic queries.

-- Job 1: pass-through query that archives all telemetry to an Azure Storage Table
SELECT
    id, date, temperature
INTO
    allmessages
FROM
    iothub

-- Job 2: maximum temperature per device over a 10 second tumbling window
SELECT
    id, max(temperature), System.TIMESTAMP as date
INTO
    servicebusqueue
FROM
    iothub
GROUP BY
    id, TumblingWindow(Second, 10)

Clearly these queries could be more complex, and there could be more based on any number of IoT Hub consumer groups to provide input to a range of tooling such as HDInsight, or Machine Learning algorithms as required.

Azure WebJob Setup

Once the maximum temperature data has been pushed onto a Service Bus Queue, a simple Azure WebJob is triggered and reacts to certain conditions.

public static void ProcessQueueMessage([ServiceBusTrigger("maxtemp")] string message, TextWriter log)
{
   if (zonetemps == null) zonetemps = new Dictionary<string, double>();
   if (zoneactive == null) zoneactive = new Dictionary<string, bool>();

   dynamic command = JsonConvert.DeserializeObject(message);
   var zone = (string)command.id;
   var temp = (double)command.max;

   if (!zonetemps.ContainsKey(zone))
   {
       zonetemps.Add(zone, temp);
   }
   else
   {
       zonetemps[zone] = temp;
   }

   if (!zoneactive.ContainsKey(zone)) zoneactive.Add(zone, false);

   if (zonetemps[zone] > 400.0 && !zoneactive[zone])
   {
       zoneactive[zone] = true;
       var sprinkler = new SprinklerCommand { deviceName = command.id, temperature = command.max, activate = true };
       SendToIoTHub(sprinkler);
   }
   else if (zonetemps[zone] < 60.0 && zoneactive[zone])
   {
       zoneactive[zone] = false;
       var sprinkler = new SprinklerCommand { deviceName = command.id, temperature = command.max, activate = false };
       SendToIoTHub(sprinkler);
   }
}

First, some state is created so we don’t constantly send data back, and then the temperature of the maximum temperature event is read along with the room it occurred in. If an event has already been triggered for that room it moves on, otherwise it sends a command back to IoT Hub.
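For reference, SprinklerCommand is another simple POCO; a sketch along these lines (matching the properties set by the WebJob and read by the device) is all that’s needed:

public class SprinklerCommand
{
    public string deviceName { get; set; }
    public double temperature { get; set; }
    public bool activate { get; set; }
}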

Again the code that does the actual work of sending the data to IoT Hub is very simple.

static async void SendToIoTHub(object data)
{
   var messageString = JsonConvert.SerializeObject(data);
   var message = new Message(Encoding.UTF8.GetBytes(messageString));
   message.Ack = DeliveryAcknowledgement.Full;
   message.MessageId = Guid.NewGuid().ToString();

   var serviceClient = ServiceClient.CreateFromConnectionString("[IoT Hub Connection String]");
   await serviceClient.SendAsync("sprinkler", message);

   await serviceClient.CloseAsync();
}

Since the WebJob is representing a backend application process and not a device, ServiceClient is used to send the data.

Universal Windows App Setup

The application that is deployed to the Raspberry Pi needs to poll IoT Hub for data and respond to messages received.

public MainPage()
{
   this.InitializeComponent();
   deviceClient = DeviceClient.CreateFromConnectionString("[IoT Hub Connection String]");

   ReceiveIoTHub();
}

private static async void ReceiveIoTHub()
{
   var buzzer = deviceFactory.Buzzer(Pin.DigitalPin2);
   var button = deviceFactory.ButtonSensor(Pin.DigitalPin4);
   var rgb = deviceFactory.RgbLcdDisplay();
   rgb.SetBacklightRgb(200, 125, 0);
   rgb.SetText("Sprinklers\nonline");

   while (true)
   {
      try
      {
         if (button.CurrentState == GrovePi.Sensors.SensorStatus.On)
         {
            rgb.SetBacklightRgb(200, 125, 0);
            rgb.SetText("Sprinklers\nonline");
            buzzer.ChangeState(GrovePi.Sensors.SensorStatus.Off);
         }

         Message receivedMessage = await deviceClient.ReceiveAsync();
         if (receivedMessage == null) continue;

         dynamic message = JsonConvert.DeserializeObject(Encoding.UTF8.GetString(receivedMessage.GetBytes()));

         if (message.activate)
         {
            rgb.SetBacklightRgb(250, 0, 0);
            rgb.SetText("Zone: " + message.deviceName + "\nTemp: " + message.temperature.ToString());
            buzzer.ChangeState(GrovePi.Sensors.SensorStatus.On);
         }
         else
         {
            rgb.SetBacklightRgb(0, 75, 200);
            rgb.SetText("Zone: " + message.deviceName + "\nFire: Out");
            buzzer.ChangeState(GrovePi.Sensors.SensorStatus.Off);
         }

         await deviceClient.CompleteAsync(receivedMessage);
      }
      catch (Exception e)
      {
         var msg = e.Message;
      }
   }
}

Again the code that does the actual polling is minimal and most of the code is actually acting on received data.

Pulling It Together

Starting the Stream Analytics jobs is simple, and they can be set to start listening for events arriving in IoT Hub from “Now” to ensure only new data is processed.

iothub-asa

The Azure WebJob can be published to Azure or run locally in debug mode. Once started it listens to trigger events on the Service Bus Queue and reacts accordingly. If the event doesn’t require intervention it is essentially dropped.

iothub-webjob

Starting an instance of the room temperature device application, a room can be chosen and events are then sent. Using the Up arrow the temperature can be increased, and then using the Down arrow decreased, to simulate a fire and the fire intervention.

iothub-room

On the device when the application first starts, the screen lights up and indicates that the device is ready.

iothub-online

Results

Data is being constantly sent to IoT Hub by the console applications that represent the room temperature devices.

The WebJob picks up events and checks whether the maximum temperature provided is above the allowed threshold before the sprinkler device is activated and sends a command message to the device via IoT Hub.

If the maximum temperature provided is below the allowed threshold and a device has already been activated, it can be assumed the issue has passed and the WebJob then sends a command message to the device via IoT Hub.

If data received by the device activates an alarm, the screen changes to red and information is displayed. Additionally the buzzer is activated to alert any operator of the issue.

iothub-fire

If data received by the device clears an alarm, the screen changes to blue and information is displayed. Additionally the buzzer is deactivated.

iothub-clear

Obviously since the screen is only two lines, it is only capable of reacting to a single room event at a time, but this is just a demonstration.

Conclusion

Azure IoT Hub is part of the Azure Platform as a Service offering. By combining it with other platform services such as Stream Analytics, Service Bus Queues and WebJobs it is very quick and easy to create simple simulation scenarios for proving out ideas.

Other than simulation code, and excusing exception handling (!!), this can all be achieved in a very small number of lines of code because the connectivity and interactions between the platform services can be leveraged and a basic solution quickly designed, delivered and proved.

IoT Hub is a truly compelling service in Azure especially when taken in connection with other services. It would be a relatively simple task to flesh out the demonstration to include trend analysis, data aggregations, storage, searching and state management facilities all still within the platform. This could be achieved for instance using services such as HDInsight, Machine Learning, DocumentDB and Service Fabric.

What do you think would make a great solution using these bits of Azure and stitching them together?


Data Movement in Australia using Azure Data Factory

UPDATE: This slipped a little under the radar but the Data Movement Service was announced as being available in Australia on March 8th, so just go ahead and use it as it is intended!

One of the key activities for enabling data analytics of large scale datasets is the movement of data from one location to another to allow for further processing.

Azure Data Factory has a Copy activity that allows you to specify a source and sink for data to be moved. The (nearly) globally available Data Movement Service performs the move based on the location of the data sink.

So if you have data in East US and need to move to North Europe, the Data Movement Service in North Europe will perform the move no matter where your Data Factory template is located.

The one exception to this is Australia. Currently there is no Data Movement Service in Australia so if you try to move data from Australia East to Australia East for instance, the Copy activity will fail.

Since data sovereignty is a real issue for Australian businesses a solution is required.

Any time one of the locations of data is on-premises (including an Azure VM) the Data Management Gateway is used to move the data irrespective of where the data sink is.

As a test, I wondered if I could take advantage of this to work around there being no Data Movement Service in Australia.

Environment

In order to test the solution the following is required:

  1. Source Storage Account containing a simple CSV file (make sure this file exists before deploying the Pipeline)
  2. Sink Storage Account (can be the same account with a different container)
  3. Azure VM to act as intermediate storage
  4. Simple Azure Data Factory to perform the data movement

Setup

To copy data via an Azure VM it needs to be running the Data Management Gateway. Once installed, the rest of the setup is straightforward.

Storage Account

The simplest way of testing the movement is to create a single storage account in one of the Australia datacentres, in our case Australia Southeast. NOTE: In order to use the Australia region you need an Azure subscription registered to an Australian credit card.

The storage account created has 2 containers, one for input and one for output.

ADF-Move-StorSetup

The idea here is to move the file between the two containers. Doing this directly leads to an error, so a staging folder on an Azure VM is required.

Once the VM is running and has the Data Management Gateway installed, the Data Factory can be created and tested.

Linked Services

We need a linked service to represent the storage account:

{
    "name": "StorageLinkedService",
    "properties": {
        "description": "",
        "hubName": "adfmovetest_hub",
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=[ACCOUNT_NAME];AccountKey=[ACCOUNT_KEY]"
        }
    }
}

And for the Azure VM connected storage:

{
    "name": "OnPremisesFileServerLinkedService",
    "properties": {
        "description": "",
        "hubName": "adfmovetest_hub",
        "type": "OnPremisesFileServer",
        "typeProperties": {
            "host": "localhost",
            "gatewayName": "testgateway",
            "userId": "",
            "password": "",
            "encryptedCredential": "[REMOVED]"
        }
    }
}

Datasets

The input and output blob datasets for the files are identical except for the container folderPath (input shown below):

{
    "name": "InputBlob",
    "properties": {
        "structure": [
            {
                "name": "firstname",
                "type": "String"
            },
            {
                "name": "lastname",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "StorageLinkedService",
        "typeProperties": {
            "fileName": "people.csv",
            "folderPath": "input",
            "format": {
                "type": "TextFormat"
            }
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        },
        "external": true,
        "policy": {}
    }
}

Likewise the on-premises (or Azure VM) dataset differs mainly in the type, linkedServiceName and folderPath:

{
    "name": "Staging",
    "properties": {
        "structure": [
            {
                "name": "firstname",
                "type": "String"
            },
            {
                "name": "lastname",
                "type": "String"
            }
        ],
        "published": false,
        "type": "FileShare",
        "linkedServiceName": "OnPremisesFileServerLinkedService",
        "typeProperties": {
            "fileName": "people.csv",
            "folderPath": "c:\\staging"
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        },
        "external": false,
        "policy": {}
    }
}

Pipeline

To move the data, we need to create a simple pipeline that contains 2 Copy activities:

  1. Copy dataset from “input” container to staging folder on premises
  2. Copy dataset from staging folder on premises to “output” container

{
    "name": "TestAustraliaMove",
    "properties": {
        "description": "Test if you can use a staging folder to do a file move in Australia",
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "BlobSource"
                    },
                    "sink": {
                        "type": "FileSystemSink",
                        "writeBatchSize": 0,
                        "writeBatchTimeout": "00:00:00"
                    }
                },
                "inputs": [
                    {
                        "name": "InputBlob"
                    }
                ],
                "outputs": [
                    {
                        "name": "Staging"
                    }
                ],
                "policy": {
                    "timeout": "01:00:00",
                    "concurrency": 1,
                    "executionPriorityOrder": "NewestFirst",
                    "style": "StartOfInterval",
                    "retry": 3
                },
                "scheduler": {
                    "frequency": "Day",
                    "interval": 1
                },
                "name": "BlobToFile",
                "description": ""
            },
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "FileSystemSource"
                    },
                    "sink": {
                        "type": "BlobSink",
                        "writeBatchSize": 0,
                        "writeBatchTimeout": "00:00:00"
                    }
                },
                "inputs": [
                    {
                        "name": "Staging"
                    }
                ],
                "outputs": [
                    {
                        "name": "OutputBlob"
                    }
                ],
                "policy": {
                    "timeout": "01:00:00",
                    "concurrency": 1,
                    "executionPriorityOrder": "NewestFirst",
                    "style": "StartOfInterval",
                    "retry": 3
                },
                "scheduler": {
                    "frequency": "Day",
                    "interval": 1
                },
                "name": "FileToBlob",
                "description": ""
            }
        ],
        "start": "2016-01-18T23:59:00Z",
        "end": "2016-01-19T23:59:59Z",
        "isPaused": false,
        "hubName": "adfmovetest_hub",
        "pipelineMode": "Scheduled"
    }
}

You can see there are 2 activities:

  1. BlobToFile has a BlobSource and FileSystemSink
  2. FileToBlob has a FileSystemSource and BlobSink

Once the pipeline is deployed it will execute based on the start and end dates specified.

Result

When all the activities have run we would expect to see all the datasets in the Data Factory diagram show as green, indicating success.

ADF-Move-Pipeline

As we can see, all the lights are green.

If we look at the staging folder on the Azure VM we can see the file:

ADF-Move-LocalDiskFile

Likewise when we look at the output container in our storage account we can see the file has been moved:

ADF-Move-BlobOutput

Conclusion

The Data Movement Service is currently unavailable in the Australia region which limits the ability to move data between platform services within a region where data sovereignty is a real issue.

In order to achieve data movement and still stay within the Australia region, an Azure VM can be used to provide a staging location making use of the Data Management Gateway to perform the actual data copy activity.

Azure Data Factory and “on-premises” Azure VMs

A question came up recently on the MSDN Forum for Azure Data Factory around whether using an Azure VM counts as a cloud or an on-premises resource when it comes to billing.

Checking the pricing for Azure Data Factory you can see that the price for Data Movement is different depending on the source location of the data, so where the data lives has quite an impact on cost.

So is an Azure VM considered a cloud location or an on-premises location?

I thought I’d do a quick test to confirm.

Environment

In order to understand how the Data Movement Service sees an Azure VM some setup is required.


  1. Create an Azure VM; I already had a Windows Server 2016 CTP4 one so I reused that
  2. Create an Azure Storage account that will act as the Sink for the data
  3. Create a simple Azure Data Factory that contains a Copy activity to move data from the VM to the Storage account

Setup

To allow data to be moved from an on-premises file system in Azure Data Factory you need to use the Data Management Gateway on your server.

When the server is a virtual machine in Azure the process is the same, so the first part of the environment is pretty straightforward: you need to download, install and run the gateway. Once it is up and running you should see something like the following.


ADF-OnPremTest-DMG

For the Azure Data Factory, a number of artefacts need to be created:


  1. Data Management Gateway that will provide access to the VM
  2. Linked Service to a File system
  3. Linked Service to an Azure Blob
  4. Dataset representing source data
  5. Dataset representing sink data
  6. Pipeline containing a Copy activity

You also need some very basic data to move, which can be a simple CSV file containing a couple of items.
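For example, people.csv can just be a couple of made-up rows matching the firstname/lastname structure defined in the datasets below:

John,Smith
Jane,Jones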

Data Management Gateway

The Data Management Gateway is very straightforward and follows the usual pattern for an on-premises service, as it just creates an endpoint on the server.

{
    "name": "OnPremisesFileServerLinkedService",
    "properties": {
        "description": "",
        "hubName": "adfiaastest_hub",
        "type": "OnPremisesFileServer",
        "typeProperties": {
            "host": "localhost",
            "gatewayName": "IAASTEST",
            "userId": "",
            "password": "",
            "encryptedCredential": "[REMOVED]"
        }
    }
}

When you set up an on-premises Linked Service you can either store the credentials for the server directly in the configuration (NOTE: the password is always replaced by asterisks when displayed), or use an encrypted credential.

Azure Storage Linked Service

Once you’ve created an Azure storage account, you need to create a container. This can be done directly in the Azure portal or through a number of other tools such as Azure Management Studio, Cloud Portam or indeed Visual Studio.

{
    "name": "StorageLinkedService",
    "properties": {
        "description": "",
        "hubName": "adfiaastest_hub",
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=[STORAGEACCT];AccountKey=[STORAGEKEY]"
        }
    }
}

Datasets

As this is a test, the dataset used for the test data is extremely simple.

For the File System file:

{
    "name": "OnPremisesFile",
    "properties": {
        "published": false,
        "type": "FileShare",
        "linkedServiceName": "OnPremisesFileServerLinkedService",
        "typeProperties": {
            "fileName": "people.csv",
            "folderPath": "c:\\adfiaastest"
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        },
        "external": true,
        "policy": {}
    }
}

And for Blob Storage:

{
    "name": "AzureBlobDatasetTemplate",
    "properties": {
        "structure": [
            {
                "name": "firstname",
                "type": "String"
            },
            {
                "name": "lastname",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "StorageLinkedService",
        "typeProperties": {
            "fileName": "people.csv",
            "folderPath": "output",
            "format": {
                "type": "TextFormat"
            }
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        }
    }
}

Copy Activity Pipeline

Since we are only moving data our pipeline only contains a single Copy activity.

{
    "name": "PipelineTemplate",
    "properties": {
        "description": "Testing IaaS VM",
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "FileSystemSource"
                    },
                    "sink": {
                        "type": "BlobSink",
                        "writeBatchSize": 0,
                        "writeBatchTimeout": "00:00:00"
                    }
                },
                "inputs": [
                    {
                        "name": "OnpremisesFile"
                    }
                ],
                "outputs": [
                    {
                        "name": "AzureBlobDatasetTemplate"
                    }
                ],
                "policy": {
                    "timeout": "01:00:00",
                    "concurrency": 1
                },
                "scheduler": {
                    "frequency": "Day",
                    "interval": 1
                },
                "name": "OnpremisesFileSystemtoBlob",
                "description": "copy activity"
            }
        ],
        "start": "2015-12-26T00:00:00Z",
        "end": "2015-12-28T00:00:00Z",
        "isPaused": false,
        "hubName": "adfiaastest_hub",
        "pipelineMode": "Scheduled"
    }
}

Once completed, having a look at the result in the Diagram blade for the data factory should show something similar to the following:

ADF-OnPremTest-Pipeline

Result

Once the Data Management Gateway and Azure Storage have been linked and a file placed in the source folder on the VM, the factory should execute and move the data as expected.
This is confirmed by quickly checking the storage container.

ADF-OnPremTest-Storage

After checking the process has been successful, I examined my subscription to see what Data Factory charges had been incurred. NOTE: It takes a few hours for new charges to show.

ADF-OnPremTest-Bill

Conclusion

Looking at the charges incurred during execution of the data movement activity, it can be seen that whilst we are essentially running a cloud service in the form of an Azure Virtual Machine, the data movement activity is showing as an On Premises move.

It should be noted that the Azure Virtual Machine in this case was one created in the new portal.

Microsoft Integration Roadmap – My Op-ed

Microsoft has recently published their Integration Roadmap to provide insight into the direction of their key integration technologies:
  • BizTalk Server
  • Microsoft Azure BizTalk Services (MABS)
  • Microsoft Azure Logic Apps and Azure App Service

There are some great summaries out there by Kent Weare, Saravana Kumar and Daniel Toomey, and having read through the document, I thought I’d capture some of my thoughts on what it contains.

BizTalk Server

Having used BizTalk for over 10 years I was keen to see what direction the platform was going to take. There have been naysayers for many years saying the platform is dead, so it is good that Microsoft have announced a new version is coming later in 2016.

Some disappointing news for me on this though is that for the most part this is really yet another platform alignment release.

That said, the addition of more robustness around high availability and better support for this on Azure IaaS is welcome and should see a batch of new customers being able to leverage this.

There is certainly some indication of future releases, although not as strong a commitment as outlined at the BizTalk Summit 2015 in London.

BizTalk Services

BizTalk Services has been the elephant in the room since Logic Apps went into preview. There are a number of API Apps that encapsulate a lot of the functionality provided by BizTalk Services, so it is quite telling that the roadmap says that any new development should target Logic Apps and these API Apps rather than BizTalk Services.

It doesn’t take much reading between the lines to see that at some point MABS will be sunsetted. I hope it is fair to assume that a migration path from MABS to Logic Apps will be provided either directly by Microsoft or via a partner.

Logic Apps, App Service and Azure Stack

Since the preview release, Logic Apps have undergone a number of revisions around functionality and tooling and the roadmap lays out the path ahead for when they come out of preview.

Along with this we are going to see new connectors and general availability of Azure Stack.
Azure Stack provides App Services on-premises, and will provide organisations with at least some of the agility, resilience and scalability that the core Azure iPaaS platform provides.

New Features != Evolution

Taking the point about Azure Stack, Microsoft has announced a convergence between cloud and on-premises solutions for integration.

I’d take this a step further I think. For a long time the BizTalk community has had to field questions on when and if the BizTalk Server platform will move forward technically. Whatever form this discussion has taken it has long been assumed that over time the platform would evolve.
By converging cloud and on-premises with Azure Stack, thereby providing potential self-service to integration solutions in your own hosting environment, and with the release of PowerApps recently, a means to creating just in time data driven applications is pretty much at hand.
Since there has been little actual evolution of the core BizTalk platform I wonder if over the next couple of years workloads will move to Azure Stack instead. After all, this would provide the swiftest way to then leverage the core iPaaS platform in future and reduce dependency on hard to find BizTalk skills.
One telling comment in the roadmap discussion for me was:
Alongside our Azure Stack investments, we are actively working on adding more BizTalk Server capabilities to Logic Apps.
 Is this scene setting, after all the team that is responsible for BizTalk Server is the same team that is responsible for Logic Apps, they have finite resources?

Conclusion

It’s great to see Microsoft continuing to invest in the future of integration, hardly surprising given Hybrid Integration is seen as a key approach to moving workloads to the cloud.

Have they gone far enough?

I think for now the combination of a core robust platform in BizTalk Server that is moving forward, albeit slowly, and a solution that offers parity between cloud and on-premises provides a great springboard for the near future of integration.

And for me that is one key takeaway: the roadmap provides a vision for the near future, but I would like to have seen a longer-term (i.e., beyond 2016) vision; that is what would allow us as IntegrationProfessionals™ to ensure we don’t take a customer down a dark alley.