Serverless - DevOps Little Helper

Why not use serverless computing to perform maintenance tasks in Azure DevOps?

Serverless

Introduction

In recent years, the term serverless has become more and more popular. It was only a matter of time before a serverless challenge would appear on Code Project. Luckily, this time, I have a viable idea that I want to present.

Many of my projects are currently done via Azure DevOps. Personally, I think the platform is great, as it offers a complete and uniform package for dealing with code projects. It has boards, repos, a full CI/CD solution, and even an integrated package manager (e.g., NPM, NuGet, ...). Normally, we do a lot of things with packages - building smaller libraries that are then aggregated in services.

A very common problem is the following: We have a set of commonly used libraries that are not yet stabilized and thus see a lot of changes (usually non-breaking, but still important to be rolled out as soon as possible). After a PR to the library is accepted and the package has been built, we need to update all consuming services to the latest version of this library. This is quite time-intensive and not very efficient.

In short, for each service, we need to:

  • Pull the (latest state of the) development branch
  • Create a new branch for the reference update (feature branch)
  • Update the reference to the changed library
  • Stage / commit the change
  • Push the feature branch
  • Create a pull request for the feature branch into the development branch

In this article, we will automate the full process using an Azure Function. Serverless computing for the win!

Background

In 2014, Amazon started a new service called Lambda, which allowed developers to provide simple functions as full computational resources. There was no need to manage a server, add or maintain a runtime, or select a plan. The cost was based solely on the number of calls to the published functions and their execution time.

It did not take long until many competitors came up with similar services. Furthermore, open-source projects began to support a function-driven approach as well. Two new terms were born: FaaS (function-as-a-service) and serverless. The former describes the deployment of single functions without having to publish a Docker image or a full runtime. The latter describes the usage of a BaaS (backend-as-a-service) system where computing resources are fully hidden from the customer.

While FaaS may be one of the models used in serverless offerings to bring functionality online, serverless is not required for FaaS. Indeed, many users of FaaS still maintain their own servers or are in charge of many decisions to guarantee the reliability of the hosted services. Most of the time, FaaS is about unification and ease / speed of deployment, not about serverless.

Serverless - Wait a Bit?

Selling serverless for providing functionality in the cloud is a bit like advertising some new smartphone as handless. Obviously, at the end of the day, some real hardware needs to work to run some computations. There is no way around that.

Serverless Comic

© commitstrip - all rights reserved.

If we ask the PM of AWS Lambda what comes to his mind when he hears "serverless", we get the following answer:

For me, serverless means activities/responsibilities related to servers are no longer on your radar. A serverless solution for doing something you would have previously used servers for would check (at least) these four boxes - simple but relevant primitives, scaling just happens, you never pay for idle, and reliability/availability is built in.

There are some really nice perks in this statement. Reliability? Everyone wants that! No paying for idle? Sounds too good to be true! Scalability? Why not! Needless to say, every upside comes with a downside.

According to Wikipedia, serverless brings some other advantages to the table.

Serverless computing can simplify the process of deploying code into production. Scaling, capacity planning and maintenance operations may be hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.

Wow! Why aren't we using serverless computing for everything? It turns out that serverless has its preferred use cases, and that many "classical" problem domains are not at all well suited for this kind of computation.

Indeed, for applications that are mostly idle, serverless can be a true life saver, unless the application has some very special demands. For an application that is called permanently, serverless may be a step back economically; due to the built-in reliability and scalability, however, it can still be viable. It's important not to mix up the scalability factor with performance, which is usually worse than on an equivalent serverful runtime.

All in all, for most cases, nothing is black-and-white here; multiple variables determine whether serverless is the right way to crack a specific problem. The currently quite strong vendor lock-in, as well as the privacy and security drawbacks of serverless solutions, does not make such decisions any easier.

What we know for sure is that serverless is quite good as glue for sticking two services (especially running within the same cloud or platform) together. In our example project, we will glue together two pieces in our Azure DevOps setup.

Actually, since web hooks see only sporadic (and unpredictable) demand, running them in a serverless architecture is wonderful. Also, a web hook can be thought of as the web equivalent of an extension / plugin in classical software. Most of the time, plugins also just connect one piece of software to another - very lightweight and very task focused. If we develop a web hook similarly, we immediately see that using FaaS in a serverless environment makes sense.

Battle Plan

Our goal is to write a web hook that triggers a code change (in the form of a pull request) when a certain build finishes (which would create, e.g., a NuGet package).

Let's see a quick diagram on what we are actually after:

The usage diagram of DevOps Little Helper

To achieve this, we follow the battle plan below, which allows us to finish the task (or project) in incremental steps.

  1. Test local (dummy)
  2. Test local (fixed data)
  3. Test local (with environment variable)
  4. Publish (deploy first-time)
  5. Test online (fixed data, environment variable)
  6. Set trigger in Azure DevOps
  7. Test local (variable data)
  8. Update (deploy follow-up)
  9. Test online (variable data)
  10. Full integration test from Azure DevOps

So we'll start with a dummy Azure Function that is tested locally to understand what we are dealing with. Then we'll provide some code showing how our Azure Function should work (still inflexible / hardcoded). Afterwards, we make it a little more flexible by using environment variables.

At this point, we are ready to publish a first draft. Subsequently, we can repeat the local test online to see what needs to be done for deployment. Once this step works fine, we will integrate Azure DevOps by creating a subscription on the published web hook.

Finally, we only need to perform real-world tests using the fully flexible implementation. Sounds easy, right? So let's start with the basics!

Azure Function with C#

We start with Visual Studio. Actually, these days, we can be almost as productive using Visual Studio Code, but Visual Studio gives us the best experience (nearly) out of the box. I will use Visual Studio 2017, but the steps should be fairly similar in Visual Studio 2019. Make sure to have the Azure development workload installed.

First Steps

Luckily, there is already a template for creating a new Azure Function using C# as the programming language. We can set the boilerplate to "HTTP trigger" (which gives us an endpoint that can be invoked from anywhere) and set the access rights to function. This way, an access code is required to trigger the function. This code should only be known by the service calling the web hook (and, for debugging purposes, us):

Create new Azure Function Project
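
For orientation, the generated boilerplate looks roughly like the following sketch (from memory - the exact template code varies with the tooling version):

C#
public static class Function1
{
    [FunctionName("Function1")]
    public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function,
           "get", "post", Route = null)] HttpRequest req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // The template echoes a "name" parameter taken from the query string
        string name = req.Query["name"];

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string");
    }
}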

This is almost it. So far, Visual Studio only gave us a nice boilerplate to start with - but more importantly, it already connected it to the right debugging tools. Let's run the application by pressing F5...

Start of Azure Function Debug CLI

Maybe to our surprise, a command prompt opens with a nice ASCII art of the Azure Function logo. Apparently, somebody had a great weekend with some beer and lots of time!

After a little while, the local instance is fully started and ready to receive requests. We can now set breakpoints or pause the application for modifications.

In the CLI, this looks as follows:

Azure Function Debug CLI Ready to Receive

Note the port, which is 7071. We will need it to trigger a request.

Right now, since we did not change a single line of code in the boilerplate, the function is set to allow GET and POST requests. We can create a simple request using Postman, a little application perfectly suited for testing (or manually using) APIs.

Azure Function Trigger Endpoint
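
If Postman is not at hand, a few lines of C# do the same job. This is just a sketch - it assumes the default function name Function1 from the boilerplate:

C#
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LocalTrigger
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // 7071 is the default port of the local Azure Functions host
            var url = "http://localhost:7071/api/Function1?name=CodeProject";
            var response = await client.GetAsync(url);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}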

This is the moment where we need to get a beer and start coding. You can replace beer with the (cold) beverage of your choice. Being a Bavarian by nature, the choice is fairly simple for me.

DevOps Helper Class

The Function in Azure Functions only indicates that a simple function is used as the handler for individual requests - it does not constrain us to stay within a single function. Actually, we can use whatever libraries, classes, and other assets we want. We should use the same coding patterns and techniques we usually rely on to write maintainable, practical code.

We start now with a simple Helper class (yes, potentially not the best name - feel free to rename to something more appropriate in your case). This class should handle all the interaction with Azure DevOps.

We don't want to start from zero with the - frankly - huge API behind Azure DevOps. Therefore, we will use an existing library that provides a nice abstraction on top of the RESTful API. The official package is called Microsoft.TeamFoundationServer.Client, still carrying the old product name.

The following code already does almost everything - no worries, we will go over the most important lines.

C#
using Microsoft.Azure.WebJobs.Host;
using Microsoft.TeamFoundation.SourceControl.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace DevOpsLittleHelper
{
    internal class Helper
    {
        private readonly GitHttpClient _gitClient;

        public Helper(String pat)
        {
            var creds = new VssBasicCredential(String.Empty, pat);

            // Connect to Azure DevOps Services
            var connection = new VssConnection(new Uri(collectionUri), creds);

            // Get a GitHttpClient to talk to the Git endpoints
            _gitClient = connection.GetClient<GitHttpClient>();
        }

        public async Task<Int32> UpdateReferenceAndCreatePullRequest()
        {
            var repo = await _gitClient.GetRepositoryAsync(projectName, repoName).ConfigureAwait(false);
            var commits = await _gitClient.GetCommitsAsync(repo.Id, new GitQueryCommitsCriteria
            {
                ItemVersion = new GitVersionDescriptor
                {
                    Version = baseBranchName,
                    VersionType = GitVersionType.Branch,
                },
            }, top: 1).ConfigureAwait(false);
            var lastCommit = commits.FirstOrDefault()?.CommitId;
            var path = "SmartHotel360.PublicWeb/SmartHotel360.PublicWeb.csproj";
            var item = await _gitClient.GetItemContentAsync(repo.Id, path, includeContent: true).ConfigureAwait(false);
            var oldContent = await GetContent(item).ConfigureAwait(false);
            var newContent = oldContent.Replace(
                "<PackageReference Include=\"Microsoft.AspNetCore.All\" Version=\"2.0.0\" />",
                "<PackageReference Include=\"Microsoft.AspNetCore.All\" Version=\"2.0.1\" />");
            var push = CreatePush(lastCommit, path, newContent);
            await _gitClient.CreatePushAsync(push, repo.Id).ConfigureAwait(false);
            var pr = CreatePullRequest();
            var result = await _gitClient.CreatePullRequestAsync(pr, repo.Id).ConfigureAwait(false);
            return result.PullRequestId;
        }
    }
}

In the code above, we create a small class with a single field that yields access to a "Git client", which the Azure DevOps client library works against.

The client is created using a Personal Access Token (PAT) from Azure DevOps. This security token is rather sensitive, but quite useful for such simple triggers. Be sure not to give / show this token to anyone!

The real meat of this class is the UpdateReferenceAndCreatePullRequest method. Here, we have all the formerly described steps in code form:

  • Get a repository to potentially change
  • Get the id of the last commit as a reference
  • Get the content of the csproj file
  • Update the reference(s) in the csproj file
  • Create a new commit / branch with the updated csproj file
  • Create a PR for merging the created branch / changes back in

As outlined in the battle plan, the provided method is "static", i.e., currently we are not dealing with a variable number of references, reference names, or versions. Also, we assume a change must always happen.

Nevertheless, the part shown here will later become the core algorithm - just with a few more cases and more flexibility in mind. Is that all, i.e., everything we need?

If we copied and pasted this code, it would not yet work. The class also needs some constants (or fields, if we want to make it more variable):

C#
const String collectionUri = "https://your-name.visualstudio.com/";
const String projectName = "your-project";
const String repoName = "your-repo";
const String baseBranchName = "master";
const String newBranchName = "feature/auto-ref-update";

These values determine where the source files should be read from and what the name of the new branch for creating a PR should be. We could also, e.g., randomize the name of the new branch with some guid or similar.
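
For instance, a (hypothetical) unique branch name per invocation could look like this:

C#
// Hypothetical variation: append a fresh guid, so every run pushes to its own branch
static readonly String newBranchName = $"feature/auto-ref-update-{Guid.NewGuid():N}";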

Another thing missing from the code above is the method for creating the Git push details. The following simple function deals with this:

C#
private static GitPush CreatePush(String commitId, String path, String content) => new GitPush
{
    RefUpdates = new List<GitRefUpdate>
    {
        new GitRefUpdate
        {
            Name = GetRefName(newBranchName),
            OldObjectId = commitId,
        },
    },
    Commits = new List<GitCommitRef>
    {
        new GitCommitRef
        {
            Comment = "Automatic reference update",
            Changes = new List<GitChange>
            {
                new GitChange
                {
                    ChangeType = VersionControlChangeType.Edit,
                    Item = new GitItem
                    {
                        Path = path,
                    },
                    NewContent = new ItemContent
                    {
                        Content = content,
                        ContentType = ItemContentType.RawText,
                    },
                }
            },
        }
    },
};

Finally, the details for the Git pull request also need to be created. Another simple function does that:

C#
private GitPullRequest CreatePullRequest() => new GitPullRequest
{
    Title = "Automatic Reference Update",
    Description = "Updated the reference / automatic job.",
    TargetRefName = GetRefName(baseBranchName),
    SourceRefName = GetRefName(newBranchName),
};

Great! Now the only things missing are two small helpers - one to convert a standard branch name into a ref name, and another to get the content from a stream.

C#
private static String GetRefName(String branchName) => $"refs/heads/{branchName}";

private static async Task<String> GetContent(Stream item)
{
    using (var ms = new MemoryStream())
    {
        await item.CopyToAsync(ms).ConfigureAwait(false);
        var raw = ms.ToArray();
        return Encoding.UTF8.GetString(raw);
    }
}

At this point, our Azure Function itself looks similar to the following code:

C#
[FunctionName("UpdateRepositories")]
public static async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function, 
       "post", Route = null)] HttpRequest req, TraceWriter log)
{
    log.Info("Processing request ...");

    var helper = new Helper("***********");
    var prId = await helper.UpdateReferenceAndCreatePullRequest().ConfigureAwait(false);

    return new OkObjectResult(new
    {
        id = prId,
        message = $"Pull Request #{prId} created.",
    });
}

Let's try running it to see some result! Again, we start the debugging mode and trigger the function via Postman.

Azure Function Successfully Created

It's quite ugly that the PAT is hardcoded. Ideally, we should store it in an environment variable or obtain it via some other mechanism (e.g., directly from Azure Key Vault).

Let's insert the following line in our Azure Function:

C#
var pat = Environment.GetEnvironmentVariable("DEVOPS_PAT") ?? 
      throw new ArgumentException("Missing environment variable DEVOPS_PAT");

This allows us to construct the helper like new Helper(pat). But how do we test with the environment variable(s)? Visual Studio has us covered.

Azure Function Set Environment

This environment variable also needs to be set (manually) during our first deployment (initial publish) later. Visual Studio considers these environment variables development-only. We should also avoid committing the file where they are stored (local.settings.json), as the PAT (sorry for being repetitive here) is very sensitive information. Never publish it anywhere!
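
For reference, the local values end up in local.settings.json, which looks roughly like this (the storage entry is the template default; the PAT value is a placeholder):

JSON
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "DEVOPS_PAT": "<your-personal-access-token>"
  }
}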

If we did everything right, Azure DevOps is already showing us a nicely created pull request in the web app.

Azure DevOps New Pull Request

Publish Azure Function

Publishing the Azure Function project can be done directly from within Visual Studio. Personally, I find the publish process via Visual Studio more straightforward / faster / simpler than going through the Azure Portal.

The first step is to right-click the project and select "Publish". Then we select "Azure Function" as the target. Obviously, we could also overwrite an existing one - but in this (first-time) case, we want to start with a new one.

Publish New Azure Function

We need to fill out all the details. Quite in contrast to what "serverless" promises, we need to select a "plan" (associated with machine sizes) and other details that do everything except hide infrastructure details. In my opinion, this classifies Azure Functions as exclusively FaaS and not serverless, whereas AWS Lambda is truly serverless and FaaS. But who am I to judge... Let's continue, shall we?

As our Azure Function just represents the glue between two services, we reject any offered database choice - right now, we live in pure logic and not data.

Azure Function Create Details

Pressing "Create" will provision all the required services for us. Consequently, this step takes quite some time (about 5-10 minutes depending on various factors - including size of our Azure Function app and speed of our Internet connection). Finally, we are greeted with a special screen that gives us a summary about the new Azure Function.

Azure Function Publish Completed

Now we can test our online (and active) Azure Function with Postman again. We should receive the same result as before (don't forget to set the right environment variable for the PAT - otherwise, we'll receive an internal server error, HTTP 500!).

Using the Azure Portal, we can see (and change) the available environment settings. This dialog should be familiar from the standard app service.

Environment Setting for Azure Function

It's time to connect Azure DevOps by creating a subscription.

Azure DevOps Subscription Setup

We start by adding a new subscription in Azure DevOps for our project. Clicking on service hooks / "create subscriptions" opens a new dialog for adding the web hook we've just deployed.

Service Hooks Setting in Azure DevOps

There are multiple choices (premade configurations for many popular services). In our case, we want the most flexible and powerful option: a standard web hook.

Service Hook Selection in Azure DevOps

We set the trigger to the build pipeline(s) we want to monitor. Remember: the build pipeline should finish the release of a certain NuGet package. That package reference is what we want to update in selected repositories.

In the example screenshot below, we set the pipeline to a single value - but we could also monitor all build pipelines. Since multiple service hooks can be set up, there is no need to poke our Azure Function needlessly.

Azure DevOps New Service Hook Trigger Setup

In the action, we have to set the URL of our Azure Function. This URL should include the code query parameter to authenticate Azure DevOps against our Azure Function. We could add further security via, e.g., custom headers that we demand. In our case, we feel sufficiently guarded with just the code parameter.

The remaining details can be left as-is. We want to have the full response to get a maximum of information.

Azure DevOps New Service Hook Action Setup

Finally, after we have set up the service hook, we should test it. This will send a dummy request to our Azure Function.

The output of this test looks as follows. Importantly, the content is delivered as JSON containing all relevant post-build information. We will use this content to make our solution more flexible.

http
Method: POST
URI: https://devopslittlehelper.azurewebsites.net/api/UpdateRepositories?code=****
HTTP Version: 1.1
Headers:
{
    Content-Type: application/json; charset=utf-8
}
Content:
{
    "subscriptionId": "00000000-0000-0000-0000-000000000000",
    "notificationId": 1,
    "id": "4a5d99d6-1c75-4e53-91b9-ee80057d4ce3",
    "eventType": "build.complete",
    "publisherId": "tfs",
    "message": {
        "text": "Build ConsumerAddressModule_20150407.2 succeeded",
        "html": "Build ... succeeded",
        "markdown": "Build [ConsumerAddressModule_20150407.2](https://fabrikam-fiber-inc.visualstudio.com/web/build.aspx?pcguid=5023c10b-bef3-41c3-bf53-686c4e34ee9e&builduri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f3) succeeded"
    },
    "detailedMessage": {
        "text": "Build ConsumerAddressModule_20150407.2 succeeded",
        "html": "Build ... succeeded",
        "markdown": "Build [ConsumerAddressModule_20150407.2](https://fabrikam-fiber-inc.visualstudio.com/web/build.aspx?pcguid=5023c10b-bef3-41c3-bf53-686c4e34ee9e&builduri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f3) succeeded"
    },
    "resource": {
        "uri": "vstfs:///Build/Build/2",
        "id": 2,
        "buildNumber": "ConsumerAddressModule_20150407.1",
        "url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/71777fbc-1cf2-4bd1-9540-128c1c71f766/_apis/build/Builds/2",
        "startTime": "2015-04-07T18:04:06.83Z",
        "finishTime": "2015-04-07T18:06:10.69Z",
        "reason": "manual",
        "status": "succeeded",
        "dropLocation": "#/3/drop",
        "drop": {
            "location": "#/3/drop",
            "type": "container",
            "url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/_apis/resources/Containers/3/drop",
            "downloadUrl": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/_apis/resources/Containers/3/drop?api-version=1.0&$format=zip&downloadFileName=ConsumerAddressModule_20150407.1_drop"
        },
        "log": {
            "type": "container",
            "url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/_apis/resources/Containers/3/logs",
            "downloadUrl": "https://fabrikam-fiber-inc.visualstudio.com/_apis/resources/Containers/3/logs?api-version=1.0&$format=zip&downloadFileName=ConsumerAddressModule_20150407.1_logs"
        },
        "sourceGetVersion": "LG:refs/heads/master:600c52d2d5b655caa111abfd863e5a9bd304bb0e",
        "lastChangedBy": {
            "displayName": "Normal Paulk",
            "url": "https://fabrikam-fiber-inc.visualstudio.com/_apis/Identities/d6245f20-2af8-44f4-9451-8107cb2767db",
            "id": "d6245f20-2af8-44f4-9451-8107cb2767db",
            "uniqueName": "fabrikamfiber16@hotmail.com",
            "imageUrl": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/_api/_common/identityImage?id=d6245f20-2af8-44f4-9451-8107cb2767db"
        },
        "retainIndefinitely": false,
        "hasDiagnostics": true,
        "definition": {
            "batchSize": 1,
            "triggerType": "none",
            "definitionType": "xaml",
            "id": 2,
            "name": "ConsumerAddressModule",
            "url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/71777fbc-1cf2-4bd1-9540-128c1c71f766/_apis/build/Definitions/2"
        },
        "queue": {
            "queueType": "buildController",
            "id": 4,
            "name": "Hosted Build Controller",
            "url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/_apis/build/Queues/4"
        },
        "requests": [
            {
                "id": 1,
                "url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/71777fbc-1cf2-4bd1-9540-128c1c71f766/_apis/build/Requests/1",
                "requestedFor": {
                    "displayName": "Normal Paulk",
                    "url": "https://fabrikam-fiber-inc.visualstudio.com/_apis/Identities/d6245f20-2af8-44f4-9451-8107cb2767db",
                    "id": "d6245f20-2af8-44f4-9451-8107cb2767db",
                    "uniqueName": "fabrikamfiber16@hotmail.com",
                    "imageUrl": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/_api/_common/identityImage?id=d6245f20-2af8-44f4-9451-8107cb2767db"
                }
            }
        ]
    },
    "resourceVersion": "1.0",
    "resourceContainers": {
        "collection": {
            "id": "..."
        },
        "account": {
            "id": "..."
        },
        "project": {
            "id": "..."
        }
    },
    "createdDate": "2019-04-28T22:47:44.6491834Z"
}

Flexible DevOps Helper

Let's recap what's left from the battle plan mentioned earlier:

  • Test local (variable data)
  • Update (deploy follow-up)
  • Test online (variable data)
  • Full integration test from Azure DevOps

Indeed, we only need to make our solution a bit more flexible and avoid using hardcoded references and such.

The first change is that we should not go against a single repository only, but against all available repositories. Furthermore, we should use a dynamic package name and version. After all, we do not want to update only one constant package, but any kind of package in the future. Also, we want to support the latest package version and not just some version we selected up front.

The same applies to the base branch, which we now take from the repository's default branch. The final code looks as follows:

C#
public async Task<List<Int32>> UpdateReferencesAndCreatePullRequests(String packageName, String packageVersion)
{
    var results = new List<Int32>();
    var allRepositories = await _gitClient.GetRepositoriesAsync(_projectId).ConfigureAwait(false);
    Log($"Received repository list: {String.Join(", ", allRepositories.Select(m => m.Name))}.");

    foreach (var repo in allRepositories)
    {
        var pr = await UpdateReferencesAndCreatePullRequest(repo.Name, repo.DefaultBranch,
            packageName, packageVersion).ConfigureAwait(false);

        if (pr.HasValue)
        {
            results.Add(pr.Value);
        }
    }

    return results;
}

Our update method has changed quite a bit. We now conditionally create the pull request - only if we find a proper file that also has the right reference and needs to be updated.

C#
public async Task<Int32?> UpdateReferencesAndCreatePullRequest(String repoName,
    String baseBranchName, String packageName, String packageVersion)
{
    var repo = await _gitClient.GetRepositoryAsync(_projectId, repoName).ConfigureAwait(false);
    Log($"Received info about repo {repoName}.");

    var versionRef = GetVersionRef(baseBranchName);
    var baseCommitInfo = GetBaseCommits(versionRef);
    var commits = await _gitClient.GetCommitsAsync(repo.Id, baseCommitInfo, top: 1).ConfigureAwait(false);
    var lastCommit = commits.FirstOrDefault()?.CommitId;
    Log($"Received info about last commits (expected 1, got {commits.Count}).");

    var items = await _gitClient.GetItemsAsync(_projectId, repo.Id,
        versionDescriptor: versionRef,
        recursionLevel: VersionControlRecursionType.Full).ConfigureAwait(false);
    var changes = await GetChanges(repo.Id, packageName, packageVersion,
        versionRef, items).ConfigureAwait(false);
    return await CreatePullRequestIfChanged(repo.Id, changes,
        lastCommit, baseBranchName).ConfigureAwait(false);
}

The heart of the whole method is the GetChanges method.

C#
private async Task<List<GitChange>> GetChanges(Guid repoId, String packageName,
    String packageVersion, GitVersionDescriptor versionRef, IEnumerable<GitItem> items)
{
    var changes = new List<GitChange>();

    foreach (var item in items)
    {
        if (item.Path.EndsWith(".csproj"))
        {
            var itemRef = await _gitClient.GetItemContentAsync(repoId, item.Path,
                includeContent: true, versionDescriptor: versionRef).ConfigureAwait(false);
            var oldContent = await itemRef.GetContent().ConfigureAwait(false);
            var newContent = ReplaceInContent(oldContent, packageName, packageVersion);

            if (!String.Equals(oldContent, newContent))
            {
                changes.Add(CreateChange(item.Path, newContent));
                Log($"Item content of {item.Path} received and changed.");
            }
        }
    }

    return changes;
}

Here, we select all csproj files to be inspected more closely and potentially changed.
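
The ReplaceInContent method is not shown here; a minimal sketch, assuming the standard PackageReference syntax in the csproj files, could be based on a regular expression:

C#
// Requires using System.Text.RegularExpressions;
private static String ReplaceInContent(String oldContent, String packageName, String packageVersion)
{
    // Match the package reference regardless of its currently pinned version
    var pattern = $"<PackageReference\\s+Include=\"{Regex.Escape(packageName)}\"\\s+Version=\"[^\"]+\"\\s*/>";
    var replacement = $"<PackageReference Include=\"{packageName}\" Version=\"{packageVersion}\" />";
    return Regex.Replace(oldContent, pattern, replacement);
}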

The rest is pretty much as before. We create the pull request (this time only if we have changes, and potentially with multiple changed files) and return its id. The new method collects all PR ids from the different repositories and returns them for completeness.
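
CreatePullRequestIfChanged is also only referenced above. A sketch - assuming CreatePush and CreatePullRequest were generalized to accept the change list and the base branch - could look like this:

C#
private async Task<Int32?> CreatePullRequestIfChanged(Guid repoId, List<GitChange> changes,
    String lastCommit, String baseBranchName)
{
    if (changes.Count == 0)
    {
        // Nothing to update in this repository - skip push and pull request
        return null;
    }

    var push = CreatePush(lastCommit, changes);
    await _gitClient.CreatePushAsync(push, repoId).ConfigureAwait(false);

    var pr = CreatePullRequest(baseBranchName);
    var result = await _gitClient.CreatePullRequestAsync(pr, repoId).ConfigureAwait(false);
    return result.PullRequestId;
}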

Production Update

The source code of the full sample is available on GitHub.

Using the Code

You can just fork the code and make your own adjustments. The solution works under the following assumptions:

  • The trigger in Azure DevOps is a "build succeeded" trigger for a finished build job
  • The referenced URL contains a name parameter yielding the package reference to update (currently only a single package can be updated per installed webhook)
  • Only NuGet packages (and C# .NET SDK project files .csproj) are supported
  • When the build succeeded, the latest package is already available via the (Azure DevOps) NuGet feed
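
To see how these assumptions play together, the entry point could look roughly like the following sketch (not the exact repository code; GetLatestVersion stands in for the NuGet feed lookup, and the flexible Helper is assumed to take the logger as well):

C#
[FunctionName("UpdateRepositories")]
public static async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function,
       "post", Route = null)] HttpRequest req, TraceWriter log)
{
    var pat = Environment.GetEnvironmentVariable("DEVOPS_PAT") ??
          throw new ArgumentException("Missing environment variable DEVOPS_PAT");

    // The package to update is given by the name query parameter of the subscribed URL
    var packageName = req.Query["name"].ToString();

    // Hypothetical helper: resolve the latest version from the NuGet feed
    var packageVersion = await GetLatestVersion(packageName).ConfigureAwait(false);

    var helper = new Helper(pat, log);
    var prIds = await helper.UpdateReferencesAndCreatePullRequests(packageName, packageVersion)
                            .ConfigureAwait(false);

    return new OkObjectResult(new
    {
        ids = prIds,
        message = $"Created {prIds.Count} pull request(s).",
    });
}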

All adjustments can be done via the Constants.cs file. There are two environment variables:

Variable    | Required?        | Description
DEVOPS_ORGA | No, has fallback | The organization / name of the Azure DevOps account
DEVOPS_PAT  | Yes              | The Personal Access Token with access to the NuGet feed and repositories

Conclusion

Using Azure Functions gives us a great way of sticking two systems together. In this case, we extend the basic functionality of Azure DevOps with a way to automatically update references to commonly used libraries in their consuming service repositories. This alone is a great help and keeps us focused on developing solutions instead of updating references all the time.

There are boundaries to what serverless can do. It's certainly not the answer to everything, but a great addition when faced with the right problem. Setting up a web hook to extend the functionality of an existing system is certainly a good fit.

Even though Azure Functions are advertised as serverless, we may notice a few rough edges in the product. Multiple interactions show us the truth directly: Azure Functions are just another abstraction layer on top of app services (which sit on top of virtual machines). The initial setup and operational side are super familiar - only the exact runtime has been decided for us.

Points of Interest

There are multiple ways to solve this problem. With NPM, we could also avoid locking the versions and set the common libraries to "latest". Then a simple multi-trigger in Azure DevOps would be sufficient to rebuild the service using the latest version of the library, without requiring any pull request or code change. Nevertheless, the explicit way shown here has its own advantages and may be used to solve other problems as well.
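
For illustration, a hypothetical package.json pinned to "latest" would pick up the newest library on every restore:

JSON
{
  "dependencies": {
    "my-common-lib": "latest"
  }
}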

Tell me where you see this technique shine (or why you think it's total overkill and does not make sense at all...)!

History

  • v1.0.0 | Initial release | 30.04.2019
  • v1.1.0 | Added diagram and table of contents | 30.04.2019
  • v1.2.0 | Added Azure Function environment setup and downloads | 30.04.2019
  • v1.3.0 | Added "Using the Code" section | 30.04.2019

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Chief Technology Officer
Germany
Florian lives in Munich, Germany. He started his programming career with Perl. After programming in C/C++ for some years, he discovered his favorite programming language, C#. He worked at Siemens as a programmer until he decided to study physics.

During his studies, he worked as an IT consultant for various companies. After graduating with a PhD in theoretical particle physics, he started working as a senior technical consultant in the field of home automation and IoT.

Florian has been giving lectures in C#, HTML5 with CSS3 and JavaScript, software design, and other topics. He is regularly giving talks at user groups, conferences, and companies. He is actively contributing to open-source projects. Florian is the maintainer of AngleSharp, a completely managed browser engine.

Comments and Discussions

 
Question: how extremely useful
Sacha Barber, 2-May-19

Answer: Re: how extremely useful
Florian Rappl, 4-May-19

Yep, it's the same product - just rebranded.

Personally, I like it a lot. I think it's very developer focused and provides a complete (or unified) experience. In each area (e.g., VCS, CI/CD, Boards, ...), it's not the best in class, but as a complete package, it's hard to beat.

(One of my personal highlights is the API - of course, nearly all solutions offer an API, but the breadth and depth here is really good, imho.)

General: Re: how extremely useful
Sacha Barber, 7-May-19

Question: Bravo! Bravo!
Javier Carrion, 30-Apr-19

Answer: Re: Bravo! Bravo!
Florian Rappl, 30-Apr-19

General: Re: Bravo! Bravo!
Javier Carrion, 30-Apr-19
