Continuous Delivery with TFS / VSTS – Server Configuration as Code with PowerShell DSC

4 Nov 2016 · 17 min read

I suspect I’m on reasonably safe ground when I venture to suggest that most software engineers developing applications for Windows servers (and the organisations they work for) have yet to make the leap from just writing the application code to writing both the application code and the code that configures the servers the application will run on. Why do I suggest this? It’s partly from experience, in that I’ve never come across anyone developing for the Windows platform who is doing this (or at least they haven’t mentioned it to me), and partly because until fairly recently Microsoft hadn’t provided any tooling for implementing configuration as code (as this engineering practice is sometimes called). There are products from other vendors of course, but they tend to have their roots in the Linux world and use languages such as Ruby (or DSLs based on Ruby), which is likely to seriously muddy the waters for organisations trying to get everyone up to speed with PowerShell.

This has all changed relatively recently with the introduction of PowerShell DSC, Microsoft’s solution for implementing configuration as code on Windows (and other platforms, as it happens). With PowerShell DSC (and related technologies) the configuration of servers is expressed as programming code that can be versioned in source control. When a change is required to a server the code is updated and the new configuration is then applied to the server. This process is usually idempotent, i.e. the configuration can be applied repeatedly and will always give the same result, and it won’t generate errors if the configuration is already in the desired state. Through version control we can audit how a configuration changes over time, and because it is code it can be applied as required to ensure that server roles in different environments, or multiple instances of the same server role in the same environment, have a consistent configuration.

So ostensibly Windows server developers now have no excuse not to start implementing configuration as code. But if we’ve managed so far without this engineering practice why all the fuss now? What benefit is it going to bring to the table? The key benefit is that it’s a cure for that age-old problem of servers that might start life from a build script, but over the months (and possibly years) different technicians make necessary tweaks here and there until one day the server becomes a unique work of art that nobody could ever hope to reproduce. Server backups become critical and everyone dreads the day that the server will need to be upgraded or replaced.

If your application is very simple you might just get away with this state of affairs – not that it makes it right or a best practice. However if your application is constantly evolving with concomitant configuration changes and / or you are going down the microservices route then you absolutely can’t afford to hand-crank the configuration of your servers. Not only is the manual approach very error prone it’s also hugely time-consuming, and has no place in a world of continuous delivery where shortening lead times and increasing reliability and repeatability is the name of the game.

So if there’s no longer an excuse not to implement configuration as code on the Windows platform, why isn’t there a mad rush to adopt it? In my view, for most mid-size IT departments working with existing budgets and staffing levels and an existing landscape of hand-cranked servers, it’s going to be a real slog to switch the configuration of a live estate to being managed by code. Once you start thinking about the complexities of analysing existing servers (some of which might have been around for years and which might have all sorts of bespoke applications running on them), combined with devising a system for managing scores or even hundreds of servers, it’s clear that a task of this nature is almost certainly going to require a dedicated team. And despite the potential benefits that configuration as code promises, most mid-size IT departments are likely to struggle to stand up such a team.

So if it’s going to be hard, how does an organisation get started with configuration as code and PowerShell DSC? Although I don’t have anywhere near all of the answers, it is already clear to me that if your organisation is in the business of writing applications for Windows servers then you need to approach the problem from both ends of the server spectrum. At the far end of the spectrum is the live estate, where server ‘drift’ needs to be controlled using PowerShell DSC’s ‘pull’ mode. This is where servers periodically reach out to a central repository to pull their ‘true’ configuration and make any adjustments accordingly. At the near end of the spectrum are the servers that form the continuous delivery pipeline, which need to have configuration changes applied to them just before a new version of the application gets deployed to them. Happily PowerShell DSC has a ‘push’ mode which will work nicely for this purpose. There is also the live deployment situation. Here, live servers will need to have configuration changes pushed to them before application deployment takes place and will then need to switch over to pull mode to keep them true.

The way I see things at the moment is that PowerShell DSC pull mode is going to be hard to implement at scale because of the lack of tooling to manage it. Whilst you could probably manage a handful of servers in pull mode using PowerShell DSC script files, any more than a handful is going to cause serious pain without some kind of management framework such as the one that is available for Chef. The good news though is that getting started with PowerShell DSC push mode for configuring servers that comprise the deployment pipeline as part of application development activities is a much more realistic prospect.

Big Picture Time

I’m not going to be able to cover everything about making PowerShell DSC push mode work in one blog post so it’s probably worth a few words about the bigger picture. One key concept to establish early on is that the code that will configure the server(s) that an application will reside on has to live and change alongside the application code. At the very least the server configuration code needs to be in the same version control branch as the application code and frequently it will make sense for it to be part of the same Visual Studio solution. I won’t be presenting that approach in this blog post and instead will concentrate on the mechanics of getting PowerShell DSC push mode working and writing the configuration code that enables the Contoso University sample application (which requires IIS and SQL Server) to run. In a future post I’ll have the code in the same Visual Studio solution as the Contoso University sample application and will explain how to build an artefact that is then deployed by the release management tooling in TFS / VSTS prior to deploying the application.

For anyone who has come across this post by chance it is part of my ongoing series about Continuous Delivery with TFS / VSTS, and you may find it helpful to refer to some of the previous posts to understand the full context of what I’m trying to achieve. I should also mention that this post isn’t intended to be a PowerShell DSC tutorial and if you are new to the technology I have a Getting Started post here with a link collection of useful learning resources. With all that out of the way let’s get going!

Getting Started

Taking the Infrastructure solution from this blog post as a starting point (available as a code release at my Infrastructure repo on GitHub, final version of this post’s code here) add a new PowerShell Script Project called ConfigurationScripts. To this new project add a new PowerShell Script file called ContosoUniversity.ps1 and add a hash table and empty Configuration block called WebAndDatabase as follows:

$configurationData = 
@{
    AllNodes = 
    @(
        @{
            NodeName = 'PRM-DAT-AIO'
            Roles = @('Web', 'Database')
        }		
    )
}

Configuration WebAndDatabase
{
}

We’re going to need an environment to deploy in to so using the techniques described in previous posts (here and here) create a PRM-DAT-AIO server that is joined to the domain. This server will need to have Windows Management Framework 5.0 installed – a manual process as far as this particular post is concerned but something that is likely to need automating in the future.
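Before going further it's worth confirming that the manual WMF 5.0 installation actually took. A quick sanity check from the workstation (a sketch, assuming PowerShell remoting is enabled on PRM-DAT-AIO) might look like this:

```powershell
# Query the target node's PowerShell version remotely;
# WMF 5.0 should report a Major version of 5.
Invoke-Command -ComputerName PRM-DAT-AIO -ScriptBlock {
    $PSVersionTable.PSVersion
}
```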

To test a basic working configuration we’ll create a folder on PRM-DAT-AIO to act as the IIS physical path to the ContosoUniversity web files. Add the following lines of code to the beginning of the configuration block:

Import-DscResource -ModuleName PSDesiredStateConfiguration

Node $AllNodes.Where({$_.Roles -contains 'Web'}).NodeName
{
    File WebSiteFolder
    {
        Ensure = "Present"
        Type = "Directory"
        DestinationPath = "C:\inetpub\ContosoUniversity"
    }
}

To complete the skeleton code add the following lines of code to the end of ContosoUniversity.ps1:

WebAndDatabase -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof -Verbose

Start-DSCConfiguration -Path C:\Dsc\Mof -Wait -Verbose -Force

The code contained in ContosoUniversity.ps1 should now be as follows:

$configurationData = 
@{
    AllNodes = 
    @(
        @{
            NodeName = 'PRM-DAT-AIO'
            Roles = @('Web', 'Database')
        }		
    )
}

Configuration WebAndDatabase
{

    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.Where({$_.Roles -contains 'Web'}).NodeName
    {
        File WebSiteFolder
        {
            Ensure = "Present"
            Type = "Directory"
            DestinationPath = "C:\inetpub\ContosoUniversity"
        }
    }
}
WebAndDatabase -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof -Verbose

Start-DSCConfiguration -Path C:\Dsc\Mof -Wait -Verbose -Force

Although you can create this code from any developer workstation, you need to run it from a workstation that is joined to the same domain as PRM-DAT-AIO. In order to keep authentication simple I’m also assuming that you are logged on to your developer workstation with domain credentials that allow you to perform DSC operations on PRM-DAT-AIO. Running this code will compile a PRM-DAT-AIO.mof file to C:\Dsc\Mof (the folder is created if it doesn’t already exist), which Start-DscConfiguration then pushes to PRM-DAT-AIO to create the folder. Magic!
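If you want more than the verbose output as evidence that the push worked, DSC itself can report on the state of the node. A quick check along these lines (assuming the same domain credentials as above):

```powershell
# Returns True if PRM-DAT-AIO matches its current configuration
Test-DscConfiguration -ComputerName PRM-DAT-AIO -Verbose

# Lists the resources (e.g. the File resource) the LCM last applied
Get-DscConfiguration -CimSession PRM-DAT-AIO
```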

Installing Resource Modules Locally

To do anything much more sophisticated than create a folder we’ll need to import resources to our local workstation from the PowerShell Gallery. We’ll be working with xWebAdministration and xSQLServer and they can be installed locally as follows:

Install-Module xWebAdministration -Force -Verbose
Install-Module xSQLServer -Force -Verbose

These same commands will also upgrade to the latest version of a resource if a previous version is already installed. Referencing these resources in a configuration script seems to have changed with the release of DSC 5.0, and version information is now a requirement. Consequently, these resources are referenced in the configuration as follows:

Import-DscResource -ModuleName @{ModuleName="xWebAdministration";ModuleVersion="1.10.0.0"}
Import-DscResource -ModuleName  @{ModuleName="xSQLServer";ModuleVersion="1.5.0.0"}

Obviously, change the above code to reference the versions of the modules that you actually install. The resources are continually being updated with new versions, so you will need a strategy for upgrading on a periodic basis.
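To find out which versions you actually have locally (and therefore what to put in the ModuleVersion values), the following one-liner does the job:

```powershell
# List the installed versions of the two custom resource modules
Get-Module -ListAvailable xWebAdministration, xSQLServer |
    Select-Object Name, Version
```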

Making Resource Modules Available Remotely

Whilst the additions in the previous section allow us to create advanced configurations on our developer workstation, these configurations are not going to run against target nodes, since as things stand the target nodes don’t know anything about custom resources (as opposed to resources such as PSDesiredStateConfiguration, which ship with the Windows Management Framework). We can fix this by telling the Local Configuration Manager (LCM) of the target nodes where to get the custom resources from. The procedure (which I’ve adapted from Nana Lakshmanan’s blog post) is as follows:

  • Choose a server in the domain to host a fileshare. I’m using my domain controller (PRM-CORE-DC) as it’s always guaranteed to be available under normal conditions. Create a folder called C:\Dsc\DscResources (Dsc purposefully repeated) and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscResources.
  • Custom resources need to be zipped in the format required by the DSC pull protocol. The PowerShell to do this for version 1.10 of xWebAdministration and 1.5 of xSQLServer (using a local C:\Dsc\Resources folder) is as follows:
    Find-Module xWebAdministration | Save-Module -Path C:\Dsc\Resources -Verbose -Force
    Compress-Archive -Path C:\Dsc\Resources\xWebAdministration\1.10.0.0\* -DestinationPath \\prm-core-dc\DscResources\xWebAdministration_1.10.0.0.zip -Verbose -Force
    New-DscChecksum -Path \\prm-core-dc\DscResources\xWebAdministration_1.10.0.0.zip -OutPath \\prm-core-dc\DscResources -Verbose -Force
    
    Find-Module xSQLServer | Save-Module -Path C:\Dsc\Resources -Verbose -Force
    Compress-Archive -Path C:\Dsc\Resources\xSQLServer\1.5.0.0\* -DestinationPath \\prm-core-dc\DscResources\xSQLServer_1.5.0.0.zip -Verbose -Force
    New-DscChecksum -Path \\prm-core-dc\DscResources\xSQLServer_1.5.0.0.zip -OutPath \\prm-core-dc\DscResources -Verbose -Force

    Of course, depending on how frequently you have to do this to cope with updates and the number of resources you end up working with, you will probably want to wrap all this up into some sort of reusable script.
  • With the packages now in the right format in the fileshare we need to tell the LCM of target nodes where to look. We do this by creating a new configuration decorated with the [DscLocalConfigurationManager()] attribute:
    [DscLocalConfigurationManager()]
    Configuration LocalConfigurationManager
    {
        Node $AllNodes.NodeName
        {
            Settings
            {
                RefreshMode = 'Push'
                AllowModuleOverwrite = $True
                # A configuration Id needs to be specified, known bug
                ConfigurationID = '3a15d863-bd25-432c-9e45-9199afecde91'
                ConfigurationMode = 'ApplyAndAutoCorrect'
                RebootNodeIfNeeded = $True    
            }
    
            ResourceRepositoryShare FileShare
            {
                SourcePath = '\\prm-core-dc\DscResources\'
            }
        }
    }
    LocalConfigurationManager -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof -Verbose

    The Settings block is used to set various properties of the LCM which are required in order for configurations we’ll be writing to run. The ResourceRepositoryShare block obviously specifies the location of the zipped resource packages.
  • The final requirement is to add the line of code (Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose) to apply the LCM settings.
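The save-zip-checksum steps above lend themselves to a small reusable function. A sketch of what that might look like (Publish-DscResource is a name of my own invention, and the default paths assume the fileshare layout described above):

```powershell
# Hypothetical helper: save a resource module locally, zip the
# versioned folder in the format the DSC pull protocol expects,
# and write the zip plus its checksum to the fileshare.
function Publish-DscResource
{
    param
    (
        [Parameter(Mandatory)][string]$ModuleName,
        [Parameter(Mandatory)][string]$ModuleVersion,
        [string]$StagingPath = 'C:\Dsc\Resources',
        [string]$SharePath = '\\prm-core-dc\DscResources'
    )

    Find-Module $ModuleName | Save-Module -Path $StagingPath -Force
    $zip = Join-Path $SharePath "$($ModuleName)_$($ModuleVersion).zip"
    Compress-Archive -Path "$StagingPath\$ModuleName\$ModuleVersion\*" `
        -DestinationPath $zip -Force
    New-DscChecksum -Path $zip -OutPath $SharePath -Force
}

# Usage:
Publish-DscResource -ModuleName xWebAdministration -ModuleVersion 1.10.0.0
Publish-DscResource -ModuleName xSQLServer -ModuleVersion 1.5.0.0
```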

The revised version of ContosoUniversity.ps1 should now be as follows:

$configurationData = 
@{
    AllNodes = 
    @(
        @{
            NodeName = 'PRM-DAT-AIO'
            Roles = @('Web', 'Database')
        }		
    )
}

Configuration WebAndDatabase
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName @{ModuleName="xWebAdministration";ModuleVersion="1.10.0.0"}
    Import-DscResource -ModuleName @{ModuleName="xSQLServer";ModuleVersion="1.5.0.0"}

    Node $AllNodes.Where({$_.Roles -contains 'Web'}).NodeName
    {
        File WebSiteFolder
        {
            Ensure = "Present"
            Type = "Directory"
            DestinationPath = "C:\inetpub\ContosoUniversity"
        }
    }
}
WebAndDatabase -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof -Verbose

[DscLocalConfigurationManager()]
Configuration LocalConfigurationManager
{
    Node $AllNodes.NodeName
    {
        Settings
        {
            RefreshMode = 'Push'
            AllowModuleOverwrite = $True
            # A configuration Id needs to be specified, known bug
            ConfigurationID = '3a15d863-bd25-432c-9e45-9199afecde91'
            ConfigurationMode = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded = $True    
        }

        ResourceRepositoryShare FileShare
        {
            SourcePath = '\\prm-core-dc\DscResources\'
        }
    }
}
LocalConfigurationManager -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof -Verbose

Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose
Start-DSCConfiguration -Path C:\Dsc\Mof -Wait -Verbose -Force
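If you want to confirm that the LCM settings actually landed on the target node, you can query them back (a quick check, assuming the same credentials as before):

```powershell
# Read back the LCM settings from the target node
Get-DscLocalConfigurationManager -CimSession PRM-DAT-AIO |
    Select-Object RefreshMode, ConfigurationMode, RebootNodeIfNeeded
```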

At this stage we have our complete working framework in place and we can begin writing the configuration blocks that collectively will leave us with a server capable of running our Contoso University application.

Writing Configurations for the Web Role

Configuring for the web role requires consideration of the following factors:

  • The server features that are required to run your application. For Contoso University that’s IIS, .NET Framework 4.5 Core and ASP.NET 4.5.
  • The mandatory IIS configurations for your application. For Contoso University that’s a web site with a custom physical path.
  • The optional IIS configurations for your application. I like things done in a certain way so I want to see an application pool called ContosoUniversity and the Contoso University web site configured to use it.
  • Any tidying-up that you want to do to free resources and start thinking like you are configuring NanoServer. For me this means removing the default web site and default application pools.

Although you’ll know if your configurations have generated errors how will you know if they’ve generated the desired result? The following ‘debugging’ options can help:

  • I know that the home page of Contoso University will load without a connection to a database, so I copied a build of the website to C:\inetpub\ContosoUniversity on PRM-DAT-AIO so I could test the site with a browser. You can download a zip of the build from here although be aware that AV software might mistakenly regard it as malware.
  • The IIS management tools can be installed on target nodes whilst you are in configuration mode so you can see graphically what’s happening. The following configuration does the trick:
    WindowsFeature IISTools
    {
        Ensure = "Present"
        Name = "Web-Mgmt-Tools"
    }
  • If you are testing with a local version of Internet Explorer make sure you turn off Compatibility View or your site may render with odd results. From the IE toolbar choose Tools > Compatibility View Settings and uncheck Display intranet sites in Compatibility View.

Whilst you are in configuration mode the following resources will be of assistance:

  • The xWebAdministration documentation on GitHub: https://github.com/PowerShell/xWebAdministration.
  • The example files that ship with xWebAdministration: C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\n.n.n.n\Examples.
  • A Google search for xWebAdministration.

The configuration settings required to meet my requirements stated above are as follows:

Node $AllNodes.Where({$_.Roles -contains 'Web'}).NodeName
{
    # Configure for web server role
    WindowsFeature DotNet45Core
    {
        Ensure = 'Present'
        Name = 'NET-Framework-45-Core'
    }
    WindowsFeature IIS
    {
        Ensure = 'Present'
        Name = 'Web-Server'
    }
    WindowsFeature AspNet45
    {
        Ensure = "Present"
        Name = "Web-Asp-Net45"
    }

    # Configure ContosoUniversity
    File ContosoUniversity
    {
        Ensure = "Present"
        Type = "Directory"
        DestinationPath = "C:\inetpub\ContosoUniversity"
    }
    xWebAppPool ContosoUniversity
    {
        Ensure = "Present"
        Name = "ContosoUniversity"
        State = "Started"
        DependsOn = "[WindowsFeature]IIS"
    }
    xWebsite ContosoUniversity
    {
        Ensure = "Present"
        Name = "ContosoUniversity"
        State = "Started"
        PhysicalPath = "C:\inetpub\ContosoUniversity"
        BindingInfo = MSFT_xWebBindingInformation
        {
            Protocol = 'http'
            Port = '80'
            HostName = 'prm-dat-aio'
            IPAddress = '*'
        }
        ApplicationPool = "ContosoUniversity"
        DependsOn = "[xWebAppPool]ContosoUniversity"
    }

    # Configure for development mode only
    WindowsFeature IISTools
    {
        Ensure = "Present"
        Name = "Web-Mgmt-Tools"
    }

    # Clean up the unneeded website and application pools
    xWebsite Default
    {
        Ensure = "Absent"
        Name = "Default Web Site"
    }
    xWebAppPool NETv45
    {
        Ensure = "Absent"
        Name = ".NET v4.5"
    }
    xWebAppPool NETv45Classic
    {
        Ensure = "Absent"
        Name = ".NET v4.5 Classic"
    }
    xWebAppPool Default
    {
        Ensure = "Absent"
        Name = "DefaultAppPool"
    }
    File wwwroot
    {
        Ensure = "Absent"
        Type = "Directory"
        DestinationPath = "C:\inetpub\wwwroot"
        Force = $True
    }
}
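As well as browsing the site, the web role configuration can be spot-checked from the workstation. A sketch (assuming PowerShell remoting is available on the target node):

```powershell
# Verify the website and application pools on the target node:
# expect ContosoUniversity Started, and the defaults gone.
Invoke-Command -ComputerName PRM-DAT-AIO -ScriptBlock {
    Import-Module WebAdministration
    Get-Website | Select-Object Name, State, PhysicalPath
    Get-ChildItem IIS:\AppPools | Select-Object Name, State
}
```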

There is one more piece of the jigsaw to finish the configuration and that’s amending the application pool to use a domain account that has permissions to talk to SQL Server. That’s a more advanced topic so I’m dealing with it later.

Writing Configurations for the Database Role

Configuring for the SQL Server database role is slightly different from the web role since we need to install SQL Server which is a separate application. The installation files need to be made available as follows:

  • Choose a server in the domain to host a fileshare. As above, I’m using my domain controller. Create a folder called C:\Dsc\DscInstallationMedia and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscInstallationMedia.
  • Download a suitable SQL Server ISO image to the server hosting the fileshare – I used en_sql_server_2014_enterprise_edition_with_service_pack_1_x64_dvd_6669618.iso from MSDN Subscriber Downloads.
  • Mount the ISO and copy the contents of its drive to a folder called SqlServer2014 created under C:\Dsc\DscInstallationMedia.

In contrast to configuring for the web role there are fewer configurations required for the database role. There is a requirement to supply a credential though and for this I’m using the Key Vault technique described in this post. This gives rise to new code within and preceding the configuration hash table as follows:

# Authentication details are abstracted away in a PS module
Set-AzureRmAuthenticationForMsdnEnterprise

$vaultname = 'prmkeyvault'
$domainAdminPassword = Get-AzureKeyVaultSecret -VaultName $vaultname -Name DomainAdminPassword
$SecurePassword = ConvertTo-SecureString -String $domainAdminPassword.SecretValueText -AsPlainText -Force
$domainAdministratorCredential = New-Object System.Management.Automation.PSCredential ("PRM\graham", $SecurePassword)

$configurationData = 
@{
    AllNodes = 
    @(
        @{
            NodeName = 'PRM-DAT-AIO'
            Roles = @('Web', 'Database')
            AppPoolUserName = 'PRM\CU-DAT'
            PSDscAllowDomainUser = $true
            PSDscAllowPlainTextPassword = $true
            DomainAdministratorCredential = $domainAdministratorCredential
        }		
    )
}

For a server such as the one we are configuring where the database is on the same machine as the web server and only the database engine is required there are just two configuration blocks needed to install SQL Server. For more complicated scenarios the following resources will be of assistance:

  • The xSQLServer documentation on GitHub: https://github.com/PowerShell/xSQLServer.
  • The example files that ship with xSQLServer: C:\Program Files\WindowsPowerShell\Modules\xSQLServer\n.n.n.n\Examples.
  • A Google search for xSQLServer.

The configuration settings required for the single server scenario are as follows:

Node $AllNodes.Where({$_.Roles -contains 'Database'}).NodeName
{
    WindowsFeature "NETFrameworkCore"
    {
        Ensure = "Present"
        Name = "NET-Framework-Core"
    }
    xSqlServerSetup "SQLServerEngine"
    {
        DependsOn = "[WindowsFeature]NETFrameworkCore"
        SourcePath = "\\prm-core-dc\DscInstallationMedia"
        SourceFolder = "SqlServer2014"
        SetupCredential = $Node.DomainAdministratorCredential
        InstanceName = "MSSQLSERVER"
        Features = "SQLENGINE"
    }

    # Configure for development mode only
    xSqlServerSetup "SQLServerManagementTools"
    {
        DependsOn = "[WindowsFeature]NETFrameworkCore"
        SourcePath = "\\prm-core-dc\DscInstallationMedia"
        SourceFolder = "SqlServer2014"
        SetupCredential = $Node.DomainAdministratorCredential
        InstanceName = "NULL"
        Features = "SSMS,ADV_SSMS"
    }
}

In order to assist with ‘debugging’ activities I’ve included the installation of the SQL Server management tools but this can be omitted when the configuration has been tested and deemed fit for purpose. Later in this post we’ll manually install the remaining parts of the Contoso University application to prove that the installation worked but for the time being you can run SQL Server Management Studio to see the database engine running in all its glory!
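Short of opening SQL Server Management Studio, a quick remote check that the database engine installed and is running might look like this (a sketch, assuming default instance and remoting enabled):

```powershell
# The default instance's service is MSSQLSERVER; expect Status Running
Invoke-Command -ComputerName PRM-DAT-AIO -ScriptBlock {
    Get-Service -Name MSSQLSERVER | Select-Object Name, Status
}
```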

Amending the Application Pool Identity

The Contoso University website is granted access to the database via a domain account that firstly gets configured as the Identity for the website’s application pool, and then gets configured as a SQL Server login associated with a user which has the appropriate permissions on the database. The SQL Server configuration is taken care of by a permissions script that we’ll come to shortly; the immediate task is amending the Identity property of the ContosoUniversity application pool so that it references a domain account.

Initially this looked like it was going to be painful since xWebAdministration doesn’t currently have the ability to configure the inner workings of application pools. Whilst investigating the possibilities I had the good fortune to come across a fork of xWebAdministration on the PowerShell.org GitHub site where those guys have created a module which does what we want. I need to introduce a slight element of caution here since the fork doesn’t look like it’s under active development. On the other hand maybe there are no major issues that need fixing. And if there are and they aren’t going to get fixed at least the code is there to be forked. Because this fork isn’t in the PowerShell Gallery getting it to work locally is a manual process:

  • Download the code to C:\Dsc\Resources and unblock and extract it. Change the folder name from cWebAdministration-master to cWebAdministration and copy to C:\Program Files\WindowsPowerShell\Modules.
  • In the configuration block reference the module as Import-DscResource -ModuleName @{ModuleName="cWebAdministration";ModuleVersion="2.0.1"}.

The configuration required to make the resource available to target nodes has an extra manual step:

  • In the root of C:\DSC\Resources\cWebAdministration create a folder named 2.0.1 and copy the contents of C:\DSC\Resources\cWebAdministration to this folder.
  • The following code can now be used to package the resource and copy it to the fileshare:
    Compress-Archive -Path C:\Dsc\Resources\cWebAdministration\2.0.1\* -DestinationPath \\prm-core-dc\DscResources\cWebAdministration_2.0.1.zip -Verbose -Force
    New-DscChecksum -Path \\prm-core-dc\DscResources\cWebAdministration_2.0.1.zip -OutPath \\prm-core-dc\DscResources -Verbose -Force

I tend towards using a different domain account for the Identity property of the website application pool in each of the environments that make up the deployment pipeline. This protects the pipeline from a complete failure if something happens to one of those domain accounts – if it gets locked out, for example. To support this scenario the configuration block that configures the application pool identity needs to support dynamic configuration, and takes the following form:

cAppPool ContosoUniversity
{
    Name = "ContosoUniversity"
    IdentityType = "SpecificUser"
    UserName = $Node.AppPoolUserName
    Password = $Node.AppPoolCredential
    DependsOn = "[xWebAppPool]ContosoUniversity"
}

The dynamic configuration is supported by Key Vault code to retrieve the password of the domain account used to configure the application pool (not shown) and the following additions to the configuration hash table:

AppPoolUserName = 'PRM\CU-DAT'
AppPoolCredential = $appPoolDomainAccountCredential

The code does of course rely on the existence of the PRM\CU-DAT domain account (set so the password doesn’t expire). This is the last piece of configuration, and you can view the final result on GitHub here.

The Moment of Truth

After all that configuration, is it enough to make the Contoso University application work? To find out:

  • If you haven’t already, download, unblock and unzip the ContosoUniversityConfigAsCode package from here, although as mentioned previously be aware that AV software might mistakenly regard it as malware.
  • The contents of the Website folder should be copied (if not already) to C:\inetpub\ContosoUniversity on the target node.
  • Edit the SchoolContext connection string in Web.config if required – the download has the server set to localhost and the database to ContosoUniversity.
  • On the target node run SQL Server Management Studio and install the database as follows:
    • In Object Explorer right-click the Databases node and choose Deploy Data-tier Application.
    • Navigate through the wizard, and at Select Package choose ContosoUniversity.Database.dacpac from the database folder of the ContosoUniversityConfigAsCode download.
    • Move to the next page of the wizard (Update Configuration) and change the Name to ContosoUniversity.
    • Navigate past the Summary page and the DACPAC will be deployed.
  • Still in SSMS, apply the permissions script as follows:
    • Open Create login and database user.sql from the Database\Scripts folder in the ContosoUniversityConfigAsCode download.
    • If the pre-configured login/user (PRM\CU-DAT) is different from the one you are using update accordingly, then execute the script.

You can now navigate to http://prm-dat-aio (or whatever your server is called) and if all is well make a mental note to pour a well-deserved beverage of your choosing.

Looking Forward

Although getting this far is certainly an important milestone, it’s by no means the end of the journey for the configuration as code story. Our configuration code now needs to be integrated into the Contoso University Visual Studio solution so that it can be built as an artefact alongside the website and database artefacts. We then need to be able to deploy the configuration before deploying the application – all automated through the new release management tooling that has just shipped with TFS 2015 Update 2, or through VSTS if you are using that. Until next time…

Cheers – Graham

The post Continuous Delivery with TFS / VSTS – Server Configuration as Code with PowerShell DSC appeared first on Please Release Me.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Graham D Smith
United Kingdom United Kingdom
Dr Graham Smith is a former research scientist who got bitten by the programming and database bug so badly that in 2000 he changed careers to become a full-time software developer. Life moves on and Graham currently manages a team of software engineers and specialises in continuous delivery and application lifecycle management with the Team Foundation Server ecosystem.
