
Creating a PowerShell Cmdlet in C#

Creating a PowerShell Cmdlet in C# is actually a fairly straightforward endeavor. At its core, you simply create a new class that derives from one of two base classes (Cmdlet or PSCmdlet), add properties to the class to accept your parameters, override one or more methods in the base class to provide your functionality, and decorate the class and properties with a few attributes. In this how-to article I’m going to create a PowerShell Cmdlet that adds/updates entries in my hosts file with a host name and IP address for each of the ingresses configured in my local Kubernetes cluster running on minikube.

Choosing a Cmdlet Name

In PowerShell, Cmdlets are named using a Verb-Noun format, for example Get-ChildItem or Compress-Archive. Not only is this the convention, but PowerShell will yell at you if you don’t follow it by using one of the “pre-approved” verbs.

WARNING: The names of some imported commands from the module 'Sample' include unapproved verbs that might make them less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose parameter. For a list of approved verbs, type Get-Verb.

If you choose to use an unapproved verb, you (and anyone using your module) will be greeted with the big wall of yellow warning text above every time the module is loaded. So what are the approved verbs you can use?

Add, Approve, Assert, Backup, Block, Checkpoint, Clear, Close, Compare, Complete, Compress, Confirm, Connect, Convert, ConvertFrom, ConvertTo, Copy, Debug, Deny, Disable, Disconnect, Dismount, Edit, Enable, Enter, Exit, Expand, Export, Find, Format, Get, Grant, Group, Hide, Import, Initialize, Install, Invoke, Join, Limit, Lock, Measure, Merge, Mount, Move, New, Open, Optimize, Out, Ping, Pop, Protect, Publish, Push, Read, Receive, Redo, Register, Remove, Rename, Repair, Request, Reset, Resize, Resolve, Restart, Restore, Resume, Revoke, Save, Search, Select, Send, Set, Show, Skip, Split, Start, Step, Stop, Submit, Suspend, Switch, Sync, Test, Trace, Unblock, Undo, Uninstall, Unlock, Unprotect, Unpublish, Unregister, Update, Use, Wait, Watch, Write

To make things easier, these are all defined as constants in the PowerShell reference assemblies. There are 7 classes containing these constants: VerbsCommon, VerbsCommunications, VerbsData, VerbsDiagnostic, VerbsLifecycle, VerbsOther, and VerbsSecurity. Here’s the breakdown by class:

VerbsCommon: Add, Clear, Close, Copy, Enter, Exit, Find, Format, Get, Hide, Join, Lock, Move, New, Open, Optimize, Pop, Push, Redo, Remove, Rename, Reset, Resize, Search, Select, Set, Show, Skip, Split, Step, Switch, Undo, Unlock, Watch
VerbsCommunications: Connect, Disconnect, Read, Receive, Send, Write
VerbsData: Backup, Checkpoint, Compare, Compress, Convert, ConvertFrom, ConvertTo, Dismount, Edit, Expand, Export, Group, Import, Initialize, Limit, Merge, Mount, Out, Publish, Restore, Save, Sync, Unpublish, Update
VerbsDiagnostic: Debug, Measure, Ping, Repair, Resolve, Test, Trace
VerbsLifecycle: Approve, Assert, Complete, Confirm, Deny, Disable, Enable, Install, Invoke, Register, Request, Restart, Resume, Start, Stop, Submit, Suspend, Uninstall, Unregister, Wait
VerbsOther: Use
VerbsSecurity: Block, Grant, Protect, Revoke, Unblock, Unprotect

In the case of my demo project, I’ll be updating the hosts file with the IP addresses of my ingresses in Kubernetes, so the “Update” verb seems appropriate here. The noun part of the name is much simpler in that there are really no rules other than that it should be a noun. In the case of the demo project I’ve chosen the name “Update-HostsForIngress”.

Getting Started

Now that we have selected a name for our Cmdlet, let’s get started writing some code! Create a new .Net Framework Class Library project in Visual Studio to contain your Cmdlet. Mine is called DotNetNinja.KubernetesModule and I’m targeting .Net Framework 4.7.2.

Next you’ll need to add a NuGet Package reference to one of the Microsoft.PowerShell.X.ReferenceAssemblies packages depending on which version of PowerShell you want to target. I’ve selected the package Microsoft.PowerShell.5.ReferenceAssemblies because I’m targeting PowerShell 5, but there are packages for PowerShell 3, 4 and 5 available.

Now let’s start by adding a class for our Cmdlet named UpdateHostsForIngressCmdlet. (You can name this class anything you want; PowerShell looks at the attributes we’ll be adding shortly to determine the actual Cmdlet name. It is useful, though, for the class name to make it easy to tell which Cmdlet it implements, especially when you start building up a library of Cmdlets in one project!) This name follows my personal naming convention for Cmdlet classes, which is [Verb][Noun]Cmdlet: take the name of the Cmdlet, remove the hyphen, and append Cmdlet to the end. Here’s the starting boilerplate code for our Cmdlet class:

    public class UpdateHostsForIngressCmdlet: Cmdlet
    {
        public UpdateHostsForIngressCmdlet()
        {
        }

        public UpdateHostsForIngressCmdlet(IHostsFile hosts, IKubernetesCluster cluster)
        {
        }

        protected internal IHostsFile Hosts { get; private set; }

        protected internal IKubernetesCluster Cluster { get; private set; }
        
        public string[] HostNames { get; set; }

        protected override void BeginProcessing()
        {
        }

        protected override void ProcessRecord()
        {
        }

        protected override void EndProcessing()
        {
        }

        protected override void StopProcessing()
        {
        }
    }

We have a class named UpdateHostsForIngressCmdlet that derives from Cmdlet in System.Management.Automation. (This is the assembly that was added when we added our package reference earlier.) We could also have derived from PSCmdlet, but we don’t need any of the enhanced features of that class (PSCmdlet itself derives from Cmdlet). The biggest use case I have seen for deriving from PSCmdlet is to get access to session state, which allows you to persist information across invocations of your Cmdlet. We won’t need that here, so we’ll just stick with Cmdlet. In order to tell PowerShell that this is a Cmdlet and what its name is, we need to add a Cmdlet attribute to the class, specifying two parameters, one for the verb portion of the name and one for the noun portion:

    [Cmdlet(VerbsData.Update, Nouns.HostsForIngress)]
    public class UpdateHostsForIngressCmdlet: Cmdlet
    {
        ...

In the example above I’ve used constants for the names (the built-in VerbsData.Update and a custom Nouns.HostsForIngress), but you can use string literals if you prefer. I do recommend at least using the built-in verb constants, as that prevents you from using an unapproved verb. In addition to the Cmdlet attribute, we should also tell PowerShell about the types we will be outputting from our Cmdlet. We do this by adding an OutputType attribute to the class.

    [Cmdlet(VerbsData.Update, Nouns.HostsForIngress)]
    [OutputType(typeof(HostsUpdateResult))]
    public class UpdateHostsForIngressCmdlet: Cmdlet
    {
        ...
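
For reference, here is a minimal sketch of the two custom types referenced by these attributes, inferred from how they are used in this article. The demo project’s actual definitions may differ; the Address property and the status values in particular are my assumptions:

    public static class Nouns
    {
        public const string HostsForIngress = "HostsForIngress";
    }

    public class HostsUpdateResult
    {
        public string HostName { get; set; }
        public string Address { get; set; }   // assumption: the ingress IP address
        public string Status { get; set; }    // e.g. "Added" or "Updated" (assumed values)
    }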

In our class we have two constructors: a default/empty constructor that PowerShell will use to instantiate our class, and one that takes our two dependencies as parameters. The dependencies are IHostsFile (which we will use to load and save our hosts file and to manage merging entries for our ingresses) and IKubernetesCluster (which we will use to communicate with our Kubernetes cluster to get the information we need about our configured ingresses). Either way the class is constructed, we need to make sure the Hosts and Cluster properties get initialized so we can use them later.

    [Cmdlet(VerbsData.Update, Nouns.HostsForIngress)]
    [OutputType(typeof(HostsUpdateResult))]
    public class UpdateHostsForIngressCmdlet: Cmdlet
    {
        public UpdateHostsForIngressCmdlet()
        {
            var services = new ServiceLocator();
            Hosts = services.Get<IHostsFile>();
            Cluster = services.Get<IKubernetesCluster>();
        }

        public UpdateHostsForIngressCmdlet(IHostsFile hosts, IKubernetesCluster cluster)
        {
            Guard.IsNotNull(hosts, nameof(hosts));
            Guard.IsNotNull(cluster, nameof(cluster));
            Hosts = hosts;
            Cluster = cluster;
        }
        ...

In the first constructor I’ve used a ServiceLocator (built on top of Autofac) to instantiate my services. In the second constructor I’ve simply validated that the passed-in parameters are not null and initialized the service properties with them. This pattern keeps things loosely coupled and should make testing and modification/maintenance easier.
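
The ServiceLocator itself is conceptually just a thin wrapper around an Autofac container. Here’s a minimal sketch; the registrations shown are my assumption about the demo project’s concrete types:

    // Requires the Autofac NuGet package (using Autofac;).
    public class ServiceLocator
    {
        private readonly IContainer _container;

        public ServiceLocator()
        {
            // Register the module's default service implementations.
            var builder = new ContainerBuilder();
            builder.RegisterType<HostsFile>().As<IHostsFile>();
            builder.RegisterType<KubernetesCluster>().As<IKubernetesCluster>();
            _container = builder.Build();
        }

        // Resolve an instance of the requested service type.
        public T Get<T>()
        {
            return _container.Resolve<T>();
        }
    }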

In addition to the two protected properties for our dependencies, we also have a string[] property called HostNames. We’ll use this, if a value is provided, to filter the ingresses we update in the hosts file. To tell PowerShell about our parameter property we need to add another attribute, the Parameter attribute, to the property declaration like so:

        [Parameter(Mandatory = false, Position = 0, ValueFromPipeline = true)]
        public string[] HostNames { get; set; }

Technically all that is required is the attribute itself, [Parameter], but I’ve added Mandatory = false so that the parameter isn’t required (if it is not passed, we’ll process all of the ingresses reported by Kubernetes), Position = 0 so that our Cmdlet can be invoked with a parameter without specifying the parameter name (the first/0th positional argument will be mapped to our property), and ValueFromPipeline = true so that we can pipeline in the HostNames if we wish (maybe we’ll want to read them from a file?). With this configuration, here are the valid invocations of our command:

# Will update all reported ingresses in the hosts file
Update-HostsForIngress  

# Will update only the specified ingress (kiali.minikube.local) using positional parameter mapping  
Update-HostsForIngress kiali.minikube.local

# Will update only the specified ingress (kiali.minikube.local) using named parameter mapping  
Update-HostsForIngress -HostNames kiali.minikube.local

# Will update all the specified ingresses using positional parameter mapping  
Update-HostsForIngress kiali.minikube.local, dashboard.minikube.local

# Will update all the specified ingresses using named parameter mapping  
Update-HostsForIngress -HostNames kiali.minikube.local, dashboard.minikube.local

# Will update all the specified ingresses using the pipeline as a parameter source  
"kiali.minikube.local", "dashboard.minikube.local" | Update-HostsForIngress

# Alternatively, same as above using a file as the source of the host names
Get-Content .\hostnames.txt | Update-HostsForIngress

Interacting With the PowerShell Session

In order to communicate/output information there are a number of “Write” methods on the Cmdlet base class:

WriteCommandDetail: Writes command detail information to the logs.
WriteDebug: Writes to the debug stream. This is only visible if the -Debug flag is used.
WriteError: Writes error information.
WriteInformation: Writes informational messages for user consumption.
WriteObject: Writes data objects out to the pipeline stream.
WriteProgress: Writes progress information.
WriteVerbose: Writes to the verbose stream. This is only visible if the -Verbose flag is used.
WriteWarning: Writes warning messages for user consumption. Typically shown as yellow text.

In our Cmdlet we’ll use WriteObject to pass our result objects back to the pipeline. Typically these will just be dumped to the console, but they could be pipelined into another Cmdlet as well. We’ll also use WriteWarning in a couple of cases and we’ll use WriteError in the case that kubectl is not found on the path.
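
The one method the demo Cmdlet won’t use is WriteProgress, so for completeness here’s an illustrative sketch of how it is typically called from inside a Cmdlet (this helper is hypothetical, not part of the demo project):

        // Illustrative only: report progress while working through a set of items.
        private void ReportProgress(IList<string> items)
        {
            var progress = new ProgressRecord(1, "Updating hosts file", "Starting");
            for (var index = 0; index < items.Count; index++)
            {
                progress.StatusDescription = $"Processing {items[index]}";
                progress.PercentComplete = (index + 1) * 100 / items.Count;
                WriteProgress(progress);
            }
        }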

Cmdlet Lifecycle

All that’s left from our initial stubbed-out class are the four methods BeginProcessing, ProcessRecord, EndProcessing, and StopProcessing. These four methods form the lifecycle of our Cmdlet. Here’s how they work.

BeginProcessing()

This method is called once per invocation of our Cmdlet. It is intended to be used to initialize things your Cmdlet needs, like opening connections or reading source files. We’ll use this method to load our hosts file.

        protected override void BeginProcessing()
        {
            Hosts.Load();
        }

ProcessRecord()

This method is called once per item being processed. For example, if you are pipelining data to your Cmdlet, it will be called once for each item passed by the pipeline. This is typically where you do the bulk of the work in your Cmdlet. In ProcessRecord we will connect to our Kubernetes cluster, get a list of all the currently configured ingresses, and upsert the host names/IP addresses into our hosts file, merging them with any existing entries.

        protected override void ProcessRecord()
        {
            try
            {
                var responses = Cluster.GetIngresses().ToList(); // Get all configured ingresses

                if (responses.Any()) // Only process if kubernetes returns a response
                {
                    // Grab the valid ingress entries for our purposes (only port 80/443 & not the header info)
                    var ingresses = responses.GetValidIngresses().ToList(); 
                    // Filter the ingresses by hostnames if they have been supplied
                    if (HostNames != null && HostNames.Any()) 
                    {
                        ingresses = ingresses.Where(ingress => HostNames.Contains(ingress.Hosts)).ToList();
                    }
                    
                    // Upsert the entries to the hosts file and capture the results for output
                    var results = ingresses.Select(ingress =>
                            Hosts.Upsert(ingress.Address, ingress.Hosts, $" Name: {ingress.Name} Namespace: {ingress.Namespace} EntryDate:{DateTime.Now}"))
                        .OrderBy(result => result.Status)
                        .ThenBy(result => result.HostName);
                    // Output each result as a separate object to allow better pipelining to other Cmdlets
                    foreach (var result in results)
                    {
                        WriteObject(result);
                    }
                    return;
                }
                // Warn the user if Kubernetes did not respond
                WriteWarning("No response was received from kubernetes.  Is your cluster running? (If running minikube locally, try 'minikube status' and/or 'minikube start')");
            }
            catch (Win32Exception ex)
            {
                // Handle the case when kubectl is not installed or is not on the path by showing a warning/error
                if (ex.Message.Contains("The system cannot find the file specified."))
                {
                    WriteWarning("The kubectl command was not found on your system. (Is it installed and on your path?)");
                    WriteError(new ErrorRecord(ex, Errors.KubectlNotFound, ErrorCategory.ResourceUnavailable, Cluster));
                    return;
                }
                // Re-throw any unknown errors
                throw;
            }
        }

EndProcessing()

EndProcessing, as you may have guessed, is called once processing is completed. This method is also called only once per invocation and is the place to run any cleanup code and finalize things before your Cmdlet exits. We’ll save any pending changes to the hosts file from EndProcessing.

        protected override void EndProcessing()
        {
            Hosts.Save();
        }

StopProcessing()

StopProcessing is called when your Cmdlet terminates prematurely. For example, if you start the Cmdlet and then press [Ctrl]+[C], StopProcessing will fire, giving you a chance to clean up anything you need to before the Cmdlet exits. We don’t have anything really useful to put in here for our Cmdlet.
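
If we did have something to clean up, such as a long-running call into the cluster, a typical override might look like this (purely hypothetical; _cancellationTokenSource is not part of the demo Cmdlet):

        protected override void StopProcessing()
        {
            // Hypothetical: cancel any in-flight work before the Cmdlet exits.
            _cancellationTokenSource?.Cancel();
        }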

Now, in theory, we have a working Cmdlet!

Debugging Your Cmdlet

I’ve seen a couple of different ways people have proposed to debug Cmdlets written in C#. Most of them involve a lot of hoop jumping and manually attaching to processes from Visual Studio. By far the best, IMHO, is to configure your project in Visual Studio to debug by launching an external program (powershell.exe) and to pass it a set of command line parameters that, among other things, automatically load your module. To do this, right click on your project in Visual Studio and select Properties. In the properties window, go to the Debug tab and set the following:

Start external program = C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Command line arguments = -NoProfile -NoExit -Command "Import-Module .\DotNetNinja.KubernetesModule.dll"

Your PowerShell path may vary depending on your system configuration. Notice that I’m pointing to a v1.0 directory? Apparently v1.0 doesn’t mean what you’d think it means. This is the correct path for my local PowerShell 5 install, and as far as I have seen it is a pretty standard path unless you made some explicit choices during system setup.

Now if you click the run button or press F5, a PowerShell window should open with your Cmdlet module already loaded. Just set a breakpoint and invoke your command to debug.

Importing Your Cmdlet

There are a number of ways you can import your custom Cmdlet. The easiest is to import the dll directly at the PowerShell prompt. Just type Import-Module and the path to your dll.

Import-Module .\DotNetNinja.KubernetesModule.dll

This works fine if the Cmdlet is just for your own personal consumption and you don’t use it very often.

Another way to make it more easily available is to put it in a location on your PowerShell modules path. You can see all the locations on your path using:

$env:PSModulePath

Typically there is a user-specific path at ~\Documents\WindowsPowerShell\Modules. Create a folder in the Modules folder with the same name as your module and place your dll inside it. Now you can load the module without specifying the path, and it is discoverable using the Get-Module -ListAvailable command.

Using Module Manifest Files

To make your module available without specifying the dll you can create a Module Manifest file (a .psd1 file). The best way to scaffold out a manifest file is to use the New-ModuleManifest Cmdlet. In our case I’ve created one using:

New-ModuleManifest DotNetNinja.KubernetesModule.psd1

This command automatically generates a unique GUID for your module. If you choose to use another means (like copying an existing file), be sure to generate a new GUID for your module! Also be sure to set the RootModule so that PowerShell knows what to load. Here is a stripped-down version of the manifest for my module.

#
# Module manifest for module 'DotNetNinja.Kubernetes'
#
# Generated by: DotNetNinja
#
# Generated on: 2020-02-29
#

@{

RootModule = 'DotNetNinja.KubernetesModule.dll'
ModuleVersion = '1.0.0'
GUID = '9b2f7509-1bac-4ea6-8211-798a920baa2c'
Author = 'Larry House'
Copyright = '(c) Larry House. All rights reserved.'
Description = 'PowerShell Cmdlets for managing Kubernetes environments'
PowerShellVersion = '5.0'
CLRVersion = '4.0'
ProcessorArchitecture = 'Amd64'
FunctionsToExport = '*'
CmdletsToExport = '*'
VariablesToExport = '*'
AliasesToExport = '*'
PrivateData = @{

    PSData = @{


    } # End of PSData hashtable

} # End of PrivateData hashtable

}

Source Code

You can get the full source code from GitHub.

git clone https://github.com/DotNet-Ninja/DotNetNinja.PowerShellModules.git


Running PostgreSQL in a Container on Windows 10

Today at work we were setting up a development environment for a .Net Core project using PostgreSQL as its datastore. We decided to run the database server in a container, the same way I have been running SQL Server for local development (see my recent article: Running Microsoft SQL Server in a Container on Windows 10). Using the docker-compose file from that article as a basis, and referring to the documentation for the postgres image on Docker Hub, we put together a docker-compose file for PostgreSQL that looked similar to this:

version: "3"
services:
  postgres:
    image: "postgres"
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: "MyUser"
      POSTGRES_PASSWORD: "Password!23"
      POSTGRES_DB: "example"
    volumes: 
      - C:\Docker\PostgreSql\data:/var/lib/postgresql/data

Upon running docker-compose up we were greeted with the following output containing an error message:

Creating postgresql_postgres_1 ... done
Attaching to postgresql_postgres_1
postgres_1  | The files belonging to this database system will be owned by user "postgres".
postgres_1  | This user must also own the server process.
postgres_1  |
postgres_1  | The database cluster will be initialized with locale "en_US.utf8".
postgres_1  | The default database encoding has accordingly been set to "UTF8".
postgres_1  | The default text search configuration will be set to "english".
postgres_1  |
postgres_1  | Data page checksums are disabled.
postgres_1  |
postgres_1  | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1  | creating subdirectories ... ok
postgres_1  | selecting dynamic shared memory implementation ... posix
postgres_1  | selecting default max_connections ... 20
postgres_1  | selecting default shared_buffers ... 400kB
postgres_1  | selecting default time zone ... Etc/UTC
postgres_1  | creating configuration files ... ok
postgres_1  | running bootstrap script ... 2020-02-25 02:38:12.326 UTC [80] FATAL:  data directory "/var/lib/postgresql/data" has wrong ownership
postgres_1  | 2020-02-25 02:38:12.326 UTC [80] HINT:  The server must be started by the user that owns the data directory.
postgres_1  | child process exited with exit code 1
postgres_1  | initdb: removing contents of data directory "/var/lib/postgresql/data"
postgresql_postgres_1 exited with code 1

Notice line 19: “FATAL: data directory '/var/lib/postgresql/data' has wrong ownership”. After reading the error message we noted that line 12 reads “fixing permissions on existing directory /var/lib/postgresql/data … ok”. Also, near the top of the output, line 3 reads “The files belonging to this database system will be owned by user 'postgres'.” followed by “This user must also own the server process.” Interesting…

So after digging around a bit, we found that the user “postgres” must indeed own the files in order for the database system to read them, and that the container starts up as root. Line 12 is trying to fix the issue, and from what we found online it will… if the data directory is on a Linux file system. Since we are attempting to mount these files from a Windows file system, “fixing the permissions” evidently fails. No major surprise there. So what is the workaround for us poor developers working on Windows machines?

Named Volumes to the Rescue

In order to get this to work we set up a named volume. In this scenario, Docker takes care of handling the files and where they are actually stored, so we don’t readily have access to the files, but we don’t really care all that much. We just want our data to persist and not get blown away when the container gets deleted.

Here is the new (working) docker-compose file with the named volume:

version: "3"
services:
  postgres:
    image: "postgres"
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: "MyUser"
      POSTGRES_PASSWORD: "Password!23"
      POSTGRES_DB: "example"
    volumes: 
      - psql:/var/lib/postgresql/data

volumes:
  psql:
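
With the container up, a quick end-to-end check from .NET looks like this; a minimal sketch, assuming the Npgsql NuGet package and the credentials from the compose file above:

using Npgsql;

class Program
{
    static void Main()
    {
        // Credentials and port match the docker-compose file above.
        var connectionString =
            "Host=localhost;Port=5432;Username=MyUser;Password=Password!23;Database=example";
        using (var connection = new NpgsqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new NpgsqlCommand("SELECT version()", connection))
            {
                System.Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}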

Using this approach you may want to keep an eye on the named volumes on your system and clean them up when you are no longer using them. To get a list of the volumes on your machine use the following command:

docker volume ls

That will dump out a list of volumes on your machine that looks something like:

DRIVER              VOLUME NAME
local               600de9fcef37a60b93c410f9e7db6b4b7f9966faf5f6ba067cc6cb55ee851198
local               ae45bfac51d4fb1813bd747cc9af10b7d141cf3affa26d79f46f405ebfa07462
local               b94806ba697f79c7003481f8fd1d65599e532c0e2223800b39a2f90b087d5127
local               d02adf9ab33dfa22e154d25e13c5bb383a5969c19c1dd98cfa2ac8e560d87eb4
local               postgresql_psql

Notice the last entry, named “postgresql_psql”? That is the one we just created above. To remove it use the following command (note: Docker will not allow you to remove the volume if it is referenced by a container, running or not, so you’ll want to stop and remove the container first):

docker volume rm postgresql_psql

Automatic Binding of Settings Classes to Configuration

I’ve had the idea to do this for a while now. It usually pops back into my head when I start a new project and have to read configuration information into the application. Microsoft’s IOptions<T> is nice, but there is still a bit of ceremony in having to bind each class to its configuration in Startup.cs. It seemed like I should be able to tell the system, in some lightweight, unobtrusive way, where to get this configuration from and be done with it.
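
For comparison, here’s the conventional wiring this is meant to replace; one line like this per settings class in ConfigureServices, with consumers injecting IOptions<Features> instead of the class itself:

    public void ConfigureServices(IServiceCollection services)
    {
        // One registration per settings class, kept in sync by hand.
        services.Configure<Features>(Configuration.GetSection("Features"));
    }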

The Dream

So what is this magical “lightweight, unobtrusive way” you speak of? Well, I’m glad you asked! My thought was to create a NuGet package so that I could plug it into any project, add an attribute to my settings class(es), and pass a single constructor parameter naming the configuration section to bind to. Something like this:

    [AutoBind("Features")]
    public class Features
    {
        public bool IsCoolFeatureEnabled { get; set; }
        public int CoolFeatureCount { get; set; }
        public bool IsUberFeatureEnabled { get; set; }
        public bool IsSecretFeatureEnabled { get; set; }
    }

In my magical world that should automatically bind to a section in my configuration named “Features”.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "Features": {
    "IsCoolFeatureEnabled": true,
    "CoolFeatureCount": 4,
    "IsUberFeatureEnabled": true,
    "IsSecretFeatureEnabled": false 
  },
  "AllowedHosts": "*"
}

Then I could inject an instance of “Features” into a controller and use it, for example, to control the availability of features in my application. In this case I’m just going to pass the settings (“Features”) to my view to control the layout of the page, like so:

    public class HomeController : Controller
    {
        private readonly ILogger<HomeController> _logger;
        private readonly Features _features;

        public HomeController(ILogger<HomeController> logger, Features features)
        {
            _logger = logger ?? throw new ArgumentNullException(nameof(logger));
            _features = features ?? throw new ArgumentNullException(nameof(features));
        }

        public IActionResult Index()
        {
            return View(_features);
        }

        ...
    }

The Startup Code (“The Glue”)

The only “glue” holding all this together is a little one-liner in the Startup.cs file’s ConfigureServices method, and you only have to add it once. You don’t have to go back to the startup code each time you want to add a new configuration settings class.

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAutoBoundConfigurations(Configuration).FromAssembly(typeof(Program).Assembly);
        services.AddControllersWithViews();
    }

The AddAutoBoundConfigurations(…) method sets up a builder with your configuration root. Each time you call FromAssembly(…) using the fluent API, it will scan that assembly for any classes with the AutoBind attribute, create an instance of each, bind it to your configuration, and register it as a singleton for dependency injection.
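
Under the hood there isn’t much magic: the attribute is little more than a marker carrying the section name, and the scan is reflection over the assembly’s types. Here’s a simplified sketch of both; the names and details are my assumptions, not the package’s actual implementation:

    // Requires: System, System.Linq, System.Reflection,
    // Microsoft.Extensions.Configuration, Microsoft.Extensions.DependencyInjection.
    [AttributeUsage(AttributeTargets.Class)]
    public class AutoBindAttribute : Attribute
    {
        public AutoBindAttribute(string section)
        {
            Section = section;
        }

        // The configuration section the decorated class binds to.
        public string Section { get; }
    }

    public class AutoBoundConfigurationBuilder
    {
        private readonly IConfiguration _configuration;

        public AutoBoundConfigurationBuilder(IServiceCollection services, IConfiguration configuration)
        {
            Services = services;
            _configuration = configuration;
        }

        public IServiceCollection Services { get; }

        public AutoBoundConfigurationBuilder FromAssembly(Assembly assembly)
        {
            var types = assembly.GetTypes()
                .Where(type => type.GetCustomAttribute<AutoBindAttribute>() != null);
            foreach (var type in types)
            {
                var section = type.GetCustomAttribute<AutoBindAttribute>().Section;
                var instance = Activator.CreateInstance(type);
                // Bind the named configuration section onto the new instance.
                _configuration.GetSection(section).Bind(instance);
                // Register the bound instance as a singleton for injection.
                Services.AddSingleton(type, instance);
            }
            return this;
        }
    }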

The fluent API also exposes a Services property that allows you to chain back into the services fluent API to continue your setup, like this:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAutoBoundConfigurations(Configuration)
                .FromAssembly(typeof(Program).Assembly)
                .Services
                .AddControllersWithViews();
    }

Wrapping it up – Back to the View

I’ve created a view that uses the “Features” settings to enable features (OK, they’re just div tags on the page, but you get the idea) and to control the number of CoolFeatures on the page. Here’s the Razor view:

@model AutoBind.Demo.Settings.Features
@{
    ViewData["Title"] = "Home Page";
}
    <div class="row">
        <div class="col-sm-12 mb-5">
            <h1 class="text-4">Features</h1>
        </div>
    </div>
    <div class="row">
    @if (Model.IsCoolFeatureEnabled)
    {
        @for (int index = 1; index <= Model.CoolFeatureCount; index++)
        {
            <div class="col-sm-4">
                <div class="card text-white bg-info mb-3">
                    <div class="card-header">Cool Feature # @index</div>
                    <div class="card-body">
                        <p class="card-text">Here's my cool feature!</p>
                    </div>
                </div>
            </div>
        }
    }
    @if (Model.IsUberFeatureEnabled)
    {
        <div class="col-sm-4">
            <div class="card text-white bg-dark mb-3">
                <div class="card-header">Uber Feature</div>
                <div class="card-body">
                    <p class="card-text">Here's my uber feature!</p>
                </div>
            </div>
        </div>
    }
    @if (Model.IsSecretFeatureEnabled)
    {
        <div class="col-sm-4">
            <div class="card text-white bg-warning mb-3">
                <div class="card-header">Secret Feature</div>
                <div class="card-body">
                    <p class="card-text">Here's my secret feature!</p>
                </div>
            </div>
        </div>
    }
    </div>

Here’s what the rendered page looks like in the browser:

[Screenshot: the rendered home page showing the enabled feature cards]

Summary

This is a simple example, but in larger projects with lots of configuration it’s nice to be able to quickly create and use your configuration settings classes without having to deal with the plumbing.

The package is available on NuGet.org. You can install it in your projects from Visual Studio by searching for the package DotNetNinja.AutoBoundConfiguration or install it from the command line with:

dotnet add package DotNetNinja.AutoBoundConfiguration

The source is available on GitHub @ https://github.com/DotNet-Ninja/DotNetNinja.AutoBoundConfiguration.

Running Microsoft SQL Server in a Container on Windows 10

Why, you may ask? SQL Server runs just fine on Windows 10, but there are a few advantages to running SQL Server in a container rather than installing it on your machine. The biggest advantage is that you can throw it away at any time, for any reason (like a new version shipping) and leave your machine pristine and fully functional. If you have ever tried to uninstall SQL Server from your machine, you’ll definitely appreciate that. It is also faster to get up and running than a full install of SQL Server (assuming you already have Docker Desktop and Docker Compose installed, which I do).

In the modern world of microservice development I find that over time I end up with all sorts of dependencies installed on my machine for various projects. One project may be using SQL Server, the next MongoDB and the next PostgreSQL. And then there is Redis, RabbitMQ, the list goes on and on… Running these dependencies in containers just makes it quick and easy to switch between projects and not have all of these dependencies cluttering up my machine.

As I mentioned, this approach assumes you have Docker Desktop installed, and I prefer to use docker-compose as well, just to simplify starting things up and shutting them down when I need to. If you don’t already have these tools installed you can get them from Docker Hub, or by using Chocolatey (the Windows installer for Docker Desktop will install both for you):

choco install docker-desktop

Getting Started

It’s pretty simple to get an instance of SQL Server running in a container; you’ll find all the basic information to get started on the Docker Hub Microsoft SQL Server listing. To start up the latest version of SQL Server 2017, use the following command from your shell:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -d mcr.microsoft.com/mssql/server:2017-latest

Note: I’m running the commands in PowerShell, where either single or double quotes will work. If you run them from the classic command prompt, stick with double quotes.

The -e arguments set environment variables inside the container that are picked up by SQL Server when it runs.
ACCEPT_EULA=Y accepts the Microsoft SQL Server EULA.
SA_PASSWORD sets the sa account password (you might want to choose a better password!).

-p maps the ports your-machine:container. If you want to map 1433 (the standard SQL Server port) to itself on your machine, use -p 1433:1433; in my examples I’ll be mapping to 4133 on my machine, as above.

-d runs the container detached, returning the container id and releasing your shell prompt for you to use. If you omit this, standard out will be dumped to your shell for as long as the container is running.

mcr.microsoft.com/mssql/server:2017-latest specifies the image to run (and pull, if you don’t already have it). :2017-latest is the tag and means pull the latest tagged version of the 2017 image. You can specify a specific version if you so choose.

So if we run the command above (and we haven’t previously run it), Docker will go out and pull the image and start it up. It will likely take 30 seconds to a few minutes to download the image, but once it completes you should see something like the following in your shell:

❯ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -d mcr.microsoft.com/mssql/server:2017-latest
Unable to find image 'mcr.microsoft.com/mssql/server:2017-latest' locally
2017-latest: Pulling from mssql/server
59ab41dd721a: Pull complete
57da90bec92c: Pull complete
06fe57530625: Pull complete
5a6315cba1ff: Pull complete
739f58768b3f: Pull complete
3a58fde0fc61: Pull complete
89b44069090d: Pull complete
93c7ccf94626: Pull complete
0ef1127ca8c9: Pull complete
Digest: sha256:f53d3a54923280133eb73d3b5964527a60348013d12f07b490c99937fde3a536
Status: Downloaded newer image for mcr.microsoft.com/mssql/server:2017-latest
bcb2d2585339b3f7fd1a2fdeafff202359ce563213801949a4c55f954e5beb11
❯

At this point you should have a shiny new instance of SQL Server 2017 up and running. You can see the running container by executing

docker ps

This will list out all of the running containers on your machine.

Note the container ID and name; you can use these to reference the container in subsequent Docker commands. At this point you can connect to your database server from your application or SQL Server Management Studio. With the command above, the connection string would be: "Server=localhost,4133;Database=master;User Id=sa;Password=Password#1".
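
As a quick sanity check, you can open a connection from .NET using that same connection string. A minimal sketch, assuming the System.Data.SqlClient package:

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // SQL auth against the container's mapped port (4133), per the run command above.
        var connectionString = "Server=localhost,4133;Database=master;User Id=sa;Password=Password#1";
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine($"Connected. Server version: {connection.ServerVersion}");
        }
    }
}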

To stop the instance:

docker stop bcb

Above I used a shortened/abbreviated version of the container id; you can do this as long as it uniquely identifies the container. If I had two containers whose ids started with this string, I would need to use the full id (or at least more of it) or the name.

I can start it up again using:

docker start bcb

And I can permanently delete the instance using:

docker stop bcb
docker rm bcb

If you need to see the containers that are not currently running (i.e. you stopped, but did not remove, them) use:

docker ps -a

Making Things a Bit More Usable

All this is awesome, but you’ll soon run into a couple of issues:

  • You’ll grow tired of typing out all the long commands, remembering all the correct switches, and listing out the containers to get the ids to manage them.
  • Once you delete your containers, you’ll lose your databases! That’s right, the database files are stored in the container, so once you delete the container your data is gone.

Let’s solve the second problem first, which will make the first problem worse :(, and then we’ll circle back to the first.

Mapping Your Data Files to Your Local Machine

Step one: You’ll need to share a drive in Docker. To do this:

  • Right click on the Docker Desktop Icon in your system tray and select “Settings”.
  • Select the “Resources” item and then “File Sharing”.
  • Select a drive to share and click “Apply & Share”

Step two: Create a folder in your shared drive to map into your container. In my case I’ve shared my x: drive so I’ve created a folder X:\DockerVolumes\SqlData\Sample

Step three: Now we are ready to modify our run command to map the shared (empty) folder into our container’s data directory. (I would avoid spaces in the path to your shared volumes directory; as I recall, it makes things “fun”.)

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -v X:\DockerVolumes\SqlData\Sample:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2017-latest

Assuming everything works as expected, you should now have all of your system databases in your shared directory. Now they will persist even if you destroy the container and spin up a new one.

Directory: X:\DockerVolumes\SqlData\Sample


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       2020-01-29  10:07 PM        4194304 master.mdf
-a----       2020-01-29  10:07 PM        2097152 mastlog.ldf
-a----       2020-01-29  10:07 PM        8388608 model.mdf
-a----       2020-01-29  10:07 PM        8388608 modellog.ldf
-a----       2020-01-29  10:07 PM       14024704 msdbdata.mdf
-a----       2020-01-29  10:07 PM         524288 msdblog.ldf
-a----       2020-01-29  10:07 PM        8388608 tempdb.mdf
-a----       2020-01-29  10:07 PM        8388608 templog.ldf

If they do not show up, try stopping the container and restarting it without the -d switch, and read through the output in your terminal; it will usually give you a clue as to the problem.

Cleaning It All Up with Docker Compose

All that is great, but typing out the full docker run command with all of its arguments every time you want to start SQL Server is a bit annoying and error prone. To solve this we’ll put all these arguments into a docker-compose file and make things much easier.

To organize things I create a folder on my drive to contain my docker-compose files, each in its own subfolder, e.g. C:\Docker\Sample would contain one docker-compose.yml file that defines my configuration for SQL Server 2017. Here is an example file for the docker run we ran above:

version: "3"
services:
  default-sql:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    ports:
      - 4133:1433
    environment:
      SA_PASSWORD: "Password#1"
      ACCEPT_EULA: "Y"
    volumes:
      - X:\DockerVolumes\SqlData\Sample:/var/opt/mssql/data

Most of this should look pretty familiar; it’s just a YAML representation of the arguments we’ve been specifying above.

If we navigate to the folder containing our docker-compose file, in my case C:\Docker\Sample\ we can simply run:

docker-compose up -d

Once again the -d switch runs the container detached. You can omit it and see what is happening inside your container. After a few seconds our server will be up and running. When we are done with our container we can run:

docker-compose down

Now everything should be spun down. If you’re really lazy like me you can create an alias for docker-compose in your PowerShell profile so you can just use:

dc up -d
dc down

Final Thoughts

You’ll want to keep an eye on the containers sitting around in a stopped state using docker ps -a, and clean up old containers using docker rm CONTAINERID. You’ll also want to keep an eye on the images you have cached and periodically clean them up as well. You can list them with docker images and remove them with docker rmi IMAGEID (rmi = remove image). These images can be pretty large (the current SQL 2017 image is 1.4GB).
