
Automatic Binding of Settings Classes to Configuration

I’ve had the idea to do this for a while now. It usually pops back into my head when I start a new project and have to read configuration information into the application. Microsoft’s IOptions<T> is nice, but there is still a bit of ceremony in having to bind each class to its configuration in Startup.cs. It seemed like I should be able to tell the system, in some lightweight, unobtrusive way, where to get this configuration from and be done with it.
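
For reference, the ceremony I’m talking about typically looks something like this: the standard options pattern, with one registration per settings class, and consumers injecting IOptions<Features> rather than Features itself.

    public void ConfigureServices(IServiceCollection services)
    {
        // One line of binding ceremony like this for every settings class:
        services.Configure<Features>(Configuration.GetSection("Features"));
        services.AddControllersWithViews();
    }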

The Dream

So what is this magical “lightweight, unobtrusive way” you speak of? Well, I’m glad you asked! My thought was to create a NuGet package that I could plug into any project, add an attribute to my settings class(es), and set a single constructor parameter specifying the section of the configuration to bind to. Something like this:

    [AutoBind("Features")]
    public class Features
    {
        public bool IsCoolFeatureEnabled { get; set; }
        public int CoolFeatureCount { get; set; }
        public bool IsUberFeatureEnabled { get; set; }
        public bool IsSecretFeatureEnabled { get; set; }
    }

In my magical world, that should automatically bind to a section in my configuration named “Features”:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "Features": {
    "IsCoolFeatureEnabled": true,
    "CoolFeatureCount": 4,
    "IsUberFeatureEnabled": true,
    "IsSecretFeatureEnabled": false 
  },
  "AllowedHosts": "*"
}

Then I could inject an instance of Features into a controller and use it to control the availability of features in my application. In this case I’m just going to pass the settings (Features) to my view to control the layout of the page, like so:

    public class HomeController : Controller
    {
        private readonly ILogger<HomeController> _logger;
        private readonly Features _features;

        public HomeController(ILogger<HomeController> logger, Features features)
        {
            _logger = logger ?? throw new ArgumentNullException(nameof(logger));
            _features = features ?? throw new ArgumentNullException(nameof(features));
        }

        public IActionResult Index()
        {
            return View(_features);
        }

        ...
    }

The Startup Code (“The Glue”)

The only “glue” holding all this together is a one-liner in the Startup.cs file’s ConfigureServices method, and you only have to add it once. You don’t have to go back to Startup each time you create a new class for configuration settings.

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAutoBoundConfigurations(Configuration).FromAssembly(typeof(Program).Assembly);
        services.AddControllersWithViews();
    }

The AddAutoBoundConfigurations(…) method sets up a builder with your configuration root. Each time you call FromAssembly(…) on the fluent API, it scans that assembly for classes with the AutoBind attribute, creates an instance of each, binds it to its configuration section, and registers it as a singleton for dependency injection.
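
Conceptually, the scanning and binding looks something like the sketch below. To be clear, this is a minimal sketch of the approach, not the package’s actual source: the AutoBindAttribute shape matches the [AutoBind("Features")] usage above, but the builder type and its internals are my own assumptions.

    using System;
    using System.Reflection;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    [AttributeUsage(AttributeTargets.Class)]
    public class AutoBindAttribute : Attribute
    {
        public AutoBindAttribute(string section) => Section = section;
        public string Section { get; }
    }

    public class AutoBoundConfigurationBuilder
    {
        private readonly IConfiguration _configuration;

        public AutoBoundConfigurationBuilder(IServiceCollection services, IConfiguration configuration)
        {
            Services = services;
            _configuration = configuration;
        }

        // Exposed so callers can chain back into the IServiceCollection fluent API.
        public IServiceCollection Services { get; }

        public AutoBoundConfigurationBuilder FromAssembly(Assembly assembly)
        {
            foreach (var type in assembly.GetTypes())
            {
                var attribute = type.GetCustomAttribute<AutoBindAttribute>();
                if (attribute == null) continue;

                // Create an instance, bind it to the named configuration section,
                // and register it as a singleton for dependency injection.
                var instance = Activator.CreateInstance(type);
                _configuration.GetSection(attribute.Section).Bind(instance);
                Services.AddSingleton(type, instance);
            }
            return this;
        }
    }

    public static class AutoBoundConfigurationExtensions
    {
        public static AutoBoundConfigurationBuilder AddAutoBoundConfigurations(
            this IServiceCollection services, IConfiguration configuration)
        {
            return new AutoBoundConfigurationBuilder(services, configuration);
        }
    }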

The fluent API also exposes a Services property that lets you chain back into the services fluent API to continue your setup, if you wish, like this:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAutoBoundConfigurations(Configuration)
                .FromAssembly(typeof(Program).Assembly)
                .Services
                .AddControllersWithViews();
    }

Wrapping it up – Back to the View

I’ve created a view that uses the Features settings to enable features (OK, they’re just div tags on the page, but you get the idea) and to control the number of cool features on the page. Here’s the Razor view:

@model AutoBind.Demo.Settings.Features
@{
    ViewData["Title"] = "Home Page";
}
    <div class="row">
        <div class="col-sm-12 mb-5">
            <h1 class="text-4">Features</h1>
        </div>
    </div>
    <div class="row">
    @if (Model.IsCoolFeatureEnabled)
    {
        @for (int index = 1; index <= Model.CoolFeatureCount; index++)
        {
            <div class="col-sm-4">
                <div class="card text-white bg-info mb-3">
                    <div class="card-header">Cool Feature # @index</div>
                    <div class="card-body">
                        <p class="card-text">Here's my cool feature!</p>
                    </div>
                </div>
            </div>
        }
    }
    @if (Model.IsUberFeatureEnabled)
    {
        <div class="col-sm-4">
            <div class="card text-white bg-dark mb-3">
                <div class="card-header">Uber Feature</div>
                <div class="card-body">
                    <p class="card-text">Here's my uber feature!</p>
                </div>
            </div>
        </div>
    }
    @if (Model.IsSecretFeatureEnabled)
    {
        <div class="col-sm-4">
            <div class="card text-white bg-warning mb-3">
                <div class="card-header">Secret Feature</div>
                <div class="card-body">
                    <p class="card-text">Here's my secret feature!</p>
                </div>
            </div>
        </div>
    }
    </div>

Here’s what the rendered page looks like in the browser:

[Screenshot: the rendered page with the enabled feature cards]

Summary

This is a simple example, but in larger projects with lots of configuration it’s nice to be able to quickly create and use your configuration settings classes without having to deal with the plumbing.

The package is available on NuGet.org. You can install it in your projects from Visual Studio by searching for the package DotNetNinja.AutoBoundConfiguration or install it from the command line with:

dotnet add package DotNetNinja.AutoBoundConfiguration

The source is available on GitHub @ https://github.com/DotNet-Ninja/DotNetNinja.AutoBoundConfiguration.

Running Microsoft SQL Server in a Container on Windows 10

Why, you may ask? SQL Server runs just fine on Windows 10, but there are a few advantages to running SQL Server in a container rather than installing it on your machine. The biggest advantage is that you can throw it away at any time, for any reason (like a new version shipping), and leave your machine pristine and fully functional. If you have ever tried to uninstall SQL Server from your machine, you’ll definitely appreciate that. It is also faster to get up and running than a full install of SQL Server (assuming you already have Docker Desktop and Docker Compose installed, which I do).

In the modern world of microservice development I find that over time I end up with all sorts of dependencies installed on my machine for various projects. One project may be using SQL Server, the next MongoDB and the next PostgreSQL. And then there is Redis, RabbitMQ, the list goes on and on… Running these dependencies in containers just makes it quick and easy to switch between projects and not have all of these dependencies cluttering up my machine.

As I mentioned, this approach assumes you have Docker Desktop installed, and I prefer to use Docker Compose as well, just to simplify starting things up and shutting them down when I need to. If you don’t already have these tools installed, you can get them from Docker Hub or by using Chocolatey (the Windows installer for Docker Desktop will install both for you):

choco install docker-desktop

Getting Started

It’s pretty simple to get an instance of SQL Server running in a container; you’ll find all the basic information to get started on the Docker Hub Microsoft SQL Server listing. To start up the latest version of SQL Server 2017, use the following command from your command shell:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -d mcr.microsoft.com/mssql/server:2017-latest

Note: I’m running the commands in PowerShell, which requires double quotes. If you run them using the command prompt, use single quotes.

The -e arguments set environment variables inside the container that are picked up by SQL Server when it runs:

  • ACCEPT_EULA=Y accepts the Microsoft SQL Server EULA.
  • SA_PASSWORD sets the sa account password (you might want to choose a better password!).

-p maps the ports your-machine:container. If you want to map 1433 (the standard SQL Server port) to itself on your machine, use -p 1433:1433; in my examples I’ll be mapping to 4133 on my machine, as above.

-d runs the container detached, returning the container id and releasing your shell prompt for you to use. If you omit this, standard output will be dumped to your shell for as long as the container is running.

mcr.microsoft.com/mssql/server:2017-latest specifies the image to run (and to pull, if you don’t already have it). The :2017-latest part is the tag and means pull the latest version of the SQL Server 2017 image. You can specify a more specific version if you so choose.

So if we run the command above (and we haven’t previously run it), Docker will go out, pull the image, and start it up. It will likely take 30 seconds to a few minutes to download the image, but once it completes you should see something like the following in your shell:

❯ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -d mcr.microsoft.com/mssql/server:2017-latest
Unable to find image 'mcr.microsoft.com/mssql/server:2017-latest' locally
2017-latest: Pulling from mssql/server
59ab41dd721a: Pull complete
57da90bec92c: Pull complete
06fe57530625: Pull complete
5a6315cba1ff: Pull complete
739f58768b3f: Pull complete
3a58fde0fc61: Pull complete
89b44069090d: Pull complete
93c7ccf94626: Pull complete
0ef1127ca8c9: Pull complete
Digest: sha256:f53d3a54923280133eb73d3b5964527a60348013d12f07b490c99937fde3a536
Status: Downloaded newer image for mcr.microsoft.com/mssql/server:2017-latest
bcb2d2585339b3f7fd1a2fdeafff202359ce563213801949a4c55f954e5beb11
❯

At this point you should have a shiny new instance of SQL Server 2017 up and running. You can see the running container by executing:

docker ps

This will list out all of the running containers on your machine.

Note the container ID and name; you can use these to reference the container in subsequent Docker commands. At this point you can connect to your database server from your application or SQL Server Management Studio. With the command above, the connection string would be: “Server=localhost,4133;Database=master;User Id=sa;Password=Password#1”.
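
For example, here’s a minimal C# smoke test against the containerized instance. I’m assuming the Microsoft.Data.SqlClient NuGet package here; adjust the port and password if you changed them in the run command.

    using System;
    using Microsoft.Data.SqlClient;

    class Program
    {
        static void Main()
        {
            // Port 4133 and the sa password match the docker run command above.
            // Recent versions of Microsoft.Data.SqlClient encrypt by default, so a
            // local container may also need TrustServerCertificate=True.
            var connectionString =
                "Server=localhost,4133;Database=master;User Id=sa;Password=Password#1";

            using var connection = new SqlConnection(connectionString);
            connection.Open();

            // Query the server version to prove the container accepts connections.
            using var command = new SqlCommand("SELECT @@VERSION", connection);
            Console.WriteLine(command.ExecuteScalar());
        }
    }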

To stop the instance:

docker stop bcb

Above I used a shortened/abbreviated version of the container id; you can do this as long as it uniquely identifies the container. If I had two containers whose ids started with this string, I would need to use the full id (or at least more of it) or the name.

I can start it up again using:

docker start bcb

And I can permanently delete the instance using:

docker stop bcb
docker rm bcb

If you need to see the containers you have that are not currently running (i.e. stopped, but not removed), use:

docker ps -a

Making Things a Bit More Usable

All this is awesome, but you’ll soon run into a couple of issues:

  • You’ll grow tired of typing in the long command, remembering all the correct switches, etc., and listing out the containers to get the ids to manage them.
  • Once you delete your containers, you’ll lose your databases! That’s right, the database files are stored in the container, so once you delete the container they’re gone.

Let’s solve the second problem first, which will make the first problem worse :(, and then we’ll circle back to solve the first.

Mapping Your Data Files to Your Local Machine

Step one: You’ll need to share a drive in Docker. To do this:

  • Right click on the Docker Desktop Icon in your system tray and select “Settings”.
  • Select the “Resources” item and then “File Sharing”.
  • Select a drive to share and click “Apply & Share”

Step two: Create a folder in your shared drive to map into your container. In my case I’ve shared my X: drive, so I’ve created a folder: X:\DockerVolumes\SqlData\Sample

Step three: Now we are ready to modify our run command to map the shared (empty) folder into our container’s data directory. (I would avoid spaces in the path to your shared volumes directory; as I recall, spaces make things “fun”.)

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -v X:\DockerVolumes\SqlData\Sample:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2017-latest

Assuming everything works as expected, you should now have all of your system databases in your shared directory. Now they will persist even if you destroy the container and spin up a new one.

Directory: X:\DockerVolumes\SqlData\Sample


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       2020-01-29  10:07 PM        4194304 master.mdf
-a----       2020-01-29  10:07 PM        2097152 mastlog.ldf
-a----       2020-01-29  10:07 PM        8388608 model.mdf
-a----       2020-01-29  10:07 PM        8388608 modellog.ldf
-a----       2020-01-29  10:07 PM       14024704 msdbdata.mdf
-a----       2020-01-29  10:07 PM         524288 msdblog.ldf
-a----       2020-01-29  10:07 PM        8388608 tempdb.mdf
-a----       2020-01-29  10:07 PM        8388608 templog.ldf

If they do not show up, try stopping the container and restarting it without the -d switch, then read through the output in your terminal; it will usually give you a clue as to the problem.

Cleaning It All Up with Docker Compose

All that is great, but typing out docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -v X:\DockerVolumes\SqlData\Sample:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2017-latest every time you want to start SQL Server is a bit annoying and error-prone. To solve this, we’ll put all of these arguments into a docker-compose file and make things much easier.

To organize things, I create a folder on my drive to contain my docker-compose files, each file in its own subfolder. For example, C:\Docker\Sample contains a single docker-compose.yml file that defines my configuration for SQL Server 2017. Here is an example file for the docker run we ran above:

version: "3"
services:
  default-sql:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    ports:
      - 4133:1433
    environment:
      SA_PASSWORD: "Password#1"
      ACCEPT_EULA: "Y"
    volumes:
      - X:\DockerVolumes\SqlData\Sample:/var/opt/mssql/data

Most of this should look pretty familiar; it’s just a YAML representation of the arguments we’ve been specifying above.

If we navigate to the folder containing our docker-compose file, in my case C:\Docker\Sample\, we can simply run:

docker-compose up -d

Once again, the -d switch runs the container detached. You can omit it and see what is happening inside your container. After a few seconds our server will be up and running. When we are done with our container, we can run:

docker-compose down

Now everything should be spun down. If you’re really lazy like me, you can create an alias for docker-compose in your PowerShell profile so you can just use:

dc up -d
dc down

Final Thoughts

You’ll want to keep an eye on the containers that are sitting around in a stopped state by using "docker ps -a" and clean up old containers with "docker rm CONTAINERID". You’ll also want to keep an eye on the images you have cached and periodically clean them up as well; you can list them with "docker images" and remove them with "docker rmi IMAGEID" (rmi = remove image). These images can be pretty large (the current SQL 2017 image is 1.4GB).

Using Libman – The New Visual Studio Library Manager

Libman, or “Library Manager”, is a new tool built into Visual Studio 2017 (version 15.8+) for managing simple client-side dependencies. For example, if you just want to create a simple ASP.NET Core MVC web application with Bootstrap 4, you can quickly and easily pull in just the CSS and JavaScript files you need, and you can put them where you want them. In the old days, we pulled in client-side packages using the NuGet Package Manager; the package author decided where the files went, and you just had to deal with it (or deal with the consequences of moving them). Today, with NPM, everything goes into the node_modules folder. With Libman, you get to choose where the files go in your project. Not only that, you can choose exactly the files you want and ignore the rest if you don’t need them. Obviously there are still plenty of scenarios where NPM fits the bill perfectly, but for projects where you just want to manage a few small dependencies and keep things “lean and mean”, Libman may just be the right choice.

Managing Libraries Using the GUI

You can add/manage files with the Library Manager in a couple of different ways.

The first way is through the dialog, which can be invoked by right clicking on a folder in the solution and selecting “Add…” and then “Client-Side Library…” from the context menu. As an example, let’s say we want to update all of our libraries in the lib folder @ ~/wwwroot/lib to be managed by Libman. (You’ll want to clean out the folder first if you are going to use Libman to manage all of your libraries.)

  • Right click on the lib folder in ~/wwwroot and select “Add…” and then “Client-Side Library…”
  • A dialog will open allowing you to select the library you would like to pull in.
    • Choose a provider (more on this in a bit; we’ll leave it on cdnjs for now).
    • Start typing the name of the library (jQuery in this case); you’ll get IntelliSense to help you find the available packages.
      • Be sure to use Tab to complete the library selection. This will fill in the version and populate the files list.
      • You can also type jquery@ to get an autocomplete list of all the available versions.
  • I usually select “Choose specific files” and pick exactly the files I want in my project.
  • “Target Location” will be populated automatically from the location you right clicked on in your project and the name of the library you chose, but you can customize it if you like.
  • And finally, just click Install to add the file(s) to your project.

The first time you do this, a libman.json file will be created in the root of your project. This acts a lot like the old packages.config for NuGet packages in .NET Framework applications, in that it keeps track of the relevant settings for Libman and lists all of the libraries being managed by Libman.

In order to replace all of the original libraries, you’ll need to do this for each library and update the references to the individual files in ~/Views/Shared/_Layout.cshtml and ~/Views/Shared/_ValidationScriptsPartial.cshtml. (The original libraries and versions are jquery@3.3.1, jquery-validate@1.17.0, jquery-validation-unobtrusive@3.2.9, and twitter-bootstrap@3.3.7.)

Managing Libraries Manually

The second way you can manage libraries is by right clicking the project and selecting “Manage Client-Side Libraries…”. This will open the libman.json file at the root of your project for editing. (If libman.json does not exist, one will be created for you.) Here’s an example of a new, empty file:

{
  "version": "1.0", 
  "defaultProvider": "cdnjs",
  "libraries": []
} 

To add your first library to this file manually, expand the brackets [] after libraries (libraries is an array) and start a new object with braces {}. Inside the braces, add four new elements:

  • “provider” – the value for this should be one of the following:
    • “cdnjs” – use cdnjs.com as your provider.
    • “unpkg” – use unpkg.com as your provider.
    • “filesystem” – use your local file system.
  • “destination” – The path from the root of your project to put the library files into.
  • “library” – The name (and version) of the library, e.g. jquery@3.3.1.
  • “files” – A JSON array of file names.

Here’s what it might look like with cdnjs as the provider and the same destination we used above (~/wwwroot/lib) for jQuery 3.3.1:

{
  "version": "1.0", 
  "defaultProvider": "cdnjs", 
  "libraries": [ 
    {
      "provider": "cdnjs", 
      "destination": "wwwroot/lib/jquery", 
      "library": "jquery@3.3.1", 
      "files": [ 
        "jquery.js", 
        "jquery.min.js", 
        "jquery.min.map"
      ]
    }
  ]
} 

Microsoft has provided a nice editor experience for this file inside Visual Studio. You get autocomplete for just about everything, including the library names and file names. If you set the insertion point inside a library object, a light bulb will show up in the margin that you can click on, or you can use Ctrl+. to activate a menu of options to update or uninstall that library. Also notice line 3, “defaultProvider”: “cdnjs”? This means that if you omit the provider in a given library, “cdnjs” will be the provider by default. You can also add a line “defaultDestination”: “my/default/path”, which will allow you to omit the destination in each of the library objects.

Summary

When I read the initial blog post about this new feature, there was no GUI at all, just the JSON file. At first I wasn’t a fan, but after trying it out and seeing the autocomplete work, it began to grow on me. Now I still use the JSON editor even with the GUI being available. It’s a simple but useful tool that does exactly what is needed and nothing more (and I can finally control where my client-side dependencies get installed).

Configure Linux Mint/Ubuntu Screen Resolution Under Hyper-V

As a .NET developer running Windows 10, I had just about given up on running Linux in a virtual machine on my development machines. Once you install Visual Studio 2012/2013/2015 with all the bells and whistles (specifically the tooling to support Windows Phone development), you end up with Hyper-V installed and configured on your system to support the Windows Phone emulators. The obvious thing to do then is to create your VM in Hyper-V, but that results in a virtual machine running in a window at 1024×768 or 1152×864, which is a little annoying on a 1920×1080 display (definitely workable, but annoying). After poking around inside Linux for a while trying to get the resolution set to 1920×1080, I decided that surely I needed virtual display drivers, much like I had used in the past with VMware. I began scouring the web for the Hyper-V equivalent of the VMware Tools. Unfortunately, I wasn’t having any luck finding what I was looking for.

If you think you might try VMware or VirtualBox alongside your Hyper-V installation, think again. Running either of these platforms alongside Hyper-V is difficult due to compatibility issues; in particular, they clash around the virtual network adapters. I was able to get this scenario to work, but it required creating scripts to disable Hyper-V and then rebooting the machine to switch between the two virtualization platforms.

After doing some research, I found that it is possible to get the virtual machine to boot up with your desired resolution with a little modification to your GRUB file. Here’s how to make it happen:

  • Install the latest version of your distro of choice (Ubuntu or Linux Mint anyway)
    • I used Mint 17.2 (Rafaela) with the Cinnamon desktop for my virtual machine
  • Open up a terminal window
  • Navigate to your /etc/default directory
    • cd /etc/default
  • Open your grub file for editing as an administrator
    • sudo gedit grub
  • Find the line GRUB_CMDLINE_LINUX_DEFAULT and change it to:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1920x1080"
  • Update your grub file
    • sudo update-grub

Now just reboot your Linux virtual machine and you’re good to go.

Resources

http://nramkumar.org/tech/blog/2013/05/04/ubuntu-under-hyper-v-how-to-overcome-screen-resolution-issue/