Author Archives: Larry House

Setting Up a Local Chat AI with Ollama and Open Web UI

As a software developer and architect, I’m always excited to explore new technologies that can revolutionize the way we interact with computers. AI is taking the technology world by storm, and for good reason: it can be a very powerful tool. Sometimes, however, using a public service like ChatGPT or Microsoft’s Copilot isn’t an option, usually for privacy-related reasons.

In this article, I’ll guide you through setting up a chat AI using Ollama and Open Web UI that you can quickly and easily run on your local, Windows-based machine. We’ll use Docker Compose to spin up the environment, and I’ll walk you through the initial launch of the web UI, configuring models in settings, and generally getting things up and running.

Prerequisites

Before we dive into the setup process, make sure you have:

  • Docker installed on your machine (you can download it from the official Docker website)
  • A basic understanding of Docker Compose and its syntax. (Not necessarily required, but helpful if you run into issues or want to tweak things)
  • A compatible graphics card (GPU) to run Ollama efficiently. While this is not strictly required, responses will be painfully slow without one. My example is configured to use an Nvidia graphics card. (A quick way to sanity-check your setup is shown below.)
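
If you want to verify those last two prerequisites before continuing, the following commands (run from any terminal) will confirm that Docker is installed and that your Nvidia drivers are working:

docker --version
nvidia-smi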

Step 1: Create a Docker Compose File

Create a new file named docker-compose.yml in a directory of your choice. Copy the following content into this file:

services:
  openWebUI:
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      OLLAMA_BASE_URL: http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - c:\tmp\owu:/app/backend/data

  ollama:
    image: ollama/ollama:latest    
    environment:
      NVIDIA_VISIBLE_DEVICES: all
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: ["gpu"]
              driver: nvidia
              count: all    
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - c:\tmp\ollama:/root/.ollama

This Docker Compose file defines two services:

  • openWebUI: runs the Open Web UI container with the necessary environment variables and port mappings.
  • ollama: runs the Ollama container with NVIDIA GPU support, exposes a port for communication between the containers, and mounts a volume to store model data.
A couple of things to note here:

The two volumes: entries map local directories into the containers. That allows the data from your sessions to persist, even if the containers are restarted or your machine is rebooted. C:\tmp\ollama and C:\tmp\owu can be changed to any empty directories you choose, but remember that change in the following steps.

The environment: and deploy: sections of the ollama service configure the container to take advantage of your Nvidia GPU. If you don’t have one, or don’t want to use it, you can remove those sections and everything should still work, albeit much more slowly. If you have a different GPU, this is where you will want to make changes; in particular, the NVIDIA_VISIBLE_DEVICES variable and the driver: nvidia setting will likely need to change.

The restart: unless-stopped settings configure the containers to restart automatically unless they were manually stopped. That means they should come back up if you reboot your machine, if they crash, or if you update Docker and it restarts.

Step 2: Create Data Directories

As mentioned above, the directories C:\tmp\ollama and C:\tmp\owu will be mapped into the running containers and used for data storage. You will want to create these directories ahead of launching the containers to avoid any potential issues.
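
From a command prompt or PowerShell window, that’s simply (adjust the paths if you chose different directories):

mkdir C:\tmp\ollama
mkdir C:\tmp\owu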

Step 3: Launch the Environment

Open your terminal or command prompt and navigate to the directory where you saved the docker-compose.yml file. Run the following command:

docker-compose up -d

This will start both containers in detached mode (i.e., they’ll run in the background). Note that on newer Docker installations the command is docker compose up -d, with a space instead of a hyphen.
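
You can confirm that both containers came up cleanly by checking their status and tailing the Ollama logs from the same directory:

docker-compose ps
docker-compose logs -f ollama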

Step 4: Launch the Web UI

Once the environment is set up, navigate to http://localhost:3000 in your web browser. This should open the Open Web UI interface.

You should be presented with a screen to log in, but you won’t have an account yet. Just click the “Don’t have an account? Sign up” link under the “Sign in” button. Since this will be the first account, it will automatically become the administrator. Simply enter your name, email address and a password and create your account.

Once your account is created you will be logged in and should now see the Chat Interface, which should look pretty familiar if you have been using ChatGPT.

Step 5: Setting Up and Adding Models

Before you can start chatting it up with your new application, you’ll need to install some models to use. To get started I would install the llama3.1 model. To do this click on your name in the lower left corner of the UI, select “Settings” and then in the dialog, select “Admin Settings” on the left.

Now select “Models” in the Admin Panel and enter llama3.1 in the “Pull a model from Ollama.com” field and click the download button on the right of the field. (You can see a list of available models on Ollama.com: https://ollama.com/library)

You should see a small progress bar appear. Wait for the progress bar to get to 100%, then it will verify the hash and eventually you should see a green pop-up notifying you that it has successfully been added. Now you can click “New Chat” on the far top-left and select llama3.1 from the “Select a model” dropdown.
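
If you prefer the command line, you can also pull models by running the Ollama CLI inside the container; this uses the ollama service name from the compose file above:

docker-compose exec ollama ollama pull llama3.1

Either way, the model files land in the C:\tmp\ollama volume, so they survive container restarts.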

Next Steps

At this point you should now have a functioning Chat AI interface!

Going forward, I’ll be playing with this configuration and attempting to add more functionality, and potentially converting the Docker Compose file above into Kubernetes manifests so that I can run the service on a local Kind cluster.

Resources

Source Code

https://github.com/DotNet-Ninja/DotNetNinja.Docker.Ollama

Configuring Azure AD B2C & ASP.NET 5.0 MVC Web Applications

There are a number of solutions available these days for identity management. One easy and cheap solution for small ASP.NET applications is Azure AD B2C, which gives you your first 50,000 MAUs (Monthly Active Users) free. It can, however, be a little daunting to figure out exactly how to get things set up. This post is intended as a quick guide to getting up and running with Azure AD B2C and ASP.NET Core 5.0.

To start off with you will need to set up your directory in the Azure Portal.

Step 1: Create Your Azure AD B2C Directory

To get started, log into your Microsoft Azure Portal and navigate to “Home”. Once on the home page click the “Create a resource” icon near the top of the page.

In the search box type in “b2c” and select “Azure Active Directory B2C” from the list. This will navigate you to the Azure Active Directory B2C Page.

Next click the “Create” button near the top of the page. This will take you to “Create new B2C Tenant or Link to existing Tenant”.

Click “Create a new Azure AD B2C Tenant”.

On the “Create a tenant” page you will need to provide a name for your tenant (Organization name), a domain name, choose your country, a subscription and resource group. (Additionally, if you choose to create a new resource group, you will need to select a region.)

Now click “Review + Create” and then if everything looks correct, click “Create”.

It will take a few minutes to create your new directory. Once it completes the page will update with a link to your new tenant.

Follow the link to your new tenant.

Step 2: Configure User Flows

Now that we have our directory created it’s time to start configuring things for our application. To start off we’ll set up our user flows. The configuration of the user flows in your Azure B2C Directory determines what happens when one of the flows is triggered. The most common flow is “Sign up and Sign in”. This flow is how users actually sign into Azure AD B2C or sign up for an account in your directory so they can use your application. (You can also split these into 2 separate flows if you wish.) To get started click on the “User Flows” item under “Policies” in the left navigation bar.

You should now see the list of flows that have been configured for your directory, which at this point will be empty. Let’s start by adding a “Sign up and sign in” flow. To do that click the “New user flow” button near the top of the page.

Next select the “Sign up and sign in” button under “Select a user flow” then select “Recommended” (If you still have the option!) and finally the “Create” button.

On the “Create” page there are 5 distinct, numbered sections to complete.

Name
You can choose any unique name you would like here, but I prefer to keep it simple and just go with SignUpIn. (The whole name will be B2C_1_SignUpIn.)

Identity Providers
Since we just set up our directory, we will only have one option here which is “Email signup”. Later on you can enable third party login providers, but setting that up is probably another post unto itself. Just select the radio button and move on to the next step.

Multifactor Authentication
Under the multifactor authentication heading there are 2 groups of options. The first group allows you to select what form of multifactor authentication to choose (Email, SMS or Phone Call, SMS Only, or Phone call only).

The second set of options is for enforcement of MFA. The options are Off, Always on, and Conditional. With Conditional selected, you delegate the decision to require MFA to Azure B2C’s Conditional Access policies, which determine the risk level associated with a login at runtime and automatically require MFA when the policies’ rules call for it.

Since we are doing a demo here, and there is a charge associated with MFA usage, I will choose Email and Off for our configuration.

Conditional Access
Under this heading there is a single checkbox to enable the aforementioned Conditional Access policies. For now I will also leave this off (unchecked).

User Attributes and Token Claims
The last section is where we can select which user attributes are collected when a user signs up and which attributes are returned in your token responses when a user is authenticated. At the bottom of the list on the page is a link that opens up the full list you have to choose from. For the demo I’m going to select a handful of basics from the fly-out, such as Display Name and Email Addresses.

Finally, click the “Create” button at the bottom of the page to create your new flow.

Additional Flows

I am also going to set up two additional flows for profile editing and password reset, named “B2C_1_EditProfile” and “B2C_1_PasswordReset”, so we can demo triggering those flows from our application as well. Setting these up is the same as the “Sign up and sign in” flow we created previously. Just choose the correct flow type for each and set the options the same way we set them in the previous step.

Step 3: Set Up Your B2C Application

To get started setting up our application in Azure B2C, click “App registrations” in the left navigation bar and then click the “New registration” button near the top of the page.

To complete app registration you will need to fill in a “Name” and a “Redirect URI”. I’ll name mine “NinjaAdDemo” and set the redirect URI to “https://localhost:5001/signin-oidc”.
The rest of the options on the page can be left at their default values. Just click the “Register” button to proceed.

We need to enable implicit flow for our application. You can do that from your application page by clicking on “Authentication” in the left navigation bar, then scrolling down to the bottom of the page, checking both “Access tokens (used for implicit flows)” and “ID tokens (used for implicit and hybrid flows)”, and clicking the save button at the top of the page.

Now that your application is set up, it’s time to write some code!

Step 4: Configure Middleware In Your Application

For our demo I’ve created an MVC Web application running on .NET 5.0 using the command:

dotnet new mvc -n DotNetNinja.Samples.AzureAD
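
Then change into the newly created project directory, since the commands that follow are run from there:

cd DotNetNinja.Samples.AzureAD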

The first thing we will need to do is add a couple of NuGet packages. (You’ll want to run these commands from inside your project directory, or you can add the packages using the Package Manager inside Visual Studio if you prefer.)

dotnet add package Microsoft.Identity.Web 
dotnet add package Microsoft.Identity.Web.UI

Next, in your editor of choice (I’ll assume Visual Studio), open up your Startup.cs file and add a few lines inside the ConfigureServices method so it looks like this:

        // Requires: using Microsoft.Identity.Web; and using Microsoft.Identity.Web.UI;
        public void ConfigureServices(IServiceCollection services)
        {
            // Reads the "AzureAdB2C" configuration section and wires up OpenID Connect authentication
            services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C");

            // Adds the built-in MicrosoftIdentity UI area (sign in, sign out, etc.)
            services.AddControllersWithViews().AddMicrosoftIdentityUI();

            services.AddRazorPages();
        }

And we’ll need to add a few lines into our Configure method as well (specifically we are adding the app.UseAuthentication() & endpoints.MapRazorPages() lines):

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }
            app.UseHttpsRedirection();
            app.UseStaticFiles();

            app.UseRouting();

            app.UseAuthentication();
            app.UseAuthorization();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllerRoute(
                    name: "default",
                    pattern: "{controller=Home}/{action=Index}/{id?}");
                endpoints.MapRazorPages();
            });
        }

Step 5: Add Configuration

Now to configure our application. The Microsoft.Identity.Web middleware assumes it can find its configuration in your application settings. Notice that above we specified a config section named “AzureAdB2C” as the second parameter to the AddMicrosoftIdentityWebAppAuthentication method. Let’s add that section to appsettings.json now:

{
  "AzureAdB2C": {
    "Instance": "https://<YOUR-DIRECTORY-NAME>.b2clogin.com",
    "Domain": "<YOUR-DIRECTORY-NAME>.onmicrosoft.com",
    "TenantId": "<YOUR-TENANT-ID>",
    "ClientId": "<YOUR-CLIENT-ID>",
    "SignUpSignInPolicyId": "B2C_1_SignUpIn",
    "ResetPasswordPolicyId": "B2C_1_PasswordReset",
    "EditProfilePolicyId": "B2C_1_EditProfile",
    "CallbackPath": "/signin-oidc",
    "SignedOutCallbackPath ": "/signout-callback-oidc"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}

The actual values for this configuration come from your app registration in the Azure AD B2C Portal.

Your directory name can be found on the directory overview page.

Step 6: Add A Protected Page

So that we can see things actually working we are going to add a few UI elements. Let’s start by adding a Secure() method to our HomeController.cs that returns the User as a model to the view and mark it with the Authorize attribute.

// Requires: using Microsoft.AspNetCore.Authorization;
[Authorize]
public IActionResult Secure()
{
    return View(User);
}

Then we’ll add a razor view at ~/Views/Home/Secure.cshtml.

@model System.Security.Claims.ClaimsPrincipal
<div class="row">
    <h1>@Model.Identity.Name</h1>
    <table class="table table-striped">
        <thead>
            <tr>
                <th>Claim</th>
                <th>Value</th>
            </tr>
        </thead>
        <tbody>
            @foreach(var claim in Model.Claims.OrderBy(c=>c.Type))
            {
                <tr>
                    <td>@claim.Type</td>
                    <td>@claim.Value</td>
                </tr>
            }
        </tbody>
    </table>
</div>

Lastly, let’s add some navigation to our menu so we can get to our secure page, plus some links to sign in, sign out, edit profile, and reset password. In your ~/Views/Shared/_Layout.cshtml file, replace everything from after the closing </button> tag to the closing </nav> tag with the following.

<div class="navbar-collapse collapse d-sm-inline-flex justify-content-between">
    <ul class="navbar-nav flex-grow-1">
        <li class="nav-item">
            <a class="nav-link text-dark" asp-area="" asp-controller="Home" asp-action="Index">Home</a>
        </li>
        <li class="nav-item">
            <a class="nav-link text-dark" asp-area="" asp-controller="Home" asp-action="Privacy">Privacy</a>
        </li>
        <li class="nav-item">
            <a class="nav-link text-dark" asp-area="" asp-controller="Home" asp-action="Secure">Secure</a>
        </li>
    </ul>
    @if (User?.Identity?.IsAuthenticated??false)
    {
        <ul class="navbar-nav float-lg-right">
            <li class="nav-item dropdown">
                <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#" 
                   role="button" aria-haspopup="true" aria-expanded="false">@User.Identity.Name</a>
                <div class="dropdown-menu">
                    <a class="dropdown-item" asp-area="MicrosoftIdentity" asp-controller="Account" asp-action="EditProfile">
                        Edit Profile
                    </a>
                    <a class="dropdown-item" asp-area="MicrosoftIdentity" asp-controller="Account" asp-action="ResetPassword">
                        Reset Password
                    </a>
                    <div class="dropdown-divider"></div>
                    <a class="dropdown-item" asp-area="MicrosoftIdentity" asp-controller="Account" asp-action="SignOut">
                        Sign Out
                    </a>
                </div>
            </li>
        </ul>
    }
    else
    {
        <ul class="navbar-nav float-lg-right">
            <li class="nav-item">
                <a class="nav-link text-dark" asp-area="MicrosoftIdentity" asp-controller="Account" asp-action="SignIn">Sign In</a>
            </li>
        </ul>
    }
</div>

Note that all the links related to our user flows (sign in, sign out, edit profile, and reset password) go to endpoints in the MVC area “MicrosoftIdentity”. These endpoints are provided by the Microsoft.Identity.Web.UI assembly that we referenced and configured earlier, and they handle all the details of making properly formatted requests to Azure AD B2C. All you need to do is link to them, or redirect to them, to handle any interaction with your B2C directory.

Step 7: Try it out!

Now for the fun part! Let’s run our application and give it a try.
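
If you’re working from the command line, you can launch the site from the project directory with the command below; the default .NET 5 MVC template listens on https://localhost:5001, which matches the redirect URI we registered:

dotnet run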

Try clicking on the “Secure” menu item. You should be redirected to Azure B2C to sign in. (First time through you can click the link to sign up on the bottom of the login form.)

Once you are signed in you should be redirected back to the secure page.

Once you are logged in, check out the other flows using the links under your display name as well.

Final Thoughts

While there are a lot of steps involved in setting this up, once you get used to working with Azure B2C it’s actually pretty quick to set up a new directory and get things up and running. You can also customize the login experience. (Maybe we’ll look at that in a future post, stay tuned!)

Resources

Source Code

DotNet-Ninja/DotNetNinja.Samples.AzureAD: Sample ASP.NET MVC application integration with Azure AD (github.com)

Adding Ingress to Your Multi-Node Kind Cluster

In my last post about Kubernetes I went through how to set up a Kind (Kubernetes in Docker) cluster on your Windows desktop. In this post I’ll show you how to add an nginx ingress controller to your cluster and walk through a quick demo of it working.

To get started there are a few prerequisites you’ll need to have.

  1. kubectl – the Kubernetes command line interface
  2. Docker Desktop – We’ll be running our cluster as a group of containers in Docker.
  3. Kind – Kind makes it easy to manage Kubernetes clusters on your desktop.

I went through the steps to get these prerequisites set up in my previous post, so I’ll just put the Chocolatey commands here in case you need them. If you need more guidance on setting up the prerequisites, please see the previous post: Running a Multi-Node Kubernetes Cluster on Windows with Kind.

choco install docker-desktop
choco install kubernetes-cli
choco install kind

In this post I’ll be working with the latest version of Kind (0.11.1). If you installed Kind previously using Chocolatey you can quickly upgrade with the following command.

choco upgrade kind --version=0.11.1

The first thing we’ll need to do is add a bit more to the configuration file for our Kind cluster, specifically the kubeadmConfigPatches and extraPortMappings sections on the control-plane node, so we can map the ports from our cluster to our local machine. (Note you can get all of the files in this post from GitHub. The yaml files are in the Kind.0.11.1 directory.)

# Four node (three workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 4443
    protocol: TCP
- role: worker
- role: worker
- role: worker

To get our cluster up and running we simply run the following kind command pointing to our new configuration file.

kind create cluster --config=cluster-config.yaml
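
Cluster creation can take a couple of minutes. Once it finishes, you can verify that all four nodes came up and reached the Ready state:

kubectl get nodes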

Now to deploy the ingress controller. There is a ready-made configuration file in the kubernetes/ingress-nginx repository. You can apply it using the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml  

Caution: These files have a habit of changing often as Kubernetes evolves, so what works today with a particular version of Kubernetes may not work tomorrow with the same version of Kubernetes because the file at that URL may have changed and no longer be compatible (or it may just be broken). I have copied the version of the file I am using with my cluster into my repository as ingress-nginx.yaml so that if (when) this happens there will be a working copy available for these instructions.

It will likely take only a few seconds for the command to complete, but it may take a minute or two for everything to actually spin up inside the cluster and be ready to use. You can verify that everything is ready by running the following:

kubectl wait --for=condition=Ready pod -l app.kubernetes.io/component=controller -n ingress-nginx --timeout=300s
kubectl wait --for=condition=Complete job -l app.kubernetes.io/component=admission-webhook -n ingress-nginx --timeout=300s

This will wait until everything is in the proper state before returning.

Now that everything is set up, let’s test it out by deploying an nginx pod, a service to expose it, and an ingress that allows us to connect to it from our local machine, then verifying that we can indeed reach our running pod via the ingress. Here’s the yaml for that (saved as test-deploy.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: ninja-web-pod
  labels:
    role: webserver
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: ninja-svc
spec:
  selector:
    role: webserver
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ninja-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: ninja.k8s.local
      http:
        paths:
          - backend:
              service:
                name: ninja-svc
                port:
                  number: 80
            path: /
            pathType: Prefix
            

Deploy the application using kubectl:

kubectl apply -f test-deploy.yaml
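
You can confirm that the pod, service, and ingress were all created with:

kubectl get pods,svc,ingress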

Note that I am specifying the host name ninja.k8s.local for my ingress. In order to use that host name we’ll also want to add a hosts file entry mapping it to your local machine. On Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts, and you’ll need administrator rights to edit it.

127.0.0.1  ninja.k8s.local
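
If you prefer not to edit the file by hand, you can append the entry from an elevated PowerShell prompt:

Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value "127.0.0.1  ninja.k8s.local"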

And with that we should be able to access our pod at http://ninja.k8s.local:8080/ and see the default nginx page served up.

Creating Project Templates for dotnet – Part 4 – Visual Studio Support

This is part 4 of a 4-part series of posts.

In the previous posts in this series we have explored how to set up a project as a template and the basics of the new templating system, how to optionally include/exclude files, and finally how to handle optional content within various files in the project. In this final post we’ll take a look at how to add support for your template in Visual Studio so that users of your template can also use the template from within the IDE itself.

The first thing we need to do is enable support for third party templates in Visual Studio’s options. To do that, open the options via the menus (Tools | Options), expand Environment to find Preview Features, and enable “Show all .NET Core templates in the New project dialog (requires restart)”.

Next we’ll add a couple of items to the template.json file. Immediately after shortName we’ll add defaultName (this drives the default name Visual Studio will generate for a new project) and description (which will show up under your project type name in the dialog), plus the Framework section under symbols (which will drive the framework selection drop-down in the additional properties dialog).

{    
...    
    "shortName": "ninjamvc",     
    "defaultName": "DotNetNinjaMVC",   
    "description": ".Net 5.0 MVC Web Application - Batteries Included",         
    ...
    "symbols": {
        "Framework":{
            "type": "parameter",
            "description": "The target framework for the project.",
            "datatype": "choice",
            "choices": [
                {
                    "choice": "net5.0",
                    "description": ".Net 5.0"
                }
            ],
            "replaces": "net5.0",
            "defaultValue": "net5.0"
        },
        "ReadMe": {
            "type": "parameter",
            "datatype":"bool",
            "defaultValue": "true",
            "description": "Include a Read Me file (ReadMe.md) in the solution."
        },
       ...
}

With that done we need to add a new file named ide.host.json to our .template.config folder, which should be located at ~\src\Content\.template.config. This file allows us to surface all of our command line options as check-boxes in the new project dialogs within Visual Studio.

{
    "$schema":"http://json.schemastore.org/vs-2017.3.host",
    "symbolInfo": [
        {
            "id": "ReadMe",
            "name": {
                "text": "Include a Read Me file (ReadMe.md) in the solution."
            },
            "isVisible": "true"
        },        
        {
            "id": "License",
            "name": {
                "text": "Include an MIT License file (License.txt) in the solution."
            },
            "isVisible": "true"
        },
        {
            "id": "GitIgnore",
            "name": {
                "text": "Include a Git Ignore file (.gitignore) in the solution."
            },
            "isVisible": "true"
        },
        {
            "id": "EditorConfig",
            "name": {
                "text": "Include an Editor Config file (.editorconfig) in the solution."
            },
            "isVisible": "true"
        },
        {
            "id": "Authentication",
            "name": {
                "text": "Include code integrating Auth0 authentication in the solution."
            },
            "isVisible": "true"
        }
    ]
}

The first item in the file sets the schema of the json, which enables intellisense in Visual Studio & VS Code when editing the file and is very handy. The rest of the file is an array of elements that map to our parameters in the template.json file and provide the information Visual Studio needs to display these options in its dialogs.

  • id: maps to the name of the argument in the template.json file.
  • name: maps to the text that will be displayed within Visual Studio alongside the check box for the option.
  • isVisible: makes the option visible in the IDE.

I’ve also updated my Test-Template.ps1 file to add the following snippet, which clears the template cache used by Visual Studio so that changes to the template appear in Visual Studio when the template is reinstalled during testing.

Remove-Item ~/.templateengine -Recurse -Force

Also note that I have updated the names of the options from my initial posts (changed the casing).

With all those changes we should now be able to test our template. Run the Test-Template.ps1 file to clear the cache, build and reinstall the template, and then launch Visual Studio. You should now see that the template is available, and when you use it you should be presented with dialogs that allow you to enable/disable all of the features.
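
You can exercise the same options from the command line as well. Assuming the shortName of ninjamvc shown above, something like the following should work, since the template engine exposes each symbol as a switch (the exact switch names may vary with your SDK version):

dotnet new ninjamvc -n MyWebApp --ReadMe false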

That ends this series of posts on creating templates, but we have really only scratched the surface of what can be done with the template engine. Check out the resources below for more information. You can also check out the completed source code for my template on GitHub.

Resources