Category: Tips, Tricks & Notes

A Practical Local LLM Environment for Developers Using Ollama, Open Web UI, and Cody

Llama developing software

Over my last few posts I have talked about an approach to running large language models locally, 100% in Docker. I chose this approach for a few reasons:

  • It is simple to get up and running. Given the docker-compose file, you really don’t need much knowledge to start everything up and play around.
  • Everything is pretty much encapsulated in that single docker-compose file and things are fairly isolated, so the chances of success are greatly improved.
  • It’s simple to clean up/remove everything, either because you want to start over fresh, or you are just done and want to clean up your system.

All that makes it a great playground and a way to dip your toes in the water, so to speak. That said, as I began to use the system day to day I ran into some rough edges that made it impractical for real daily work. The biggest issue I experienced was that the model took nearly 2 minutes to load initially. If that only happened once per day, it would have been non-ideal, but bearable. What I found in practice, however, was that if I didn’t use the LLM for more than about 5 minutes, it would unload, and I would have to wait another 2 minutes the next time I went to use it. Since I often use it once or twice and then not again for a few minutes or more, in real terms I ended up waiting nearly 2 minutes almost every time I turned to the system.
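As an aside, that roughly five-minute idle window matches Ollama’s default keep_alive of 5 minutes, so raising it may help if you want to stay fully in Docker. A sketch (the variable is documented by Ollama; adjust the duration to taste):

```shell
# Ollama unloads an idle model after about 5 minutes by default; the
# OLLAMA_KEEP_ALIVE environment variable changes that. On Windows, set
# it system-wide so the Ollama service picks it up on its next start:
setx OLLAMA_KEEP_ALIVE "1h"
# ("-1" keeps models loaded indefinitely)
```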

To solve this issue I have adapted things a bit to run Ollama “on the metal”, installing it directly on Windows, while still running Open Web UI as a Docker container. I’m not sure where the bottleneck is when running Ollama in Docker, but this change appears to completely resolve the issue, and now that Ollama for Windows is available and working properly it is not substantially harder to get it all running.

The New Set Up

I am running this on Windows, but there are installers for Ollama for Windows, Mac and Linux, so in theory it should work on any desktop platform you are using. You should simply need to follow the appropriate installation steps for your platform.

Ollama Set Up

To install Ollama on Windows simply download the installer from ollama.com/download and run the installer. Once installed you should have a llama icon in your system tray and if you navigate to http://localhost:11434/ you should be greeted with the message “Ollama is running”.

At this point you can use Ollama from the command line. Try the following commands to pull a model and run an interactive session in your terminal.

ollama pull llama3.1

ollama run llama3.1

Note: If your terminal was open before installing Ollama you may need to start a new session to ensure Ollama is on the path in your session before executing these commands.
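If you would rather script against Ollama than use the interactive session, the same functionality is exposed over a local HTTP API. A sketch, assuming the default port and the llama3.1 model pulled above:

```shell
# List the models Ollama has available locally:
curl http://localhost:11434/api/tags

# Run a one-shot, non-streaming prompt:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```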

Open Web UI Set Up

I am setting up/starting Open Web UI using a docker compose file that is really just a stripped down version of the docker compose file from my previous posts.

services:
  openWebUI:
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      OLLAMA_BASE_URL: http://host.docker.internal:11434
    volumes:
      - c:\docker\data\owu:/app/backend/data

Note: Notice the last line of the compose file has a volume mapping. This maps the local path C:\docker\data\owu into the container at /app/backend/data. This allows your configuration data to persist between runs of the Open Web UI container (for example after a system re-boot). You can make this directory anything you would like that exists on your system.

Simply copy this file into a directory on your system and from a terminal, in that directory, run the following command to start the container.

docker compose up -d

It will take a minute or so for the container to come up and initialize, but once it does the web UI should be available at the URL: http://localhost:3000.

The first time you visit you will need to create an account (this is a local account, nothing is shared externally), which will become an administrator of your instance. Once logged in you should already be pointed at the correct Ollama URL and, assuming you downloaded a model earlier when testing Ollama in the terminal, that model should already be selected and ready to go.

If you did not download a model earlier, or want to add additional models you can do that in the admin settings by clicking your username in the sidebar (on the bottom left) and selecting Admin Panel.

Then select “Settings” and then “Connections” in the main UI.

Then click the wrench icon next to your Ollama API connection.

That should bring up the “Manage Ollama” dialog.

From here, simply type the name of a new model into the “Pull a model from Ollama.com” field and click the download button. The model will begin downloading, and once it has completed and been validated (you should see a green alert message in the UI) you can go back to your chat and select the new model from the drop-down near the top left corner of the UI. You can find a list of models available for download at https://ollama.com/search

At this point you should have a functioning LLM running locally, with access via the web UI. Next we’ll bring AI support directly into Visual Studio Code.

Cody Visual Studio Code Plugin Set Up

To install and configure Cody in Visual Studio Code, open the Extensions Marketplace, search for “Cody”, and find the extension “Cody: AI Coding Assistant with Autocomplete & Chat”.

Install Cody extension

Click “Install” and wait for the Getting Started With Cody screen to appear once completed. Once you are done reviewing this screen you can close it and go back to your code (or open some code to test things out).

You can use Cody with any supported type of code; in my case I’ll take a look at some old C# code. To get started you will need to log in to Cody/Sourcegraph by clicking the Cody icon on the left sidebar and choosing a method to authenticate. I’ll be using the free version of Cody and authenticating using GitHub.

At this point you should see a new side panel that will allow you to interact with Cody to perform a number of AI-assisted actions, including documenting existing code, explaining code (great for new code bases), finding code smells, generating unit tests, and even generating new code. This is, however, currently using external services to provide the AI assistance. To get it pointed locally, follow these steps:

  • Click on the gear icon in the bottom left corner of VS Code.
  • Select “Settings”.
  • In the search box near the top of the page, type “Cody autocomplete”.
  • Under Cody > Autocomplete > Advanced: Provider, select the option experimental-ollama.

That should set up auto-completion in the editor to use your local LLM. Additionally, and more importantly for how I tend to use Cody so far, in the Cody chat dialog you should see something like this:

Cody Chat provider Selection

Under the prompt input field you will see a drop down with all the available LLMs. That list should include a section labelled Ollama (Local models). Select one of these models to run your prompts against your local LLM.

My experience so far with Cody has been mostly positive, but somewhat limited (I haven’t used it much yet). I’ve had really good luck using it to explain and document existing code. It has generated some useful unit tests, and did a good job writing a few entirely new classes implementing existing interfaces in my code.

Running PostgreSQL in a Container on Windows 10

Today at work we were setting up a development environment for a .NET Core project using PostgreSQL as its datastore. We decided that we would set up the database server in a container, the same way I have been running SQL Server for local development (see my recent article: Running Microsoft SQL Server in a Container on Windows 10). Using the docker-compose file from that article as a basis, and referring to the documentation for the postgres Docker image on Docker Hub, we put together a docker-compose file for PostgreSQL that looked similar to this:

version: "3"
services:
  postgres:
    image: "postgres"
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: "MyUser"
      POSTGRES_PASSWORD: "Password!23"
      POSTGRES_DB: "example"
    volumes: 
      - C:\Docker\PostgreSql\data:/var/lib/postgresql/data

Upon running docker-compose up we were greeted with the following output, containing an error message:

Creating postgresql_postgres_1 ... done
Attaching to postgresql_postgres_1
postgres_1  | The files belonging to this database system will be owned by user "postgres".
postgres_1  | This user must also own the server process.
postgres_1  |
postgres_1  | The database cluster will be initialized with locale "en_US.utf8".
postgres_1  | The default database encoding has accordingly been set to "UTF8".
postgres_1  | The default text search configuration will be set to "english".
postgres_1  |
postgres_1  | Data page checksums are disabled.
postgres_1  |
postgres_1  | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1  | creating subdirectories ... ok
postgres_1  | selecting dynamic shared memory implementation ... posix
postgres_1  | selecting default max_connections ... 20
postgres_1  | selecting default shared_buffers ... 400kB
postgres_1  | selecting default time zone ... Etc/UTC
postgres_1  | creating configuration files ... ok
postgres_1  | running bootstrap script ... 2020-02-25 02:38:12.326 UTC [80] FATAL:  data directory "/var/lib/postgresql/data" has wrong ownership
postgres_1  | 2020-02-25 02:38:12.326 UTC [80] HINT:  The server must be started by the user that owns the data directory.
postgres_1  | child process exited with exit code 1
postgres_1  | initdb: removing contents of data directory "/var/lib/postgresql/data"
postgresql_postgres_1 exited with code 1

Notice the FATAL line near the end: “FATAL: data directory "/var/lib/postgresql/data" has wrong ownership”. After reading the error message we noted that a little earlier the output reads “fixing permissions on existing directory /var/lib/postgresql/data ... ok”. Also, near the top of the output it reads “The files belonging to this database system will be owned by user "postgres".”, followed by “This user must also own the server process.”. Interesting…

So after digging around a bit we found that the user “postgres” must indeed own the files in order for the database system to read them, and that the container starts up as root. The “fixing permissions” step is attempting to fix exactly this, and from what we found online it will… if the data directory is on a Linux file system. Since we are attempting to mount these files from a Windows file system, “fixing the permissions” fails. No major surprise there. So what is the workaround for us poor developers working on Windows machines?

Named Volumes to the Rescue

In order to get this to work we set up a named volume. In this scenario, Docker takes care of handling the files and where they are actually stored, so we don’t readily have access to the files, but we don’t really care all that much. We just want our data to persist and not get blown away when the container gets deleted.

Here is the new (working) docker-compose file with the named volume:

version: "3"
services:
  postgres:
    image: "postgres"
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: "MyUser"
      POSTGRES_PASSWORD: "Password!23"
      POSTGRES_DB: "example"
    volumes: 
      - psql:/var/lib/postgresql/data

volumes:
  psql:

Using this approach you may want to keep an eye on the named volumes on your system and clean them up when you are no longer using them. To get a list of the volumes on your machine use the following command:

docker volume ls

That will dump out a list of volumes on your machine that looks something like:

DRIVER              VOLUME NAME
local               600de9fcef37a60b93c410f9e7db6b4b7f9966faf5f6ba067cc6cb55ee851198
local               ae45bfac51d4fb1813bd747cc9af10b7d141cf3affa26d79f46f405ebfa07462
local               b94806ba697f79c7003481f8fd1d65599e532c0e2223800b39a2f90b087d5127
local               d02adf9ab33dfa22e154d25e13c5bb383a5969c19c1dd98cfa2ac8e560d87eb4
local               postgresql_psql

Notice the last entry, named “postgresql_psql”? That is the one we just created above. To remove it, use the following command (note: Docker will not allow you to remove the volume if it is referenced by a container, running or not, so you’ll want to stop and remove the container first):

docker volume rm postgresql_psql
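For reference, a typical cleanup sequence looks like this (assuming the compose file above, and that you no longer need the data):

```shell
# Stop and remove the containers first (the volume must be unreferenced):
docker-compose down

# Remove the specific named volume...
docker volume rm postgresql_psql

# ...or remove every dangling (unreferenced) volume in one go:
docker volume prune
```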

Running Microsoft SQL Server in a Container on Windows 10

Why, you may ask? SQL Server runs just fine on Windows 10, but there are a few advantages to running SQL Server in a container rather than installing it on your machine. The biggest advantage is that you can throw it away at any time, for any reason (like a new version shipping), leaving your machine pristine and fully functional. If you have ever tried to uninstall SQL Server from your machine you’ll definitely appreciate that. It is also faster to get up and running than a full install of SQL Server (assuming you already have Docker Desktop and Docker Compose installed, which I do).

In the modern world of microservice development I find that over time I end up with all sorts of dependencies installed on my machine for various projects. One project may be using SQL Server, the next MongoDB and the next PostgreSQL. And then there is Redis, RabbitMQ, the list goes on and on… Running these dependencies in containers just makes it quick and easy to switch between projects and not have all of these dependencies cluttering up my machine.

As I mentioned, this approach assumes you have Docker Desktop installed, and I prefer to use docker-compose as well, just to simplify starting things up and shutting them down when I need to. If you don’t already have these tools installed you can get them from Docker Hub, or by using Chocolatey (the Windows installer for Docker Desktop will install both for you):

choco install docker-desktop

Getting Started

It’s pretty simple to get an instance of SQL Server running in a container; you’ll find all the basic information to get started on the Docker Hub Microsoft SQL Server listing. To start up the latest version of SQL Server 2017, use the following command from your command shell:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -d mcr.microsoft.com/mssql/server:2017-latest

Note: I’m running the commands in PowerShell, which requires double quotes. If you are running them from a bash shell you can use single quotes instead.

The -e arguments set environment variables inside the container that are picked up by SQL Server when it runs:
ACCEPT_EULA=Y accepts the Microsoft SQL Server EULA.
SA_PASSWORD sets the sa account password (you might want to choose a better password!).

-p maps the ports your-machine:container. If you want to map 1433 (the standard SQL Server port) to itself on your machine use -p 1433:1433; in my examples I’ll be mapping to 4133 on my machine, as above.

-d runs the container detached, returning the container id and releasing your shell prompt for you to use. If you omit it, standard out will be dumped to your shell as long as the container is running.

mcr.microsoft.com/mssql/server:2017-latest specifies the image to run (and pull, if you don’t already have it). The :2017-latest part is the tag, and means to pull the latest tagged 2017 version of the image. You can specify a specific version if you so choose.

So if we run the command above (and we haven’t previously run it), Docker will go out, pull the image, and start it up. It will likely take 30 seconds to a few minutes to download the image, but once it has completed you should see something like the following in your shell:

❯ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -d mcr.microsoft.com/mssql/server:2017-latest
Unable to find image 'mcr.microsoft.com/mssql/server:2017-latest' locally
2017-latest: Pulling from mssql/server
59ab41dd721a: Pull complete
57da90bec92c: Pull complete
06fe57530625: Pull complete
5a6315cba1ff: Pull complete
739f58768b3f: Pull complete
3a58fde0fc61: Pull complete
89b44069090d: Pull complete
93c7ccf94626: Pull complete
0ef1127ca8c9: Pull complete
Digest: sha256:f53d3a54923280133eb73d3b5964527a60348013d12f07b490c99937fde3a536
Status: Downloaded newer image for mcr.microsoft.com/mssql/server:2017-latest
bcb2d2585339b3f7fd1a2fdeafff202359ce563213801949a4c55f954e5beb11
❯

At this point you should have a shiny new instance of SQL Server 2017 up and running. You can see the running container by executing

docker ps

This will list out all of the running containers on your machine.

Note the container ID and name; you can use these to reference the container in subsequent Docker commands. At this point you can connect to your database server from your application or SQL Server Management Studio. With the command above, the connection string would be: “Server=localhost,4133;Database=master;User Id=sa;Password=Password#1”.
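If you don’t have Management Studio handy, you can also run a quick smoke-test query using the copy of sqlcmd that ships inside the image (the tools path below is where the 2017 image keeps them, as far as I recall):

```shell
# Run a query inside the container with the bundled sqlcmd
# (replace bcb with your container id or name from `docker ps`):
docker exec -it bcb /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P "Password#1" -Q "SELECT @@VERSION"
```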

To stop the instance:

docker stop bcb

Above I used a shortened/abbreviated version of the container id; you can do this as long as it uniquely identifies the container. If I had two containers whose ids started with this string, I would need to use the full id (or at least more of it), or the name.

I can start it up again using:

docker start bcb

And I can permanently delete the instance using:

docker stop bcb
docker rm bcb

If you need to see the containers you have that are not currently running (i.e. stopped, but not removed), use:

docker ps -a

Making Things a Bit More Usable

All this is awesome, but you’ll soon run into a couple of issues:

  • You’ll grow tired of typing all the long commands, remembering all the correct switches, and listing out the containers to get the ids to manage them.
  • Once you delete your containers you’ll lose your databases! That’s right, the database files are stored in the container, so once you delete the container they’re gone.

Let’s start by solving the second problem first, which will make the first problem worse :(, then we’ll circle back to solve the first problem.

Mapping Your Data Files to Your Local Machine

Step one: You’ll need to share a drive in Docker. To do this:

  • Right click on the Docker Desktop Icon in your system tray and select “Settings”.
  • Select the “Resources” item and then “File Sharing”.
  • Select a drive to share and click “Apply & Share”

Step two: Create a folder in your shared drive to map into your container. In my case I’ve shared my X: drive, so I’ve created the folder X:\DockerVolumes\SqlData\Sample

Step three: Now we are ready to modify our run command to map the shared (empty) folder into our container’s data directory. (I would avoid spaces in the path to your shared volumes directory; as I recall it makes things “fun”.)

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password#1" -p 4133:1433 -v X:\DockerVolumes\SqlData\Sample:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2017-latest

Assuming everything works as expected, you should now have all of your system databases in your shared directory. Now they will persist even if you destroy the container and spin up a new one.

Directory: X:\DockerVolumes\SqlData\Sample


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       2020-01-29  10:07 PM        4194304 master.mdf
-a----       2020-01-29  10:07 PM        2097152 mastlog.ldf
-a----       2020-01-29  10:07 PM        8388608 model.mdf
-a----       2020-01-29  10:07 PM        8388608 modellog.ldf
-a----       2020-01-29  10:07 PM       14024704 msdbdata.mdf
-a----       2020-01-29  10:07 PM         524288 msdblog.ldf
-a----       2020-01-29  10:07 PM        8388608 tempdb.mdf
-a----       2020-01-29  10:07 PM        8388608 templog.ldf

If they do not show up, try stopping the container and restarting it without the -d switch, then read through the output in your terminal; it will usually give you a clue as to the problem.
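You can also read that same output without restarting the container at all:

```shell
# Dump the container's stdout/stderr; -f keeps following new output:
docker logs -f CONTAINERID
```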

Cleaning It All Up with Docker Compose

All that is great, but typing out the full docker run command above, with all its arguments, every time you want to start SQL Server is a bit annoying and error prone. To solve this we’ll put all these arguments into a docker-compose file and make things much easier.

To organize things I create a folder on my drive to contain my docker-compose files, each in its own subfolder. For example, C:\Docker\Sample would contain a single docker-compose.yml file that defines my configuration for SQL Server 2017. Here is an example file for the docker run command we ran above:

version: "3"
services:
  default-sql:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    ports:
      - 4133:1433
    environment:
      SA_PASSWORD: "Password#1"
      ACCEPT_EULA: "Y"
    volumes:
      - X:\DockerVolumes\SqlData\Sample:/var/opt/mssql/data

Most of this should look pretty familiar; it’s just a YAML representation of the arguments we’ve been specifying above.

If we navigate to the folder containing our docker-compose file, in my case C:\Docker\Sample\ we can simply run:

docker-compose up -d

Once again, the -d switch runs the container detached. You can omit it and watch what is happening inside your container. After a few seconds our server will be up and running. When we are done with the container we can run:

docker-compose down

Now everything should be spun down. If you’re really lazy like me you can create an alias for docker-compose in your PowerShell profile so you can just use:

dc up -d
dc down
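The alias itself is a one-liner in your PowerShell profile (run notepad $PROFILE to edit it); a sketch, assuming docker-compose is on your PATH:

```shell
# PowerShell profile entry: a plain alias works here because PowerShell
# forwards the remaining arguments (up -d, down, ...) to the aliased command.
Set-Alias -Name dc -Value docker-compose
```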

Final Thoughts

You’ll want to keep an eye on the containers you have sitting around in a stopped state by using “docker ps -a”, and clean up old containers with “docker rm CONTAINERID”. You’ll also want to keep an eye on the images you have cached and periodically clean them up as well. You can list them with “docker images” and remove them with “docker rmi IMAGEID” (rmi = remove image). These images can be pretty good sized (the current SQL 2017 image is 1.4GB).

Configure Linux Mint/Ubuntu Screen Resolution Under Hyper-V

As a .NET developer running Windows 10, I had just about given up on running Linux in a virtual machine on my development machines. Once you install Visual Studio 2012/2013/2015 with all the bells and whistles (specifically the tooling to support Windows Phone development) you end up with Hyper-V installed and configured on your system to support the Windows Phone emulators. The obvious thing to do then is to create your VM in Hyper-V, but that results in a virtual machine running in a window at 1024×768 or 1152×864, which is a little annoying on a 1920×1080 display (definitely workable, but annoying). After poking around inside Linux for a while trying to get the resolution set to 1920×1080, I decided that surely I needed virtual display drivers, much like I had used in the past with VMware. I began scouring the web for the Hyper-V equivalent of the VMware Tools. Unfortunately I wasn’t having any luck finding what I was looking for.

If you think you might try VMware or VirtualBox alongside your Hyper-V installation, think again. Running either of these platforms alongside Hyper-V is difficult due to compatibility issues; in particular, they clash around the virtual network adapters. I was able to get this scenario to work, but it required creating scripts to disable Hyper-V and then rebooting the machine to switch between the two virtualization platforms.

After doing some research I found that it is possible to get the virtual machine to boot up at your desired resolution with a little modification to your grub file. Here’s how to make it happen:

  • Install the latest version of your distro of choice (Ubuntu or Linux Mint anyway)
    • I used Mint 17.2 (Rafael) with the Cinnamon desktop for my virtual machine
  • Open up a terminal window
  • Navigate to your /etc/default directory
    • cd /etc/default
  • Open your grub file for editing as an administrator
    • sudo gedit grub
  • Find the line GRUB_CMDLINE_LINUX_DEFAULT and change it to :
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1920x1080"
  • Update your grub file
    • sudo update-grub
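The edit in the steps above can also be scripted with sed. Here’s a sketch run against a sample copy of the file, so nothing on the host is touched; on the real system you would target /etc/default/grub with sudo and then run update-grub:

```shell
#!/bin/sh
# Create a sample grub file containing the stock default line:
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > grub.sample

# Append the Hyper-V framebuffer resolution to the kernel arguments:
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1920x1080"/' grub.sample

# Show the result:
cat grub.sample
```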

Now just reboot your Linux virtual machine and you’re good to go.

Resources

http://nramkumar.org/tech/blog/2013/05/04/ubuntu-under-hyper-v-how-to-overcome-screen-resolution-issue/