-
Productive development using VS Code Continue extension with local LLMs
In my last blog post, I talked about running LLMs locally. Read that first if you want to follow along. With local LLMs up and running, I searched for ways to integrate them with VS Code. The answer is the Continue extension. Setup: In my limited experience so far, I was very impressed.…
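The setup details are cut off in this excerpt; as a rough sketch only (Continue’s config format has changed across versions, so treat this as illustrative rather than the exact config from the post), pointing Continue at a local Ollama model looks something like:

```
# Illustrative only: Continue's JSON config format varies by version.
# llama3.1:8b is the model mentioned in the next post below.
mkdir -p ~/.continue
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    {
      "title": "Llama 3.1 8B (local)",
      "provider": "ollama",
      "model": "llama3.1:8b"
    }
  ]
}
EOF
```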
-
Run AI models locally with web interface
I recently set up ollama on my 7-year-old desktop (AMD Ryzen 7 1700, 8 cores, 32GB RAM) with an equally old NVIDIA GPU (GeForce GTX 1070, 8GB VRAM). I was able to run llama3.1:8b successfully via the terminal CLI. I then configured Open WebUI, which gives me a friendly UI to work with the…
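The excerpt trails off, but the two pieces reduce to a few commands. A minimal sketch, assuming ollama is already installed and Docker is available (the port mapping and volume name are the defaults from Open WebUI’s documentation, not necessarily this exact setup):

```
# Fetch and chat with the model from the terminal CLI
ollama pull llama3.1:8b
ollama run llama3.1:8b

# Start Open WebUI in Docker, pointed at the host's ollama instance
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

The web UI should then come up on port 3000 and find the local ollama API on its default port 11434.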
-
Blog reading using Miniflux
I used to host my own blog reader running TT-RSS. I’ve recently switched to Miniflux, and I highly recommend it, for several advantages over TT-RSS. My setup: I ran it using docker-compose, along with the ol’ Apache web server. Here is a sample docker-compose.yml file, and a sample Apache2 site conf. I use Let’s Encrypt to…
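The sample files themselves are elided in this excerpt; a minimal sketch of the docker-compose.yml half, adapted from Miniflux’s documentation (credentials are placeholders, not the values from the post):

```
# Illustrative Miniflux + Postgres stack; change the placeholder secrets.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  miniflux:
    image: miniflux/miniflux:latest
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
      - CREATE_ADMIN=1
      - ADMIN_USERNAME=admin
      - ADMIN_PASSWORD=change-me
  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=miniflux
      - POSTGRES_PASSWORD=secret
    volumes:
      - miniflux-db:/var/lib/postgresql/data
volumes:
  miniflux-db:
EOF
docker-compose up -d
```

The Apache2 site conf mentioned above would then sit in front, presumably proxying to port 8080, with Let’s Encrypt handling TLS.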
-
Web front for my code
A while ago I set up my own git server. I’ve been hacking happily with it and Eclipse. In my spare time, I’ve been taking UCSD’s wonderful algorithms course: Algorithmic Design and Techniques. The course provides plenty of programming challenges! I chose the paid version so my code can be evaluated against all tests in the…
-
Getting Eclipse’s EGit to work with my own git server
I’ve been thinking about setting up my own git server for a while, and finally got it up and running last week. Since I do a lot of hacking with Eclipse, I naturally want Eclipse’s EGit to work with my own git server. Here are a couple of noteworthy points: As of this writing, if…
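The noteworthy points are truncated here, but for orientation, the server side of such a setup is typically just a bare repository reachable over SSH. A minimal sketch with a hypothetical host and path, using the same ssh:// URL that EGit accepts in its clone wizard:

```
# On the server: create a bare repository (host and path are hypothetical)
ssh user@gitserver 'git init --bare /srv/git/myproject.git'

# Locally: add the remote and push; EGit's clone wizard takes this same URL
git remote add origin ssh://user@gitserver/srv/git/myproject.git
git push -u origin master
```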