Experimenting with LLMs *locally* (offline)

If you’ve been thinking about experimenting with large language models but have been holding back because of the complexity of the systems, the cost of API keys, the time required, security concerns, etc., then these projects may be of interest to you…

Ollama (GitHub) is a free and convenient way to get up and running with a variety of LLMs. It has a standalone macOS app and many integrations. Best of all, it runs locally on your machine (no cloud!), which is important for privacy.
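
Once the Ollama server is running, you can also talk to it from Python. A minimal sketch using the official `ollama` package (`pip install ollama`); the model name `llama3` is an assumption, so substitute whatever you have pulled with `ollama pull`:

```python
# Minimal sketch: chat with a local Ollama server via the official
# `ollama` Python package. Assumes `ollama serve` is running and that
# the "llama3" model has already been pulled (`ollama pull llama3`).
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "In one sentence, what is a local LLM?"}],
)
print(response["message"]["content"])
```

Nothing leaves your machine; the package just wraps HTTP calls to the local server.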

LlamaBot (GitHub) is an amazing project that provides a Pythonic interface to LLMs. It makes it very easy to interact with LLMs in your programs and even in Jupyter notebooks, and it supports several backends, including Ollama.
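
For a flavor of the API, here is a rough sketch using LlamaBot’s `SimpleBot`; the `"ollama/mistral"` model string is an assumption (the exact spelling of backend-routed model names may differ across versions, so check the LlamaBot docs):

```python
# Rough sketch of LlamaBot's Pythonic interface (pip install llamabot).
# The system prompt is positional; model_name selects the backend/model.
from llamabot import SimpleBot

bot = SimpleBot(
    "You are a terse assistant who answers in plain English.",  # system prompt
    model_name="ollama/mistral",  # assumption: routes to a local Ollama model
)
reply = bot("What is exploratory data analysis?")  # returns the model's message
```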

Another interesting program is LM Studio, a desktop app that lets you experiment with various LLMs locally (offline). Models can be downloaded from Hugging Face repositories. I heard about LM Studio on TalkPython.fm.
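
LM Studio can also expose the loaded model through a local, OpenAI-compatible server (by default on `localhost:1234`), so the standard `openai` client works against it. A minimal sketch, assuming you have enabled the server in the LM Studio UI:

```python
# Minimal sketch: point the standard OpenAI client at LM Studio's
# local server. The api_key is ignored locally but must be non-empty.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
completion = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Hello from an offline LLM!"}],
)
print(completion.choices[0].message.content)
```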

I haven’t done much with LLMs to date, but I am starting to experiment with them as “user interfaces” to exploratory data analysis in some of my research projects.

Cheers!

I just started experimenting with Ollama. I’ve had no issues running LLMs locally on my Mac Studio (M1 Max, 64 GB RAM). I use it with Enchanted, a native macOS front end.

Now that @ttscoff is using VSCode, he (and others) may like to know that you can replace GitHub Copilot with Ollama and Continue. El Reg can get you started.

Update: Or go to the Continue Docs.
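
The gist is that Continue talks to your local Ollama server instead of a cloud model. A rough sketch of the relevant entry in Continue’s `config.json` (the schema changes between versions and the model name is an assumption, so treat the Continue docs as authoritative):

```json
{
  "models": [
    {
      "title": "Llama 3 (local Ollama)",
      "provider": "ollama",
      "model": "llama3"
    }
  ]
}
```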

And LM Studio looks interesting. Thanks for the link.

Have fun!

Out of curiosity, why would I want to replace Copilot? Is there some security/privacy concern there?

For coding, I’m actually testing out JetBrains’ AI/LLM tool that is built into the PyCharm IDE. The benefit there is that the tool is already aware of all the files in the project, so when you ask for help writing a new function or refactoring an old one, its suggestions are more context-aware and specific. PyCharm also lets you audit the data sent to the cloud, and you can easily turn the AI assistant off when you have projects with more sensitive data.

I have not used Copilot, but I expect that it is similar.

The links provided in the posts above are for running LLMs locally on your computer – mainly pertinent when the data you are interacting with is sensitive (and thus you are prohibited from sending it to the cloud), like protected health information (PHI), personally identifiable information (PII), or sensitive electronic information (SEI). The challenge is to figure out use cases where LLMs are useful without risking data leakage.

Running LLMs locally via Ollama is also free, whereas an OpenAI API key can get expensive.

The LlamaBot library goes further than that, because it is actually a Python wrapper for integrating LLM functionality into your own programs. In this case, it’s not that I would use the LLM to help me with coding (like PyCharm’s AI assistant or Copilot); I would actually call the LLM to do something, like summarize documents or filter datasets.

For example, if you wanted to extend a future version of Marked3 to create a summary of an .md file, you could write a function that simply presents the .md file and a prompt to the LLM. The LLM would output the summary back to Marked, and Marked would ask whether to save the summary separately or to add it to a section (“Abstract” or “Summary”) somewhere in the original .md document.
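
A hypothetical sketch of the LLM-calling half of that idea, using a local Ollama model (the function name, model name, and prompt are all assumptions for illustration; the Marked integration itself is left out):

```python
# Hypothetical sketch: summarize a Markdown file with a local LLM.
# Assumes a running Ollama server and a pulled "llama3" model.
from pathlib import Path

import ollama

def summarize_markdown(md_path: str, model: str = "llama3") -> str:
    """Return a short LLM-generated summary of a Markdown file."""
    text = Path(md_path).read_text(encoding="utf-8")
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system",
             "content": "Summarize the following Markdown document in one short paragraph."},
            {"role": "user", "content": text},
        ],
    )
    return response["message"]["content"]
```

The host app (Marked, in this example) would then decide whether to save the summary separately or insert it under an “Abstract”/“Summary” heading.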

Cost, for one. And there is always a security/privacy angle with Microsoft, and with having LLMs non-local. Whether those are real concerns is for you to decide.

There is a reason VSCodium exists.

I get Copilot for free as a “GitHub Star,” so cost isn’t an issue (at this time). And I’m happy for my code to feed the LLM; the results I get from Copilot are, well, astounding. So I guess I don’t have a reason to switch right now, but I’m still excited that there are options.

Here’s an extra tool that may prove useful – Gollama, a command-line tool for managing Ollama models, with the added ability to share Ollama models with LM Studio (so you don’t have to download them twice).
