Python and Ollama for airgapped local large language model interactions

Johan Louwers
3 min read · Jan 16, 2025

Developers discussing a Large Language Model

Although large language models are widely accessible through various services, with OpenAI’s ChatGPT being among the most prominent, there are situations where running your own model locally is preferable. This might be due to security concerns, operational constraints, the need to function within an air-gapped environment, or the requirement to operate in remote or potentially disconnected settings. While services like ChatGPT offer significant advantages, they don’t always align with specific deployment needs.

As part of a step-by-step approach to exploring and learning AI concepts and applying them to a number of use cases, I will write a series of Medium posts that take you along on that journey.

One way of running a large language model locally is Ollama, an open-source platform that enables users to run large language models (LLMs), such as Meta’s Llama, directly on their own machines. Ollama was founded by Michael Chiang and Jeffrey Morgan.
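To make this concrete, here is a minimal sketch of talking to a locally running Ollama instance from Python. It assumes the ollama Python package is installed (pip install ollama), the Ollama server is listening on its default port (11434), and a model has already been pulled onto the machine; in an air-gapped environment, the model files would be transferred and loaded in advance. The model name and prompt are illustrative, not taken from this post.

```python
# A minimal sketch: send a prompt to a locally running Ollama server
# from Python. Nothing leaves the machine, which is what makes this
# pattern suitable for air-gapped environments.
import ollama  # pip install ollama

# "llama3.2" is an illustrative model name; any model already pulled
# into the local Ollama instance will work.
response = ollama.chat(
    model="llama3.2",
    messages=[
        {"role": "user", "content": "Explain what an air-gapped system is."},
    ],
)

# The generated text is in the message content of the response.
print(response["message"]["content"])
```

Under the hood, the package simply calls Ollama’s local REST API on http://localhost:11434, so the same interaction works with plain HTTP requests if you prefer to avoid the extra dependency.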

While we use Ollama in this post as an example of how to run a large language model locally, there are multiple other ways of achieving the same; for example, you could use GPT4all, LMstudio.ai, Jan, or llama.cpp. While all of them provide options to run an LLM locally, they are each just a bit different while all…


Written by Johan Louwers

Johan Louwers is a technology enthusiast with a long background in supporting enterprises and startups alike as a CTO, Chief Enterprise Architect, and developer.
