LangChain is one of the most popular frameworks for building AI apps in Python.
If you want zero API cost, offline AI, and fast local inference, then Ollama + Llama3 is one of the best setups in 2025.

This guide shows you:

  • how to install LangChain
  • how to install Ollama
  • how to run LangChain with a local model
  • the correct 2025 LangChain syntax (prompt | llm | parser)
  • your first working AI pipeline in Python

Prerequisites

Ollama Model Details

We will be using the llama3 model running on a local Ollama installation.
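If Ollama is installed but the model has not been downloaded yet, it can be pulled ahead of time (the model name matches what we pass to LangChain later):

```shell
# Download the llama3 model (one-time download, several GB)
ollama pull llama3

# Confirm the model is available locally
ollama list
```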

Create a Python Environment

It is best practice to keep your AI projects in a dedicated virtual environment. We will use Python's built-in venv module to create one. First create the project folder, then create the virtual environment:

Bash
mkdir langchain-project
cd langchain-project
python3 -m venv venv

This will create a venv folder. You can then activate the virtual environment:

Bash
source venv/bin/activate
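On Windows the activation command differs; the equivalents are (paths assume the same venv folder name):

```shell
# Windows: cmd
venv\Scripts\activate.bat

# Windows: PowerShell
venv\Scripts\Activate.ps1
```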

Install LangChain + Ollama Integration

Bash
pip install langchain langchain-core langchain-ollama python-dotenv
Package           Purpose
langchain         high-level helpers
langchain-core    prompts, runnables, output parsers
langchain-ollama  official Ollama LLM wrapper
python-dotenv     loads environment variables (optional)

No API key is required for local Ollama.

Verify LangChain Installation

Open a Python shell

Bash
python

Then run:

Python
import langchain
import langchain_ollama

print("Installed successfully!")

You should see output like this:

Bash
(venv) ➜  langchain-project python
>>> import langchain
>>> import langchain_ollama
>>>
>>> print("Installed successfully!")
Installed successfully!
>>>

Your First LangChain + Ollama Example

Create a new file named langchain_first.py with the following contents:

Python
from langchain_ollama import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Connect to the local Ollama Llama3 instance
# (base_url defaults to http://localhost:11434; change it if your
# Ollama server runs on another host)
llm = ChatOllama(
    model="llama3",
    base_url="http://192.168.1.55:11434"
)

# 2. Create prompt template
prompt = PromptTemplate.from_template("Explain {topic} in simple words.")

# 3. Convert result into plain text
parser = StrOutputParser()

# 4. Build pipeline (Prompt → Model → Text)
chain = prompt | llm | parser

# 5. Run the pipeline
response = chain.invoke({"topic": "Generative AI"})
print(response)

This will send the API request to your Ollama server and, after a short wait, return a response like the one below:

Bash
(venv) ➜  python langchain_first.py
**Generative AI: The Magic of Creating New Content**

Imagine you have a super smart artist who can create new and original paintings, music, or even entire stories with just a few clicks. That's what Generative AI is all about!

**What does it do?**

Generative AI uses algorithms to generate new content that's similar to existing ones, but with its own unique twist. It's like training a machine to learn from examples and create something entirely new based on those examples.

**How does it work?**

Here are the basic steps:

1. **Training Data**: You feed the AI system with a huge amount of data, such as images, music, text, or videos.
2. **Learning Patterns**: The AI system analyzes this data to identify patterns, styles, and structures.
3. **Creating New Content**: With these learned patterns, the AI generates new content that's similar but not identical to the original data.

**Examples of Generative AI:**

1. **Artistic Images**: AI can generate stunning paintings or illustrations based on famous art styles or even create entirely new ones.
2. **Music Composition**: AI can compose music in various genres, like pop, rock, or classical.
3. **Language Generation**: AI can write short stories, articles, or even entire books with its own unique voice and style.

**Why is Generative AI important?**

1. **Creativity Expansion**: It allows humans to explore new creative possibilities and ideas that were previously unimaginable.
2. **Content Generation**: It enables fast and efficient content creation for various industries like media, advertising, or education.
3. **Personalization**: It can create customized experiences tailored to individual preferences.

In summary, Generative AI is a powerful tool that leverages machine learning to generate new and original content across various domains, pushing the boundaries of human creativity and innovation!
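The pipe syntax in step 4 chains the three components so that each one's output becomes the next one's input. As a rough mental model only (plain-Python stand-ins, not LangChain's actual Runnable classes), the chain behaves like function composition:

```python
# Toy stand-ins for the three pipeline stages (illustration only,
# not LangChain's real classes).

def prompt(inputs):
    # PromptTemplate: fill placeholders in a template string
    return "Explain {topic} in simple words.".format(**inputs)

def llm(text):
    # ChatOllama: would call the model; here we just echo the prompt
    return f"[model answer about: {text}]"

def parser(message):
    # StrOutputParser: extract plain text from the model output
    return str(message)

def chain(inputs):
    # prompt | llm | parser is roughly parser(llm(prompt(inputs)))
    return parser(llm(prompt(inputs)))

print(chain({"topic": "Generative AI"}))
```

LangChain's real Runnable objects layer batching, streaming, and async support on top of this basic composition.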

A More Advanced Multi-Line Prompt Example

PromptTemplate also supports multi-line, multi-step templates; writing structured instructions like this is often called prompt engineering. Create a new file named langchain_second.py:

Python
from langchain_ollama import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = PromptTemplate.from_template("""
You are an expert AI tutor.
Explain the topic below clearly, using a friendly tone.

Topic: {topic}

Keep the answer under 100 words.
""")

chain = prompt | ChatOllama(
    model="llama3",
    base_url="http://192.168.1.55:11434"  # same Ollama host as before
) | StrOutputParser()

print(chain.invoke({"topic": "Neural Networks"}))

When you run this file, you will see a response like the one below:

Bash
(venv) ➜  python langchain_second.py
Let's dive into Neural Networks!

Imagine you're trying to recognize a cat in a picture. You'd look at its whiskers, ears, and fur texture. A Neural Network is like a super-smart brain that breaks down complex information (like a picture) into smaller parts, analyzes each one, and then puts the pieces together to make a decision.

Inside a Neural Network, there are many layers of "neurons" (think tiny computers) that communicate with each other to learn patterns in data. This helps them become super-good at recognizing things like pictures, speech, or even predicting future events!
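PromptTemplate.from_template fills {placeholder} fields much like Python's own str.format, so a template can take several variables. A plain-Python illustration of the substitution (the role and limit variables are hypothetical additions, and no LangChain is required to run this):

```python
# The same template shape as langchain_second.py, with two extra
# placeholders, filled via plain str.format for illustration.
template = """
You are an expert {role}.
Explain the topic below clearly, using a friendly tone.

Topic: {topic}

Keep the answer under {limit} words.
"""

filled = template.format(role="AI tutor", topic="Neural Networks", limit=100)
print(filled)
```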

Troubleshooting Common Issues

Ollama server not reachable

If LangChain cannot connect, first check whether the Ollama server is running:

Bash
ollama serve
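It can also help to confirm that the server answers over HTTP at all. A small check script, assuming the default port 11434 (adjust base_url to match your setup):

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url="http://localhost:11434", timeout=3):
    """Return True if an HTTP server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama reachable:", ollama_reachable())
```

If this prints False, Ollama is either not running or listening on a different host or port than the one passed to ChatOllama.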

Why Use Ollama With LangChain?

Fully local: No API keys, no rate limits.

Private: Data never leaves your machine.

Fast: Runs at GPU/CPU speed with caching.

Compatible: Works with LangChain, LangGraph, RAG pipelines, and agents, making it a good fit for building

  • chatbots
  • assistants
  • offline coding tools
  • research agents
  • multi document search

Conclusion

You now have a fully working LangChain + Ollama Llama3 Python environment.
With this setup, you can build:

  • AI assistants
  • RAG systems
  • workflow engines
  • LangGraph agents
  • offline chatbots
  • automation tools
