- What Are LLMs in LangChain?
- What Are Chat Models?
- LangChain Prompt Templates
- Using a Prompt Template With Ollama
- Conclusion
LangChain is built around the idea of connecting different components (LLMs, prompts, parsers, memory, and tools) into powerful AI workflows.
Before building RAG systems, agents, or LangGraph workflows, you must understand three important building blocks:
- LLMs (Large Language Models)
- Chat Models
- Prompt Templates
In this article, you will learn:
- the difference between LLMs and chat models
- how LangChain uses them
- how to build prompt templates
- how to chain a prompt, a model, and an output parser
- how to run everything locally with Ollama (Llama3)
What Are LLMs in LangChain?
An LLM (Large Language Model) is a text generation model. It receives a string and returns a string. In LangChain, LLMs operate on plain text:
input text → model → output text
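For instance, here is a minimal sketch of that string-in, string-out flow (assuming a local Ollama server with the llama3.1 model pulled, the same setup used later in this article):
from langchain_ollama import OllamaLLM

# Plain-text interface: a string goes in, a string comes out
llm = OllamaLLM(model="llama3.1")  # assumes llama3.1 is pulled in local Ollama
print(llm.invoke("Write a one-line definition of machine learning."))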
What Are Chat Models?
Chat models are more advanced than plain LLMs. Instead of raw text, they accept messages with roles:
- user
- system
- assistant
Example message format:
[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Explain neural networks."}
]
Chat models:
- handle conversation
- support multi-turn dialogue
- support system instructions
- are ideal for assistants and agents
Ollama’s ChatOllama is a chat-optimized interface over Llama3.
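As a sketch, here is the message format above expressed with LangChain message objects (again assuming a local llama3.1 model):
from langchain_ollama import ChatOllama
from langchain_core.messages import SystemMessage, HumanMessage

# Role-tagged messages go in, an AIMessage comes out
chat = ChatOllama(model="llama3.1")  # assumes a local Ollama server
response = chat.invoke([
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Explain neural networks."),
])
print(response.content)  # the assistant's reply text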
LangChain Prompt Templates
A PromptTemplate is a reusable text pattern with placeholders. For example:
"Explain {topic} in simple words."This becomes a template that LangChain can format dynamically:
topic = "AI"
→ "Explain AI in simple words."Prompt templates help you
- keep prompts organized
- add variables cleanly
- apply consistent formatting
- make prompts reusable
- pass structured input to chains
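Here is that formatting flow as runnable code:
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Explain {topic} in simple words.")

# Fill the {topic} placeholder at runtime
print(prompt.format(topic="AI"))  # → Explain AI in simple words.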
Using a Prompt Template With Ollama
Here is a simple working example.
from langchain_ollama import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Reusable prompt with a {topic} placeholder
prompt = PromptTemplate.from_template("""
You are an expert AI tutor.
Explain the topic below clearly, using a friendly tone.
Topic: {topic}
Keep the answer under 100 words.
""")

# prompt → model → string output, composed with the | operator
chain = prompt | ChatOllama(
    model="llama3.1",
    base_url="http://192.168.29.191:11434"  # your local Ollama server
) | StrOutputParser()

print(chain.invoke({"topic": "Neural Networks"}))
This code creates a simple LangChain pipeline using a local Ollama Llama3.1 model.
Here’s what each part does:
- PromptTemplate defines the instruction you want to send to the model, with {topic} acting as a variable.
- ChatOllama connects LangChain to your local Ollama server and runs the Llama3.1 model.
- StrOutputParser extracts clean text from the model’s output.
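Because every piece of this pipeline is a LangChain runnable, the same chain can also stream tokens as they are generated. A minimal sketch, reusing the chain built above:
# Stream the answer token by token instead of waiting for the full reply
for chunk in chain.stream({"topic": "Neural Networks"}):
    print(chunk, end="", flush=True)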
Conclusion
LLMs, chat models, and prompt templates are the core of LangChain. When combined with a local Ollama model like Llama3, you get:
- fast responses
- zero API cost
- offline, private AI
- full LangChain compatibility