LangChain is built around the idea of connecting different components (LLMs, prompts, parsers, memory, and tools) to create powerful AI workflows.

Before building RAG systems, agents, or LangGraph workflows, you must understand three important building blocks:

  • LLMs (Large Language Models)
  • Chat Models
  • Prompt Templates

In this article, you will learn:

  • the difference between LLMs and chat models
  • how LangChain uses them
  • how to build prompt templates
  • how to chain a prompt, model, and output parser
  • how to run everything using a local Ollama model (Llama3)

What Are LLMs in LangChain?

An LLM (Large Language Model) is a text generation model. It receives a string and returns a string. In LangChain, LLMs operate on plain text:

Text
input text → model → output text
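This string-in, string-out contract can be sketched with a small stub that mimics LangChain's `.invoke` method (the class and return value here are illustrative, not the real `OllamaLLM` implementation):

```python
class FakeLLM:
    """Stub mimicking LangChain's string-in/string-out LLM interface."""

    def invoke(self, text: str) -> str:
        # A real LLM would generate a completion here; we echo for illustration.
        return f"Completion for: {text}"

llm = FakeLLM()
out = llm.invoke("Explain neural networks.")
print(out)  # a plain string, not a message object
```

The key point is the type signature: a plain LLM in LangChain takes a `str` and returns a `str`, nothing more.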

What Are Chat Models?

Chat models are more advanced than plain LLMs. Instead of raw text, they accept messages with roles:

  • user
  • system
  • assistant

Example message format

JSON
[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "Explain neural networks."}
]

Chat models:

  • handle conversation
  • support multi-turn dialogue
  • support system instructions
  • are ideal for assistants and agents

Ollama’s ChatOllama is a chat-optimized interface over Llama3.
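How multi-turn dialogue works can be sketched in plain Python: each turn appends to a shared history, so the model always sees the full conversation. The function names and canned replies below are illustrative stand-ins, not real LangChain calls:

```python
# Each turn appends to the history, so the model sees the whole conversation.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(history, user_text, fake_reply):
    history.append({"role": "user", "content": user_text})
    # A real chat model would generate the reply from the full history.
    history.append({"role": "assistant", "content": fake_reply})
    return fake_reply

ask(history, "Explain neural networks.", "They are layered function approximators.")
ask(history, "Shorter, please.", "Stacked math that learns patterns.")
print(len(history))  # 5 messages: system + two user/assistant pairs
```

This accumulating history is what lets a chat model resolve follow-ups like "Shorter, please." against the earlier turns.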

LangChain Prompt Templates

PromptTemplate is a reusable text pattern with placeholders. For example:

Text
"Explain {topic} in simple words."

This becomes a template that LangChain can format dynamically:

Text
topic = "AI"
→ "Explain AI in simple words."
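Conceptually, this is the same substitution that Python's own `str.format` performs; LangChain's `PromptTemplate` adds input validation and chain integration on top. A minimal sketch of the substitution itself:

```python
template = "Explain {topic} in simple words."

# PromptTemplate.from_template(template).format(topic="AI") produces the
# same string, plus a check that every template variable was supplied.
filled = template.format(topic="AI")
print(filled)  # → "Explain AI in simple words."
```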

Prompt templates help you:

  • keep prompts organized
  • add variables cleanly
  • apply consistent formatting
  • make prompts reusable
  • pass structured input to chains

Using a Prompt Template With Ollama

Here is a simple working example.

Python
from langchain_ollama import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = PromptTemplate.from_template("""
You are an expert AI tutor.
Explain the topic below clearly, using a friendly tone.

Topic: {topic}

Keep the answer under 100 words.
""")

chain = prompt | ChatOllama(
    model="llama3.1",
    base_url="http://192.168.29.191:11434"
) | StrOutputParser()

print(chain.invoke({"topic": "Neural Networks"}))

This code creates a simple LangChain pipeline using a local Ollama Llama3.1 model.
Here’s what each part does:

  • PromptTemplate defines the instruction you want to send to the model, with {topic} acting as a variable.
  • ChatOllama connects LangChain to your local Ollama server and runs the Llama3.1 model.
  • StrOutputParser extracts clean text from the model’s output.
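The | operator wires these three stages together so that each stage's output becomes the next stage's input. The data flow can be sketched with plain functions (illustrative stand-ins, not the real LangChain Runnable classes):

```python
def format_prompt(inputs):
    # Stage 1: fill the template from a dict of variables.
    return "Explain {topic} clearly.".format(**inputs)

def fake_model(prompt_text):
    # Stage 2: a real model returns a message object; we fake one as a dict.
    return {"content": f"[model answer for: {prompt_text}]"}

def parse_output(message):
    # Stage 3: StrOutputParser's job — pull the plain text out of the message.
    return message["content"]

def invoke(inputs):
    # prompt | model | parser: function composition, left to right.
    return parse_output(fake_model(format_prompt(inputs)))

print(invoke({"topic": "Neural Networks"}))
```

Because each stage only depends on its input, you can swap any piece (a different model, a JSON parser) without touching the rest of the chain.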

Conclusion

LLMs, chat models, and prompt templates are the core of LangChain. When combined with a local Ollama model like Llama3, you get:

  • fast responses
  • zero API cost
  • offline private AI
  • full LangChain compatibility
