How to load PDFs
Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
This guide covers how to load PDF
documents into the LangChain Document format that we use downstream.
Text in PDFs is typically represented via text boxes. They may also contain images. A PDF parser might do some combination of the following:
- Agglomerate text boxes into lines, paragraphs, and other structures via heuristics or ML inference;
- Run OCR on images to detect text therein;
- Classify text as belonging to paragraphs, lists, tables, or other structures;
- Structure text into table rows and columns, or key-value pairs;
- Use a multimodal LLM to extract the body, page by page.
PDF files are organized in pages, but a page is not a good unit for splitting documents. This approach creates memory gaps in RAG projects: if a paragraph spans two pages, its beginning sits at the end of one page and the rest at the start of the next. With a page-based approach, this produces two separate chunks, each containing part of a sentence, and the corresponding vectors won't be relevant. These chunks are unlikely to be selected when there's a question specifically about the split paragraph, and if one of them is selected, there's little chance the LLM can answer the question. The problem is made worse by the injection of headers, footers (if parsers haven't properly removed them), images, or tables at the end of a page, as most current implementations tend to do.
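One way to mitigate this, sketched below, is to load the whole file as a single document and then split it with a text splitter, so that chunk boundaries no longer follow page breaks. This is only a minimal sketch: it assumes a file_path variable pointing at a local PDF, and the chunk sizes are arbitrary.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the whole PDF as one Document so paragraphs are not cut at page boundaries.
loader = PyPDFLoader(file_path, mode="single")
docs = loader.load()

# Split on paragraph and sentence boundaries rather than page boundaries.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)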
Images and tables pose difficult challenges for PDF parsers.
Some parsers can retrieve images. The question is what to do with them. It may be interesting to apply an OCR algorithm to extract the textual content of images, or to use a multimodal LLM to request a description of each image. And once an image has been converted to text, where should that text be placed in the document flow? At the end of the page, at the risk of breaking a paragraph that continues onto the next page? Implementations try to find a neutral location, between two paragraphs, if possible.
When it comes to extracting tables, some parsers can do it, with varying degrees of success, and with or without integrating the tables into the text flow. Note that a Markdown table cannot describe merged cells, unlike an HTML table.
Finally, the metadata extracted from PDF files by the various parsers varies. We propose a minimum set that parsers should offer:
- source
- page
- total_pages
- creationdate
- creator
- producer
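For example, a page-mode document loaded with pypdf might carry metadata along these lines (the values shown are illustrative, not actual output):
{
    "source": "example_data/layout-parser-paper.pdf",
    "page": 0,
    "total_pages": 16,
    "creationdate": "2021-06-22T01:27:10+00:00",
    "creator": "LaTeX with hyperref",
    "producer": "pdfTeX-1.40.21",
}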
Most parsers offer similar parameters, such as mode, which allows you to request the retrieval of one document per page (mode="page") or the entire file stream in a single document (mode="single"). Other modes can return the structure of the document, following the identification of each component.
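As a quick, hedged sketch (again assuming file_path points at a local PDF), the two modes differ only in how many Document objects come back:
from langchain_community.document_loaders import PyPDFLoader

# One Document per page.
per_page_docs = PyPDFLoader(file_path, mode="page").load()

# The entire file as a single Document.
single_docs = PyPDFLoader(file_path, mode="single").load()

print(len(per_page_docs), len(single_docs))  # e.g. 16 vs. 1 for a 16-page PDF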
LangChain tries to unify the different parsers to facilitate migration from one to the other. Why is this important? Each parser has its own characteristics and strategies, which are more or less effective depending on the family of PDF files. One strategy is to identify the family of the PDF file (by inspecting the metadata or the content of the first page) and then select the most efficient parser for that case. By unifying parsers, the downstream code doesn't need to deal with the specifics of different parsers, as the result is similar for each.
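A minimal sketch of that strategy might look like the following; the probe rule and the choice of parsers here are purely illustrative, not a recommendation:
from langchain_community.document_loaders import PDFPlumberLoader, PyPDFLoader

def load_pdf(file_path: str):
    # Peek at the first page with a cheap parser to decide how to proceed.
    first_page = next(PyPDFLoader(file_path, mode="page").lazy_load())

    # Illustrative rule: if the first page yields very little text, assume a
    # scan- or table-heavy file and switch to a parser with table extraction.
    if len(first_page.page_content.strip()) < 100:
        loader = PDFPlumberLoader(file_path, mode="page", extract_tables="markdown")
    else:
        loader = PyPDFLoader(file_path, mode="page")

    # Thanks to the unified interface, downstream code is identical either way.
    return loader.load()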
LangChain integrates with a host of PDF parsers. Some are simple and relatively low-level; others will support OCR and image-processing, or perform advanced document layout analysis. The right choice will depend on your needs. Below we enumerate the possibilities.
We will demonstrate these approaches on a sample file:
file_path = (
"../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf"
)
Many modern LLMs support inference over multimodal inputs (e.g., images). In some applications -- such as question-answering over PDFs with complex layouts, diagrams, or scans -- it may be advantageous to skip the PDF parsing, instead casting a PDF page to an image and passing it to a model directly. We demonstrate an example of this in the Use of multimodal models section below.
Simple and fast text extraction
If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of Document objects, one per page, containing a single string of the page's text in the Document's page_content attribute. It will not parse text in images, tables, or scanned PDF pages. Under the hood it uses the pypdf Python library.
LangChain document loaders implement lazy_load and its async variant, alazy_load, which return iterators of Document objects. We will use these below.
%pip install -qU langchain_community pypdf
from pprint import pprint
from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFLoader(file_path)
pages = []
async for page in loader.alazy_load():
    pages.append(page)
pprint(pages[0].metadata)
print(pages[0].page_content)
Note that the metadata of each document stores the corresponding page number.
Vector search over PDFs
Once we have loaded PDFs into LangChain Document objects, we can index them (e.g., for a RAG application) in the usual way. Below we use OpenAI embeddings, although any LangChain embeddings model will suffice.
%pip install -qU langchain-openai
import getpass
import os
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings
vector_store = InMemoryVectorStore.from_documents(pages, OpenAIEmbeddings())
docs = vector_store.similarity_search("What is LayoutParser?", k=2)
for doc in docs:
    print(f'Page {doc.metadata["page"]}: {doc.page_content[:300]}\n')
Extract and analyse images
%pip install -qU rapidocr-onnxruntime
from langchain_community.document_loaders.parsers.pdf import (
    convert_images_to_text_with_rapidocr,
)

loader = PyPDFLoader(
    file_path,
    mode="page",
    extract_images=True,
    images_to_text=convert_images_to_text_with_rapidocr(format="markdown"),
)
docs = loader.load()
print(docs[5].page_content)
It is possible to ask a multimodal LLM to describe the image.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key =")

from langchain_community.document_loaders.parsers.pdf import (
    convert_images_to_description,
)
from langchain_openai import ChatOpenAI

loader = PyPDFLoader(
    file_path,
    mode="page",
    extract_images=True,
    images_to_text=convert_images_to_description(
        model=ChatOpenAI(model="gpt-4o-mini", max_tokens=1024), format="text"
    ),
)
docs = loader.load()
print(docs[5].page_content)
Extract tables
Some parsers can extract tables. This is the case for PDFPlumberLoader.
%pip install -qU langchain_community pdfplumber
from langchain_community.document_loaders import PDFPlumberLoader
loader = PDFPlumberLoader(
    file_path,
    mode="page",
    extract_tables="markdown",
)
docs = loader.load()
print(docs[4].page_content)
Layout analysis and extraction of text from images
If you require a more granular segmentation of text (e.g., into distinct paragraphs, titles, tables, or other structures) or require extraction of text from images, the method below is appropriate. It will return a list of Document objects, where each object represents a structure on the page. The Document's metadata stores the page number and other information related to the object (e.g., it might store table rows and columns in the case of a table object).
Under the hood it uses the langchain-unstructured library. See the integration docs for more information about using Unstructured with LangChain.
Unstructured supports multiple parameters for PDF parsing:
- strategy (e.g., "auto", "fast", "ocr_only", or "hi_res")
- API or local processing. You will need an API key to use the API.
The hi_res strategy provides support for document layout analysis and OCR. We demonstrate it below via the API. See the local parsing section below for considerations when running locally.
%pip install -qU langchain-unstructured
import getpass
import os
if "UNSTRUCTURED_API_KEY" not in os.environ:
os.environ["UNSTRUCTURED_API_KEY"] = getpass.getpass("Unstructured API Key:")
As before, we initialize a loader and load documents lazily:
from langchain_unstructured import UnstructuredLoader
loader = UnstructuredLoader(
    file_path=file_path,
    strategy="hi_res",
    partition_via_api=True,
    coordinates=True,
)

docs = []
for doc in loader.lazy_load():
    docs.append(doc)
Here we recover more than 100 distinct structures over the 16-page document:
print(len(docs))
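To get a sense of which kinds of structures were detected, we can tally the category metadata values; this is just an illustrative check on the loaded documents, not part of the loader API:
from collections import Counter

print(Counter(doc.metadata.get("category") for doc in docs))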
We can use the document metadata to recover content from a single page:
first_page_docs = [doc for doc in docs if doc.metadata.get("page_number") == 1]

for doc in first_page_docs:
    print(doc.page_content)
Extracting tables and other structures
Each Document we load represents a structure, like a title, paragraph, or table.
Some structures may be of special interest for indexing or question-answering tasks. These structures may be:
- Classified for easy identification;
- Parsed into a more structured representation.
Below, we identify and extract a table:
The helper code below renders a page and draws bounding boxes around the detected segments:
%pip install -qU matplotlib PyMuPDF pillow
import fitz
import matplotlib.patches as patches
import matplotlib.pyplot as plt
from PIL import Image
def plot_pdf_with_boxes(pdf_page, segments):
    pix = pdf_page.get_pixmap()
    pil_image = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
    fig, ax = plt.subplots(1, figsize=(10, 10))
    ax.imshow(pil_image)
    categories = set()
    category_to_color = {
        "Title": "orchid",
        "Image": "forestgreen",
        "Table": "tomato",
    }
    for segment in segments:
        points = segment["coordinates"]["points"]
        layout_width = segment["coordinates"]["layout_width"]
        layout_height = segment["coordinates"]["layout_height"]
        scaled_points = [
            (x * pix.width / layout_width, y * pix.height / layout_height)
            for x, y in points
        ]
        box_color = category_to_color.get(segment["category"], "deepskyblue")
        categories.add(segment["category"])
        rect = patches.Polygon(
            scaled_points, linewidth=1, edgecolor=box_color, facecolor="none"
        )
        ax.add_patch(rect)

    # Make legend
    legend_handles = [patches.Patch(color="deepskyblue", label="Text")]
    for category in ["Title", "Image", "Table"]:
        if category in categories:
            legend_handles.append(
                patches.Patch(color=category_to_color[category], label=category)
            )
    ax.axis("off")
    ax.legend(handles=legend_handles, loc="upper right")
    plt.tight_layout()
    plt.show()
def render_page(doc_list: list, page_number: int, print_text=True) -> None:
    pdf_page = fitz.open(file_path).load_page(page_number - 1)
    page_docs = [
        doc for doc in doc_list if doc.metadata.get("page_number") == page_number
    ]
    segments = [doc.metadata for doc in page_docs]
    plot_pdf_with_boxes(pdf_page, segments)
    if print_text:
        for doc in page_docs:
            print(f"{doc.page_content}\n")
render_page(docs, 5)
Note that although the table text is collapsed into a single string in the document's content, the metadata contains a representation of its rows and columns:
from IPython.display import HTML, display
segments = [
    doc.metadata
    for doc in docs
    if doc.metadata.get("page_number") == 5 and doc.metadata.get("category") == "Table"
]

display(HTML(segments[0]["text_as_html"]))
able 1. LUllclll 1ayoul actCCLloll 1110AdCs 111 L1C LayoOulralsel 1110U4cl 200

| Dataset | Base Model | Notes |
| --- | --- | --- |
| PubLayNet [38] | F/M | Layouts of modern scientific documents |
| PRImA | M | Layouts of scanned modern magazines and scientific reports |
| Newspaper | F | Layouts of scanned US newspapers from the 20th century |
| TableBank [18] | F | Table region on modern scientific and business document |
| HJDataset | F/M | Layouts of history Japanese documents |
Extracting text from specific sections
Structures may have parent-child relationships -- for example, a paragraph might belong to a section with a title. If a section is of particular interest (e.g., for indexing), we can isolate the corresponding Document objects.
Below, we extract all text associated with the document's "Conclusion" section:
render_page(docs, 14, print_text=False)
conclusion_docs = []
parent_id = -1
for doc in docs:
    if doc.metadata["category"] == "Title" and "Conclusion" in doc.page_content:
        parent_id = doc.metadata["element_id"]
    if doc.metadata.get("parent_id") == parent_id:
        conclusion_docs.append(doc)

for doc in conclusion_docs:
    print(doc.page_content)
Extracting text from images
OCR is run on images, enabling the extraction of text therein:
render_page(docs, 11)
Note that the text from the figure on the right is extracted and incorporated into the content of the Document.
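If you want to work with that OCR'd text directly, one option (a small illustrative filter using the page_number and category metadata seen above) is to pull out the Image elements for the page:
image_docs = [
    doc
    for doc in docs
    if doc.metadata.get("page_number") == 11 and doc.metadata.get("category") == "Image"
]
for doc in image_docs:
    print(doc.page_content)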
Local parsing
Parsing locally requires the installation of additional dependencies.
Poppler (PDF analysis)
- Linux: apt-get install poppler-utils
- Mac: brew install poppler
- Windows: https://github.com/oschwartz10612/poppler-windows

Tesseract (OCR)
- Linux: apt-get install tesseract-ocr
- Mac: brew install tesseract
- Windows: https://github.com/UB-Mannheim/tesseract/wiki#tesseract-installer-for-windows
We will also need to install the unstructured PDF extras:
%pip install -qU "unstructured[pdf]"
We can then use the UnstructuredLoader in much the same way, forgoing the API key and partition_via_api setting:
loader_local = UnstructuredLoader(
    file_path=file_path,
    strategy="hi_res",
)

docs_local = []
for doc in loader_local.lazy_load():
    docs_local.append(doc)
The list of documents can then be processed similarly to those obtained from the API.
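For instance, we can reuse the render_page helper and metadata filters from the API section on the locally parsed documents; the snippet below is illustrative and assumes the helper defined earlier is still in scope:
# Tables detected by the local hi_res pipeline.
local_tables = [doc for doc in docs_local if doc.metadata.get("category") == "Table"]
print(len(local_tables))

# Render a page with its detected segments, exactly as before.
render_page(docs_local, 5, print_text=False)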
Use of multimodal models
Many modern LLMs support inference over multimodal inputs (e.g., images). In some applications -- such as question-answering over PDFs with complex layouts, diagrams, or scans -- it may be advantageous to skip the PDF parsing, instead casting a PDF page to an image and passing it to a model directly. This allows a model to reason over the two-dimensional content on the page, instead of a "one-dimensional" string representation.
In principle we can use any LangChain chat model that supports multimodal inputs. A list of these models is documented here. Below we use OpenAI's gpt-4o-mini.
First we define a short utility function to convert a PDF page to a base64-encoded image:
%pip install -qU PyMuPDF pillow langchain-openai
import base64
import io
import fitz
from PIL import Image
def pdf_page_to_base64(pdf_path: str, page_number: int):
    pdf_document = fitz.open(pdf_path)
    page = pdf_document.load_page(page_number - 1)  # input is one-indexed
    pix = page.get_pixmap()
    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)

    buffer = io.BytesIO()
    img.save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")
from IPython.display import Image as IPImage
from IPython.display import display
base64_image = pdf_page_to_base64(file_path, 11)
display(IPImage(data=base64.b64decode(base64_image)))
We can then query the model in the usual way. Below we ask it a question about the diagram on the page.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")
from langchain_core.messages import HumanMessage
query = "What is the name of the first step in the pipeline?"
message = HumanMessage(
    content=[
        {"type": "text", "text": query},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
        },
    ],
)
response = llm.invoke([message])
print(response.content)
Other PDF loaders
For a list of available LangChain PDF loaders, please see this table.