A new open-source framework called PageIndex tackles one of the oldest problems in retrieval-augmented generation (RAG): handling very long documents.
The classic RAG workflow (chunk documents, compute embeddings, store them in a vector database, and retrieve the top matches based on semantic similarity) works well for basic tasks such as Q&A over small documents.
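In code, that baseline pipeline amounts to something like the following sketch; the `chunk`, `VectorStore`, and `embed` names are illustrative placeholders, not any particular library's API.

```python
# Minimal sketch of the classic chunk-and-embed RAG pipeline described above.
# `chunk`, `VectorStore`, and `embed` are illustrative placeholders, not a specific library.
from typing import Callable

def chunk(document: str, size: int = 500) -> list[str]:
    """Split a long document into fixed-size character chunks."""
    return [document[i:i + size] for i in range(0, len(document), size)]

class VectorStore:
    """Toy in-memory vector store holding (embedding, chunk) pairs."""
    def __init__(self, embed: Callable[[str], list[float]]):
        self.embed = embed
        self.rows: list[tuple[list[float], str]] = []

    def add(self, chunks: list[str]) -> None:
        self.rows.extend((self.embed(c), c) for c in chunks)

    def top_k(self, query: str, k: int = 5) -> list[str]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
            return dot / norm if norm else 0.0
        q = self.embed(query)
        ranked = sorted(self.rows, key=lambda row: -cosine(q, row[0]))
        return [text for _, text in ranked[:k]]

# Usage: the top-k chunks are pasted into the prompt before generation.
# store = VectorStore(embed=my_embedding_model)   # my_embedding_model is hypothetical
# store.add(chunk(annual_report))
# context = store.top_k("How is adjusted EBITDA calculated?")
```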
But as enterprises try to move RAG into high-stakes workflows, such as auditing financial statements, analyzing legal contracts, and navigating pharmaceutical protocols, they are hitting an accuracy barrier that chunk optimization cannot solve.
PageIndex abandons the standard chunk-and-embed approach entirely and treats document retrieval not as a search problem but as a navigation problem.
AlphaGo for documents
PageIndex addresses these limitations by borrowing an idea from game-playing AI rather than search engines: tree search.
When people need to find specific information in a dense textbook or a long annual report, they don't scan every paragraph linearly. They consult the table of contents to identify the relevant chapter, then the section, and finally the exact page. PageIndex has the LLM replicate this human behavior.
Instead of pre-computing vectors, the framework builds a "Global Index" of the document's structure, creating a tree whose nodes represent chapters, sections, and subsections. When a query arrives, the LLM performs a tree search, explicitly classifying each node as relevant or irrelevant based on the full context of the user's request.
"In computer science terms, a table of contents is a tree-structured representation of a document, and navigating it corresponds to tree search," said Mingtian Zhang, co-founder of PageIndex. "PageIndex applies the same core idea, tree search, to document retrieval, and can be thought of as an AlphaGo-style system for retrieval rather than for games."
This shifts the architectural paradigm from passive retrieval, where the system simply fetches matching text, to active navigation, where an agentic model decides where to look.
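In rough outline, that navigation loop might look like the sketch below; the node fields, the relevance prompt, and the `llm` callable are assumptions made for illustration, not PageIndex's actual data model or API.

```python
# Hedged sketch of tree-search retrieval over a table-of-contents tree.
# Node fields, the relevance prompt, and the `llm` callable are assumptions,
# not PageIndex's actual data model or API.
from dataclasses import dataclass, field

@dataclass
class Node:
    title: str                      # e.g. "4.1 Deferred assets"
    summary: str                    # short description of what the section covers
    page_range: tuple[int, int]
    children: list["Node"] = field(default_factory=list)

def is_relevant(node: Node, query: str, llm) -> bool:
    """Ask the LLM to judge a node given the full query, not just keyword overlap."""
    prompt = (
        f"Question: {query}\n"
        f"Section: {node.title}\nSummary: {node.summary}\n"
        "Could this section or its subsections help answer the question? Reply yes or no."
    )
    return llm(prompt).strip().lower().startswith("yes")

def tree_search(root: Node, query: str, llm) -> list[Node]:
    """Descend the tree, keeping only branches the LLM classifies as relevant."""
    hits: list[Node] = []
    frontier = [root]
    while frontier:
        node = frontier.pop()
        if not is_relevant(node, query, llm):
            continue                        # prune irrelevant branches entirely
        if node.children:
            frontier.extend(node.children)  # navigate deeper, like flipping to a chapter
        else:
            hits.append(node)               # leaf section: these pages get read in full
    return hits
```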
The limits of semantic similarity
There is a fundamental flaw in how traditional RAG handles complex data. Vector retrieval assumes that the text most semantically similar to a user's query is also the most relevant. In professional domains, this assumption frequently breaks down.
Zhang points to financial reporting as a prime example of this failure mode. If a financial analyst asks an AI about "EBITDA" (earnings before interest, taxes, depreciation, and amortization), a standard vector database will retrieve every chunk where that acronym or a similar term appears.
"Multiple sections may mention EBITDA with similar wording, yet only one section defines the precise calculation, adjustments, or reporting scope relevant to the question," Zhang told VentureBeat. "A similarity-based retriever struggles to distinguish these cases because the semantic signals are nearly indistinguishable."
This is the "intent vs. content" gap. The user doesn't want to find the word "EBITDA"; they want to understand the "logic" behind it for that specific quarter.
Moreover, traditional embeddings strip the query of its context. Because embedding models have strict input-length limits, the retrieval system usually sees only the exact question being asked, ignoring the previous turns of the conversation. This detaches the retrieval step from the user's reasoning process: the system matches documents against a short, decontextualized query rather than against the full history of the problem the user is trying to solve.
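A toy illustration of that gap, with made-up conversation content:

```python
# Illustration of the decontextualized-query problem; the conversation and
# variable names are made up for this example.
conversation = [
    {"role": "user", "content": "We're auditing Q3. Focus on the adjusted EBITDA reconciliation."},
    {"role": "assistant", "content": "Understood. What should we check first?"},
    {"role": "user", "content": "Which adjustments were applied?"},
]

# A typical embedding-based retriever embeds only the latest turn, so the vector
# it searches with carries none of the earlier constraints (Q3, adjusted EBITDA):
retrieval_query = conversation[-1]["content"]          # "Which adjustments were applied?"

# A reasoning-based retriever can condition on the whole history instead:
full_context = "\n".join(f"{m['role']}: {m['content']}" for m in conversation)
```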
Solving the multi-hop reasoning problem
The real-world impact of this structural approach is most visible in "multi-hop" queries, which require the AI to follow a trail of breadcrumbs across different parts of a document.
In a recent run of the FinanceBench benchmark, a system built on PageIndex called "Mafin 2.5" achieved a state-of-the-art accuracy score of 98.7%. The performance gap between this approach and vector-based systems becomes clear when you examine how they handle internal references.
Zhang gives the example of a query about the total value of deferred assets in a Federal Reserve annual report. The main section of the report describes the change in value but does not list the total. The text, however, contains a footnote: "See Appendix G of this report … for more detailed information."
A vector-based system typically fails here. The text in Appendix G looks nothing like the user's query about deferred assets; it is likely just a table of numbers. Because there is no semantic match, the vector database ignores it.
The reasoning-based retriever, by contrast, reads the cue in the main text, follows the structural link to Appendix G, locates the correct table, and returns the accurate figure.
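A minimal sketch of that two-hop pattern, under stated assumptions: the reference-spotting regex and the `find_best_section` / `find_by_title` helpers are hypothetical stand-ins, not PageIndex code.

```python
# Hedged sketch of the two-hop lookup described above. The reference-spotting
# regex and the index helpers (find_best_section, find_by_title) are
# hypothetical stand-ins, not PageIndex code.
import re

def answer_with_references(query: str, index, llm, max_hops: int = 3) -> str:
    """Read the best-matching section, follow 'See Appendix X' cues, then answer."""
    section = index.find_best_section(query)            # hop 1: the main body text
    collected = [section.text]
    for _ in range(max_hops - 1):
        cue = re.search(r"See (Appendix [A-Z])", collected[-1])
        if cue is None:
            break
        target = index.find_by_title(cue.group(1))      # hop 2+: follow the structural link
        if target is None:
            break
        collected.append(target.text)                   # e.g. the table in Appendix G
    context = "\n\n".join(collected)
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```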
The latency trade-off and infrastructure shift
For enterprise architects, the immediate concern with an LLM-driven search process is latency. Vector lookups take milliseconds; having an LLM "read" a table of contents implies a noticeably slower user experience.
However, Zhang explains that the perceived latency for the end user can be negligible because of how retrieval is integrated into the generation process. In a conventional RAG setup, retrieval is a blocking step: the system must search the database before it can begin generating an answer. With PageIndex, retrieval happens inline, during the model's reasoning process.
"The system can start streaming immediately, and retrieve as it generates," Zhang said. "That means PageIndex doesn't add an extra 'retrieval gate' before the first token, and time to first token (TTFT) is comparable to a standard LLM call."
This architectural shift also simplifies the data infrastructure. By removing the reliance on embeddings, enterprises no longer need to maintain a dedicated vector database. The tree-structured index is lightweight enough to live in a traditional relational database such as PostgreSQL.
This addresses a growing pain point in LLM systems with retrieval components: keeping vector stores in sync with living documents. PageIndex separates structure indexing from text extraction, so if a contract is amended or a policy updated, the system can handle the edit by re-indexing only the affected subtree rather than reprocessing the entire document corpus.
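As a sketch of how lightweight that index can be, the adjacency-list schema below uses Python's built-in sqlite3 purely to stay self-contained; the same table and recursive query carry over to PostgreSQL. Column names and the re-indexing step are illustrative assumptions.

```python
# Sketch of the tree index living in an ordinary relational table.
# sqlite3 keeps this example self-contained; the adjacency-list schema and the
# recursive query work the same way in PostgreSQL. Column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (
    id         INTEGER PRIMARY KEY,
    parent_id  INTEGER REFERENCES nodes(id),
    title      TEXT NOT NULL,
    summary    TEXT,
    page_start INTEGER,
    page_end   INTEGER
);
""")

def subtree_ids(conn: sqlite3.Connection, root_id: int) -> list[int]:
    """All node ids under root_id -- the only scope that needs re-indexing after an edit."""
    rows = conn.execute(
        """
        WITH RECURSIVE sub(id) AS (
            SELECT id FROM nodes WHERE id = ?
            UNION ALL
            SELECT n.id FROM nodes AS n JOIN sub ON n.parent_id = sub.id
        )
        SELECT id FROM sub
        """,
        (root_id,),
    ).fetchall()
    return [row[0] for row in rows]

# When a section is amended, regenerate summaries only for its subtree, e.g.:
# for node_id in subtree_ids(conn, amended_section_id):
#     refresh_summary(node_id)          # refresh_summary is a hypothetical helper
```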
A decision matrix for the enterprise
While the accuracy gains are compelling, tree-search retrieval is not a universal replacement for vector search. The technology is best viewed as a specialized tool for "deep work" rather than a catch-all for every retrieval task.
For short documents, such as emails or chat logs, the entire context often fits within a modern LLM's context window, making any retrieval system unnecessary. Conversely, for tasks built purely on semantic discovery, such as recommending similar products or finding content with a similar "vibe," vector embeddings remain the better choice because the goal is proximity, not reasoning.
PageIndex fits squarely in the middle: long, highly structured documents where the cost of error is high. That includes technical manuals, FDA filings, and merger agreements. In these scenarios, the requirement is auditability. An enterprise system needs to be able to explain not just the answer but the path it took to find it (e.g., confirming that it checked Section 4.1, followed the reference to Appendix B, and synthesized the data found there).
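That decision matrix can be paraphrased as a rough routing heuristic; the token threshold and labels below are illustrative assumptions, not guidance from PageIndex.

```python
# Rough routing heuristic paraphrasing the decision matrix above; the token
# threshold and labels are illustrative assumptions, not PageIndex guidance.
def choose_retrieval_strategy(doc_tokens: int, highly_structured: bool,
                              needs_audit_trail: bool,
                              context_window: int = 128_000) -> str:
    if doc_tokens <= context_window:
        return "no retrieval: pass the whole document to the model"
    if highly_structured and needs_audit_trail:
        return "tree-search retrieval: navigate the document's structure"
    return "vector search: semantic proximity ('vibe' matching) is the goal"
```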
The future of agentic retrieval
The rise of frameworks like PageIndex signals a broader trend in the AI stack: the move toward "Agentic RAG." As models become more capable of planning and reasoning, responsibility for finding data is shifting from the database layer to the model layer.
We are already seeing this in the coding space, where agents like Claude Code and Cursor are moving away from simple vector lookups in favor of active codebase exploration. Zhang believes general document retrieval will follow the same trajectory.
"Vector databases still have suitable use cases," Zhang said. "But their historical role as the default database for LLMs and AI will become less clear over time."

