{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright (c) 2024 Microsoft Corporation.\n",
    "# Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example of indexing from an existing in-memory dataframe\n",
    "\n",
| 19 | + "Newer versions of GraphRAG let you submit a dataframe directly instead of running through the input processing step. This notebook demonstrates with regular or update runs.\n", |
| 20 | + "\n", |
| 21 | + "If performing an update, the assumption is that your dataframe contains only the new documents to add to the index." |
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "from pprint import pprint\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "import graphrag.api as api\n",
    "from graphrag.config.load_config import load_config\n",
    "from graphrag.index.typing.pipeline_run_result import PipelineRunResult"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "PROJECT_DIRECTORY = \"<your project directory>\"\n",
    "UPDATE = False\n",
    "FILENAME = \"new_documents.parquet\" if UPDATE else \"<original_documents>.parquet\"\n",
    "inputs = pd.read_parquet(f\"{PROJECT_DIRECTORY}/input/{FILENAME}\")\n",
    "# Only the bare minimum for input. These are the same fields that would be present after the load_input_documents workflow\n",
    "inputs = inputs.loc[:, [\"id\", \"title\", \"text\", \"creation_date\"]]"
   ]
  },
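  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick aside, the same dataframe can also be built entirely in memory rather than read from a parquet file. The sketch below uses the same four columns selected above; the `id` and `creation_date` values are hypothetical placeholders and `example_inputs` is a throwaway name, so adapt them to however you track your documents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: construct a minimal input dataframe directly in memory.\n",
    "# The columns mirror the selection above; the id and creation_date values are\n",
    "# illustrative placeholders, and example_inputs is not used elsewhere in this notebook.\n",
    "example_inputs = pd.DataFrame(\n",
    "    {\n",
    "        \"id\": [\"doc-1\", \"doc-2\"],\n",
    "        \"title\": [\"First document\", \"Second document\"],\n",
    "        \"text\": [\"Full text of the first document...\", \"Full text of the second document...\"],\n",
    "        \"creation_date\": [\"2024-01-01\", \"2024-01-02\"],\n",
    "    }\n",
    ")\n",
    "example_inputs.head()"
   ]
  },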
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Generate a `GraphRagConfig` object"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "graphrag_config = load_config(Path(PROJECT_DIRECTORY))"
   ]
  },
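  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, inspect the loaded configuration before indexing. This is a small sketch that assumes `GraphRagConfig` is a Pydantic model (as in recent GraphRAG releases), so `model_dump()` is available; skip it if your version differs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check (sketch): dump the loaded settings to verify that the paths and\n",
    "# model definitions resolved from the project configuration look right.\n",
    "# Assumes GraphRagConfig is a Pydantic model exposing model_dump().\n",
    "pprint(graphrag_config.model_dump(), depth=2)  # noqa: T203"
   ]
  },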
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Indexing API\n",
    "\n",
| 76 | + "*Indexing* is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats." |
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Build an index"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "index_result: list[PipelineRunResult] = await api.build_index(\n",
    "    config=graphrag_config, input_documents=inputs, is_update_run=UPDATE\n",
    ")\n",
    "\n",
    "# index_result is a list of workflows that make up the indexing pipeline that was run\n",
    "for workflow_result in index_result:\n",
    "    status = f\"error\\n{workflow_result.errors}\" if workflow_result.errors else \"success\"\n",
    "    print(f\"Workflow Name: {workflow_result.workflow}\\tStatus: {status}\")"
   ]
  },
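  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before querying, it can be useful to confirm that the pipeline wrote its output tables. The sketch below simply lists parquet files under the project's `output` directory, which is the same location the query cells below read from; it is a convenience check, not part of the GraphRAG API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Convenience check (sketch): list the parquet tables the pipeline produced.\n",
    "# Uses the same output folder the query cells below read from; adjust the path\n",
    "# if you configured a different storage location.\n",
    "for parquet_file in sorted(Path(PROJECT_DIRECTORY, \"output\").glob(\"*.parquet\")):\n",
    "    print(parquet_file.name)"
   ]
  },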
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Query an index\n",
    "\n",
    "To query an index, several index files must first be read into memory and passed to the query API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "entities = pd.read_parquet(f\"{PROJECT_DIRECTORY}/output/entities.parquet\")\n",
    "communities = pd.read_parquet(f\"{PROJECT_DIRECTORY}/output/communities.parquet\")\n",
    "community_reports = pd.read_parquet(\n",
    "    f\"{PROJECT_DIRECTORY}/output/community_reports.parquet\"\n",
    ")\n",
    "\n",
    "response, context = await api.global_search(\n",
    "    config=graphrag_config,\n",
    "    entities=entities,\n",
    "    communities=communities,\n",
    "    community_reports=community_reports,\n",
    "    community_level=2,\n",
    "    dynamic_community_selection=False,\n",
    "    response_type=\"Multiple Paragraphs\",\n",
    "    query=\"What are the top five themes of the dataset?\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
| 144 | + "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response." |
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
| 160 | + "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model)." |
   ]
  },
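  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next cell is a small sketch for summarizing that context before dumping it in full. It assumes the context object is a mapping from record-type names to pandas DataFrames; if your GraphRAG version returns a different shape, skip it and rely on the `pprint` call that follows."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: summarize the query context (assumed to be a dict of DataFrames)\n",
    "# instead of printing it wholesale; falls back to the type name otherwise.\n",
    "if isinstance(context, dict):\n",
    "    for key, value in context.items():\n",
    "        size = value.shape if isinstance(value, pd.DataFrame) else type(value).__name__\n",
    "        print(f\"{key}: {size}\")\n",
    "else:\n",
    "    print(type(context).__name__)"
   ]
  },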
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pprint(context)  # noqa: T203"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "graphrag",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}