diff --git a/source/industry-solutions.txt b/source/industry-solutions.txt
index 81e46baf..69d099ea 100644
--- a/source/industry-solutions.txt
+++ b/source/industry-solutions.txt
@@ -3,3 +3,8 @@
==================
Industry Solutions
==================
+
+.. toctree::
+   :titlesonly:
+
+   Claim Management with LLMs and Vector Search </industry-solutions/claim-management-llm-vs>
\ No newline at end of file
diff --git a/source/industry-solutions/claim-management-llm-vs.txt b/source/industry-solutions/claim-management-llm-vs.txt
new file mode 100644
index 00000000..2faa116e
--- /dev/null
+++ b/source/industry-solutions/claim-management-llm-vs.txt
@@ -0,0 +1,168 @@
+.. _arch-center-is-claim-management-llm-vs:
+
+=====================================================
+Claim Management Using LLMs and Vector Search for RAG
+=====================================================
+
+.. default-domain:: mongodb
+
+.. facet::
+   :name: genre
+   :values: reference
+
+.. meta::
+   :keywords: Vector Search
+   :description: Learn how to combine Atlas Vector Search and LLMs to streamline the claim adjustment process.
+
+.. contents:: On this page
+   :local:
+   :backlinks: none
+   :depth: 2
+   :class: onecol
+
+Discover how to combine Atlas Vector Search and large language models (LLMs) to streamline the claim adjustment process.
+
+**Use cases:** Generative AI, Content Management
+
+**Industries:** Insurance, Financial Services,
+Manufacturing and Mobility, Retail
+
+**Products:** MongoDB Atlas, Atlas Vector Search
+
+**Partners:** `LangChain <https://www.langchain.com/>`__,
+`OpenAI <https://openai.com/>`__,
+`FastAPI <https://fastapi.tiangolo.com/>`__
+
+Overview
+--------
+
+One of the biggest challenges for claim adjusters is pulling and
+aggregating information from disparate systems and diverse data
+formats. PDFs of policy guidelines might be stored in a content-sharing
+platform, customer information locked in a legacy CRM, and claim-related
+pictures and voice reports in yet another tool. All of this data is not
+just fragmented across siloed sources and hard to find, but also stored
+in formats that have historically been nearly impossible to index with
+traditional methods.
+
+Over the years, insurance companies have accumulated terabytes of
+unstructured data in their datastores while failing to capitalize on
+the possibility of accessing and leveraging it to uncover business
+insights, deliver better customer experiences, and streamline operations.
+Some of our customers even admit they’re not fully aware of all of the
+data that’s truly in their archives. There’s a tremendous opportunity
+now to leverage all of this unstructured data to the benefit of these
+organizations and their customers.
+
+Our solution addresses these challenges by combining the power of
+:ref:`Atlas Vector Search <avs-overview>` and a large language model
+(LLM) in a retrieval-augmented generation (RAG) system, allowing
+organizations to go beyond the limitations of baseline foundation
+models by making them context-aware with proprietary data. In this
+way, they can leverage the full potential of AI to streamline
+operations.
+
+Reference Architectures
+-----------------------
+
+**With MongoDB**
+
+MongoDB Atlas combines transactional and search capabilities in the same
+platform, providing a unified development experience. Because embeddings are
+stored alongside the existing data, a vector search query returns the document
+containing both the vector embeddings and the associated metadata,
+eliminating the need to retrieve the data from a separate system. This is a
+great advantage for developers who don’t need to learn to use and maintain
+a separate technology and can fully focus on building their apps. Ultimately,
+the data obtained from MongoDB Vector Search is fed to the LLM as context.
+
+.. diagram placeholder: RAG querying flow
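+
+As an illustration, the following is a minimal sketch of the retrieval step
+using PyMongo. The connection string, namespace, and index name are
+assumptions for illustration; the field names match the data model described
+in the next section.
+
+.. code-block:: python
+
+   from openai import OpenAI
+   from pymongo import MongoClient
+
+   # Assumed connection string and namespace; adjust to your deployment.
+   collection = MongoClient("<connection-string>")["demo"]["claim"]
+
+   # Vectorize the user prompt with the same model used to embed the
+   # claimDescriptionEmbedding field (assumed here to be an OpenAI model).
+   query_embedding = OpenAI().embeddings.create(
+       model="text-embedding-ada-002",
+       input="A motorist rear-ended another vehicle at a stop light",
+   ).data[0].embedding
+
+   # $vectorSearch retrieves the documents closest to the prompt, together
+   # with their metadata, in a single aggregation stage.
+   results = collection.aggregate([
+       {
+           "$vectorSearch": {
+               "index": "vector_index",  # assumed index name
+               "path": "claimDescriptionEmbedding",
+               "queryVector": query_embedding,
+               "numCandidates": 100,
+               "limit": 5,
+           }
+       },
+       # Keep only the fields that the LLM needs as context.
+       {"$project": {"_id": 0, "claimDescription": 1,
+                     "damageDescription": 1, "lossAmount": 1}},
+   ])
+
+   # Concatenate the retrieved descriptions into the LLM context.
+   context = "\n".join(doc["claimDescription"] for doc in results)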
+
+Data Model Approach
+-------------------
+
+Each document in the ``claim`` collection contains a number of fields related
+to the claim. In particular, we are interested in the ``claimDescription`` field,
+which we vectorize and add to the document as ``claimDescriptionEmbedding``. This
+embedding is then indexed and used to retrieve documents associated with the user
+prompt. A minimal sketch of this vectorization step follows the sample document.
+
+.. code-block:: javascript
+   :emphasize-lines: 4,10
+
+   {
+     _id: ObjectId('64d39175e65'),
+     customerID: "c113",
+     claimDescription: "A motorist driving...",
+     damageDescription: "Front-ends of both...",
+     lossAmount: 1250,
+     photo: "image_65.jpg",
+     claimClosedDate: "2024-02-03",
+     coverages: Array(2),
+     claimDescriptionEmbedding: [0.3, 0.6, <...>, 11.2],
+     ...
+   }
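+
+The embedding field can be populated ahead of time with a short batch job.
+The following is a minimal sketch, assuming OpenAI's
+``text-embedding-ada-002`` model and the same assumed ``demo.claim``
+namespace as above; the demo repository may use different names.
+
+.. code-block:: python
+
+   from openai import OpenAI
+   from pymongo import MongoClient
+
+   openai_client = OpenAI()  # assumes OPENAI_API_KEY is set
+   collection = MongoClient("<connection-string>")["demo"]["claim"]
+
+   # Vectorize every claim description that has not been embedded yet.
+   for doc in collection.find({"claimDescriptionEmbedding": {"$exists": False}}):
+       embedding = openai_client.embeddings.create(
+           model="text-embedding-ada-002",
+           input=doc["claimDescription"],
+       ).data[0].embedding
+       collection.update_one(
+           {"_id": doc["_id"]},
+           {"$set": {"claimDescriptionEmbedding": embedding}},
+       )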
+
+Building the Solution
+---------------------
+
+The instructions to build the demo are included in the README of
+the GitHub repository, which walks you through the following steps:
+
+1. OpenAI API key setup
+2. Atlas connection setup
+3. Dataset download
+4. LLM configuration options
+5. Vector Search index creation
+
+Visit the :ref:`vector-search-quick-start` guide to try our semantic search tool now.
+
+Step 4 of the :ref:`avs-overview` tutorial walks you through the creation and configuration of the
+Vector Search index within the Atlas UI. Make sure you follow this structure, and set
+``numDimensions`` to the output size of your embedding model (for example, 1536 for OpenAI's
+``text-embedding-ada-002``):
+
+.. code-block:: javascript
+
+   {
+     "fields": [
+       {
+         "type": "vector",
+         "path": "claimDescriptionEmbedding",
+         "numDimensions": 1536,
+         "similarity": "cosine"
+       }
+     ]
+   }
+
+Ultimately, you must run both the front end and the back end. You can then access a web UI
+that lets you ask questions of the LLM, obtain an answer, and see the reference
+documents used as context.
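+
+The following is a minimal sketch of how the pieces can be wired together
+with LangChain. The connection string, namespace, index name, and model
+choices are assumptions for illustration; consult the repository README for
+the exact configuration.
+
+.. code-block:: python
+
+   from langchain_mongodb import MongoDBAtlasVectorSearch
+   from langchain_openai import ChatOpenAI, OpenAIEmbeddings
+   from langchain.chains import RetrievalQA
+
+   # Vector store backed by the claim collection and its search index.
+   vector_store = MongoDBAtlasVectorSearch.from_connection_string(
+       "<connection-string>",
+       "demo.claim",                          # assumed namespace
+       OpenAIEmbeddings(model="text-embedding-ada-002"),
+       index_name="vector_index",             # assumed index name
+       text_key="claimDescription",
+       embedding_key="claimDescriptionEmbedding",
+   )
+
+   # RAG chain: retrieve the top matching claims, then answer with the LLM.
+   qa = RetrievalQA.from_chain_type(
+       llm=ChatOpenAI(),
+       retriever=vector_store.as_retriever(search_kwargs={"k": 5}),
+       return_source_documents=True,
+   )
+
+   answer = qa.invoke({"query": "Summarize claims involving rear-end collisions."})
+   print(answer["result"])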
+
+Key Learnings
+-------------
+
+- **Text embedding creation**: The embedding generation process can be carried out
+  using different models and deployment options. It is always important to be mindful
+  of privacy and data protection requirements. A locally deployed model is recommended
+  if our data must never leave our servers. Otherwise, we can simply call an API, such
+  as OpenAI's embeddings API, and get our vectors back.
+- **Creation of a Vector Search index in Atlas**: It is now possible to create indexes for
+  local deployments.
+- **Performing a Vector Search query**: Notably, Vector Search queries have a dedicated
+  aggregation stage, ``$vectorSearch``, within MongoDB's aggregation pipeline.
+  This means they can be chained with other stages, as shown in the sketch after this
+  list, making queries extremely convenient for developers because they don't need to
+  learn a different language or change context.
+- **Gluing it together with LangChain**: LangChain serves as the framework that connects
+  MongoDB Atlas Vector Search and the LLM, allowing for an easy and fast RAG implementation.
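+
+As an illustration of that composability, here is a minimal sketch that chains
+``$vectorSearch`` with ordinary aggregation stages, reusing the ``collection``
+and ``query_embedding`` variables from the earlier retrieval sketch.
+
+.. code-block:: python
+
+   # $vectorSearch must be the first stage; later stages can refine
+   # the semantic matches with conventional filters and projections.
+   pipeline = [
+       {
+           "$vectorSearch": {
+               "index": "vector_index",  # assumed index name
+               "path": "claimDescriptionEmbedding",
+               "queryVector": query_embedding,
+               "numCandidates": 200,
+               "limit": 20,
+           }
+       },
+       # Keep only high-value claims among the semantic matches.
+       {"$match": {"lossAmount": {"$gte": 1000}}},
+       {
+           "$project": {
+               "_id": 0,
+               "claimDescription": 1,
+               "lossAmount": 1,
+               "score": {"$meta": "vectorSearchScore"},
+           }
+       },
+   ]
+   results = collection.aggregate(pipeline)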
+
+Authors
+-------
+
+- Luca Napoli, Industry Solutions, MongoDB
+- Jeff Needham, Industry Solutions, MongoDB
\ No newline at end of file