Empty file added backend/__init__.py
44 changes: 44 additions & 0 deletions backend/docs/docs/education_popes.md
@@ -0,0 +1,44 @@
# Popes' Education and Secular Jobs

## Pope Francis (Jorge Mario Bergoglio)

- Education: Philosophy and Theology
- Secular jobs: Chemistry lab technician, literature teacher

## Pope Benedict XVI (Joseph Ratzinger)

- Education: Philosophy and Theology
- Secular jobs: None (mostly academic)

## Pope John Paul II (Karol Wojtyła)

- Education: Philosophy, Theology, Literature
- Secular jobs: Actor, poet

## Pope John Paul I (Albino Luciani)

- Education: Theology
- Secular jobs: None

## Pope Paul VI (Giovanni Battista Montini)

- Education: Theology, Canon Law
- Secular jobs: Journalist (early career)

22 changes: 22 additions & 0 deletions backend/docs/docs/interrupt_example.md
@@ -0,0 +1,22 @@
# LangGraph Interrupt Example

LangGraph is a Python library for building stateful LLM applications
using graphs or functional workflows.

This document shows how interrupts can be implemented
using both the Functional API and the Graph API.

## Functional API

With the Functional API, agent logic is written as a plain Python function,
so an interrupt can be expressed with ordinary control flow, as in the sketch below.

```python
def my_agent():
    # Interrupt as ordinary control flow: loop until the user
    # types STOP, echoing every other input back.
    while True:
        inp = input(">>> ")
        if inp == "STOP":
            return "Interrupted"
        print(f"Echo: {inp}")
```
50 changes: 18 additions & 32 deletions backend/examples/cli_research.py
@@ -1,42 +1,28 @@
-import os
 import argparse
-from langchain_core.messages import HumanMessage
 from agent.graph import graph
+from agent.state import OverallState
+from langchain_core.messages import HumanMessage
 
 
-def main() -> None:
-    """Run the research agent from the command line."""
-    parser = argparse.ArgumentParser(description="Run the LangGraph research agent")
-    parser.add_argument("question", help="Research question")
-    parser.add_argument(
-        "--initial-queries",
-        type=int,
-        default=3,
-        help="Number of initial search queries",
-    )
-    parser.add_argument(
-        "--max-loops",
-        type=int,
-        default=2,
-        help="Maximum number of research loops",
-    )
-    parser.add_argument(
-        "--reasoning-model",
-        default="gemini-2.5-pro-preview-05-06",
-        help="Model for the final answer",
-    )
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("question", nargs="?", default=None, help="Question to ask")
+    parser.add_argument("--dir", required=True, help="Directory for local Markdown sources")
+    parser.add_argument("--loops", type=int, default=3, help="Max research loops")
     args = parser.parse_args()
 
-    state = {
-        "messages": [HumanMessage(content=args.question)],
-        "initial_search_query_count": args.initial_queries,
-        "max_research_loops": args.max_loops,
-        "reasoning_model": args.reasoning_model,
-    }
+    state = OverallState(
+        messages=[HumanMessage(content=args.question or "")],
+        search_dir=args.dir,
+        max_research_loops=args.loops,
+        research_loop_count=0,
+        is_sufficient=False,
+    )
 
     result = graph.invoke(state)
-    messages = result.get("messages", [])
-    if messages:
-        print(messages[-1].content)
+
+    for msg in result["messages"]:
+        print("\n" + msg.content)
 
 
 if __name__ == "__main__":
Empty file added backend/src/__init__.py