
Commit 4174a51

Update ReACT
1 parent 0abf034 commit 4174a51

File tree

5 files changed (+84, -8 lines)


react/image.png

57.2 KB

react/react_assistant.py

Lines changed: 3 additions & 8 deletions
```diff
@@ -5,11 +5,8 @@
 from typing import Optional
 
 from flexrag.assistant import ASSISTANTS, AssistantBase, SearchHistory
-from flexrag.retriever import (
-    RetrievedContext,
-    WikipediaRetriever,
-    WikipediaRetrieverConfig,
-)
+from flexrag.common_dataclass import RetrievedContext
+from flexrag.retriever import WikipediaRetriever, WikipediaRetrieverConfig
 from flexrag.prompt import ChatPrompt
 from flexrag.models import GENERATORS, GenerationConfig
 from flexrag.utils import LOGGER_MANAGER
@@ -27,8 +24,6 @@ class ReActConfig(GeneratorConfig, GenerationConfig, WikipediaRetrieverConfig):
 
 @ASSISTANTS("react", config_class=ReActConfig)
 class ReActAssistant(AssistantBase):
-    is_hybrid = False
-
     def __init__(self, cfg: ReActConfig) -> None:
         # load retriever
         self.retriever = WikipediaRetriever(cfg)
@@ -56,7 +51,7 @@ def __init__(self, cfg: ReActConfig) -> None:
         self.cfg = cfg
         return
 
-    def answer_with_generation(
+    def answer(
         self, question: str
     ) -> tuple[str, Optional[list[RetrievedContext]], Optional[dict]]:
         if self.use_chat:
```
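A rough sketch of how the renamed `answer` entry point is called; the module path and the default-constructed config are assumptions, since in this example the assistant is actually driven through `flexrag.entrypoints.run_assistant` (see `run.sh` below).

```python
# Illustrative sketch only; the ReActConfig field values are assumptions here,
# as real runs populate the config through the FlexRAG CLI.
from react_assistant import ReActAssistant, ReActConfig  # assumed local module name

cfg = ReActConfig()
assistant = ReActAssistant(cfg)

# `answer` (formerly `answer_with_generation`) returns the response text,
# the retrieved contexts (if any), and an optional metadata dict.
response, contexts, metadata = assistant.answer("Where was the ReAct paper published?")
```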

react/readme.md

Lines changed: 48 additions & 0 deletions
# ReAct

This is a reproduction of the paper:

- [ReAct: Synergizing Reasoning and Acting in Language Models](https://par.nsf.gov/biblio/10451467)

## Introduction

ReAct solves knowledge-intensive tasks by taking reasoning and retrieval actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions let it interface with and gather additional information from external sources such as knowledge bases or environments.

<center>
<img src="./image.png" alt="ReAct" width="50%"/>
</center>
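As a rough illustration of this interleaving, the sketch below shows a minimal ReAct-style loop; the `Thought`/`Search[...]`/`Finish[...]` prompt format and the `generate`/`search` callables are assumptions for illustration, not the FlexRAG implementation.

```python
# Minimal ReAct-style loop (illustrative sketch, not the FlexRAG implementation).
# `generate` produces the next model output; `search` queries an external source
# such as a Wikipedia retriever.
def react_answer(question: str, generate, search, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        # Reasoning step: the model emits a thought, possibly followed by an action.
        step = generate(prompt + "Thought:")
        prompt += f"Thought:{step}\n"
        if "Finish[" in step:
            # The model decided it has enough information to answer.
            return step.split("Finish[", 1)[1].split("]", 1)[0]
        if "Search[" in step:
            # Acting step: issue the query and feed the observation back into the trace.
            query = step.split("Search[", 1)[1].split("]", 1)[0]
            prompt += f"Observation: {search(query)}\n"
    # Fall back to a direct answer if the step budget is exhausted.
    return generate(prompt + "Answer:")
```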
## Running the Method

Before running the experiment, you need to prepare the generator. In this example, we use vLLM to deploy the generator; you can skip this step if you wish to use a generator from OpenAI.

```bash
bash ./run_generator.sh
```

This script starts a `Qwen2-7B-Instruct` model server on port 8000. You can change the `MODEL_NAME` variable in the script if you want to use a different model.
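If you want to confirm the server is reachable before launching the evaluation, vLLM's OpenAI-compatible server can be queried directly (the exact response format depends on your vLLM version):

```bash
# Optional sanity check: list the models served on port 8000.
curl http://127.0.0.1:8000/v1/models
```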
Then, run the following command to evaluate ReAct on the test set of `Natural Questions`:

```bash
bash ./run.sh
```

This script runs the ReAct method on the test set of `Natural Questions` and saves the results in the `results` directory. You can change the `DATASET_NAME` and `SPLIT` variables in the script to evaluate on different datasets.
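For example, switching to another dataset or split only requires editing the variables at the top of `run.sh`; the names below are placeholders, and the datasets actually available depend on your FlexRAG data setup.

```bash
# Hypothetical edit to the top of run.sh (placeholder dataset/split names).
DATASET_NAME=triviaqa
SPLIT=dev
```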
## Citation

If you use this code in your research, please cite the following papers:

```bibtex
@software{Zhang_FlexRAG_2025,
  author = {Zhang, Zhuocheng and Feng, Yang and Zhang, Min},
  doi    = {10.5281/zenodo.14593327},
  month  = jan,
  title  = {{FlexRAG}},
  url    = {https://github.com/ictnlp/FlexRAG},
  year   = {2025}
}
```

```bibtex
@inproceedings{yao2023react,
  title     = {{ReAct}: Synergizing Reasoning and Acting in Language Models},
  author    = {Yao, Shunyu and Zhao, Jeffrey and Yu, Dian and Du, Nan and Shafran, Izhak and Narasimhan, Karthik and Cao, Yuan},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2023}
}
```

react/run.sh

Lines changed: 21 additions & 0 deletions
```bash
#!/bin/bash

MODEL_NAME=Qwen2-7B-Instruct
DATASET_NAME=nq
SPLIT=test
EXAMPLE_PATH="./react"


python -m flexrag.entrypoints.run_assistant \
    name=$DATASET_NAME \
    split=$SPLIT \
    user_module=$EXAMPLE_PATH \
    assistant_type=react \
    react_config.generator_type=openai \
    react_config.openai_config.model_name=$MODEL_NAME \
    react_config.openai_config.base_url=http://127.0.0.1:8000/v1 \
    react_config.do_sample=False \
    eval_config.metrics_type=[retrieval_success_rate,generation_f1,generation_em] \
    eval_config.retrieval_success_rate_config.eval_field=text \
    eval_config.response_preprocess.processor_type=[simplify_answer] \
    log_interval=10
```

react/run_generator.sh

Lines changed: 12 additions & 0 deletions
```bash
#!/bin/bash

MODEL_NAME=Qwen2-7B-Instruct


# Serve the model with vLLM's OpenAI-compatible API server on port 8000.
python -m vllm.entrypoints.openai.api_server \
    --model $MODEL_NAME \
    --gpu-memory-utilization 0.95 \
    --tensor-parallel-size 2 \
    --port 8000 \
    --host 0.0.0.0 \
    --trust-remote-code
```
