This project seeks to answer the question of whether fantasy-style game content can be auto-generated using artificial intelligence.
More specifically, in its current state, it uses Large Language Model (LLM) text inference to fill in area and quest details similar to those commonly found in Massively Multiplayer Online (MMO) games.
This repository contains two projects: the Server and the Client.
The Server is where content is generated, stored, and made deliverable. It talks to a local LLM chat service via local HTTP requests, stores generated data in a SQLite database file, and hosts a web server with endpoints for data access.
The Client is a video game that uses HTTP requests to fetch data from the Server, then uses that data to build a playable game world in a game engine.
Both the Client and the Server are Rust projects, so information on Rust and Cargo tooling is highly relevant.
Throughout development, a tool called Gpt4All was used to host an LLM chat. Note, however, that the Server makes fairly standard OpenAI-style API calls, so a variety of tools should work properly.
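For reference, an OpenAI-style chat completion call is a POST to a `/v1/chat/completions` endpoint with a JSON body along these lines (the model name and prompt content here are placeholders, not the project's actual prompts):

```json
{
  "model": "local-model",
  "messages": [
    { "role": "system", "content": "You are a fantasy quest generator." },
    { "role": "user", "content": "Generate a quest for a haunted forest." }
  ],
  "temperature": 0.7
}
```

Any local host that accepts this request shape, as Gpt4All does, should be usable in place of Gpt4All.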
The four main crates used in the project are bevy (the game engine for the Client), axum (API hosting on the Server), rusqlite (SQLite reading and writing), and reqwest (HTTP client functionality).
First, the Server reads LLM prompts from text files and fills them in as necessary.
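As a minimal sketch of that prompt-filling step, the idea is to substitute values into placeholders in a template string. The `{{name}}` placeholder syntax and the field names here are assumptions for illustration, not necessarily what the Server actually uses:

```rust
// Fill an LLM prompt template read from a text file.
// Placeholder syntax ({{name}}) is an assumption for this sketch.
fn fill_prompt(template: &str, values: &[(&str, &str)]) -> String {
    let mut prompt = template.to_string();
    for (key, value) in values {
        // Replace every occurrence of {{key}} with the given value.
        prompt = prompt.replace(&format!("{{{{{}}}}}", key), value);
    }
    prompt
}

fn main() {
    let template = "Describe a quest set in {{area}} involving {{enemy}}.";
    let prompt = fill_prompt(
        template,
        &[("area", "a haunted forest"), ("enemy", "goblins")],
    );
    println!("{prompt}");
    // → Describe a quest set in a haunted forest involving goblins.
}
```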
Next, the Server feeds these configured prompts to a local LLM host such as Gpt4All and collects the responses.
This gives the Server data to work with, which it stores in a local SQLite database.
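As an illustration of how such generated content could be laid out in SQLite, a schema might look something like the following (table and column names are hypothetical, not the Server's actual schema):

```sql
CREATE TABLE areas (
    id          INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    description TEXT NOT NULL
);

CREATE TABLE quests (
    id      INTEGER PRIMARY KEY,
    area_id INTEGER NOT NULL REFERENCES areas(id),
    title   TEXT NOT NULL,
    details TEXT NOT NULL
);
```

With rusqlite, such statements would typically be run through `Connection::execute` when the database file is first created.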
The Server also runs a web server which makes this data accessible through HTTP requests.
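For example, a hypothetical `GET /areas/1` endpoint might return JSON such as the following (the path and field names are illustrative; the actual API is defined by the Server's axum routes):

```json
{
  "id": 1,
  "name": "Gloomwood",
  "description": "A mist-shrouded forest generated by the LLM.",
  "quests": [
    { "id": 4, "title": "Cull the Goblins" }
  ]
}
```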
The Client then uses web requests to gather the generated data and create a playable world.
Feel free to contact me at [email protected] with any questions / inquiries.