Research Spike: using cached content when processing #122

@felipemontoya

Description

When adding non-deterministic AI workflows to a course, we open the user experience up to being different for every student. Also, when reprocessing the same content over and over, we pay the LLM to process the same tokens multiple times. Both concerns might be addressed with caching capabilities at the orchestrator or processor level.

This task is a first pass at investigating and designing how such a feature could work.
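One possible direction, sketched below under assumptions: a content-addressed cache keyed by a hash of the input content plus the model parameters that determine the output. The names here (`cached_process`, `fake_llm`, `example-model`) are hypothetical and not part of any existing orchestrator or processor API; this only illustrates the idea that identical inputs hit the cache instead of the LLM.

```python
import hashlib
import json

# Illustrative in-memory cache; a real design would likely use a shared
# store so all students benefit from the same cached results.
_cache: dict[str, str] = {}


def _cache_key(content: str, model: str, temperature: float) -> str:
    # Hash every input that influences the output, so identical requests
    # map to the same key and different requests never collide.
    payload = json.dumps(
        {"content": content, "model": model, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def cached_process(content: str, llm, model: str = "example-model",
                   temperature: float = 0.0) -> str:
    key = _cache_key(content, model, temperature)
    if key not in _cache:
        # Only pay for tokens on a cache miss.
        _cache[key] = llm(content)
    return _cache[key]
```

With this shape, every student asking about the same content gets the same cached answer (addressing the determinism concern) and the LLM is only invoked once per distinct input (addressing the cost concern).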

Status: Planning