manifest: ai_model parsing too restrictive #3256

@vdice

Description

Validation of LLM model entries supplied in the `ai_models` configuration of a Spin manifest is too restrictive, preventing the use of legitimately named models.
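For context, a minimal manifest fragment that triggers this looks roughly like the following (a sketch only; the application name, trigger, and component source path are hypothetical, taken loosely from the build output below):

```toml
spin_manifest_version = 2

[application]
name = "open-ai-rust"
version = "0.1.0"

[[trigger.http]]
route = "/..."
component = "open-ai-rust"

[component.open-ai-rust]
source = "target/wasm32-wasip1/release/open_ai_rust.wasm"
ai_models = ["gpt-oss:20b"]
```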

Some examples:

```
$ spin build
Building component open-ai-rust with `cargo build --target wasm32-wasip1 --release`
    Finished `release` profile [optimized] target(s) in 0.17s
Finished building all Spin components
TOML parse error at line 18, column 13
   |
18 | ai_models = ["gpt-oss:20b"]
   |             ^^^^^^^^^^^^^^^
'-'-separated words may only contain alphanumeric ASCII; got ':'
```

```
$ spin build
Building component open-ai-rust with `cargo build --target wasm32-wasip1 --release`
    Finished `release` profile [optimized] target(s) in 0.17s
Finished building all Spin components
Warning: The manifest has errors not related to the Wasm component build. Error details:
TOML parse error at line 18, column 13
   |
18 | ai_models = ["gpt-4"]
   |             ^^^^^^^^^
'-'-separated words must start with an ASCII letter; got '4'
```
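The rule the two error messages describe can be sketched as follows. This is a hypothetical reimplementation for illustration, not Spin's actual validation code: each `-`-separated word must start with an ASCII letter and contain only alphanumeric ASCII, which rejects both `gpt-4` (the word `4` starts with a digit) and `gpt-oss:20b` (`:` is not alphanumeric).

```rust
// Hypothetical kebab-case check mirroring the reported errors;
// not Spin's actual implementation.
fn is_kebab_id(s: &str) -> bool {
    s.split('-').all(|word| {
        let mut chars = word.chars();
        // First character of each word must be an ASCII letter...
        matches!(chars.next(), Some(c) if c.is_ascii_alphabetic())
            // ...and the rest must be alphanumeric ASCII.
            && chars.all(|c| c.is_ascii_alphanumeric())
    })
}

fn main() {
    assert!(is_kebab_id("llama2-chat"));  // accepted
    assert!(!is_kebab_id("gpt-4"));       // word "4" doesn't start with a letter
    assert!(!is_kebab_id("gpt-oss:20b")); // ':' is not alphanumeric ASCII
}
```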

Granted, Spin must first support a model before it can actually be used, but this issue would block applications from using such models when/if support is added (e.g. see @seun-ja's work in #3238 regarding these particular gpt* models).
