Box v1.0.8

A huge thank you to everyone who has reported issues and taken the time to share feedback — your reports directly drive
improvements in Box and are genuinely appreciated. Keep them coming.

_**Two variants available:**_

  • Box_v1.0.8_Main_Signed_Release.apk — stock Android 15+
  • Box_v1.0.8_custom-rom-support_Signed_Release.apk — GrapheneOS / custom ROMs without Google Play Services

New Features

  • Saved System Prompts — Save, name, and reuse system prompts directly in the model settings dialog. Tap a saved prompt to
    apply it instantly; swipe or long-press to delete.
  • Restore Defaults — New button in model settings resets all sliders (temperature, top-K, top-P, max tokens) back to their
    default values in one tap (a rough sketch of the idea follows this list).
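
As a rough sketch of what Restore Defaults amounts to (the slider names and default values below are illustrative assumptions, not Box's actual code), the button simply swaps the current slider state for a freshly constructed default configuration:

```kotlin
// Hypothetical defaults for illustration only; Box's real values may differ.
data class SamplerSettings(
    val temperature: Float = 0.8f,
    val topK: Int = 40,
    val topP: Float = 0.95f,
    val maxTokens: Int = 1024,
)

// "Restore Defaults" discards whatever the user has dialled in and
// returns a fresh instance carrying the default values.
fun restoreDefaults(): SamplerSettings = SamplerSettings()
```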

Fixes

  • System prompt now actually applied — Changing the system prompt mid-session correctly resets the conversation with the new
    instruction. Previously the prompt was saved in the UI but not passed to the model (sketched after this list).
  • Markdown rendering in math responses — Plain-text segments inside chat bubbles now render through the Markdown pipeline,
    fixing broken formatting in responses that mix text and LaTeX math.
  • Randomised inference seed — Each conversation now uses a unique random seed, producing more varied outputs across sessions
    when using the CPU backend (also covered in the sketch after this list).
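
A minimal sketch of how these two fixes fit together, assuming a hypothetical session wrapper (the `SessionConfig` and `Conversation` names below are illustrative and not the LiteRT LM or Box API):

```kotlin
import kotlin.random.Random

// Illustrative types only; not the real engine interface.
data class SessionConfig(
    val systemPrompt: String,
    val seed: Int,
    val temperature: Float = 0.8f,
    val topK: Int = 40,
)

class Conversation(systemPrompt: String) {

    // Each conversation starts with its own random seed so repeated
    // sessions don't replay the same sampling path on the CPU backend.
    private var config = SessionConfig(systemPrompt = systemPrompt, seed = Random.nextInt())

    init {
        resetSession(config)
    }

    // Changing the system prompt mid-session rebuilds the session;
    // otherwise the new instruction never reaches the model.
    fun updateSystemPrompt(newPrompt: String) {
        config = config.copy(systemPrompt = newPrompt, seed = Random.nextInt())
        resetSession(config)
    }

    private fun resetSession(config: SessionConfig) {
        // In the app this would tear down and recreate the inference
        // session with `config`; here it is just a placeholder.
        println("New session: prompt='${config.systemPrompt}', seed=${config.seed}")
    }
}
```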

UI Polish

  • AI chat bubbles now use the full available width.
  • Removed "API Documentation", "Example code", and "Learn more" links from the model list — they pointed to upstream Google
    resources not relevant to Box.
  • Removed the stray ? icon that appeared next to models that hadn't been downloaded yet.
  • Fixed LaTeX header rendering.

Known Upstream Issue

  • GPU backend produces identical outputs — On devices where the GPU sampler is unavailable (affects Pixel 6a and others), the
    LiteRT LM engine internally restricts token candidates to 1 before sampling, forcing greedy decoding regardless of
    temperature/top-K settings (see the illustration after this list). This is a limitation in LiteRT LM v0.11.0 with no
    app-level workaround. Switch to CPU in model settings for varied outputs. Reported upstream as issue #817.
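
To see why the sampler settings stop mattering in that state: once the candidate pool is clamped to a single token, any top-K sampler degenerates to picking the highest-logit token, so temperature has nothing to act on. A small stand-alone illustration (plain Kotlin, not the engine's code):

```kotlin
import kotlin.math.exp
import kotlin.random.Random

// Sample a token index from logits restricted to the top-k candidates.
// With k == 1 the candidate list holds only the highest-logit token,
// so the function always returns it: greedy decoding, regardless of
// the temperature passed in.
fun sampleTopK(logits: FloatArray, k: Int, temperature: Float, rng: Random): Int {
    val candidates = logits.indices.sortedByDescending { logits[it] }.take(k)
    val weights = candidates.map { exp(logits[it] / temperature) }
    var r = rng.nextFloat() * weights.sum()
    for ((i, w) in weights.withIndex()) {
        r -= w
        if (r <= 0f) return candidates[i]
    }
    return candidates.last()
}
```

With `k = 1`, `sampleTopK` returns the same index on every call whatever the temperature is, which is exactly the repeated-output behaviour seen on the affected GPU devices.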