Releases: unixwzrd/text-generation-webui-macos

Fixes version issues in requirements.txt

01 Oct 07:16

Yeah, it was just a short time ago, but I found library issues when upgrading, so I've fixed the requirements.

20240915a-macOS-dev

30 Sep 23:10

Fixed a minor bug.

20240915-macOS-dev

24 Sep 13:38

Brought the macOS version from 2024-05 up to date with the current upstream main branch.

This has not been extensively tested, but I know it works.

20240531 Release

02 Jun 23:06
2a33ab7

This is a preliminary release from mid-May 2024. Please report any issues or bugs.

No bugs have been reported after a week. Please let me know if you find anything.

Find the latest in the macOS-dev branch. I'm working on bringing that forward.

1.5.1a

19 Sep 22:58

Some prior commits that got missed have now been added.

Merged with oobabooga 1.5

17 Sep 08:23
db94d4f

Tagged the latest, after fixing an indentation issue which didn't seem to push properly from my local repo.

From Dev merge (#11)

  • Updated source to support MPS and CUDA

  • Current point in time.

  • missed a change

  • Updated files and README

  • Committing changes before applying stash

  • .gitignore.. whatever.

  • Update the requirements.txt for llama.cpp and GGUF

  • Update to requirements for ctransformers

  • Update requirements.txt

  • bumped llama-cpp-python

  • Handle GGML files for model loading

  • Checkpoint

Patched version of original oobabooga 1.3.1

30 Jul 01:49
105d3af

This release seems to work and has been patched with updated Python packages, per the security updates to Gradio, and with the newly updated llama-cpp-python 0.1.77. Tagging it at this point in time as stable, at least for llama.cpp and GGML models. It needs more testing, but this is a start, and it uses the Apple Silicon GPU.
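For reference, here is a minimal sketch of loading a GGML model with llama-cpp-python and offloading to the Apple Silicon GPU via Metal. The model path and prompt are placeholders, not files shipped with this repo, and the install command assumes you build the package with Metal enabled.

```python
# Minimal sketch: run a GGML model on the Apple Silicon GPU with llama-cpp-python.
# Assumed install (Metal-enabled build):
#   CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.ggmlv3.q4_0.bin",  # placeholder GGML file
    n_gpu_layers=1,   # any value > 0 enables GPU offload via Metal
    n_ctx=2048,       # context window size
)

out = llm("Q: Name the GPU framework on Apple Silicon. A:", max_tokens=32)
print(out["choices"][0]["text"])
```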

There is also a minor change in the way output files are handled by the ElevenLabs extension: they are now given a sequence number, which fixes a problem with audio files being cached in the web browser.
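The idea behind that fix, sketched below with an illustrative helper (the directory and filename pattern are assumptions, not the extension's actual code): each new clip gets a unique, incrementing name, so the browser fetches the fresh file instead of serving a stale cached one.

```python
# Illustrative only: give each generated audio file an incrementing sequence number
# so the browser cannot reuse a cached clip with the same name.
from pathlib import Path

def next_output_path(out_dir: str = "extensions/elevenlabs_tts/outputs") -> Path:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    seq = len(list(out.glob("*.mp3"))) + 1      # next sequence number
    return out / f"{seq:05d}_output.mp3"        # e.g. 00001_output.mp3
```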

Coming next will be improved performance, as there are still some places which need optimization, and support for more models, though priority is placed on Llama 2.

A Discussion for this release has been created; please leave any comments regarding the release there.

Please use the Apple Silicon Wishlist discussion for requests, enhancements, or ideas for what should be included in the next release for macOS and Apple Silicon. Here: Apple Silicon macOS Wishlist