Patched version 1.3.1 of original oobabooga 1.3.1 #8
unixwzrd announced in Announcements
This release seems to work and has been patched with updated Python packages, per the security updates to Gradio, and the newly updated llama-cpp-python 0.1.77. Tagging it at this point in time as stable, at least for llama.cpp and GGML models. More testing is needed, but this is a start, and it uses the Apple Silicon GPU.
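As a quick sanity check that GPU offload is actually in use, something like the following minimal sketch works with llama-cpp-python 0.1.77. The model path is a placeholder (any local GGML .bin file will do), and llama-cpp-python must have been built with Metal enabled, e.g. `CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python==0.1.77`:

```python
# Minimal sketch, not the webui's own loader code: load a GGML model with
# Metal GPU offload via llama-cpp-python and run a short completion.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.ggmlv3.q4_0.bin",  # placeholder path
    n_gpu_layers=1,   # any value > 0 enables Metal offload on Apple Silicon
    n_ctx=2048,
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If Metal is active, llama.cpp's startup log typically shows the Metal backend being initialized.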
There is also a minor change in the way output files are handled by the Elevenlabs extension: they are now given a sequence number, which fixes problems with audio files being cached in the web browser.
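The idea is roughly the following (a minimal sketch, not the extension's actual code, with a placeholder output directory): each generated clip gets a fresh, sequence-numbered filename, so the browser never serves a stale cached file under a reused name.

```python
# Minimal sketch of sequence-numbered audio output filenames.
from pathlib import Path

OUTPUT_DIR = Path("extensions/elevenlabs_tts/outputs")  # placeholder path
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

def next_output_path(prefix: str = "tts") -> Path:
    """Return a unique, sequence-numbered path for the next audio clip."""
    existing = list(OUTPUT_DIR.glob(f"{prefix}_*.mp3"))
    return OUTPUT_DIR / f"{prefix}_{len(existing) + 1:05d}.mp3"

p1 = next_output_path()
p1.touch()  # in the real extension this would be the written audio file
p2 = next_output_path()
print(p1, p2)  # tts_00001.mp3 tts_00002.mp3
```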
Coming next will be improving performance, as there are still some places which need optimization, and supporting more models, though priority is placed on Llama2.
A Discussion for this release has been created; please leave any comments regarding the release there.
Please use the Apple Silicon Wishlist discussion for requests, enhancements, or ideas for what should be included with the next release for macOS and Apple Silicon. Here: Apple Silicon macOS Wishlist
This discussion was created from the release Patched version 1.3.1 of original oobabooga 1.3.1.