llama : add xcframework build script #11996
base: master
Conversation
Package.swift (Outdated)
```swift
.binaryTarget(
    name: "llama",
    path: "build-ios/llama.xcframework"
),
//.systemLibrary(name: "llama", pkgConfig: "llama"),
```
Something that is not clear to me yet: now that we build an XCFramework, do we need this SPM package? Could we simply add the framework to the project and skip the package?
The way we are currently using SPM, where we expect llama.cpp to be installed on the local system, my understanding is that these are two ways of doing the same thing, and perhaps we should just use the framework, which seems to be pretty simple.

The option of using SPM for source code distribution (as opposed to binary distribution, or whatever it is called when using a system library) would enable end users to include llama.cpp in their projects with something like this in their project's Package.swift:

```swift
dependencies: [
    .package(url: "https://github.com/ggerganov/llama.cpp.git", from: "1.0.0")
]
```

SPM would then download the source code, compile it, etc. But I'm not sure how much work it would require to get this working and to maintain.
Yes, so we want to move away from the workflow where the SPM package builds llama from source, for two reasons:
- We would need to maintain a second build system (i.e. the SPM package)
- It will not work when ggml becomes a submodule in the future (SPM does not support submodules)

So it seems to me that we should probably remove the Package.swift altogether and use the framework approach.
> So it seems to me that we should probably remove the Package.swift altogether and use the framework approach.

This sounds good to me.
@ggerganov I've removed Package.swift now, as it was causing issues with xcodebuild and also produced an error in Xcode (though it was still possible to build/run). I've updated the CI build to remove the install of llama.cpp, as it should no longer be required, and also added a FRAMEWORKD_FOLDER_PATH for the xcodebuild commands.
Both, there is also https://github.com/ggml-org/llama.cpp/tree/master/Sources
I'll remove spm-headers and Sources shortly. (Package.swift has already been removed in llama : remove Package.swift.)
Package dependency management is not just an easy way to integrate dependencies. It's also there to manage them. SPM actually makes use of the Git version tags, so, when using SPM, one can explicitly define the version they want (or need) to use, and distribute that knowledge to others in a team. If you remove Package.swift, that capability gets lost.

Also, if there is ever support for executing build scripts on installation of SPM dependencies, this build script could be referenced there. (cough In that regard, I would like to promote CocoaPods again, as that is capable of exactly such a feature…)

Currently, the nicest way would be to have a GitHub Action build the xcframework during a release and add that to a GitHub release page. That could be referenced from a Package.swift, and suddenly people would have a really easy time integrating without needing a lot of knowledge on how to compile C++ things.
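For illustration, a rough sketch of what that release step might look like (the tag, archive name, paths, and use of the gh CLI here are assumptions, not something defined in this PR):

```sh
# Hypothetical release step: build the XCFramework, zip it, and attach it to a
# GitHub release so a Package.swift binaryTarget can point at the asset URL.
TAG=b4777   # placeholder release tag

./build-xcframework.sh
zip -r "llama-${TAG}-xcframework.zip" build-apple/llama.xcframework

# Requires the GitHub CLI and a token with permission to edit the release.
gh release upload "$TAG" "llama-${TAG}-xcframework.zip"
```

The uploaded asset URL (plus its checksum) is exactly what a binaryTarget in a Package.swift would then reference.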
Just to make sure I've understood this correctly: I, as a user, would be able to add the following to my project's Package.swift:

```swift
dependencies: [
    .package(url: "https://github.com/ggml-org/llama.cpp.git", from: "b4777")
]
```

The Swift Package Manager would then clone the repository and look for a Package.swift file in the root of llama.cpp. That Package.swift could then look something like the following:
```swift
import PackageDescription

let package = Package(
    name: "llama",
    platforms: [
        .iOS(.v14),
        .macOS(.v10_15),
        .tvOS(.v14),
        .visionOS(.v1)
    ],
    products: [
        .library(name: "llama", targets: ["llama"])
    ],
    targets: [
        .binaryTarget(
            name: "llama",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b4777/llama-b4777-xcframework.zip",
            checksum: "the-sha256-checksum-here"
        )
    ]
)
```
Without Swift Package (Manual Integration):
- User has to manually go to our GitHub releases page
- Find the correct version of the xcframework.zip
- Download it
- Extract the zip file
- Drag the XCFramework into their Xcode project
- Configure build settings (embedding, signing, etc.)
- Repeat all these steps whenever they want to update to a newer version
With Swift Package (Automated Integration):
- User adds a single line to their dependencies
- Swift Package Manager automatically downloads the right binary
- Proper linking and integration happens automatically
- Updating is as simple as changing a version number
A very basic example can be found here
Exactly.
To be completely honest, though, I should mention this one: the checksum is mandatory and is only known after creating the xcframework.zip. So, someone or something would need to update that in the Package.swift file after creating the new xcframework and, at best, then tag the release accordingly, or force-move an already tagged release.

If that is too much hassle, the Package.swift part could be done in another repo which releases separately, like https://github.com/srgtuszy/llama-cpp-swift.
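For reference, a minimal sketch of that update step, assuming the archive name and the placeholder checksum string used in the Package.swift example above:

```sh
# Compute the checksum SPM requires for a remote binaryTarget.
# The archive name is a placeholder; use whatever the release workflow produces.
ZIP=llama-b4777-xcframework.zip
CHECKSUM=$(swift package compute-checksum "$ZIP")

# Substitute the placeholder checksum in Package.swift before tagging the release.
sed -i '' "s/the-sha256-checksum-here/${CHECKSUM}/" Package.swift
echo "Updated Package.swift with checksum: ${CHECKSUM}"
```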
Tagging people who participated in recent iOS/Swift build discussions - please provide feedback about the proposed build changes in this PR. The new XCFramework approach proposed here should resolve all build issues for 3rd-party Swift projects depending on llama.cpp: @jiabochao @nvoter @pgorzelany @Animaxx @MrMage @jhen0409 @hnipps @yhondri @tladesignz
With the xcframework I am able to build the Swift example project, but when I run it I get an error:

When I try to use it in another project I get an error:
@Animaxx Thanks for trying this out!
I'm trying to figure this out. Can you tell me if you are running this using the iPhone simulator in Xcode, or some other way I can reproduce this?
I've been able to reproduce this and am looking into it now. Thanks!

Update: I've been looking at this issue and have updated the script to use static libraries, which I should probably have done in the first place. With these changes, using the generated XCFramework I'm able to run the llama.swiftui example, and I also created a standalone example that imports the framework:

```swift
import SwiftUI
import llama

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .onAppear {
            // Test llama framework functionality
            let params = llama_context_default_params()
            print("Llama context params initialized:")
            print("n_ctx: \(params.n_ctx)")
            print("n_threads: \(params.n_threads)")

            let start = ggml_time_us()
            print("start: \(start)")
        }
    }
}

#Preview {
    ContentView()
}
```

I ran out of time today and I'll be out Mon-Tue, but I'll revisit the issue @Animaxx reported when I'm back on Wednesday.
Hi @danbev, thank you for the PR! The error I got is from debugging on a real device; I'm not sure if that is related to xcframework signing.
With the latest commit it works on device now! When trying to validate archives I got this error:
Ah, I missed building for macOS; it currently only builds for iOS. I'll try adding a build for macOS.
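For context, a rough sketch of the kind of addition that implies: build a macOS slice and fold it into the XCFramework (the build directories and framework paths are assumptions; the actual script may be organized differently):

```sh
# Hypothetical macOS slice for the XCFramework; paths are illustrative only.
cmake -B build-macos -G Xcode \
    -DBUILD_SHARED_LIBS=OFF \
    -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"
cmake --build build-macos --config Release

# Combine the per-platform builds into a single XCFramework.
xcodebuild -create-xcframework \
    -framework build-ios-device/framework/llama.framework \
    -framework build-ios-sim/framework/llama.framework \
    -framework build-macos/framework/llama.framework \
    -output build-apple/llama.xcframework
```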
Thank you so much, @danbev, for starting this! I took your work for a test drive.

It ran on the following devices:

I didn't try out visionOS, since I don't have a device and I was too lazy to download the simulator. I stumbled over various minor things, however, so I suggest taking care of the following:
I'm not sure what issue I'm running into on my M4 Pro MacBook running the latest version of Sequoia. I have all of the Xcode dependencies updated and I cannot get past this. I ran into it the other day when I tried too, but the same commit that had issues that day worked once today when I tried again, and after pulling in the latest macOS changes from your repo I'm back to the same issues.
@blaineam, that looks like your C compiler toolchain (LLVM) is broken. Run this: `xcode-select --install`
🫶
Yeah, that didn't fix it; I already had that installed. Needed to run:

The build made it farther, however it did end up failing on:
Unfortunately, this log line doesn't tell how it failed. Anyway, this brings me to an idea: did you just upgrade from Sonoma to Sequoia? In that case, a lot of brew-installed build tools are broken. You might want to run brew reinstall on everything installed. (That'll take a lot of time, depending on the stuff you installed via brew.)
It's not happy about something, I'm just not sure what. It seems cmake fails to find the C compiler and CXX compiler identifications sometimes, and if either of those fails it cannot build. On rare occasions it finds them though... I had one almost successful run and then it failed too, and subsequent builds fail each time. No clue what is wrong on my MacBook to cause such intermittent issues, but something is. I tried running the following:

```sh
brew list | xargs brew reinstall
sudo rm -rf /Library/Developer/CommandLineTools
brew install cmake
sudo xcodebuild -license accept
sudo xcodebuild -runFirstLaunch
sudo xcode-select --reset
sudo xcode-select --install
```

CLI output (truncated xcodebuild log): an implicit-conversion warning in ggml/src/ggml-cpu/ggml-cpu-quants.c:741, CompileC steps for the ggml-cpu target (ggml-cpu-traits.cpp, ggml-cpu-hbm.cpp, ggml-cpu.c, ggml-cpu-aarch64.cpp), a note that the 'Generate CMakeFiles/ZERO_CHECK' script phase runs on every build, and finally:

The following build commands failed: ... -- The C compiler identification is unknown
I am getting an error when I try to compile this. Lots of lines that look like:

I also see these lines in the output:

I should note that I tested this before you added Mac support (which I am very happy about) and it worked fine. I haven't tried reinstalling all my brew packages because it seems a bit heavy, since it was working before.
So I noticed one of the errors was about disk I/O errors, so I moved the llama.cpp repo to a folder not synced with iCloud Drive, and now builds are consistent and mostly working each time. It fails to build macOS with the same issues @schlu pointed out every time. iOS and the iOS simulator seem to build OK now that I'm not using an iCloud-synced folder.
@schlu Thanks for reporting this, I'm looking into this now.
Much worse now. It stops right away. I rolled back to commit 14d48be to make sure it was still working, and it was still compiling the iOS version fine. Here is the entire output:
Ran into the same error as well. Didn't have time to test a previous commit though.
Sorry about this (I am running the build locally and checking that the builds work in various projects to try to avoid wasting your time). I've pushed another commit which I hope will address this 🤞
It built fully this time with no errors I could see.
Don't worry, thank you for your time. I am getting this error now:

I went and downloaded the visionOS stuff in Xcode but the error persisted:
I do see this when adding my app with the llama.xcframework to App Store Connect.

And they emailed me these issues:

Here's the link they reference, too:
@schlu Can you tell me the version of cmake you are using? I'm asking because I think CMake added official support for visionOS in version 3.28.0. Based on this part of the error you are seeing, perhaps that is the issue:

```
System is unknown to cmake, create:
Platform/visionOS to use this system, please post your config file on discourse.cmake.org so it can be added to cmake
CMake Error at CMakeLists.txt:2 (project):
```

If this is the case I'll add a check to the script.
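For illustration, a check along those lines could look something like this in the build script (a sketch only; the exact threshold handling and error message are assumptions, not the actual script):

```sh
# Require CMake >= 3.28, roughly when visionOS became an officially supported
# platform; older versions fail with "System is unknown to cmake".
CMAKE_VERSION=$(cmake --version | head -n1 | awk '{print $3}')
CMAKE_MAJOR=$(echo "$CMAKE_VERSION" | cut -d. -f1)
CMAKE_MINOR=$(echo "$CMAKE_VERSION" | cut -d. -f2)

if [ "$CMAKE_MAJOR" -lt 3 ] || { [ "$CMAKE_MAJOR" -eq 3 ] && [ "$CMAKE_MINOR" -lt 28 ]; }; then
    echo "Error: cmake >= 3.28.0 is required for the visionOS build (found $CMAKE_VERSION)" >&2
    exit 1
fi
```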
I've not tried this but I'll take a closer look at this tomorrow 👍
Thanks, I updated cmake and it worked! I should have tried this before I reported; I don't really use CMake much. I preferred to just use SPM, but alas, I am not a maintainer.
With the latest commit I was able to create and validate a project that uses the xcframework. @blaineam Would you be able to try this as well, to see if there is anything we are missing for App Store Connect?
Looks like it's even less happy now:
@blaineam Could you give me some pointers on how to recreate these errors?
@danbev I do know Xcode does some stuff differently than Apple's server-side validation. Sometimes it's best to just upload something to confirm it will be validated by Apple's server-side requirements. As for the minimum Xcode version, that's probably tied to my app directly, because my app has its minimum set to something like iOS 17 or 18 (can't remember which).
@blaineam Thanks for that! I just validated an iPhone app (the one I tried previously was a macOS app) and I'm seeing the same errors. I'll dig into them in the morning.

Update: I'm still working on this, and while I've been able to fix the issues above, I ran into different issues with the packaging and signing after doing that. I'll need some more time, but hopefully I will have a suggestion for this soon (not today, which I was hoping for).
@danbev Seems to have worked well for my iOS and macOS apps. The apps run smoothly and App Store Connect was happy.
Great to hear that, thanks for testing it!
My plan is to add support for tvOS to the xcframework build script (since we build this in CI currently) tomorrow, but skip watchOS. Does that sound alright?
This commit adds a script to build an XCFramework for Apple ios, macos, visionos, and tvos platforms. The generated XCFramework can then be added to a project and used in the same way as a regular framework. The llama.swiftui example project has been updated to use the XCFramework and can be started using the following command:

```console
$ open examples/llama.swiftui/llama.swiftui.xcodeproj/
```

Refs: ggml-org#10747
This commit removes the reference to llama.cpp from the project.pbxproj file since Package.swift has been removed.
This commit adds the ability to create a GitHub release with the xcframework build artifact.
This commit adds scripts that can validate the iOS, macOS, tvOS, and VisionOS applications. The scripts create a simple test app project, copy the llama.xcframework to the test project, build and archive the app, create an IPA from the archive, and validate the IPA using altool. The motivation for this is to provide some basic validation and hopefully avoid having to manually validate apps in Xcode.
This commit removes the Package.swift file, as we are now building an XCFramework for the project.
da0ea36 to 69a6d36
I've rebased this to hopefully make it a little easier to review after all the iterations.

I'm not sure how we should handle the Package.swift issue mentioned in this comment. At the moment Package.swift has been removed by a commit in this PR. For local testing I've been using this as a Package.swift (not checked in):

```swift
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "llama",
    platforms: [
        .iOS(.v14),
        .macOS(.v10_15),
        .visionOS(.v1)
    ],
    products: [
        .library(name: "llama.cpp", targets: ["llama-framework"])
    ],
    targets: [
        .binaryTarget(
            name: "llama-framework",
            //url: "https://github.com/ggml-org/llama.cpp/releases/download/bXXXX/llama-bXXXX-xcframework.zip",
            //checksum: "the-sha256-checksum-here"
            // The following can be used for local testing. Run the build-xcframework.sh
            // script first to generate the XCFramework.
            path: "build-apple/llama.xcframework"
        )
    ]
)
```

This was only to verify that I could create a Swift project and use the framework as a dependency from it. Perhaps we can investigate this further in a follow-up commit to figure out the best way to handle the checksum issue?
This commit adds a script to build an XCFramework for Apple
ios, macos, visionos, and tvos platforms.
The generated XCFramework can then be added to a project and used in
the same way as a regular framework. The llama.swiftui example project
has been updated to use the XCFramework and can be started using the
following command:
$ open examples/llama.swiftui/llama.swiftui.xcodeproj/
Refs: #10747