A PATH TO 1.0.0: A Move from Mirroring to PAI Packages #222
Replies: 13 comments 6 replies
I just thought of another crazy thing related to this. Kai already has access to this repo and has already helped extensively with fixes and commenting on issues and PRs. Well, now when somebody submits an issue with a package, Kai can actually respond directly with an explanation and provide additional code or context or whatever is required to help out. And here's the part that will break your brain: you have your DA read what Kai wrote. So just like the Packages themselves, you can manually make the changes, or you can have your AI do it for you using the additional context Kai provided!
So packages would be more like meal-kit deliveries? Discrete boxes with all the ingredients and instructions needed to cook your own 🥧?
Yeah, as I've been futzing with PAI, that really does seem to be the sensible route! For the most part it's really just discrete groups of files that don't actually change the functionality of the app being used (outside of hooks). Being able to share skills/commands/agents more easily would be great! As hinted at, hooks really seem to be the most tricky & sticky (mostly due to platform differences I am battling on Windows). Looking forward to seeing the new hotness. ❤️
Nice one @danielmiessler — this will be a big step forward! 🎉

I've developed three skills or packages over the last few weeks and spent a lot of time setting up tooling for automated tests, releases, and configuration management, including meta-processes to cleanly separate "packages" from my own PAI instance. Data privacy turned out to be a real concern: I nearly leaked private data when bundling up skills for PR submission, which was a good learning moment. All of that experience and tooling could be useful context, and I'm happy to have my personal assistant review the PAI package infrastructure and provide feedback grounded in what I've learned.

One thing I've been thinking about: it would help to know where your focus is in the near and mid-term, even a lightweight priority list or rough roadmap. I've always appreciated how VS Code publishes their iteration plans, though that might be too structured for this stage. Knowing what you're working on and what's next would make it easier for the community to align efforts and contribute meaningfully rather than just consuming releases.

A few things helped me through the process of contributing my three PRs and might be worth extracting as shared resources. One lesson in particular: it's easy to generate code at 10x speed with AI, but that just shifts the burden onto reviewers. I've been trying to be a 0.1x contributor instead, making sure what I submit is solid, properly tested, and reviewed before it lands in someone else's queue. A rigorous review process will be important as AI-generated contributions grow.
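On the near-leak point: a pre-submission secret scan is cheap insurance when bundling files for a PR. Here is a minimal sketch; the patterns, function names, and example file name are illustrative assumptions, not part of any PAI tooling.

```typescript
// pre-submit-scan.ts -- illustrative secret scan to run before bundling a
// package for submission. Patterns are examples, not an exhaustive list.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["OpenAI-style key", /sk-[A-Za-z0-9]{20,}/],
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["generic credential assignment", /(api[_-]?key|token|secret)\s*[:=]\s*['"][^'"]{8,}['"]/i],
];

// Returns one human-readable finding per matched pattern.
export function scanText(name: string, text: string): string[] {
  const findings: string[] = [];
  for (const [label, pattern] of SECRET_PATTERNS) {
    if (pattern.test(text)) findings.push(`${name}: possible ${label}`);
  }
  return findings;
}

// Example: a file that should never leave your machine.
const hits = scanText("config.ts", 'const key = "sk-abcdefghijklmnopqrstuvwx";');
console.log(hits.length > 0 ? hits.join("\n") : "clean");
// prints "config.ts: possible OpenAI-style key"
```

In practice you would run this over every file in the bundle and refuse to package anything with findings, then review the hits by hand.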
Another question: how should packages be managed long-term?
One concrete example of where this gets tricky: I was grappling with how to handle the dependency on CORE/skill.md for routing in the packages I've built. I ended up documenting it as a manual installation step, but that won't scale for a pluggable package system.

P.S. I found that relying on CC skill routing out of the box isn't predictable enough; routing from the CORE skill seems to be required for consistent results.
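To illustrate the kind of manual step involved (the actual CORE/skill.md format isn't shown in this thread, so the entry below is entirely made up), a package's install notes might ask you, or your AI, to append something like:

```markdown
<!-- Hypothetical routing entry; real CORE/skill.md syntax may differ -->
## Routing: MyPackage
- Requests matching "summarize my threat feed" -> skills/MyPackage/skill.md
- Anything else: fall through to built-in skill routing
```

A pluggable format would presumably let the package declare this routing intent itself, so installation can patch CORE automatically instead of documenting a manual edit.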
Interesting timing — came across this the other day: Agent Skills is an open format for packaging domain expertise and workflows into portable, version-controlled packages that work across different agent products. It originated at Anthropic (who also created MCP) and is now seeing adoption from Cursor, GitHub, VS Code, Claude Code, and OpenAI's Codex CLI.

Related: earlier this month Anthropic donated MCP to the newly formed Agentic AI Foundation under the Linux Foundation, co-founded with Block and OpenAI, with support from Google, Microsoft, AWS, Cloudflare, and Bloomberg. The foundation also includes Block's goose framework and OpenAI's AGENTS.md as founding projects.

Worth watching how these open standards evolve — I wouldn't be surprised to see Agent Skills follow the same path into the foundation. Aligning PAI packages with this direction could maximise interoperability down the track.
Yeah, I think there will be a lot of projects similar to this. I think the difference for us is that it's not just skills, it's also hooks, it's also history, it's also context management, it's also agent spin-up. It's the entire personal AI infrastructure.
This should be on a blockchain. Companies like memco are building this infrastructure.
This is a really exciting direction! The Jenga tower analogy resonates - we just spent a day QA'ing the current Skills system and found TitleCase inconsistencies across nearly every skill (PR #236). The interconnected nature made it easy for small inconsistencies to propagate. A few thoughts from that experience that might be useful for the Package format:

1. Validation as part of the package spec

Having a machine-readable schema would catch issues before they become PRs. Something like:

```yaml
package:
  name: MyPackage        # Must match directory name (TitleCase)
  version: 1.0.0
  requires:
    - CORE >= 1.0.0
  provides:
    skills: [MySkill]
    hooks: [my-hook.ts]
    commands: [my-command]
```

A simple check against a schema like this could run automatically on every submission.

2. Platform compatibility flags

We filed issue #237 today - hooks could declare which platforms they support:

```yaml
platforms:
  - macos
  - linux   # Requires case-sensitive path handling
```

3. Dependency isolation

The current system has implicit dependencies (skills referencing other skills, hooks depending on specific directory structures). Explicit declaration would help:

```yaml
depends_on:
  skills: [CORE, Fabric]           # Will load these first
  env_vars: [ELEVENLABS_API_KEY]   # Optional, degrades gracefully
  tools: [bun, jq]                 # CLI tools required
```

4. Testing/examples as first-class citizens

Each package could include its own tests and usage examples.

Really looking forward to seeing the format evolve. Happy to help with validation tooling or documentation based on what we've learned from the QA process.
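Riffing on the comment above: a minimal sketch of what such a validator might look like. The manifest fields, the TitleCase rule, and the platform list are assumptions lifted from this thread, not an official PAI schema.

```typescript
// validate-package.ts -- sketch of a manifest validator for the hypothetical
// package schema discussed above. Field names are assumptions, not a spec.
interface PackageManifest {
  name: string;
  version: string;
  platforms?: string[];
  provides?: { skills?: string[]; hooks?: string[]; commands?: string[] };
  depends_on?: { skills?: string[]; env_vars?: string[]; tools?: string[] };
}

export function validate(m: PackageManifest, dirName: string): string[] {
  const errors: string[] = [];
  // Package name must match its directory and be TitleCase (the PR #236 issue).
  if (m.name !== dirName) errors.push(`name "${m.name}" != directory "${dirName}"`);
  if (!/^[A-Z][A-Za-z0-9]*$/.test(m.name)) errors.push(`name "${m.name}" is not TitleCase`);
  // Plain x.y.z version check, so ranges like "CORE >= 1.0.0" can resolve.
  if (!/^\d+\.\d+\.\d+$/.test(m.version)) errors.push(`version "${m.version}" is not semver`);
  // Unknown platform flags are likely typos.
  const known = new Set(["macos", "linux", "windows"]);
  for (const p of m.platforms ?? []) {
    if (!known.has(p)) errors.push(`unknown platform "${p}"`);
  }
  return errors;
}

console.log(validate({ name: "MyPackage", version: "1.0.0", platforms: ["macos"] }, "MyPackage"));
// prints [] (a valid manifest)
```

Run in CI against every submitted package directory, a check like this would turn an entire class of review comments into automated failures before a human ever looks at the PR.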
Super excited to see this next update! I love the approach. Congrats on the work, and thanks for sharing it with us.
I think it's a great idea. Granular control is just better for some. I'm not a Mac user, so the voice server aspects, etc., are not installed on my assistant. Having to go through and verify all the connections is exactly the kind of thing that gets solved with modularity. I'm buckled in for the ride either way. Thanks for all of your contributions.
Hey all, pretty major update here.
First, I want to thank you all for the patience that you've shown over the last months as I tried to get this thing to a 1.0 release.
The problem
As many long-timers have noticed, we keep getting close to getting there, and then it decays again. And that cycle has continued.
As we have all assessed pretty accurately, it is because things are changing too quickly. Not just in AI itself, but Kai is changing too quickly. Too many new things are being released into the world, which I then implement inside of Kai. And then the process of trying to get that into PAI is too slow for my taste and too imperfect.
Either I release something that is not useful to others because it's highly specialized to my use cases, or I end up releasing something I shouldn't have released at all, like a key or an API token or some kind of secret sauce or personal information that's not supposed to be public, etc.
It's very stressful.
I don't want there to be any delay between PAI and Kai, but practically, there has to be.
But even more than the delay, the issue is things being broken.
The entire Claude Code system and therefore the entire Kai System is a giant Jenga tower of interconnected files and components. It's a miracle that Claude Code itself ever loads for anyone, anywhere. And the more you add on top of it, the more of a miracle it becomes.
The issue here, which I'm disappointed I didn't realize earlier, is that I was trying to do too much manual work in an old way of thinking.
Let's manually copy over 747,211 Jenga blocks (with AI help) and make sure that they are all perfectly lined up and we don't miss any! And let's leave it up to me to figure out which ones to move and how!
That is 2025 thinking.
2026 thinking is describing in detail what you want, providing everything that's needed, and letting your AI figure it out for you.
A more modern solution
Given the few months of attempting to get this thing stable, I came up with what I believe is a much better solution to this.
Functionality Packages instead of a Mirror Platform.
Let me explain.
The whole point of PAI is for everyone to have their own system, including an open-source system like OpenCode or something homegrown. Basing the system on Claude Code is an automatic deviation from that. One that I was okay with, but still wasn't pure.
Over the last week or two, this evolved toward a wizard, a setup system that customizes the PAI functionality I release for your environment: you run the setup script, put in your variables, and so on.
The natural progression, given the fact that we have this brilliant AI available to us now, is to have your own AI system evaluate what is released into the PAI project and integrate it into itself.
So what we are going to move to is a system I'm calling PAI Packages.
Instead of trying to maintain an entire Jenga tower where one block out of place knocks the whole thing over, I'm going to release functionality packages that have everything an AI needs to implement that new idea within your own system. For example: a self-contained package containing every single thing that is needed to implement a piece of functionality within a brand-new Claude Code / OpenCode / homegrown / Gemini / Codex instance, or into a system like Kai that you've been hacking on for years.
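To make the idea concrete, here is one purely hypothetical shape such a package could take; none of these file or directory names are a committed format, just an illustration of "everything an AI needs" in one box:

```
MyPackage/
├── README.md     # What the functionality is and why you'd want it
├── INSTALL.md    # Installation instructions written for an AI to follow
├── skills/       # Skill files to drop in or adapt
├── hooks/        # Optional hooks, with platform notes
└── examples/     # Expected behavior once installed
```

The key property is that your own AI can read the whole directory and figure out how to graft the functionality into whatever system you actually run.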
And then we will have categories for these. And a website to search for them. And we can integrate Fabric into that as well, because Fabric becomes essentially a package of prompts.
And what I'm super excited about more than anything is the fact that PRs become submissions of packages.
This solves the other issue that I was having with the system, which is how do we get all your awesome ideas (so many of you have submitted awesome PRs) into the PAI system?
This is the answer.
People submit PAI Packages, we do some kind of vetting of them, and then we put them into the structure with tagging and labeling, etc., so that they can be discovered and used by others!
I'm so massively excited for this!
Going forward
This is what we're transitioning to for the project, and I'll be restructuring the repo accordingly soon. It'll look something like this.
Stay sharp, more to follow. And in the meantime please put your thoughts and ideas below!
Thank you all so much for caring about the mission of augmenting yourselves and others using AI.
-Daniel