Consider an agent making a financial transaction via `curl`. The LLM generates the full request, so it sees API keys, account numbers, PII, and credentials in plain text:
```bash
# What the LLM generates and sees in context:
curl -X POST https://api.bank.com/v1/transfers \
  -H "Authorization: Bearer sk_live_abc123xyz789" \
  -H "Content-Type: application/json" \
  -d '{
    "from_account": "acct_8847291034",
    "to_account": "acct_recipient_456",
    "amount": 50000,
    "routing_number": "021000021",
    "account_holder": "Jane Smith",
    "address": "123 Main St, Apt 4B, New York, NY 10001",
    "ssn_last_four": "7890"
  }'
```

With Agent Actions, the LLM only sees:
```bash
enact run acme/banking/transfer '{"to": "acct_recipient_456", "amount": 50000}'
```

The action definition handles secrets securely:
```yaml
env:
  BANK_API_KEY:
    secret: true
    required: true
actions:
  - name: transfer
    description: Transfer funds to another account
    command: ["python", "transfer.py", "--to", "{{to}}", "--amount", "{{amount}}"]
    inputSchema:
      type: object
      required: [to, amount]
      properties:
        to:
          type: string
        amount:
          type: integer
          maximum: 100000
```

- **Secrets declared, never exposed**: The `env` field with `secret: true` declares credentials upfront
- **Runtime injection**: Secrets are injected at execution time from secure storage (OS keyring, encrypted vault)
- **Schema validation**: Input constraints (like `maximum: 100000`) are enforced before execution
- **No plaintext storage**: Secrets are never written to disk in plaintext or logged
- **LLM isolation**: Sensitive details stay in the action implementation, not the conversation
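The runtime flow behind these guarantees can be sketched in a few lines of Python. This is an illustrative sketch, not the Enact implementation: the function names and the `resolve_secret` callback are hypothetical stand-ins for the runtime's secure store, and the schema check is hand-rolled to mirror the `inputSchema` above.

```python
import json
import os

# Constraint mirrored from the action's inputSchema above.
MAX_AMOUNT = 100000

def validate_input(raw: str) -> dict:
    """Reject malformed or out-of-range input before any command runs."""
    params = json.loads(raw)
    if not isinstance(params.get("to"), str):
        raise ValueError("'to' must be a string")
    amount = params.get("amount")
    if not isinstance(amount, int) or isinstance(amount, bool):
        raise ValueError("'amount' must be an integer")
    if amount > MAX_AMOUNT:
        raise ValueError(f"'amount' exceeds maximum of {MAX_AMOUNT}")
    return params

def build_execution(params: dict, resolve_secret) -> tuple[list[str], dict]:
    """Expand the command template and inject secrets at execution time.

    `resolve_secret` stands in for the runtime's secure storage (OS keyring,
    encrypted vault); the credential never appears in the LLM's context.
    """
    command = ["python", "transfer.py",
               "--to", params["to"], "--amount", str(params["amount"])]
    env = dict(os.environ)
    env["BANK_API_KEY"] = resolve_secret("BANK_API_KEY")
    return command, env
```

The key property is ordering: validation happens before the secret is ever resolved, and the secret lands only in the subprocess environment, never in the conversation.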