
# AI Experiment Log

Track your AI experiments and learnings using this template.


## Experiment Details

| Field | Value |
| --- | --- |
| Experiment ID | EXP-___ |
| Initiative | (Link to AI Idea Canvas) |
| Experimenter | |
| Date Started | |
| Date Completed | |
| Status | 🔬 Running / ✅ Complete / ⏸️ Paused / ❌ Stopped |

## 1. Hypothesis

**We believe that...**

*State what you're testing*

**...will result in...**

*State the expected outcome*

**...for...**

*State who benefits*

**We'll know this is true when...**

*Define success metrics*

| Metric | Target | How Measured |
| --- | --- | --- |
| | | |

## 2. Method

### What we built/configured

*Describe the technical setup*

### Tools and technologies used

| Component | Tool/Service |
| --- | --- |
| AI Model | |
| Data Source | |
| Infrastructure | |
| Interface | |

### How we tested

*Describe the testing approach*

- [ ] A/B test
- [ ] Pilot with subset of users
- [ ] Shadow mode (parallel to existing)
- [ ] Full replacement
- [ ] Other: _______________

### Who participated

*List participants/users*

| Group | Size | Selection Criteria |
| --- | --- | --- |
| | | |

### Duration

| Phase | Dates |
| --- | --- |
| Setup | |
| Testing | |
| Analysis | |

## 3. Results

### Quantitative Results

| Metric | Target | Actual | Variance |
| --- | --- | --- | --- |
| | | | |

**Did we hit our targets?**

- [ ] Yes — All targets met or exceeded
- [ ] Partially — Some targets met
- [ ] No — Targets not met

### Qualitative Observations

*What did we observe that wasn't captured in metrics?*

### User Feedback

*Direct quotes or summary of feedback*

### Surprises

*Unexpected findings (good or bad)*


## 4. Learnings

### What worked well

*Things to keep/repeat*

### What didn't work

*Things to avoid/fix*

### What we'd do differently

*Improvements for next time*

### Key Insight

*The single most important learning*


## 5. Technical Notes

### Configuration Details

*Settings, parameters, versions*

```
Model:
Version:
Parameters:
  -
  -
```

### Data Quality Issues

*Any problems with input data*

### Performance Observations

*Speed, reliability, cost*

| Metric | Observed |
| --- | --- |
| Response time | |
| Error rate | |
| Cost per transaction | |
| Throughput | |

### Integration Challenges

*Technical difficulties encountered*


## 6. Recommendation

### Decision

- [ ] **Scale Up** — Move to full deployment
- [ ] **Iterate** — Run another experiment with changes
- [ ] **Pivot** — Test a different approach
- [ ] **Stop** — This approach won't work

### Rationale

*Why this decision?*

### If Scaling Up

| Item | Details |
| --- | --- |
| Estimated timeline | |
| Resources needed | |
| Dependencies | |
| Risks to address | |

### If Iterating

| Change | Rationale |
| --- | --- |
| | |

### If Pivoting

New hypothesis to test:

### If Stopping

Learnings to preserve:


## 7. Next Steps

| Action | Owner | Due Date | Status |
| --- | --- | --- | --- |
| | | | |

## 8. Artifacts

### Documentation Created

| Document | Location |
| --- | --- |
| Technical spec | |
| User guide | |
| Training materials | |

### Code/Config

| Repository | Branch/Tag |
| --- | --- |
| | |

### Data

| Dataset | Location | Retention |
| --- | --- | --- |
| | | |

## Approval to Proceed

| Role | Name | Date | Decision |
| --- | --- | --- | --- |
| Experiment Owner | | | |
| Technical Reviewer | | | |
| Business Sponsor | | | |

## Notes

---

*Template version 1.0 — From Strata AI Portfolio Framework*