Curated collection of industry-leading frameworks for AI, cloud architecture, and engineering excellence.
We don't reinvent the wheel. This playbook builds on proven frameworks from industry leaders.
### MIT CISR Enterprise AI Maturity Model

Source: MIT Center for Information Systems Research
Key Concepts:
- Four stages: Foundational → Integrated → Optimized → Transformative
- Financial performance improves at each stage
- Based on a survey of 721 companies
How We Use It: Foundation for our AI Maturity Assessment
### OWASP AI Maturity Assessment (AIMA)

Source: OWASP Project
Key Concepts:
- Security and responsible AI focus
- Based on the OWASP SAMM methodology
- Eight practices across AI lifecycle
How We Use It: Governance and security dimensions of our assessments
### Gartner AI Maturity Model

Source: Gartner Research (subscription required)
Key Concepts:
- Five levels: Awareness → Active → Operational → Systemic → Transformational
- Focus on value realization and organizational change
- Industry benchmarking
How We Use It: Client positioning and industry comparisons
### MITRE AI Maturity Model

Source: MITRE AI Resources
Key Concepts:
- Six pillars: Ethical Use, Strategy, Organization, Technology Enablers, Data, Performance Measurement
- Structured questionnaires for each pillar
- Five maturity levels per pillar
How We Use It: Detailed dimension assessments
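A questionnaire-per-pillar assessment like this tabulates naturally. A minimal sketch, assuming 1–5 answers per question; the pillar names follow the MITRE list above, while the responses and scoring rule are illustrative, not MITRE's actual instrument:

```python
# Illustrative only: average each pillar's 1-5 questionnaire answers
# into a maturity score. Pillar names are from the MITRE model; the
# sample responses are hypothetical.
PILLARS = [
    "Ethical Use", "Strategy", "Organization",
    "Technology Enablers", "Data", "Performance Measurement",
]

def pillar_maturity(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average each pillar's answers into a 1-5 maturity score."""
    return {p: round(sum(r) / len(r), 1) for p, r in responses.items() if r}

sample = {
    "Strategy": [3, 4, 3],  # hypothetical answers
    "Data": [2, 2, 3],
}
print(pillar_maturity(sample))  # {'Strategy': 3.3, 'Data': 2.3}
```

In a real engagement the weighting and level thresholds would come from the framework's questionnaires, not a flat average.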
### AWS Well-Architected Framework

Source: AWS Well-Architected
Pillars:
- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimization
- Sustainability (added 2021)
How We Use It: Platform-agnostic adaptation in our Well-Architected Review
### Azure Well-Architected Framework

Source: Microsoft Learn
Pillars:
- Reliability
- Security
- Cost Optimization
- Operational Excellence
- Performance Efficiency
How We Use It: Azure-specific implementations in our Azure CoE
### Google Cloud Architecture Framework

Source: Google Cloud
Pillars:
- System Design
- Operational Excellence
- Security, Privacy, and Compliance
- Reliability
- Cost Optimization
- Performance Optimization
How We Use It: GCP-specific guidance and comparisons
### TOGAF

Source: The Open Group
Key Concepts:
- Architecture Development Method (ADM)
- Enterprise Continuum
- Architecture Repository
How We Use It: Enterprise-wide architecture governance
### Zachman Framework

Source: Zachman International
Key Concepts:
- 6x6 matrix of perspectives and abstractions
- What, How, Where, Who, When, Why
How We Use It: Stakeholder communication and documentation structure
### CNCF Cloud Native Landscape

Source: CNCF Landscape
Key Concepts:
- Comprehensive map of cloud-native technologies
- Graduated, incubating, and sandbox projects
- Community-driven standards
How We Use It: Technology selection and comparison
### ThoughtWorks Technology Radar

Source: ThoughtWorks Radar
Key Concepts:
- Four rings: Adopt, Trial, Assess, Hold
- Four quadrants: Techniques, Platforms, Tools, Languages & Frameworks
- Updated twice a year
How We Use It: Technology recommendations in our decision frameworks
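The rings-and-quadrants taxonomy is easy to reuse for a team-internal radar. A minimal sketch; the sample blips and their placements are illustrative, not ThoughtWorks' actual assessments:

```python
from enum import Enum

class Ring(Enum):       # ThoughtWorks' four rings
    ADOPT = "adopt"
    TRIAL = "trial"
    ASSESS = "assess"
    HOLD = "hold"

class Quadrant(Enum):   # and four quadrants
    TECHNIQUES = "techniques"
    PLATFORMS = "platforms"
    TOOLS = "tools"
    LANGUAGES_AND_FRAMEWORKS = "languages & frameworks"

# A team-internal radar is just (blip, quadrant, ring) entries.
radar = [
    ("Terraform", Quadrant.TOOLS, Ring.ADOPT),        # hypothetical placement
    ("SomeNewDB", Quadrant.PLATFORMS, Ring.ASSESS),   # hypothetical placement
]

adopt = [name for name, _, ring in radar if ring is Ring.ADOPT]
print(adopt)  # ['Terraform']
```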
### Site Reliability Engineering (SRE)

Source: Google SRE Books
Key Concepts:
- Error budgets
- Service Level Objectives (SLOs)
- Toil elimination
- Blameless postmortems
How We Use It: Operational excellence and reliability engineering
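An error budget falls straight out of an SLO target: the budget is whatever unreliability the SLO still permits. A minimal sketch (the SLO values below are just examples):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given SLO (e.g. 0.999)."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% SLO over a 30-day window leaves roughly 43.2 minutes of budget:
print(round(error_budget_minutes(0.999), 1))  # 43.2
```

Once the budget is spent, the SRE playbook shifts effort from feature work to reliability until the budget recovers.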
### Team Topologies

Source: teamtopologies.com
Key Concepts:
- Four team types: Stream-aligned, Platform, Enabling, Complicated Subsystem
- Three interaction modes: Collaboration, X-as-a-Service, Facilitating
- Cognitive load management
How We Use It: Organization design and team structure recommendations
### Data Mesh

Source: Zhamak Dehghani
Key Concepts:
- Domain-oriented ownership
- Data as a product
- Self-serve data platform
- Federated computational governance
How We Use It: Data architecture patterns in our architecture principles
### MLOps Maturity Model

Source: Microsoft MLOps
Key Concepts:
- Five levels from no MLOps to full automation
- Focus on reproducibility, traceability, and governance
How We Use It: AI infrastructure assessments
### Medallion Architecture

Source: Databricks
Key Concepts:
- Bronze (raw) → Silver (cleaned) → Gold (curated)
- Incremental data quality improvement
- Lakehouse pattern foundation
How We Use It: Data platform design recommendations
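The bronze → silver → gold progression can be sketched without committing to an engine. In practice these layers would be Delta tables processed incrementally with Spark; the records and cleaning rules below are illustrative:

```python
# Bronze: raw ingested events, duplicates and bad records included.
bronze = [
    {"order_id": 1, "amount": "10.5", "region": "EU"},
    {"order_id": 1, "amount": "10.5", "region": "EU"},  # duplicate
    {"order_id": 2, "amount": None,   "region": "US"},  # bad record
    {"order_id": 3, "amount": "7.0",  "region": "EU"},
]

# Silver: deduplicate on the business key, drop bad records, cast types.
seen, silver = set(), []
for row in bronze:
    if row["amount"] is None or row["order_id"] in seen:
        continue
    seen.add(row["order_id"])
    silver.append({**row, "amount": float(row["amount"])})

# Gold: business-level aggregate (revenue per region).
gold: dict[str, float] = {}
for row in silver:
    gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]

print(gold)  # {'EU': 17.5}
```

The point is that each layer raises data quality incrementally instead of cleaning everything at ingest.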
### NIST Cybersecurity Framework (CSF)

Source: NIST CSF
Key Concepts:
- Five functions: Identify, Protect, Detect, Respond, Recover
- Implementation tiers
- Framework profiles
How We Use It: Security assessments and governance
### OWASP Top 10

Source: OWASP Top 10
Key Concepts:
- Top 10 web application security risks
- Updated every 3-4 years
- Includes remediation guidance
How We Use It: Security reviews and developer training
### Zero Trust Architecture

Source: NIST SP 800-207
Key Concepts:
- Never trust, always verify
- Assume breach
- Least privilege access
How We Use It: Security architecture patterns in our architecture principles
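"Never trust, always verify" amounts to deny-by-default policy evaluation on every request. A minimal sketch; the attribute names and the role-to-action policy are hypothetical, not from SP 800-207:

```python
def authorize(request: dict, policy: dict) -> bool:
    """Deny by default: allow only when identity is verified, the device
    is compliant, and the action is explicitly granted (least privilege)."""
    if not request.get("identity_verified"):
        return False
    if not request.get("device_compliant"):
        return False
    allowed = policy.get(request.get("role"), set())
    return request.get("action") in allowed

policy = {"analyst": {"read:reports"}}  # hypothetical least-privilege grants
req = {"identity_verified": True, "device_compliant": True,
       "role": "analyst", "action": "read:reports"}

print(authorize(req, policy))                             # True
print(authorize({**req, "action": "delete:db"}, policy))  # False
```

Note there is no "trusted network" branch anywhere: location never substitutes for verification.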
### Framework Selection Guide

| Client Need | Recommended Framework |
|---|---|
| AI strategy and roadmap | MIT CISR + Our AI Maturity Assessment |
| Cloud migration | AWS/Azure/GCP Well-Architected + Cloud Readiness |
| AI security and governance | OWASP AIMA + NIST AI RMF |
| Enterprise architecture | TOGAF + Zachman |
| DevOps transformation | CNCF + SRE + Team Topologies |
| Data platform modernization | Data Mesh + Medallion Architecture |
Most engagements use 2-3 frameworks:
- Primary: Domain-specific (e.g., MIT CISR for AI, TOGAF for EA)
- Secondary: Platform-specific (e.g., Azure Well-Architected)
- Tertiary: Industry validation (e.g., ThoughtWorks Radar)
Key industry conferences:
- KubeCon + CloudNativeCon
- AWS re:Invent
- Microsoft Ignite
- Google Cloud Next
Relevant certifications:
- AWS Solutions Architect
- Azure Solutions Architect Expert
- Google Cloud Professional Cloud Architect
- Certified Kubernetes Administrator (CKA)
- TOGAF Certified
Remember: Frameworks are tools, not dogma. Adapt them to your client's context and maturity level.