
Thoughts and insights on technology, leadership, and software engineering.

Context Engineering on Your Terms
Runtime proxies promise to compress your AI context window automatically. A file-based approach trades that convenience for something more valuable: visibility into exactly what your AI sees.
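
What "file-based" buys you is easiest to see in code. The sketch below is hypothetical (the directory layout, the build_context name, and the character budget are assumptions, not the post's implementation); it illustrates deterministic, inspectable context assembly:

```python
from pathlib import Path

def build_context(context_dir: str, budget_chars: int = 32_000) -> str:
    """Concatenate curated context files in a fixed, sorted order.

    Nothing is compressed or reordered behind your back: the returned
    string is exactly what the model will see, and the cutoff point is
    explicit rather than decided by a proxy at runtime.
    """
    parts: list[str] = []
    used = 0
    for path in sorted(Path(context_dir).glob("*.md")):
        text = path.read_text(encoding="utf-8")
        if used + len(text) > budget_chars:
            # Visible cutoff: you know which files were omitted and why.
            print(f"budget reached; omitting {path.name} and later files")
            break
        parts.append(f"<!-- source: {path.name} -->\n{text}")
        used += len(text)
    return "\n\n".join(parts)
```

The trade is the one named above: you give up automatic compression, and in exchange every token the model receives traces back to a file you can open and read.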

When I asked ChatGPT to review a post critical of OpenAI, it applied disproportionately strict editorial standards that shifted depending on which company was being critiqued. It eventually admitted the asymmetry.

Organizations conflate AI Governance and AI Enablement, treating fundamentally different capabilities as the same job. Governance manages risk through guardrails and approval processes. Enablement builds capability through training, coaching, and change management. You need both, but they require different skills, and conflating them means one will fail.

The hockey-stick curves toward AGI that populate pitch decks assume that scaling LLMs will yield qualitatively different results. Three metaphors illuminate the structural limits: the verification wall, the interpolation ceiling, and the perception gap between demos and production.

Shadow AI represents a fundamental shift in how unauthorized technology enters the enterprise. Where shadow IT required deliberate procurement decisions, shadow AI often arrives embedded in existing approved platforms, creating governance challenges that demand new approaches to discovery and containment.

Recent research exposed over 3,000 MCP servers through a single path traversal vulnerability in centralized infrastructure. Every AI integration creates security debt we have few ways to track, and most organizations are flying blind on what's actually running.
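
Path traversal is a well-understood bug class, and a short sketch shows why one instance in centralized infrastructure can expose thousands of servers at once. This is a generic illustration, not the vulnerable MCP code; safe_resolve and its parameters are hypothetical names:

```python
import os

def safe_resolve(base_dir: str, requested: str) -> str:
    """Resolve a client-supplied path, refusing anything that escapes
    base_dir. The classic traversal bug is joining user input onto a
    base path without this check, letting "../../etc/passwd" walk out
    of the intended directory."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, requested))
    # realpath collapses ../ segments and symlinks before the comparison
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes base directory: {requested!r}")
    return target
```

When a check like this lives in one shared hosting layer, a single miss is multiplied across every server deployed on it, which is the scaling effect the research describes.
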
Rebuilding MADE, Inc. for the Age of AI: a blueprint for transforming a dormant consulting practice into an AI-powered assessment platform that demonstrates its own methodology through action, showing how modern development practices make such platforms possible.

Operationalizing governance at scale: moving from pilot purgatory to production deployment while maintaining control. Part 4 transforms NIST frameworks into sustainable operations that deliver consistent business value without collapsing under operational complexity.

Beyond trust theater: implementing metrics that actually matter for AI trustworthiness. Part 3 transforms measurement from technical performance dashboards to systematic evaluation of the seven NIST characteristics that determine whether AI systems are safe to deploy.

The real AI governance crisis isn't the models you've formally approved; it's the ones you don't know exist. Part 2 tackles the visibility gap that's creating compliance exposure and security risks.

Transforming the NIST AI Risk Management Framework from compliance theater to strategic enablement. Part 1 focuses on governance structures that accelerate AI adoption while managing real risks.

Exploring how AI integration is reshaping software development team structures, and how traditional PRD workflows are evolving into AI-augmented collaboration patterns.

McKinsey's latest research reveals a striking gap between GRC aspiration and implementation reality, strengthening the case for top-down approaches to risk management.

A multi-dimensional framework for maintaining disaster recovery and business continuity plans through incremental reviews, addressing the gap between documentation and actual recovery capabilities.

Examining how traditional risk frameworks apply to emerging AI technologies, with a focus on agent-to-agent communication systems and multi-context planning.