Product-oriented Teams, Conway's Law, the Model Context Protocol & GPTs
For decades, Conway's Law has been a cornerstone of software architecture wisdom: organizations design systems that mirror their own communication structures. It’s a deceptively simple observation, yet its implications are profound and, as many of us have experienced, seemingly inescapable. No matter the technological advancements, the sheer gravity of organizational design and human communication patterns has meant that this law remains a fundamental truth.
But what if we could dramatically change the efficiency and nature of those communication pathways, especially for critical support functions? My hypothesis is this: While Conway's Law will likely never be superseded in system design, the way enabling teams support product-oriented organizations is ripe for a revolution. Through emerging systems like the Model Context Protocol (MCP) and the power of Custom GPTs, we can now externalize and scale expert knowledge far more effectively, allowing it to dovetail directly into the value streams of product teams.
This isn't about fighting Conway's Law; it's about making the "mirrored system" an order of magnitude more efficient and responsive.
The Enduring Truth of Conway's Law
Melvin Conway's 1968 paper, "How Do Committees Invent?", laid it bare. If you have four teams working on a compiler, you'll get a four-pass compiler. This is because the interfaces between system components inevitably reflect the communication interfaces (or lack thereof) between the teams building them. Organizational inertia, cognitive load limits, and the very real challenges of cross-team communication ensure this law's persistence. Even as we've moved through monoliths, microservices, and serverless, the architectures often still tell a story about who talks to whom and how. Research has even shown this "mirroring hypothesis" holds strong, with systems reinforcing the very structures that created them.
Enabling Teams: The Traditional Bottleneck in a Value-Stream World
In modern product-driven organizations, particularly those embracing concepts like Team Topologies, enabling teams play a vital role. These are specialists in areas like platform engineering, security, SRE, or specific complex domains. Their mission is to help stream-aligned (product) teams deliver value faster and more autonomously by providing tools, expertise, and reducing their cognitive load.
Traditionally, this enablement has happened through:
- Direct consultation and pairing: Highly effective but doesn't scale.
- Workshops and training: Good for foundational knowledge but can be generic and time-consuming.
- Extensive documentation and wikis: Often hard to navigate, quickly outdated, and a passive form of knowledge transfer.
- Building shared libraries or tools: Valuable, but the learning curve and integration can still be significant.
The challenge? Enabling teams often become bottlenecks. Their deep expertise is in high demand across multiple product teams, leading to queues, context switching, and delays for teams trying to move quickly within their value streams. Scaling enabling teams linearly with the number of product teams is rarely sustainable, so a shift is needed: transforming them from potential roadblocks into scalable sources of knowledge and capability.
MCP and Custom GPTs: Externalizing Expertise as a Service
This is where the new wave of technology offers that profound shift.
Model Context Protocol (MCP): The Standardized Interface
Imagine a "USB-C for AI integrations." That's essentially what the Model Context Protocol (MCP) aims to be. As an emerging open standard, MCP is designed to allow AI models (like our Custom GPTs) to interact seamlessly and efficiently with diverse applications, data sources, and tools.
For our purposes, MCP provides a standardized way for the "knowledge" encapsulated by an enabling team's AI to be accessed and understood by the systems and tools product teams use (a minimal server sketch follows the list below). It facilitates:
- Simplified Integration: Reducing bespoke integration efforts.
- Enhanced Context Handling: Allowing AI to maintain context across interactions, crucial for complex problem-solving.
- Stateful Interactions: Supporting ongoing, multi-step processes.
- Token Efficiency: Optimizing calls to Large Language Models (LLMs), which can translate to cost savings and speed.
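To make that concrete, here is a minimal sketch of how an enabling team might expose a single capability as an MCP tool, using the `FastMCP` helper from the official Python MCP SDK. The server name, error codes, and lookup logic are hypothetical stand-ins for a real internal knowledge base.

```python
# A minimal MCP server sketch, assuming the official Python MCP SDK
# (pip install mcp). Server name, tool, and data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("platform-enablement")

# Hypothetical internal knowledge base of platform error codes.
ERROR_GUIDE = {
    "E-DEPLOY-401": "Pipeline token expired; re-run the auth bootstrap job.",
    "E-DEPLOY-503": "Platform ingress is rate-limiting; retry with backoff.",
}

@mcp.tool()
def explain_platform_error(code: str) -> str:
    """Explain an internal platform error code and suggest a fix."""
    return ERROR_GUIDE.get(code, f"Unknown code {code}; escalate to the platform team.")

if __name__ == "__main__":
    mcp.run()  # Serves the tool over stdio for any MCP-capable client.
```

Any MCP-capable client (an IDE assistant, a chatops bot, a CI job) can then discover and call this tool without bespoke integration work.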
Custom GPTs/Gems: The Knowledge Engine
Custom Generative Pre-trained Transformers (GPTs), or their counterparts on other platforms (such as Google Gemini's "Gems"), can be grounded in an organization's specific, internal knowledge base. This isn't just public internet data; this is your best practices, your codebase conventions, your platform intricacies, your security playbooks, and your troubleshooting guides.
An enabling team can now "teach" a Custom GPT:
- Platform Engineering GPT: Answers questions about deploying to the internal platform, generates compliant pipeline configurations, or explains specific platform error messages.
- Security Champion GPT: Advises on secure coding practices for a given language, helps fill out security review questionnaires based on project details, or points to relevant internal security policies.
- Data Science Environment GPT: Guides users on setting up their environment, accessing specific datasets ethically, or using pre-approved modeling libraries.
- Legacy System Modernization GPT: Trained on the old codebase and modernization patterns, it could suggest refactoring approaches or explain obscure parts of the legacy system.
GitHub Copilot, when given access to an organization's private repositories and internal context, is an early, potent example of this externalized knowledge in action, providing code completions aligned with internal standards.
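As a rough illustration of the pattern (not any vendor's official recipe), here is a sketch of a "Platform Engineering GPT" built by grounding a general model in retrieved internal context. The retrieval function, model choice, and document content are assumptions for illustration.

```python
# Sketch: grounding a general model in internal knowledge to act as a
# "Platform Engineering GPT". The retrieval step and its contents are
# hypothetical stand-ins for your organization's actual knowledge base.
from openai import OpenAI

client = OpenAI()  # Assumes OPENAI_API_KEY is set in the environment.

def retrieve_internal_docs(question: str) -> str:
    """Hypothetical retrieval over internal runbooks and conventions."""
    return "Deployments must use the golden pipeline template v3; staging requires a change ticket."

def ask_platform_gpt(question: str) -> str:
    context = retrieve_internal_docs(question)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are the Platform Engineering assistant. "
                        "Answer only from the internal context provided.\n\n"
                        f"Internal context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_platform_gpt("How do I deploy a new service to staging?"))
```

The enabling team owns the retrieval corpus and the system prompt; the product teams simply ask questions.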
The Synergy: Efficiently Dovetailing with Product Team Value Streams
When you combine MCP with Custom GPTs, enabling teams gain a powerful new mechanism to "export" their knowledge and capabilities. Instead of being a human bottleneck, they become curators and enhancers of an AI-powered knowledge service.
Product teams can now:
- Access expert guidance on-demand, 24/7: No more waiting for the enabling team expert to be free.
- Get context-specific assistance: The AI can be prompted with the product team's specific problem or code.
- Receive instant, actionable outputs: Boilerplate code, configuration snippets, diagnostic steps.
- Integrate this "knowledge service" directly into their IDEs, CI/CD pipelines, or team chatops via MCP-facilitated interactions (see the client sketch after this list).
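Here is a hedged sketch of the client side: a small script a product team could run from CI or a chatops bot to call the hypothetical `explain_platform_error` tool from the server sketch above (the server file name is an assumption).

```python
# Sketch of an MCP client calling an enabling team's tool, assuming the
# official Python MCP SDK and the hypothetical server from earlier.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the (hypothetical) enabling-team server as a subprocess.
    params = StdioServerParameters(command="python", args=["platform_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "explain_platform_error", {"code": "E-DEPLOY-401"})
            print(result.content)  # e.g. guidance pulled from internal knowledge

asyncio.run(main())
```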
This directly accelerates the product team's value stream by reducing friction, minimizing rework, and ensuring adherence to organizational best practices from the outset. Enabling teams, freed from repetitive Q&A, can focus on more strategic work: evolving the platform, researching new capabilities, and, crucially, continuously improving and expanding the knowledge base of their Custom GPTs.
Conway's Law Still Reigns, But the Communication is Hyper-Efficient
This new model doesn't break Conway's Law. The organizational structure—with distinct product teams and specialized enabling teams—still exists. The system architecture, now including these AI-driven interfaces facilitated by MCP, still mirrors this structure.
What has changed is the communication channel and its efficiency. The "interface" to the enabling team, for many common needs, becomes this intelligent, automated system. It's a highly optimized communication pathway designed to handle a large volume of interactions that previously required direct human intervention. The system reflects the organization's intent to provide specialized knowledge, but it does so in a massively scalable way.
Dunbar's number suggests a cognitive limit on the number of stable relationships a person can maintain. By routing routine requests through AI-powered interfaces rather than direct human interaction, the cognitive load on both enabling and product teams drops, letting them operate closer to optimal efficiency and avoiding the bottlenecks and delays that come from stretching team members beyond those social cognitive limits.
Challenges and the Path Forward
This vision isn't without its hurdles:
- Accuracy and Reliability: GPTs can "hallucinate." Ensuring the accuracy of the information they provide is paramount. Rigorous training, fine-tuning, and human-in-the-loop validation are essential (a simple regression harness is sketched after this list).
- Knowledge Maintenance: The Custom GPT is only as good as the knowledge it's trained on. Processes for keeping this knowledge current are critical.
- Cost and Complexity: Developing, training, and maintaining these systems requires investment and new skills.
- Over-Reliance and Skill Atrophy: Teams must still cultivate their own understanding, using GPTs as accelerators, not crutches.
- Ethical Considerations: Bias in training data, data privacy (especially with MCP connecting to various sources), and transparency in AI decision-making must be proactively addressed.
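Several of these concerns, accuracy and knowledge freshness in particular, lend themselves to automation. Below is a hedged sketch of a regression-style evaluation harness: a curated golden set of questions the enabling team re-runs whenever the model or knowledge base changes. The golden set, pass threshold, and `ask_platform_gpt` helper (from the earlier sketch) are all hypothetical.

```python
# Sketch of a lightweight evaluation harness for a Custom GPT's answers.
# The golden set and the ask_platform_gpt helper are hypothetical.

GOLDEN_SET = [
    # (question, key fact the answer must contain to count as correct)
    ("How do I deploy to staging?", "golden pipeline template v3"),
    ("What does E-DEPLOY-401 mean?", "token expired"),
]

def evaluate(ask) -> float:
    """Return the fraction of golden questions answered with the key fact."""
    passed = 0
    for question, required_fact in GOLDEN_SET:
        answer = ask(question)
        if required_fact.lower() in answer.lower():
            passed += 1
        else:
            print(f"FAIL: {question!r} -> missing {required_fact!r}")
    return passed / len(GOLDEN_SET)

# Run in CI whenever the knowledge base changes; gate releases on a
# threshold and route failures to a human reviewer.
if __name__ == "__main__":
    from platform_gpt import ask_platform_gpt  # hypothetical module
    score = evaluate(ask_platform_gpt)
    assert score >= 0.95, f"Knowledge regression: only {score:.0%} passed"
```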
The Future of Enablement is Augmented
Conway's Law will continue to shape the systems we build because it reflects fundamental truths about how humans organize and collaborate. However, the advent of technologies like Model Context Protocol and the power of Custom GPTs offer a paradigm shift for enabling teams. They can move from being constrained sources of expertise to scaled exporters of knowledge and capability.
By embracing these tools, organizations can create far more efficient and responsive internal ecosystems, allowing product teams to innovate faster while still benefiting from the deep expertise of their enabling functions. The law remains, but the landscape of interaction is being excitingly redrawn.