
The Qwesty Lens: Reading Between the Lines of Maturity Model Narratives

This article reflects industry practice and data as of its last update in April 2026. Maturity models are ubiquitous in business strategy, from cybersecurity to DevOps. Yet in my 15 years of consulting, I've seen more organizations misled by their glossy narratives than empowered by them. The Qwesty Lens is a framework I've developed to cut through the marketing hype and vendor spin, helping leaders distinguish a genuine path to capability from a mere compliance checklist. In this comprehensive guide, I'll show you how to apply it to your own strategy.

Introduction: The Siren Song of the Perfect Score

In my practice, I've sat across the table from countless CTOs and VPs proudly displaying a "Level 4" or "Optimizing" rating from a prominent maturity model assessment. Their teams have spent months, sometimes years, checking boxes and implementing prescribed tools. Yet, when we peel back the layers, their actual operational resilience, innovation speed, or security posture hasn't materially improved. This dissonance is what led me to develop the Qwesty Lens. The core pain point isn't a lack of effort; it's a fundamental misalignment between the model's narrative and the organization's unique quest for capability. Maturity models, by their nature, are reductionist. They compress the messy, human, and contextual journey of growth into a neat, linear staircase. I've found that organizations become so fixated on climbing to the next rung that they forget to ask: "Is this ladder leaning against the right wall?" This guide is my attempt to equip you with the critical thinking tools I use with my clients—tools to read between the lines, question underlying assumptions, and align maturity efforts with genuine business outcomes, not just benchmark scores.

The Allure and The Trap

The initial appeal is undeniable. A maturity model offers a seemingly objective roadmap out of chaos. For a leadership team feeling overwhelmed, it provides structure and a common language. I recall a 2022 engagement with a mid-sized fintech, "Company Alpha," whose new CISO had mandated a rapid climb up a popular cybersecurity framework. They achieved a high score within 18 months by investing heavily in tooling and documentation. However, during a simulated breach exercise I facilitated, their incident response was slow and siloed. The model had made them compliant on paper but hadn't fostered the cross-team communication and adaptive decision-making needed in a real crisis. The score was a trap, creating a false sense of security that actually increased their risk profile. This experience cemented my belief that uncritical adoption of these models is a strategic liability.

Shifting from Compliance to Capability

The pivotal mindset shift I coach my clients through is moving from a compliance mindset ("We need to prove we're Level 3") to a capability mindset ("We need to reliably detect threats within 5 minutes"). The former is about auditing against a static standard; the latter is about developing dynamic, internal competencies that deliver value regardless of an external benchmark. This shift changes every conversation, from tool selection to KPI design. It's the heart of the Qwesty approach: treating the maturity model not as a destination, but as one of many maps to be consulted critically on your specific organizational quest.

Deconstructing the Model: Anatomy of a Narrative

Before you can read between the lines, you must understand what the lines are saying—and, more importantly, what they're omitting. In my analysis work, I break down every maturity model into its core narrative components. First, there's the Structural Axis: the defined levels (e.g., Initial, Repeatable, Defined, Managed, Optimizing). I've found that the progression between these levels often implies a causality that may not exist. For instance, moving from "Defined" to "Managed" typically prescribes increased measurement. But what if your processes at "Defined" are flawed? Measuring a broken process more efficiently, a pitfall I've witnessed, just gets you to the wrong answer faster. Second, there's the Domain Matrix: the capabilities or practice areas (e.g., "Risk Management," "Deployment Automation"). The model's authors have decided what matters, which inherently means they've decided what doesn't. A major trend I see is models heavily weighting technological capabilities while underweighting human and cultural factors like psychological safety or learning systems.

The Hidden Vendor Agenda

Perhaps the most critical element to scrutinize is the model's provenance. Is it from a vendor whose revenue depends on selling the tools that "Level 4" requires? I've reviewed models where the prescribed capabilities for higher maturity levels mapped suspiciously well to the vendor's premium product suite. In one case, a DevOps model heavily emphasized a specific type of artifact repository in its "Managed" stage; coincidentally, the model's creator sold a market-leading product in that category. This isn't necessarily malicious, but it creates a conflict of interest that you, as the consumer, must be aware of. The Qwesty Lens demands you ask: "Who benefits from me accepting this narrative as truth?"

The Qualitative Gap in Quantitative Clothing

Many models use numbers and scores to create an illusion of precision. You might be rated a "2.7" on "Continuous Testing." But what does 0.3 of a testing capability look like? This false quantification can obscure qualitative realities. A client in 2023, a SaaS platform I'll call "BetaTech," scored highly on "Automated Deployment" because they had a robust CI/CD pipeline. However, their deployment success rate was poor: developers feared the complex, brittle process, so they deployed infrequently and in large, risky batches. The quantitative score said "mature," but the qualitative behavior—fear and avoidance—was the true indicator of immaturity. My intervention focused not on more automation, but on simplifying the process and improving feedback loops, which the model had entirely overlooked.

The Qwesty Lens Framework: Your Three Core Filters

Applying the Qwesty Lens is a deliberate practice of filtering any maturity model narrative through three core perspectives. I didn't develop these in academia; they emerged from repeated cycles of trial, error, and reflection with my clients. Filter One: Context Over Compliance. This asks, "How does this prescribed capability interact with our unique business context, constraints, and culture?" A model might dictate a centralized governance body for architecture. For a large, regulated bank, this is sensible. For a 50-person startup in hyper-growth, it could be fatal. I helped a scaling e-commerce company reject this part of a model, opting instead for a lightweight guild system, which preserved their agility while still improving design coherence.

Filter Two: Trajectory Over Tier

This is perhaps the most powerful shift. Instead of asking "What tier are we?", ask "What is our trajectory and velocity through the capability landscape?" A team moving steadily from poor to good monitoring has a positive trajectory, even if they're still at "Level 2." Another team stuck at "Level 4" for two years has a flat trajectory, indicating stagnation. I track this with clients using simple directional metrics (e.g., trend lines on mean time to recovery, not just its absolute value). This filter reveals momentum, which is a far better predictor of future success than a static score. It aligns with research from the DevOps Research and Assessment (DORA) team, which found that elite performers are defined by their continuous improvement habits, not by hitting a one-time benchmark.
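To make this filter concrete, here is a minimal Python sketch of the kind of directional metric I build with clients. The quarterly MTTR figures are illustrative placeholders, not client data; the point is that the fitted slope, not the latest score, tells you the trajectory.

```python
# Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

# Quarterly mean-time-to-recovery samples in minutes (illustrative only).
mttr_by_quarter = [95, 80, 68, 55, 49]
quarters = list(range(len(mttr_by_quarter)))

# Fit a least-squares trend line; the slope is the trajectory, not the tier.
slope, _intercept = linear_regression(quarters, mttr_by_quarter)

if slope < 0:
    print(f"Improving: MTTR falling ~{abs(slope):.1f} min per quarter")
elif slope > 0:
    print(f"Regressing: MTTR rising ~{slope:.1f} min per quarter")
else:
    print("Flat trajectory: investigate stagnation")
```

A dashboard built this way surfaces stagnation (a near-zero slope at any level) just as loudly as outright regression, which is exactly what a static tier rating hides.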

Filter Three: Signals Over Scores

This filter moves you from auditing artifacts to observing behaviors. What are the qualitative signals of health or dysfunction that the model's score might miss? For "Collaboration," a score might be based on tool usage logs. The signals I look for include: Are post-mortems blameless? Do teams spontaneously help each other during incidents? Is knowledge shared in accessible wikis or trapped in tribal heads? In a 2024 project with a media company, their collaboration score was high, but the signal was negative—design decisions were made in closed-door meetings between architects. We worked to create open design review forums, a behavioral change that improved outcomes more than any tool ever could.

Comparative Analysis: Three Approaches to Model Adoption

In my experience, organizations typically fall into one of three patterns when engaging with maturity models. Understanding these helps you diagnose your own approach and its pitfalls. Approach A: The Blueprint Method. This is the most common. The model is treated as a literal blueprint for transformation. Teams work methodically through each capability area, often in a linear fashion. Pros: Provides clear structure, easy to track progress, and satisfies audit requirements. Cons: It's rigid, can lead to "checklist fatigue," and often ignores interdependencies between capabilities. It works best in highly regulated environments where demonstrable compliance is the primary goal, but I've found it stifling for organizations needing innovation.

Approach B: The Diagnostic Method

Here, the model is used as a periodic health check or diagnostic tool, not a continuous roadmap. An assessment is run annually to identify glaring weaknesses or regressions. Pros: Less overhead, prevents model obsession, and allows for organic growth between check-ins. Cons: It can be reactive, creating a "fire drill" atmosphere around assessments. Improvements may lack coherence. This method is ideal for relatively stable organizations with strong intrinsic motivation, but it risks missing subtle, decaying trends.

Approach C: The Qwesty-Inspired Catalyst Method

This is the approach I advocate and coach. The model is treated as a library of potential capabilities and a source of provocative questions. You use the Qwesty Lens to select only the most context-relevant elements as catalysts for focused improvement sprints. Pros: Highly adaptive, ties improvements directly to business value, and fosters critical thinking. Cons: Requires significant leadership judgment, can be harder to "report" on, and lacks the false comfort of a linear plan. I recommend it for organizations in dynamic markets or those undergoing genuine digital transformation, where adaptability is key. The table below summarizes the key differences:

Method | Core Philosophy | Best For | Primary Risk
--- | --- | --- | ---
Blueprint | "Follow the map precisely." | Heavily regulated industries (Finance, Healthcare) | Building an efficient, but irrelevant, system.
Diagnostic | "Get an annual physical." | Stable orgs with mature culture | Missing slow-burn capability decay.
Catalyst (Qwesty) | "Use the map to ask better questions." | Dynamic orgs, digital transformations | Appearing unfocused or lacking "proof" of maturity.

Step-by-Step: Applying the Lens to Your Strategic Planning

Let's translate theory into action. Here is the exact, step-by-step process I use when facilitating strategic planning sessions with my clients, incorporating the Qwesty Lens. Step 1: Model Deconstruction (Half-Day Workshop). Assemble key stakeholders and choose a relevant maturity model (e.g., CMMC, DORA, a DevOps model). Print the model out and work through it together, using the Qwesty Filters to interrogate it. For each high-level capability, ask: "Why is this here? Who defined it? What underlying problem is it trying to solve? Does that problem exist for us?" This isn't about agreement yet, just about understanding the narrative you're being sold.

Step 2: Contextual Capability Mapping (1-2 Days)

This is the heart of the work. Don't map your organization to the model. Instead, flip the script. On a whiteboard, list your top 3-5 strategic business outcomes for the next 18 months (e.g., "Reduce customer onboarding time by 50%," "Achieve compliance in EU market"). Now, for each outcome, brainstorm the specific organizational capabilities needed to achieve it. Only after this list is generated, look back at the maturity model. See where there is overlap. The model's elements that support your contextual capabilities are your high-priority items. Those that don't align are candidates for deprioritization, no matter what "level" they represent.
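To see the flip-the-script exercise in miniature, here is a toy Python sketch; every outcome, capability, and model element named below is a hypothetical placeholder. The set arithmetic is the whole point: overlap with your outcomes drives priority, and any model element with no outcome behind it becomes a deprioritization candidate.

```python
# Hypothetical strategic outcomes mapped to the capabilities they require.
outcome_capabilities = {
    "Reduce customer onboarding time by 50%": {
        "deployment automation", "self-service provisioning",
    },
    "Achieve compliance in EU market": {
        "audit logging", "data residency controls",
    },
}

# Capability areas named by the maturity model (hypothetical subset).
model_elements = {
    "deployment automation", "audit logging",
    "centralized CMDB", "service catalog",
}

needed = set().union(*outcome_capabilities.values())
high_priority = needed & model_elements   # model elements that serve an outcome
deprioritize = model_elements - needed    # model elements with no outcome behind them
blind_spots = needed - model_elements     # needs the model never mentions

print("High priority:", sorted(high_priority))
print("Deprioritize:", sorted(deprioritize))
print("Model blind spots:", sorted(blind_spots))
```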

Step 3: Define Qualitative Benchmarks & Signals (Ongoing)

For each chosen capability, define what success looks like in behavioral, qualitative terms, not just quantitative metrics. For "Improved Deployment Safety," a metric is "change failure rate." A qualitative benchmark is "Developers feel confident deploying on Fridays." A signal is "The deployment log shows steady activity throughout the week, not just Tuesday morning." I have clients create "Signal Dashboards" alongside their metric dashboards. This grounds the work in human reality.
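As a sketch of what one "Signal Dashboard" entry might look like, here is a small Python example that turns deployment log timestamps into the weekday-spread signal described above. The timestamps and their format are assumptions for illustration, not any particular pipeline's output.

```python
from collections import Counter
from datetime import datetime

# Hypothetical deployment timestamps pulled from a CI/CD log.
deploy_times = [
    "2026-03-02T10:15", "2026-03-03T09:40", "2026-03-03T14:05",
    "2026-03-05T16:30", "2026-03-06T11:20",  # a Friday deploy is itself a signal
]

by_weekday = Counter(
    datetime.fromisoformat(ts).strftime("%A") for ts in deploy_times
)

# A healthy signal is activity spread across the week, not clustered on one day.
for day in ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]:
    print(f"{day:9s} {'#' * by_weekday.get(day, 0)}")
```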

Step 4: Implement, Measure Trajectory, and Re-evaluate (Quarterly)

Execute focused improvements on your selected capabilities. Every quarter, don't just measure the score. Assess the trajectory. Are our qualitative signals improving? Has this capability impacted our strategic outcomes? Also, re-run Step 1 lightly. Has our business context changed? Does the model's narrative still make sense, or do we need to adjust our focus? This cyclical process ensures you remain agile and value-focused.

Real-World Case Studies: The Lens in Action

Theory is useful, but practice is convincing. Let me share two detailed case studies from my client work where applying the Qwesty Lens led to materially different outcomes. Case Study 1: The Over-Engineered SOC. In 2023, I was engaged by a healthcare technology provider, "MedSecure," who had just completed a multi-million dollar Security Operations Center (SOC) overhaul guided by a vendor's maturity model. They were rated "Advanced." Yet, breach detection time was poor, and analyst burnout was high. Applying the Lens, we found they had built for "Level 4" without mastering "Level 2" behaviors. They had a fancy SOAR platform but no clear, documented process for tier-1 analysts to triage common alerts. The model had them focused on automation before achieving basic clarity. We paused all new tooling for six months. My team worked with theirs to codify simple playbooks and improve alert quality. This boring, foundational work—which the model had glossed over—reduced mean time to triage by 70% and improved analyst job satisfaction dramatically. The "score" might have dipped temporarily, but the actual capability soared.

Case Study 2: The Startup Scaling Through Chaos

A contrasting case from last year involved "NexusFlow," a Series B startup with a brilliant product but engineering chaos. They wanted to adopt a DevOps maturity model as a blueprint. Using the Qwesty Lens in our first workshop, we identified that their single most important business outcome was "releasing a critical new API to capture a market window in 5 months." The model's prescribed first step was "establish a centralized CMDB." This was a context mismatch. Instead, we used the model as a catalyst to ask: "What capabilities would make our release pipeline predictable and fast?" We isolated three: basic deployment automation, environment consistency, and feature flagging. We ignored 90% of the model and ran a 10-week sprint on just those. They hit their market window. Later, they revisited the model for other needs, but from a position of strength and critical judgment. The model served them; they did not serve the model.

Common Pitfalls and How to Avoid Them

Even with the best framework, it's easy to stumble. Based on my observations, here are the most frequent pitfalls and my advice for navigating them. Pitfall 1: Confusing Maturity with Tool Acquisition. This is the most seductive error. The model says "Level 3 requires automated testing," so you buy an expensive testing platform. But maturity is about the practice and culture of testing, not the tool. I've seen teams with open-source stacks outperform those with six-figure tool suites because their engineers cared about quality. Antidote: Never approve a tool purchase for a maturity initiative without first defining the behavioral change and skill development it will enable. Pilot the process manually or with a simple script first.
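As one example of what "pilot with a simple script" can mean, here is a deliberately minimal sketch using only Python's standard library; the endpoints are hypothetical placeholders. If a team won't write and maintain something this small, a six-figure platform won't fix the underlying practice.

```python
# A minimal smoke test: pilot the *practice* of automated testing
# before committing to a commercial platform. Endpoints are hypothetical.
import sys
import urllib.request

ENDPOINTS = [
    "https://example.com/health",
    "https://example.com/api/status",
]

failures = 0
for url in ENDPOINTS:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:  # covers URLError, timeouts, connection failures
        ok = False
    print(f"{'PASS' if ok else 'FAIL'}  {url}")
    failures += 0 if ok else 1

sys.exit(1 if failures else 0)
```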

Pitfall 2: The "Maturity Team" Silo

Organizations often assign "achieving maturity" to a dedicated team or PMO. This instantly divorces the work from the people who own the actual capabilities—the engineers, the operators, the analysts. The maturity team becomes reporters of scores, not facilitators of improvement. Antidote: Embed the improvement work within the existing teams. Use coaches or facilitators (internal or external) to guide them, but the ownership and accountability must lie with the people doing the work. This is slower but infinitely more sustainable.

Pitfall 3: Ignoring the Cultural Debt

Models focus on processes and technology. They rarely address the cultural debt—the norms, fears, and incentives—that block progress. You can implement a perfect incident management process, but if people are punished for mistakes, no one will declare an incident. Antidote: For every process change you consider from a model, explicitly discuss the cultural enablers and blockers. Use the Qwesty Lens Filter Three (Signals Over Scores) to monitor cultural health. Initiatives like blameless post-mortems or reward systems for collaboration may be more critical than any technical prescription.

Pitfall 4: The Annual Assessment Panic

This plagues the Diagnostic Method. Teams spend weeks frantically creating artifacts and "teaching to the test" for the annual assessment, then revert to old habits. This creates no lasting value and breeds cynicism. Antidote: If you must do formal assessments, make them continuous and lightweight. Sample real work artifacts regularly. Focus the conversation on learning and help, not judgment and scoring. This transforms the assessment from an audit into a feedback loop, which is what it should always have been.
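The tooling for this can stay almost embarrassingly simple. The sketch below, with hypothetical artifact names, picks a small random sample of recent work products for a weekly, learning-focused review instead of an annual audit.

```python
import random

# Hypothetical recent work artifacts (from trackers, wikis, repos).
artifacts = [
    "PR #1042 code review thread",
    "Incident 2291 post-mortem",
    "Runbook: regional failover",
    "Design doc: billing service split",
    "On-call handoff notes, week 14",
]

# Each week, review a small random sample with the team that produced it.
# The goal is a feedback loop ("what would have helped?"), not a score.
for artifact in random.sample(artifacts, k=2):
    print(f"This week's review: {artifact}")
```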

Conclusion: Your Quest for Authentic Capability

The journey through maturity model landscapes is ultimately a quest for authentic organizational capability. The Qwesty Lens isn't a rejection of these models—they contain collective wisdom and useful frameworks. Instead, it's an assertion of your own sovereignty over your organization's development path. In my years of guiding companies, the most transformative outcomes have come not from slavishly following a map, but from using maps to ask better questions, challenge assumptions, and focus relentlessly on the capabilities that matter for your unique context. Remember, maturity is not a score to be achieved; it's a direction of travel, characterized by learning, adaptation, and resilience. Use the models, but don't let them use you. Apply the three filters—Context, Trajectory, and Signals—to cut through the noise. Focus on qualitative benchmarks that reveal how work actually gets done. By reading between the lines of the maturity narrative, you unlock the ability to write your own story of genuine, lasting improvement.

About the Author

This article was written by our industry analysis team, which includes professionals with extensive experience in organizational transformation, digital strategy, and capability development. With over 15 years of hands-on consulting across finance, healthcare, and technology sectors, our team combines deep technical knowledge with real-world application to provide accurate, actionable guidance. The Qwesty Lens framework is a direct result of this applied practice, developed and refined through hundreds of client engagements where theoretical models met complex reality.

Last updated: April 2026
