Welcome to the 69th edition of Deep Tech Catalyst, the channel by The Scenarionist, where science meets venture!
Whether you're in aerospace, robotics, AI for critical systems, or advanced materials, developing high-impact technology often means working with limited time, limited capital, and high expectations. Nowhere is this more true than in the defense and dual-use landscape—where complexity and capital intensity are the norm.
In this edition, we’re joined by Dan Ward, Senior Principal Systems Engineer at MITRE, who shares practical insights on how to de-risk early-stage development through simplified processes, iterative testing, and close alignment with end users—drawing on his field experience and MITRE’s Innovation Toolkit, a practical set of tools designed to help teams collaborate, validate ideas, and move faster with fewer resources.
While we often focus on funding strategies or venture models for deep tech startups, this conversation zooms in on how to build smarter from the ground up.
In particular, we explore how to:
Align your team early to avoid fragmented execution and move faster with shared clarity
Use rapid, low-cost prototyping to validate ideas and surface unexpected user insights
Design for real-world adoption by engaging stakeholders across the entire operational ecosystem
At the center of it all is a simple idea: innovation doesn’t have to be slow, expensive, or overly complex. With the right mindset and methods, even capital-heavy technologies can be built and tested faster—and with a much clearer path to market adoption.
Let’s dive in. 🎯
LESSONS LEARNED
Approaching Product Design in Defense
1. Align Your Team
In high-tech and defense-related innovation, especially in the early stages, teams often dive into building without fully aligning on purpose.
This lack of clarity can lead to fragmented execution—where smart people are working hard but heading in slightly different directions.
Establishing a shared understanding of the problem and the objective isn’t just helpful—it’s essential. When everyone on the team sees the challenge through the same lens, it sets the foundation for faster progress, better collaboration, and more meaningful results.
2. Define the Right Problem
One of the most critical early steps is framing the problem accurately. It’s common for different team members to carry different interpretations of the goal, often without realizing it. Someone might assume the value lies in technical performance, while others might see usability or operational relevance as the primary objective.
Surfacing these assumptions early allows the team to identify gaps, bring in multiple perspectives—including from end users, investors, or operational stakeholders—and reach a more complete, validated definition of the challenge at hand.
This broader understanding leads to better strategic planning and ensures the solution being developed is solving the right problem, in the right way.
3. Engage Key Stakeholders
Counterintuitively, slowing down at the beginning to align can accelerate progress down the line. When teams have a shared vision from the start, they avoid costly missteps and unnecessary rework.
This kind of early alignment is particularly valuable in defense and dual-use contexts, where solutions need to meet the needs of a variety of stakeholders—from engineers to operators to decision-makers. A well-aligned team can move more quickly and with greater confidence toward real-world deployment.
Rapid Prototyping 101: Embracing a Looped Mindset From Idea to Feedback
No matter how advanced or elegant a technology may be, its real success depends on whether people will actually use it. If a solution doesn't resonate with the priorities of its intended users, even the most well-engineered product can fail to gain traction—wasting time, capital, and opportunity in the process.
Testing Early with Low-Cost Prototypes
Once a problem is clearly defined and validated as a real, unmet need, the next critical step is putting potential solutions to the test. But in capital-intensive sectors like defense, space, or advanced hardware, teams often delay testing until they feel the technology is “ready.”
That delay is risky. The earlier you start testing—even with rough, low-fidelity versions—the sooner you can uncover insights that help you build the right thing, not just build it right.
Early testing doesn't require a finished product. It requires tangible, simplified versions of your idea that can be put in front of users quickly to generate real-world feedback.
Rapid prototyping starts with speed, not perfection. It means using lightweight, accessible materials—paper, cardboard, foam, simple 3D prints, or even sketches—to simulate a core part of the solution.
The goal is to learn fast, not impress. Effective teams embrace a looped mindset:
Make something real → Show it to a real person → Get a real response. Then repeat.
This approach not only accelerates learning, but also minimizes the risk of building based on flawed assumptions. It creates a feedback-driven environment that’s especially valuable when timelines are tight and budgets are limited.
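The looped mindset above can be sketched as a simple iteration. This is a conceptual illustration, not anything from MITRE's toolkit: every name here (`Prototype`, `build`, `show_to_user`, `iterate`) is invented for the example, and the "user" is a stand-in function.

```python
# A hypothetical sketch of the looped mindset: make something real,
# show it to a real person, get a real response, then repeat.
# All names and data here are illustrative, not a real framework.
from dataclasses import dataclass, field


@dataclass
class Prototype:
    version: int
    assumptions: list = field(default_factory=list)


def build(version, assumptions):
    """Make something real -- even a cardboard mockup counts."""
    return Prototype(version=version, assumptions=list(assumptions))


def show_to_user(proto, feedback_source):
    """Show it to a real person and capture their real response."""
    return feedback_source(proto)


def iterate(assumptions, feedback_source, cycles=3):
    """Repeat the loop, collecting one lesson per cycle."""
    lessons = []
    for version in range(1, cycles + 1):
        proto = build(version, assumptions)
        lessons.append(show_to_user(proto, feedback_source))
    return lessons


# Example run with a stand-in user who reacts to each version.
feedback = iterate(
    assumptions=["operators want imagery fast"],
    feedback_source=lambda p: f"v{p.version}: used it in an unexpected way",
)
print(feedback)
```

The point of the structure is that learning is the output of every cycle, not a byproduct of the final release.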
Breaking Down the Test: Focus on One Variable at a Time
One of the biggest mistakes in early-stage validation is trying to test everything at once—functionality, usability, desirability, and more. This makes it hard to interpret feedback and can overwhelm both the team and test users. Instead, effective prototyping isolates a single question or element with each iteration.
Is the user interface intuitive?
Does the form factor fit operational constraints?
Is the concept emotionally compelling?
Answering these one at a time results in sharper feedback and more confident decisions.
It’s this discipline—focused, fast, user-informed iteration—that turns ideas into solutions with real-world relevance and adoption potential.
How to Know If Feedback Is Real
In the early stages of a high-tech or defense-oriented project, getting user feedback can feel like steering in the dark. Founders and technical teams often wonder: Are we hearing what people actually think? Or are we just getting polite noise?
This matters even more when that feedback becomes the foundation for future capital raises—especially if the next phase involves significant investment. Making decisions based on weak or biased input can lead to costly missteps.
So, how do you know if the feedback you’re collecting is valid?
Start by Knowing Who You're Building For
In technical teams, it's common to put this off—focusing on code, prototypes, or features without clearly identifying the end user. But without that clarity, feedback loses its value. If you're unsure who the technology is meant to serve, you can't accurately interpret whether their reaction means you're on the right track.
That’s why it’s critical to define the desired user as early as possible, and then keep that user close throughout the process.
Real Feedback Requires Real Users—and Enough of Them
Validating feedback isn’t about collecting one perfect reaction. It’s about creating volume and variety. Every single user engagement is a data point, and some will be more useful than others. The only way to get a reliable signal is to gather many of them.
There’s no universal number that fits every case—testing a system designed for 10 users will look very different from one meant for 10,000. But a useful rule of thumb is to engage with at least a representative slice of your intended audience.
The key is not statistical precision, but learning through patterns: Where are people consistently confused? Where do they consistently get excited?
Each conversation or demo should teach you something. And the iterative nature of this process is what builds confidence—not a single response, but a series of evolving reactions.
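One way to operationalize "learning through patterns" is simply to tally recurring themes across many user sessions, so consistent signals stand out from one-off reactions. The session data and threshold below are invented for illustration.

```python
# A hypothetical sketch: aggregate feedback themes across sessions so
# that patterns (consistent confusion, consistent excitement) emerge.
# The session records below are made up for the example.
from collections import Counter

sessions = [
    {"user": "operator-1", "themes": ["confused by menu", "liked speed"]},
    {"user": "operator-2", "themes": ["liked speed", "confused by menu"]},
    {"user": "analyst-1",  "themes": ["liked speed"]},
    {"user": "analyst-2",  "themes": ["wanted offline mode"]},
]

# Count how often each theme appears across all engagements.
theme_counts = Counter(t for s in sessions for t in s["themes"])

# A theme raised by a meaningful share of users is a pattern worth
# acting on; a single mention is just one data point.
threshold = len(sessions) // 2
patterns = {t: n for t, n in theme_counts.items() if n >= threshold}
print(patterns)
```

Here "liked speed" and "confused by menu" clear the threshold, while the single "wanted offline mode" mention stays an isolated data point until more sessions confirm it.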
Case Studies
1. Iterating on Satellite Communication in the Field
To illustrate this, consider a real-world example from the early 2000s. After 9/11, a team was developing an intelligence dissemination tool designed to deliver satellite and aerial imagery to special operations forces in highly mobile, fast-moving environments.
Rather than wait until the product was polished, the team maintained direct communication with an operator on the ground. New features were pushed out rapidly, and the operator would immediately provide feedback after real missions. Sometimes the system was used in unexpected ways that the engineers hadn’t even planned for—insights that directly shaped future development.
This tight feedback loop—show, use, react—ensured that development stayed grounded in practical needs rather than abstract requirements.
2. The Bazooka and Real-World Adaptation
A classic example of rapid feedback and iteration comes from World War II: the development of the bazooka. The first version went from concept to field deployment in just 30 days. It was crude—just a metal tube assembled with available components—but it was enough to start gathering frontline feedback.
Initial use revealed that the weapon wasn’t as effective against tanks as engineers had expected. But it turned out to be incredibly effective against enemy pillboxes—fortified firing positions. This was a use case that operators discovered on their own, and their feedback quickly informed the next version.
Within six months, version 2.0 of the bazooka was rolled out—updated based on frontline input. This story illustrates the power of putting a prototype in the hands of users quickly, even if it's imperfect, and letting real-world use guide the evolution of the product.
Take-Home Message: Minimize the Distance Between Builders and Users
Whether you're developing battlefield technology or scientific instrumentation, shortening the distance between the people building and the people using the product is a critical best practice. Not only does it surface insights you might otherwise miss—it also de-risks development and sharpens your value proposition when talking to investors.
Navigating the DoD Acquisition Process
When building for complex environments like defense or enterprise B2B, the concept of the “real person” who interacts with your product extends far beyond just the end user.
In many cases, your solution will also affect logistics personnel, trainers, sustainers, or acquisition officers. That’s why it's crucial to gather input across the ecosystem—not just from those who will use the tool directly, but from those who will buy it, integrate it, or support it over time.
This becomes especially relevant when engaging with government buyers like the U.S. Department of Defense, where acquisition processes can be lengthy, intricate, and historically rigid.
Enter the Adaptive Acquisition Framework
To address long-standing inefficiencies, the DoD introduced the Adaptive Acquisition Framework (AAF) in January 2020. Before that, all acquisition projects—whether software, hardware, or weapon systems—followed a one-size-fits-all model. That approach proved inadequate for managing the growing complexity and diversity of defense technologies.
The AAF replaced that outdated model with six distinct acquisition pathways, each designed to accommodate different types of systems and timelines.
For example, there’s now a dedicated Software Pathway, recognizing that digital tools require entirely different development, testing, and sustainment processes compared to hardware like satellites or vehicles.
One of the most relevant tracks for innovation teams is the Rapid Prototyping Pathway. This track is meant for fast, iterative development and is legally capped at five years maximum—not five years and a day. That hard limit is intentional: it’s meant to push for efficiency and avoid the endless cycles of delay that have plagued traditional programs.
Low-Fidelity Prototypes Still Matter
Even if your actual technology is sophisticated, your early prototypes don’t have to be. In fact, they shouldn’t be at first. Teams are encouraged to use low-resolution, fast-to-build mockups to validate key assumptions, whether about form factor, user interaction, or spatial integration.
One particularly creative approach is called bodystorming.
Think of it as brainstorming, but physically acting out interactions with a prototype. Using simple materials—like cardboard boxes or foam cutouts—teams can simulate full-size equipment in realistic environments. This kind of tactile prototyping offers invaluable feedback early in the design phase, especially in hardware-heavy contexts.
For example, a team working on a remote air traffic control system built a cardboard version of a new equipment layout. By inviting real operators to interact with the mockup, they learned that placing a certain device in one spot caused people to bump their elbows constantly. Simply rotating it and repositioning the equipment solved the issue—before anything expensive was manufactured.
It took just an hour to build the mockup and test the concept, but it delivered clear, validated insights from a real stakeholder. This approach—quick, human-centered, and feedback-rich—is exactly what frameworks like AAF are designed to support.
What Problems Does the Adaptive Acquisition Framework Solve?
When thinking about innovation within the Department of Defense (DoD), one of the biggest questions founders and investors face is: What makes a new technology actually adoptable? It's not just about building a product that works—it's about navigating a complex ecosystem of procurement, regulations, and real-world operational needs.
This is especially true in the early stages, before scaling or full production. You may have a functioning prototype, but adoption risk remains high until there’s a clear path into the defense system.
Three Core Problems the AAF Was Designed to Fix
Before its introduction in January 2020, the DoD relied on a rigid, one-size-fits-all acquisition model. Whether the goal was a satellite or a software tool, the same rules applied—often resulting in projects that took too long, cost too much, or failed to deliver the intended impact.
The AAF set out to address this by solving 3 key problems:
1. Lack of Agility
The old system didn’t adapt well to different types of technologies or project scales. The AAF introduced six distinct acquisition pathways (e.g., software, rapid prototyping) to better reflect the diversity of today’s tech landscape. This helps teams move faster, reduce friction, and work in ways more suited to the kind of solution being developed.
2. Lack of Critical Thinking
Previously, acquisition often followed a checklist mentality: do what’s always been done. With the AAF, practitioners are encouraged—and required—to apply critical thinking. Teams must choose the right pathway (or blend of pathways) for their situation. This strategic flexibility drives better decision-making, tailored timelines, and more appropriate expectations.
3. Lack of Operational Alignment
Perhaps the most important shift: the AAF pushes for closer connection between technology developers and military operators. In practice, this means shortening the feedback loop between those building the system and those using it—reducing the chances of building something that looks good on paper but fails in the field.
Blending Pathways for Tailored Strategies
Another major innovation is that these pathways aren’t siloed—you can blend them. A project might begin with rapid prototyping, then transition into full production using a more traditional pathway. This flexibility reflects the real-world evolution of technology and makes it easier to adapt along the way.
✨ DON’T MISS OUR LATEST MINI-SERIES!
The Industrial Flywheel: How to Engineer Demand Before Building Your Factory
What if your factory could sell before it even exists?
In deep tech, most founders get it backwards: they build the product, then look for the market. But in capital-heavy industries, traction can’t be an afterthought—it has to be engineered from day one.
This exclusive mini-series is your blueprint for building commercial momentum before you scale—packed with frameworks, case studies, and lessons from real founders and operators.
You’ll learn:
How to engage partners early—before your tech is even scaled
How to run pilots that lead to contracts, not just case studies
Why regulatory policy can become your secret demand weapon
What separates marketable prototypes from expensive lab demos
Written for startup founders, technical leads, and investors who back industrial innovation, “The Industrial Flywheel” will reshape how you think about readiness and risk—especially when the next move involves millions in CapEx.