
What If the System Was Built This Way?
Engr. Elaine Macatangay Morales, MPA | 9 July 2025
Not long ago, during a training workshop on internal quality audit, a participant asked a question that surprised the room but not me: “Ma’am, is it possible for a process to be 100% compliant but still ineffective?” It was a sharp question and a real one.
In my years of auditing and consulting for quality, environment, safety, and health systems, I’ve walked into both government offices and private company headquarters that had all the right paperwork in place. The manuals were updated, the records complete, and the procedures followed to the letter. Yet meaningful improvement was hardly visible. The system was “working,” technically, but in practice results lagged, people were disengaged, and the original purpose of the system (whether for quality, safety, or innovation) was being missed. Real progress, if any, came too slowly or too superficially to make a difference. This isn’t an isolated case. And it isn’t about bad people. It’s about something deeper: how systems are built.
Systems, Not Symptoms
We often explain problems by pointing to individuals—someone forgot, someone refused, someone didn’t understand. But when the same issues repeat across different people, teams, or cycles, it’s time to look beyond the individual. We need to examine the system: the processes, policies, structures, and relationships that guide how things are done.
Systems determine what’s easy and what’s hard, what gets rewarded and what gets ignored. They influence whether reports lead to corrective action or simply get filed away. When we encounter delays in implementation, inconsistent results, or recurring problems, we often try to fix what’s immediately visible. But these are usually symptoms, not root causes. They are the effects of how the system is designed and what it prioritizes.
Designed for What?
Every system is designed to do something. The real question is: What was it designed to achieve?
Sometimes, systems are structured around completeness rather than effectiveness. A process might require full documentation, multiple reviews, and sign-offs to ensure control and traceability, but not necessarily to generate improvement. In other cases, policies are drafted to meet compliance with external standards, with little space left for adaptation or feedback. The result is a working system that doesn’t actually fulfill its intended purpose.
In science and science governance, this shows up in institutional programs that report success through numbers: trainings conducted, publications produced, technologies transferred, patents commercialized. But quantity doesn’t always reflect quality or impact. When systems focus on producing measurable outputs without examining whether those outputs lead to meaningful outcomes, they may appear successful on the surface, yet still fall short in driving real innovation or earning public trust.
This is what I call design bias: a system reflects what it values, not necessarily what it needs or was set up to achieve. More often than we admit, those values are shaped by convenience, tradition, or audit checklists, not by long-term vision, real-world relevance, or measurable impact. A system may appear complete on paper but still fail to deliver meaningful results if it is not designed for effectiveness. When systems prioritize compliance over continuous improvement, or process over outcomes, they risk becoming performative: meeting requirements without creating real change in behavior, performance, or public value.
When Quality Becomes a Checkbox
In auditing, one important principle is this: we don’t audit people; we audit the system. We look at whether processes are defined, understood, implemented, and effective. But of course, people shape systems. Their competence, commitment, and understanding determine how well those processes are brought to life, if at all.
A quality system can create a culture of reflection, learning, and responsiveness. Or it can become a hollow shell of forms and templates. I’ve seen safety systems where every incident is reported, investigated, and filed, yet lessons are poorly communicated. I’ve reviewed innovation frameworks where proposals are submitted and tracked, but never tested or refined. The procedures were in place. The policies were followed. But the system had stopped asking whether it was actually working, let alone working effectively.
Systems That Matter
These patterns aren’t limited to quality systems. In fact, they echo across sectors, including science and innovation.
Scientific work does not happen in isolation. It is embedded in and shaped by the systems that surround and support it: funding mechanisms, regulatory processes, institutional mandates, and broader cultural and political norms. Research priorities are influenced by how budgets are allocated, who approves proposals, and what outcomes are incentivized. For instance, when funding is tied primarily to the number of publications or patent filings, institutions may prioritize quantity over quality or novelty. When approval processes are overly rigid, breakthrough ideas can be delayed or dismissed. And when accountability mechanisms are weak or symbolic, programs risk becoming ineffective. On the other hand, when these systems are intentionally designed to support experimentation, long-term learning, and societal relevance, science can become more responsive, impactful, and trusted.
So maybe the better question isn’t “What went wrong?” but “What if the system was built this way? And if so, what should we be building instead?”