Why secure-by-design is an incentives problem, with Bob Lord

by Dave Lewis
April 14, 2026 - 4 min

Bob Lord has spent decades building and leading security programs, from early internet crypto work at Netscape to roles at Twitter, Yahoo, the Democratic National Committee, and CISA. In this episode of Chasing Entropy, he and host Dave Lewis get practical about why the security advice most people hear doesn’t match how real compromises happen.
Across secure-by-design, AI systems, and software supply chains, security breaks down when organizations treat outcomes like someone else’s problem.
Why secure-by-design is an incentives problem
When Bob talks about secure by design, he is deliberately not trying to write another technical framework. Plenty exist. His question is different.
If we already know how to prevent a long list of common issues, why do we keep shipping the same defects?
Secure-by-design breaks down when companies treat security as a feature or a compliance exercise rather than something they are accountable for delivering as a customer outcome.
Bob draws a line to quality and safety movements outside software, especially automotive safety. Car companies once competed on lifestyle and appearance, not safety. Customers did not know what to ask for, and manufacturers had little reason to prioritize safety until norms, regulations, and accountability shifted.
Software, in Bob’s view, is still in the pre-seatbelt era. We have normalized shipping unsafe components, building with unsafe processes, and delivering unsafe defaults. Then we act as if customers should be able to configure their way out of systemic risk.
Through that lens, CISA’s Secure by Design work focuses on three principles:
Take ownership of customer security outcomes. Shipping a patch is not enough if you do not know whether customers update. Measure adoption and remove friction.
Embrace radical transparency. Make vulnerability handling easier, not adversarial. Build a real safe harbor for good-faith research.
Lead from the top. Meaningful change is driven by senior business leadership. You don’t delegate quality to the quality team, nor do you delegate security outcomes to security teams alone.
How AI systems become permission amplifiers
The AI section lands because it stays concrete.
Dave shares a story where an internal LLM was asked, “Who at the company doesn’t like me?” The system reportedly queried HR data and responded, highlighting that agentic systems can become permission amplifiers.
What changes in AI environments is not just the interface, but the speed and scale of access: systems can act across email, chat, HR, internal tools, and business apps faster than most access controls were designed to govern.
In many organizations, no single person can pull data from email, chat, and HR systems and fuse it into a targeted answer. But companies are increasingly giving AI systems broad access paths without mature roles, rights, and auditing. Then we try to patch over it with soft instructions like “don’t be evil.”
The takeaway is one of accountability. If the system can take actions and surface sensitive conclusions, the guardrails need to reflect that power.
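One way to picture that kind of guardrail is to enforce the human caller’s entitlements at the tool boundary, instead of relying on prompt-level instructions like “don’t be evil.” The sketch below is purely illustrative; every name in it (`Tool`, `USER_ENTITLEMENTS`, `invoke`) is hypothetical, not from the episode:

```python
# Hypothetical sketch: check the requesting user's entitlements before an
# agent tool runs, and audit every attempt. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    required_entitlement: str  # e.g. "hr:read"

# Entitlements the human caller actually holds, pulled from the existing
# access-control system -- not broad rights granted to the agent itself.
USER_ENTITLEMENTS = {
    "alice": {"email:read", "chat:read"},
}

audit_log: list[tuple[str, str, bool]] = []

def invoke(user: str, tool: Tool, query: str) -> str:
    allowed = tool.required_entitlement in USER_ENTITLEMENTS.get(user, set())
    audit_log.append((user, tool.name, allowed))  # record every attempt
    if not allowed:
        return f"denied: {user} lacks {tool.required_entitlement}"
    return f"ran {tool.name} for {user}: {query}"

hr_search = Tool("hr_search", "hr:read")
print(invoke("alice", hr_search, "who at the company doesn't like me?"))
# -> denied: alice lacks hr:read
```

The point is where the check lives: the agent can be asked anything, but the tool call is gated by the same roles and rights the caller already has, and the attempt is logged either way.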
Supply chain reality: “It’s upstream” is not a defense
Open source comes up in the context of underfunded teams that cannot afford premium tooling. Bob agrees the constraint is real, but he pushes back on the industry habit of outsourcing responsibility. Constraints don’t remove accountability when insecure or unmaintained components make their way into customer-facing products.
If a defect ships in your product, it’s yours, even if it came from upstream.
He also calls out a common failure pattern: vendors shipping dependencies that have gone unmaintained for years, and not giving customers visibility into what is actually inside the product. SBOM practices exist. Some companies use them well. Many do not.
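A minimal sketch of what that visibility can buy: given an SBOM-style component list, flag dependencies past a known end-of-life date. The component names, dates, and advisory source here are all made up for illustration:

```python
# Hypothetical sketch: flag SBOM components whose upstream is end-of-life.
# Component names and dates are illustrative, not real advisories.
from datetime import date

# Components as they might appear in a product's SBOM.
components = [
    {"name": "libfoo", "version": "1.2"},
    {"name": "libbar", "version": "0.9"},
]

# End-of-life dates, from whatever advisory feed the vendor trusts.
END_OF_LIFE = {"libbar": date(2020, 1, 1)}

def unmaintained(components, today):
    return [c["name"] for c in components
            if END_OF_LIFE.get(c["name"], date.max) < today]

print(unmaintained(components, date(2026, 4, 14)))
# -> ['libbar']
```

Nothing here is sophisticated; the hard part in practice is producing the component list at all, which is exactly the visibility gap Bob is describing.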
Whether the issue is insecure defaults, overpowered AI systems, or vulnerable dependencies, the pattern is the same: organizations cannot keep pushing security outcomes downstream and expect users, customers, or open-source maintainers to absorb the risk.
Subscribe to Chasing Entropy
Subscribe to Chasing Entropy for honest, expert-led conversations on agentic AI, security, shadow IT, and extended access control from industry leaders.

