Phil Borel
Engineering Manager at RIVET
Phil Borel is a product-minded engineering leader with 12+ years of experience at early- and growth-stage software companies. He's laid the groundwork as employee #1 and operated at scale as employee #10,000 - and believes success at any size requires the same things: highly motivated teams, deep customer focus, and rapid iteration. Today, he's exploring agentic development workflows to ship higher-quality software, faster.
Phil is an Engineering Manager at RIVET, a Detroit-based construction tech SaaS company. He also advises early-stage startups, is deeply invested in the Midwest tech ecosystem, and organizes the Detroit Developers meetup. He writes about what he's learning at https://www.detroitdevelopers.com.
Upcoming conference sessions featuring Phil Borel
Lightning Talks - AI Is an Amplifier: Harness Engineering for 3x Velocity
Our CEO set a 3x velocity goal. We went all-in on AI coding agents. Eight months in, here's what actually matters - and it's not the model.
The DORA 2025 data across 5,000 professionals confirms what we learned the hard way: AI amplifies whatever's already in your codebase. Good architecture becomes great. Three divergent patterns become three divergent patterns generated at scale. A Carnegie Mellon study of 807 repos puts numbers on the failure mode: 281% more code in month one, velocity back to baseline by month two, code complexity up 41% - permanently.
This is the story of what we built to avoid that outcome - and what we're still working on.
Code quality as agent infrastructure. We had three backend patterns. The agent reproduced whichever it found first. We consolidated into a strict layered architecture with defense-in-depth permissions enforced at every boundary. The same patterns that make code navigable for new engineers make it navigable for agents.
Context engineering that compounds. AI coding agents work best when they have structured, persistent context about your system - not a single prompt, but an evolving knowledge base the whole team maintains. We built indexed documentation the agent loads on-demand, custom skills that turn multi-step workflows into single commands, and mechanical enforcement that catches violations automatically. Documentation drifts. Lint rules don't.
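One concrete form of that "mechanical enforcement" is a lint-style check run in CI. Below is a minimal sketch in Python of an architecture-boundary check: it fails when a module in one layer imports from a layer it shouldn't touch. The layer names and rule table are illustrative assumptions, not RIVET's actual architecture.

```python
import ast
import pathlib

# Hypothetical layering rule: handler-layer modules must not import the
# storage layer directly (they should go through the service layer).
FORBIDDEN = {"handlers": {"storage"}}

def boundary_violations(root):
    """Return one 'file: imports module' string per layering violation."""
    root = pathlib.Path(root)
    violations = []
    for path in root.rglob("*.py"):
        # The top-level directory under the repo root names the layer.
        layer = path.relative_to(root).parts[0]
        banned = FORBIDDEN.get(layer, set())
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                if name.split(".")[0] in banned:
                    violations.append(f"{path}: imports {name}")
    return violations
```

Wired into CI so a non-empty result fails the build, a check like this holds the boundary for agents and humans alike - and, unlike prose documentation, it can't drift.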
What's not working yet. Multi-agent parallelization requires a level of maturity we haven't earned. Context debt is real - documenting what lived in engineers' heads takes time we underestimated. And the review bottleneck from increased PR volume is stickier than any tooling fix.
You'll leave with concrete steps you can start Monday: the first files to add to your repo, the first skill to try, and how to identify the refactor that will make your codebase agent-navigable.