
Roberto Perez Alcolea

Software Engineer at Netflix

Roberto Perez Alcolea is a Software Engineer at Netflix who focuses on the JVM development lifecycle, spanning build automation and testing infrastructure. With a deep appreciation for the JVM ecosystem and Build Tools, he works on improving how Netflix engineers build, test, and publish software.

Roberto contributes to maintaining Netflix's build-package-publish infrastructure through Nebula (Gradle) plugins, helping with dependency management and artifact publishing across the organization's projects. He's also involved in testing strategy initiatives, including working on E2E testing frameworks that incorporate observability and failure analysis.

Roberto advocates for modern integration testing practices and helps teams adopt container-based testing approaches, which led him to become a Testcontainers Community Champion. He enjoys sharing knowledge at conferences, contributing to open-source projects, and collaborating with engineers on testing best practices.

Roberto believes strongly in community-driven innovation and enjoys both learning from fellow engineers and sharing his experiences to help advance the broader JVM and testing communities.

Upcoming conference sessions featuring Roberto Perez Alcolea

Tag. Publish. Hope.

Think about how carefully we ship applications: staged rollouts, canary analysis, automated rollback, observability at every step. Now think about how we ship libraries: Tag a version. Publish. Hope.

At Netflix, this blind spot caught up with us across thousands of libraries and 5,000+ repositories. Upgrades broke services overnight because nobody could see the blast radius beforehand. Security patches took days because we couldn't answer "which libraries actually matter." Teams deprecated libraries with no way to tell consumers "hey, start planning your move." So we set out to build a paved road for libraries.

Once we started digging, things got uncomfortable. Many libraries we considered "actively maintained" hadn't had a real human change in over a year, just bots keeping the lights on. We kicked off a major migration and couldn't tell which repos would be affected for over half the fleet. Basic questions, no answers. But the hardest part wasn't technical. It was cultural: how do you bring lifecycle governance to an org that values speed and autonomy without becoming the team everyone routes around?

This talk is about what we're learning as we build that paved road. It's an ongoing effort, not a finished product. I'll walk through the five things we think it needs: stability signals, compatibility validation, impact visibility, lifecycle communication, and proportional escalation. I'll share why we bet on tools that inform instead of block, why we started small and earned trust before scaling, and why "people decide, tools guide" became our north star.

We're also still figuring some things out. Where's the line between healthy maintenance and slow-motion abandonment? When should advisory become enforcement? How do you measure library health without turning it into a vanity metric?

We're done tagging, publishing, and hoping.

Wednesday Jun 24 @ 11:45 AM @ Accelerate Chicago 2026


Content featuring Roberto Perez Alcolea

From Lag to Lightning: Confident, Automated Changes at Scale
GOTO Copenhagen 2025
