Expert talk: What’s Next for .NET?
.NET has evolved massively since its very beginning, and Martin Thwaites, developer advocate at Honeycomb, and Hannes Lowette, head of learning and development at Axxes, have enjoyed every step of that journey. Join them as they revisit important milestones in .NET’s evolution and share practical insights into web performance, running .NET at scale, and how to implement observability.
Using Graph Theory and Network Science to Explore your Microservices Architecture
So your microservice system has been up and running for a while. You know you’ve diligently employed every ounce of your experience and knowledge over time to design a sensible application architecture, with hopefully sensible boundaries. But time is now throwing new questions your way: Are my boundaries still sensible? Have any anti-patterns crept in? Have I inadvertently created the dreaded distributed monolith? This talk explores how network science and graph theory techniques can be applied to gain insight into, and explore questions about, your microservices architecture.
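The kind of graph analysis the talk describes can be sketched in plain Python: model the service call graph as an adjacency dict and search it for dependency cycles, one common symptom of a creeping distributed monolith. The service names and call graph below are hypothetical examples, not from the talk.

```python
def find_cycle(graph):
    """Return one dependency cycle as a list of services, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / fully explored
    color = {node: WHITE for node in graph}
    stack = []                      # current depth-first path

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in graph.get(node, ()):
            if color.get(dep, WHITE) == GRAY:
                # Back edge to a service already on the path: a cycle.
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = visit(dep)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

# Hypothetical call graph: each service maps to the services it calls.
services = {
    "orders": ["billing", "inventory"],
    "billing": ["notifications"],
    "inventory": ["orders"],        # orders -> inventory -> orders: a cycle
    "notifications": [],
}

print(find_cycle(services))         # a cycle through orders and inventory
```

At real-world scale you would feed the same adjacency data, mined from tracing or deployment metadata, into a graph library to ask richer questions (community structure, centrality) rather than hand-rolling traversals.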
Observability, Distributed Tracing & the Complex World
In a world of increasingly complex architectures and environments, the concept of observability has emerged to provide tools for watching over our applications and infrastructure. With a new focus on monitoring and alerting, these tools surface information that is invaluable in the world of orchestration and Kubernetes, microservices and hybrid clouds. But to quote Sherlock Holmes, “You see, but you do not observe” (A Scandal in Bohemia). A single tool may not show you the whole truth. Join us for an introduction to the concepts driving observability, with a dive into the deep end of the pool: distributed tracing. We’ll cover how all the pieces, from alerts to code, fit together to give you a total view of your architecture, allowing you to get to the root cause of issues faster. This talk is from our partner.
Lies, Damned Lies, and Metrics
They say that "you get what you measure", and we've all seen it happen. "We need to get the coverage up!" is followed by people frantically writing tests that might not actually test anything. Coverage is up. Quality? Not so much. So what metrics can we use to drive the things we believe in? In this session Roy Osherove covers recommended and un-recommended metrics and how each one could drive your team toward a bleaker, or brighter, future.

**What will the audience learn from this talk?**
- Leading vs. lagging indicators and their value
- Which metrics can hurt your agility
- Which metrics push toward agility
- Influence forces and why people behave in specific ways (and how metrics play a role)

**Does it feature code examples and/or live coding?**
No

**Prerequisite attendee experience level:**
(https://gotocph.com/2019/pages/experience-level)
The Proactive Approach: Data Driven Observability & Incident Response
The focus of DevOps is to streamline processes across the board. One way to produce a better experience across teams is to inject observability, leading to a proactive approach to development and full visibility when incidents occur. With the crossover of tools like PagerDuty and Humio, it’s possible to gain knowledge of the systems you use both when they are functioning smoothly and when they aren’t and emergency response is necessary. We’ll talk about how observability helps the DevOps process from application development to deployment, incidents, and postmortems. This talk is from our partner.
Deliver Results, Not Just Releases: Control & Observability in CD
How do companies like Netflix, LinkedIn, and booking.com crush it year after year? Yes, they release early and often. But they also build control and observability into their CD pipeline to turn releases into results. Progressive delivery and the statistical observation of real users (sometimes known as “shift right testing” or “feature experimentation”) are essential CD practices. They free teams to move fast, control risk, and focus engineering cycles on work that delivers results, not just releases. Learn implementation strategies and best practices for adding control and observability to your CD pipeline:
- Where should you implement progressive delivery controls: front-end or back-end?
- Why will balancing centralization/consistency with local team autonomy in your implementation increase the odds of achieving results you can trust and observations your teams will act upon?
- What two pieces of data make it possible to attribute system and user behavior changes to any deployment?
- How can “guardrail” metrics automate observability of unintended consequences of deployments, without adding overhead to teams making changes or tasking your exploratory testers and data scientists to go looking for them?

This talk is from our partner.
Work Less and Do More: Google Sheets for Developers