testing

Showing 21 out of 21 results

BOOK EPISODE

Modern Software Engineering

What should the modern software engineer know in order to be the best at their job? Dave Farley and Steve Smith explore the books that can help engineers succeed and why iteration and experiments are crucial. The discussion is centered around Dave’s latest book, "Modern Software Engineering."

March 17, 2022
SESSION

High Cost Tests and High Value Tests

There is value in writing tests, and there is also a cost. The currency is time. The trade-offs are difficult to evaluate because the cost and value are often seen by different people. The writer of the test bears much of the short-term cost, while long-term benefits and costs are borne by the rest of the team. By planning around both the cost and value of your tests, you’ll improve your tests and your code. How much do slow tests cost? When is it worth it to test an edge case? How can you tell if testing is helping? Here are some strategies to improve your tests and code.
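For a rough sense of what the question "how much do slow tests cost?" can mean in practice, here is a back-of-the-envelope sketch in Python; every number in it is invented for illustration and is not taken from the session.

```python
# Rough, illustrative arithmetic only - every number below is an assumption,
# not data from the session.
suite_minutes = 12          # how long one run of the slow suite takes
runs_per_dev_per_day = 6    # how often each developer waits on it
developers = 8
working_days_per_year = 220

wait_hours_per_year = (
    suite_minutes * runs_per_dev_per_day * developers * working_days_per_year
) / 60

print(f"Hours spent waiting on the suite per year: {wait_hours_per_year:.0f}")
# With these made-up numbers: 12 * 6 * 8 * 220 / 60 = 2112 hours
```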

SESSION

Breaking Things on Purpose

Failure Testing prepares us, both socially and technically, for how our systems will behave in the face of failure. By proactively testing, we can find and fix problems before they become crises. Practice makes perfect, yet a real calamity is not a good time for training. Knowing how our systems fail is paramount to building a resilient service. At Netflix and Amazon, we ran failure exercises on a regular basis to ensure we were prepared. These experiments helped us find problems and saved us from future incidents. Come and learn how to run an effective “Game Day” and safely test in production. Then sleep peacefully knowing you are ready!

SESSION

Royal Testing: Purple Teaming to Build & Secure Applications Better!

Applications are one of the most exposed parts of any organization, but most companies fall short on knowing how and what to monitor within them. In this presentation, Kevin Johnson of Secure Ideas will use his background as both a developer and a penetration tester to show attendees how to determine what to monitor and how. Combining application testing with security control tuning, Kevin will help organizations improve their application monitoring and attack detection.

SESSION

Automatic Testing Meets the Real World

"We are already agile, now can you please automate the test process, and make our company practice Continuous Delivery”… Sounds easy right? But what about the large legacy code base that is not designed for automated tests? What if our software is not a nice collection of microservices? How do you test a complex CAD/CAM GUI? Can we shape the culture of the company to make automated testing a cool thing for 300+ developers and testers? What is the cost, and do we even know the right approach to get a return on our investment? Can we use the results to document our medical software? These and more questions will be answered truthfully in this description of 3Shape’s transition towards automated tests in our build pipeline. The results of our pilot project, the strategy for rolling out to entire R&D department, the tools we used, and what did and did not work.

SESSION

We are agile but... a tester's reflections

“We are doing agile, but we don’t do testing above unit test within the sprints”, or “we are doing agile, but we have something we call a hardening sprint at the end, which in our case is more of a system/system integration test phase”, or maybe “we are doing agile, but we have a separate test team” - does that sound familiar to you? The core mindset of agile is to build quality in and ensure working software – the only measure of progress. But at the same time, we see agile projects time and time again suffering from limited unit testing, limited test knowledge in the team, separate test phases outside the sprint, and no automated tests above unit test level. Furthermore, many projects struggle with getting the business sufficiently involved. These challenges are a risk not just to quality in the classical sense but also to the core principle of delivering software of value to the customer. In this presentation Gitte Ottosen will discuss some of the challenges of an agile transition seen through a tester’s eyes, and give inspiration for small, practical initiatives that will get your team started with a more test-infected way of working and thinking – ensuring working software that is of value to our customers.

SESSION

Effective Testing with API Simulation and (Micro)Service Virtualisation

As we work more with distributed systems, microservices and legacy services, we introduce a web of inter-service dependencies that cause us to face many challenges across our development and deployment pipeline. Resource consumption, deployment time, slow testing feedback cycles, third-party service flakiness and cost can all cause problems. This talk addresses these issues by demonstrating how the technique of ‘API Simulation’ (modern service virtualisation) can be used to overcome them. We’ll introduce the theory and practice, and use an open source tool named Hoverfly to easily produce and run simulations of third-party services throughout your stack – from producing test environments, to unit testing, to use with custom middleware in staging environments. Come and learn about (micro)service virtualisation in the 21st century, and leave the session with practical techniques to improve your application testing.
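Hoverfly has its own capture-and-simulate workflow; the sketch below does not use it, and only illustrates the underlying idea of API simulation with the Python standard library: a stand-in HTTP service serves canned responses so a consumer can be tested quickly and deterministically. The endpoint and payload are hypothetical.

```python
# Minimal illustration of API simulation using only the standard library.
# This is NOT Hoverfly's API - just the underlying idea: replace a flaky or
# costly third-party service with a stand-in that serves canned responses.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"/v1/exchange-rate": {"base": "EUR", "quote": "USD", "rate": 1.08}}

class SimulatedService(BaseHTTPRequestHandler):
    def do_GET(self):
        known = self.path in CANNED
        body = json.dumps(CANNED.get(self.path, {"error": "unknown path"})).encode()
        self.send_response(200 if known else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def test_consumer_against_simulation():
    server = HTTPServer(("localhost", 0), SimulatedService)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://localhost:{server.server_port}/v1/exchange-rate"
        with urllib.request.urlopen(url) as resp:
            payload = json.loads(resp.read())
        assert payload["rate"] == 1.08   # deterministic, fast, no third party involved
    finally:
        server.shutdown()

if __name__ == "__main__":
    test_consumer_against_simulation()
    print("simulated dependency answered as expected")
```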

SESSION

Life and Death Decisions: Testing Data Science

We live in a world where decisions are being made by software. From mortgage applications to driverless vehicles, the results can be life-changing. But the benefits of automation are clear: if businesses use data science to automate decisions they will become more productive and more profitable. So the question becomes: how can we be sure that these algorithms make the best decisions? How can we prove that an autonomous vehicle will make the right decision when life depends on it? How can we prove that data science works? In this presentation, you will discover how to test the models produced by the application of data science. We will discuss the common problems that are encountered and I will show you how to overcome them. You will learn how to evaluate models both quantitatively and visually, and I will explain the differences between technical measures of performance and measures that are better suited to business use. I will provide context by showing both disastrous and hilarious examples from industry. This talk is designed to be both entertaining and informative. It is aimed primarily at people with some exposure to data science, due to the terminology used, but both beginners and those interested in technology will enjoy the talk because the content is thoroughly explained and fun!
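As a hedged illustration of the gap between technical and business-oriented measures that the abstract mentions, here is a minimal Python sketch; the labels, predictions and cost figures are all invented.

```python
# Minimal sketch: technical vs business-oriented evaluation of a classifier.
# The labels, predictions and costs below are invented for illustration only.
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = loan actually defaults
y_pred = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # the model's decisions

# Technical measures
print("accuracy:", accuracy_score(y_true, y_pred))   # looks healthy: 0.8
print("recall:  ", recall_score(y_true, y_pred))     # but we miss 2 of the 3 defaults

# Business-oriented measure: attach (made-up) costs to each kind of mistake
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
cost_false_negative = 10_000   # an approved loan that later defaults
cost_false_positive = 500      # a good customer we turned away
print("expected cost:", fn * cost_false_negative + fp * cost_false_positive)
```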

SESSION

With Age Comes Wisdom (Hopefully): Lessons Learned in 15 Years of Building Software

Key topics include:
- The need for defining and sharing a business and technical vision
- Developing software begins with models, both mental (shared) models and modelling the real world
- Benefits of codifying requirements and automating assertions within a continuous delivery pipeline
- Using dashboards and communication hacks in order to increase feedback and communication

SESSION

Breaking Things on Purpose

Failure Testing prepares us, both socially and technically, for how our systems will behave in the face of failure. By proactively testing, we can find and fix problems before they become crises. Practice makes perfect, yet a real calamity is not a good time for training. Knowing how our systems fail is paramount to building a resilient service. At Netflix and Amazon, we ran failure exercises on a regular basis to ensure we were prepared. These experiments helped us find problems and saved us from future incidents. Come and learn how to run an effective “Game Day” and safely test in production. Then sleep peacefully knowing you are ready!

SESSION

Life and Death Decisions: Testing Data Science

We live in a world where decisions are being made by software. From mortgage applications to driverless vehicles, the results can be life-changing. But the benefits of automation are clear: if businesses use data science to automate decisions they will become more productive and more profitable. So the question becomes: how can we be sure that these algorithms make the best decisions? How can we prove that an autonomous vehicle will make the right decision when life depends on it? How can we prove that data science works? In this presentation, you will discover how to test the models produced by the application of data science. We will discuss the common problems that are encountered and I will show you how to overcome them. You will learn how to evaluate models both quantitatively and visually, and I will explain the differences between technical measures of performance and measures that are better suited to business use. I will provide context by showing both disastrous and hilarious examples from industry. This talk is designed to be both entertaining and informative. It is aimed primarily at people with some exposure to data science, due to the terminology used, but both beginners and those interested in technology will enjoy the talk because the content is thoroughly explained and fun!

SESSION

Continuous Delivery and the Theory of Constraints

How should you actually implement Continuous Delivery? Adopting Continuous Delivery takes time. You have a long list of technology and organisational changes to consider. You have to work within the unique circumstances of your organisation. You're constantly surrounded by strange problems, half-baked theories, off-the-shelf solutions that just don't work, and people telling you they've worked here for 23 years and Amazon is nothing to worry about. How do you identify and remove the major impediments in your build, testing, and operational activities? How do you avoid spending weeks, months, or years on far-reaching changes that ultimately have no impact on your time to market?

The Theory of Constraints is a management paradigm that describes how to improve throughput in a homogeneous workflow. It can be applied to Continuous Delivery in order to locate, prioritise, and reduce constrained activities until a flow of release candidates to production is achieved. In this talk, Steve Smith will explain how easy it is for a Continuous Delivery programme to be unsuccessful, how the Theory of Constraints works, how to apply the Five Focussing Steps to Continuous Delivery, and how to home in on the constrained activities that are your keys to success. It includes tales of glorious failures and ignominious successes when adopting Continuous Delivery.

**What will the audience learn from this talk?**
* Continuous Delivery means applying technology and organisational changes to the unique circumstances of an organisation
* If a Continuous Delivery programme does not focus on the activities with the most rework and/or queue times, there is a high probability of sub-optimal outcomes
* The Theory of Constraints is a management paradigm for improving organisational throughput, while simultaneously decreasing both inventory and operating expense
* The Theory of Constraints can be applied to Continuous Delivery, as the build, testing, and operational activities in a technology value stream should be homogeneous
* The Five Focussing Steps can be used to identify constrained activities, and then introduce the necessary technology and organisational changes to reduce rework and/or queue times

**Does it feature code examples and/or live coding?**
No

**Prerequisite attendee experience level:**
[Level 300](https://gotoams.nl/2019/pages/experience-level)
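As a toy illustration of the first Focussing Step (identify the constraint), the hedged sketch below ranks invented pipeline stages by combined work and queue time; real data would come from measuring your own value stream.

```python
# Toy illustration of "identify the constraint" for a delivery pipeline.
# The stages and times are invented; real figures would come from your own
# value-stream measurements (rework and queue times).
stages = {
    "build":             {"work_mins": 10, "queue_mins": 5},
    "automated tests":   {"work_mins": 45, "queue_mins": 20},
    "manual regression": {"work_mins": 120, "queue_mins": 960},  # waits for a weekly slot
    "deploy":            {"work_mins": 15, "queue_mins": 30},
}

def total_minutes(stage):
    return stage["work_mins"] + stage["queue_mins"]

constraint = max(stages, key=lambda name: total_minutes(stages[name]))
print("candidate constraint:", constraint)
for name, stage in sorted(stages.items(), key=lambda kv: -total_minutes(kv[1])):
    print(f"  {name:18s} {total_minutes(stage):5d} minutes end to end")
```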

SESSION

Selenium Tests, the Object Oriented Way

When you are writing Selenium tests to check for elements on your page, the classic approach (checking each element’s properties one at a time) can lead to a large number of assert steps. This increases the lines of code in your tests and makes the tests difficult to maintain and tricky to read. Wouldn’t it be nice if the actual checking part of the test were small, perhaps one line of code? With the approach I am going to present, you can do just that, so your tests will be small and clean. All you need to do is model the pages/modules/items using an Object Oriented approach.

**Does it feature code examples and/or live coding?**
Yes, there will be live coding!

**Prerequisite attendee experience level:**
[Level 100](https://gotoams.nl/2019/pages/experience-level)
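A minimal sketch of this idea using Selenium's Python bindings, assuming a hypothetical profile page and element IDs: the page is modelled once as an object, so the check collapses to a single equality assertion.

```python
# Minimal sketch of the object-oriented approach: model the page once,
# then the check itself collapses to a single equality assertion.
# The URL and element IDs are hypothetical.
from dataclasses import dataclass

from selenium import webdriver
from selenium.webdriver.common.by import By

@dataclass
class ProfileHeader:
    """What we expect (or observe) on the profile header module."""
    name: str
    email: str
    plan: str

class ProfilePage:
    def __init__(self, driver):
        self.driver = driver

    def _text(self, element_id: str) -> str:
        return self.driver.find_element(By.ID, element_id).text

    def header(self) -> ProfileHeader:
        return ProfileHeader(
            name=self._text("name"),
            email=self._text("email"),
            plan=self._text("plan"),
        )

def test_profile_header():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/profile")          # hypothetical page
        expected = ProfileHeader("Ada Lovelace", "ada@example.test", "Pro")
        assert ProfilePage(driver).header() == expected     # one assert, not three
    finally:
        driver.quit()
```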

SESSION

Lies, Damned Lies, and Metrics

They say that "you get what you measure", and we've all seen it happen. "We need to get the coverage up!" followed by people frantically writing tests that might not actually test anything. Coverage is up. Quality? Not so much. So what metrics can we use to drive the things we believe in? In this session Roy Osherove covers recommended and un-recommended metrics and how each one could drive our team towards a bleaker or brighter future.

**What will the audience learn from this talk?**
- Leading vs lagging indicators and their value
- What metrics can hurt your agility
- What metrics push towards agility
- Influence forces and why people behave in specific ways (and how metrics play a role)

**Does it feature code examples and/or live coding?**
No

**Prerequisite attendee experience level:**
[100](https://gotocph.com/2019/pages/experience-level)
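As a small, contrived illustration of the coverage trap described above: a test that executes every line but asserts nothing satisfies the coverage metric while catching no bugs. Everything here is invented for illustration.

```python
# Illustration of "you get what you measure": the first test executes every
# line of the function (100% statement coverage) yet asserts nothing, so the
# obvious bug is never caught. The example is contrived on purpose.
def apply_discount(price: float, percent: float) -> float:
    return price + price * (percent / 100)   # bug: should subtract the discount

def test_apply_discount_for_coverage_only():
    apply_discount(100.0, 25.0)              # coverage metric: satisfied
                                             # quality: nothing was checked

def test_apply_discount_properly():
    assert apply_discount(100.0, 25.0) == 75.0   # fails until the bug is fixed
```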

SESSION

Millisecond Full Stack Acceptance Tests

Are your full stack acceptance tests slow, non-deterministic and hard to maintain? You're not alone. Imagine running hundreds of them in a few seconds, giving the same result every time. How do you think a feedback loop that fast would affect your team's productivity? In this talk you will see what this workflow looks like. You will learn about the underlying principles and techniques for millisecond full stack acceptance tests. This is primarily a talk for programmers, as the solution to the problem requires refactoring of the system under test as well as the tests. I will demonstrate that the basic assumptions of the Test Pyramid model are wrong, suggest a more useful taxonomy of tests, and show how to partition testing efforts.

**What will the audience learn from this talk?**
* How to identify what makes tests slow
* Design patterns for decoupling application code and test code
* How to run the same acceptance tests against different layers of the app

**Does it feature code examples and/or live coding?**
Yes, about 10-20% of the presentation is code

**Prerequisite attendee experience level:**
[Level 300](https://gotoams.nl/2019/pages/experience-level)
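One common way to achieve this (not necessarily the speaker's exact approach) is to decouple the acceptance test from the delivery mechanism behind a small driver interface, so the same test can run in-process in milliseconds or through the full stack. A hedged Python sketch with invented names:

```python
# Sketch of decoupling an acceptance test from the delivery mechanism.
# The same test body can run against an in-memory driver (milliseconds) or
# a driver that goes through the real HTTP stack. All names are invented.
from typing import Protocol

class SignupDriver(Protocol):
    def register(self, email: str) -> None: ...
    def is_registered(self, email: str) -> bool: ...

class InMemorySignupDriver:
    """Talks to the domain model directly - no HTTP, no database."""
    def __init__(self):
        self._users = set()

    def register(self, email: str) -> None:
        self._users.add(email)

    def is_registered(self, email: str) -> bool:
        return email in self._users

# A second driver could implement the same Protocol over HTTP against a
# deployed instance; the acceptance test below would not change.

def acceptance_test_registration(driver: SignupDriver) -> None:
    driver.register("ada@example.test")
    assert driver.is_registered("ada@example.test")

if __name__ == "__main__":
    acceptance_test_registration(InMemorySignupDriver())
    print("acceptance test passed against the in-memory driver")
```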

SESSION

Disrupting QA – Emerging Trends in Code Quality Automation

Historically, static analysis has been widely used to identify defined sets of security issues via overnight runs across entire code bases. A recent trend has been the evolution of static analysis methods and tools to:
1) become much more scalable, and
2) leverage machine learning to substantially improve code quality.

These improvements allow a much tighter integration into modern agile development processes. At the same time, the scope of these tools has broadened from purely security-relevant bugs to performance and reliability issues like memory leaks and data races. Google and Facebook have pioneered a new model of static analysis deployment that involves improving developer productivity via broad deployment of extremely scalable static analysis (billions of lines of code / thousands of commits per day). This talk will review these recent developments as well as the history of static analysis in commercial software and its evolution in the academic world. It will provide an overview of the current commercial landscape, and conclude with best practices for organizations looking to bring static analysis into their development environment.

**Who should attend this talk:** Developers, engineering managers and executives

**Academic level:** Introductory

**Key takeaway:** Why static analysis is useful, an overview of commercial tools in the market, and best practices for incorporating static analysis into a development environment.

SESSION

In Search of the Perfect Cloud Native Developer Experience

With a productive service-based development workflow, individual teams can build and ship applications independently from each other. But with a rapidly evolving cloud native landscape, creating an effective developer workflow using a platform based on something like Kubernetes can be challenging. All of us are creating software to support the delivery of value to our customers and to the business, and therefore the developer experience from idea generation to running (and observing) in production must be fast, reliable, and provide good feedback. During this talk Daniel will share with you several lessons learned from real world consulting experience working with teams deploying to Kubernetes.

**What will the audience learn from this talk?**
* Why an efficient development workflow is so important
* A series of questions to ask in order to understand if you should attempt to build a PaaS on top of Kubernetes (everyone needs a platform, but how much should be built versus integrated versus bought?)
* A brief overview of developer experience tooling for Kubernetes, and how this domain could evolve in the future
* The role of Kubernetes, Envoy, Prometheus, and other popular cloud-native tools in your workflow
* Key considerations in implementing a cloud-native workflow

**Does it feature code examples and/or live coding?**
Yes, several code/config examples.

**Prerequisite attendee experience level:**
[Level 200](https://gotoams.nl/2019/pages/experience-level)

SESSION

What We Know We Don't Know: Introduction to Empirical Software Engineering

There are many things in software we believe are true but very little we know. Maybe testing reduces bugs, or maybe it's just superstition. If we want to improve our craft, we need a way to distinguish fact from fallacy. We need to look for evidence, placing our trust in hard data. Empirical Software Engineering is the study of what actually works in programming. Instead of trusting our instincts we collect data, run studies, and peer-review our results. This talk is all about how we empirically find the facts in software and some of the challenges we face, with a particular focus on software defects and productivity.

**Who should attend this talk:** People interested in improving their practices. People interested in how we determine "best practices". Architects and managers who want to know the processes that produce the best software.

**Academic level:** Beginner for academics. Intermediate for industry.

**What is the take away in this talk:** Why we need to empirically evaluate claims. How we do it. What we know, with hard evidence, improves our software. How to explore the research yourself.

SESSION

All The World’s A Staging Server

I have sad news - staging is a lie and will never be identical to production, because production is unknowable. Trying to replicate it is often prohibitively expensive. But I also have good news - production can contain multitudes, including features you aren’t ready to turn on or activate yet. You can hide in the dark and do integration testing at the same time. It's simplistic to say that you should just kill the idea of a staging server and do everything in production. There are obviously problems with that - you need to do unit testing, you need to avoid things that will take down a service, you may need to do essential cutovers. But it's worth examining what benefit you're getting from staging and whether you could re-allocate that effort. Join me for an exploration of the ways that you might be able to kill staging and perform better.

* What is the actual value of a staging environment?
* What are some questions to ask about why we have staging?
* How can I re-engineer releases to save costs?

```
Sad news
  Staging is a lie
  Green is expensive
  Production is unknowable
Good news
  Production can contain multitudes
  You can hide in the dark
  Integration testing takes many forms
But what about?
  Unit testing
  Bad ideas
  Essential cutovers?
Conclusion
  Launch darkly
  Branch by abstraction
  Test in Production
```

**Who should attend this talk:** People who design software architecture, identify as devops, or are trying to increase the release cadence of their product.

**Academic level:** There's nothing an introductory person wouldn't understand, but it will be more relevant to someone with a couple years experience in the field.

**What is the take away in this talk:** Think about what value a staging server has to your organization, and how you want to maximize that value while increasing release cadence.
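As a toy illustration of two of the techniques the outline names (dark launching behind a flag, with the old and new code paths behind one abstraction), here is a hedged Python sketch; the function names, the flag and the pricing logic are all invented.

```python
# Toy sketch of "hiding in the dark" in production: the new implementation
# runs behind an abstraction and a flag, its result is compared and logged,
# but only the old result is returned to users. All names are invented.
import logging

log = logging.getLogger("dark_launch")

def old_price(basket: list[float]) -> float:
    return sum(basket)

def new_price(basket: list[float]) -> float:
    return round(sum(basket), 2)       # candidate replacement, not yet trusted

DARK_LAUNCH_NEW_PRICING = True         # flag flipped per environment / cohort

def price(basket: list[float]) -> float:
    result = old_price(basket)         # users always get the proven path
    if DARK_LAUNCH_NEW_PRICING:
        candidate = new_price(basket)  # new path runs in the dark
        if candidate != result:
            log.warning("pricing mismatch: old=%s new=%s", result, candidate)
    return result
```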

SESSION

Making Mutants Work for You

Mutation testing is a once obscure development technique that dates back to the 1970s. It deliberately introduces bugs into your code, then sees if your tests can find them. Thanks to the open source tool pitest, mutation testing has recently become much more widely used in the Java community. When people talk about mutation testing they often talk about ">100% code coverage", but is this what it is really about?

**What will the audience learn from this talk?**
The audience will learn what mutation testing is and how to use it effectively. Most importantly, they'll learn what it is actually useful for, which is different from what many people expect.

**Does it feature code examples and/or live coding?**
There will be some code examples and a live demo.

**Prerequisite attendee experience level:**
Level [200](https://gotocph.com/2019/pages/experience-level)
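pitest automates this for Java; the sketch below only illustrates the concept by hand in Python: a "mutant" with one small change survives a weak test and is killed by a stronger one.

```python
# Hand-rolled illustration of the mutation-testing idea (pitest automates this
# for Java). A "mutant" is the same function with one small change; a good
# test suite should fail ("kill the mutant") when run against it.
def can_vote(age: int) -> bool:
    return age >= 18

def can_vote_mutant(age: int) -> bool:
    return age > 18        # mutation: >= replaced with >

def weak_test(fn) -> bool:
    # Passes for the original AND the mutant, so the mutant survives.
    return fn(30) is True

def strong_test(fn) -> bool:
    # The boundary check kills the mutant.
    return fn(30) is True and fn(18) is True

for name, test in [("weak", weak_test), ("strong", strong_test)]:
    survived = test(can_vote_mutant)
    print(f"{name} test: mutant {'SURVIVED' if survived else 'killed'}")
```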

SESSION

You're Testing WHAT?

Gojko presents five universal rules for test automation that will help you bring continuous integration and testing to the darkest corners of your system. Learn how to wrestle large test suites into something easy to understand, maintain and evolve, while at the same time increasing the value of your automated tests. Discover how to bring aspects that people don't even consider automating, such as layout checks and even video, into an automated, continuously integrated process.

**In this talk, you'll learn:**
* What are the core things you should know in test automation
* How to handle large test suites
* How to have an "out of the box" mindset when it comes to automation