Data Science
Quantum Computing in Practice
Is quantum computing just a theory, or is it actually applied in practice? Join Murray Thom in an interview with Preben Thorø to discover the real-life implementations of this technology. You’ll learn how you, as a developer, can leverage D-Wave’s platform, which is open to all researchers working to solve health problems, and how quantum computing has evolved from a theoretical discipline into an applied science that helps researchers and developers solve complex problems up to 3 million times faster while consuming far less energy.

What is Data Science and Where is it Heading?
What does a data scientist do in their day-to-day job, and how will this impact our lives in the future? Join a conversation with Em Grasmeder, Code Witch at Thoughtworks, and Evelina Gabasova, Principal Research Data Scientist at The Alan Turing Institute, about how data science is currently shaping our lives and what its potential is for the future.

The Importance of Reinforcement Learning with Phil Winder
We recently sat down for a short conversation with Phil Winder, multidisciplinary software engineer and data scientist, about his newly released book, Reinforcement Learning.

Is Machine Learning a Black Box?
Data science has become a bigger part of software engineering. Where does the path lead? What have the changes been over the last couple of years and where are we heading? In this unscripted episode, Dean Wampler takes you on a journey through data science.

What Does It Take To Be a Data Scientist?
Data science is so much more than collecting, sorting and analyzing data. What does it take to be a data scientist, and what does a day in the life of a data scientist look like? Ekaterina Sirazitdinova, Prayson Daniel and Nicholai Stålung will give you an insight into this and more.

How AutoML & Low Code Empowers Data Scientists
Over the past decade, AutoML has revolutionized the world of data science, raising its level of abstraction several layers. This powerful technology has paved the way for a new era of democratization, empowering experts from all fields to harness the power of data through the concept of the citizen data scientist. Moez Ali, creator of PyCaret, and Linda Stougaard Nielsen, director of data science at Ava Women, discuss two sides of this discipline and its future.
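To make the low-code idea concrete, here is a minimal sketch of the kind of AutoML workflow PyCaret supports; the synthetic dataset and column names are illustrative assumptions, not material from the conversation.

```python
import pandas as pd
from sklearn.datasets import make_classification
# PyCaret's functional API; pip install pycaret
from pycaret.classification import setup, compare_models, finalize_model

# Small synthetic dataset so the sketch is self-contained (illustrative only).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(8)])
df["target"] = y

# Declare the data and target once; PyCaret infers types and preprocessing.
setup(data=df, target="target", session_id=42)

# Train and cross-validate a library of candidate models, keeping the best.
best = compare_models()

# Refit the winning pipeline on the full dataset so it is ready to use.
model = finalize_model(best)
```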

Software Technologies that Stand the Test of Time
What software technologies have stood the test of time or have had a massive influence on existing systems? Which do you love or hate? We put these questions to the GOTO Book Club authors and interviewers who made up the lineup for the second season. Find out what Nicki Watt, CTO/CEO at OpenCredo, Eberhard Wolff, fellow at innoQ, Venkat Subramaniam, founder of Agile Developer, Inc., Liz Rice, chief open source officer at Isovalent, Rebecca Nugent, professor in statistics & data science, Phil Winder, CEO of Winder Research, Hanna Prinz, DevOps & software engineer, and Eoin Woods, CTO at Endava, had to say. The conversation was moderated by Rebecca Parsons, CTO at ThoughtWorks.

How to Leverage Reinforcement Learning
Find out what reinforcement learning is, how to leverage its unique features and how to start using it with Phil Winder, author of "Reinforcement Learning" and CEO of Winder Research, and Rebecca Nugent, Stephen E. and Joyce Fienberg Professor of Statistics & Data Science at Carnegie Mellon University.

Keep it Clean: Why Bad Data Ruins Projects and How to Fix it
The Internet is full of examples of how to train models. But the reality is that industrial projects spend the majority of their time working with data, and the largest improvements in performance can often be found by improving the underlying data. Bad data is estimated to cost the US economy $3.1 trillion, and approximately 27% of data in the world's top companies is flawed. Bad data also contributes to the failure of many Data Science projects. Who can forget Tay.ai, Microsoft's Twitter bot that learned to be genocidal when users' tweets were not cleaned? This presentation will discuss the circumstances in which bad data can affect your project, along with some high-profile case studies. We will then spend the remaining time going through some of the techniques you will need to fix that bad data. The talk is aimed at those with intermediate-level Data Science experience.
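As a taste of what such cleaning work looks like in practice, here is a minimal, hedged sketch in pandas; the toy table and the specific fixes (deduplication, capping implausible values, dropping missing rows) are illustrative assumptions, not techniques taken from the talk.

```python
import numpy as np
import pandas as pd

# Illustrative only: a toy table with typical "bad data" problems
# (duplicate rows, a missing value, an impossible age), not data from the talk.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "age":         [34, 34, np.nan, 29, 240],
    "spend":       [120.0, 120.0, 85.5, 47.0, 310.0],
})

clean = (
    df.drop_duplicates()                                # remove exact duplicate rows
      .assign(age=lambda d: d["age"].clip(upper=120))   # cap implausible values
      .dropna(subset=["age"])                           # or impute, depending on the column
)
print(clean)
```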

The Meaning of (Artificial) Life
The Hitchhiker's Guide says the meaning of life is 42. Considering that the field of Data Science is going through a period of exponential growth, it too could soon find that the meaning of an artificial life is also 42. But if you are not involved on a day-to-day basis, the expansion can seem bewildering. The story of how disparate disciplines have combined to produce Data Science is fascinating. In this talk, we will walk through a journey of scientific discovery, following how, from humble beginnings, a multitude of sciences (and a surprising number of hacks) converged into the incredible advancements that you see in the media today. With these building blocks, we will be able to succinctly describe what these disciplines are and how they relate. The result will be the decomposition of a "rockstar" data science application; you will see that it is not so complicated after all. But the interesting result is that this generates a philosophical and political minefield: we can decompose the application and clearly see how it is built, yet it mimics or surpasses human capabilities. Are these human qualities? Is a more efficient or productive algorithm better than a human? Can we call them "intelligent"? Attendees will gain a fundamental understanding of the field of data science. You will leave understanding exactly what machine learning and deep learning are and how they differ. You will be able to describe how data mining can help your business run analytics tasks to improve efficiencies. You will be able to explain to your children why big data techniques were invented to solve a specific problem. This will suit anyone interested in the history of data science and also serve as a broad introduction to the rest of the day's in-depth talks. So, is the meaning of life 42? Possibly. But maybe all we need is a data science algorithm to ask a better question.

Cloud-Native Data Science: Turning Data-Oriented Business Problems Into Scalable Solutions
The proliferation of Data Science is largely due to ubiquitous data, increasing computational power and industry acceptance that solutions are an asset. Data Science applications are no longer a simple dataset on a single laptop. In a recent project, we helped develop a novel cloud-native machine learning service. It is unique in that problems are packaged as containers and submitted to the cloud for processing, which enables users to distribute and scale their models easily. This talk will discuss the Data Science explosion and how it has altered the way engineers work. It will find that the boundaries between Data Science and Software Engineering are becoming ever more blurred, and that techniques born out of Software Engineering can drastically improve many aspects of Data Science. Through a demonstration of business-focused examples, we will follow the process of turning a business problem into a scalable solution, making heavy use of containers to smooth the development and productisation of a model. This talk will be enjoyed by all, and technical details will also be available for those who are interested.
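To illustrate the container-job pattern the talk describes (without presuming the details of the service that was actually built), here is a minimal sketch: the model reads its input from a mounted path, trains, and writes its results back, so the same image can be submitted to any cloud scheduler. The file paths, the label column and the choice of model are illustrative assumptions.

```python
"""A minimal sketch of the container-job pattern: read input from a mounted
path, train, write results back, so the same image can run on any scheduler.
Paths, the label column and the model are illustrative assumptions, not
details of the service described in the talk."""
import json
import pathlib

import pandas as pd
from sklearn.linear_model import LogisticRegression

INPUT = pathlib.Path("/data/input.csv")      # mounted into the container at runtime
OUTPUT = pathlib.Path("/data/metrics.json")  # written back for the submitting user

def main() -> None:
    df = pd.read_csv(INPUT)
    X, y = df.drop(columns=["label"]), df["label"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    OUTPUT.write_text(json.dumps({"train_accuracy": model.score(X, y)}))

if __name__ == "__main__":
    main()
```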

One does not simply put Machine Learning into Production
When deciding to infuse existing products with machine-learning smarts, or building ML-first products, there are multiple challenges to be aware of. First, you and your organization need to understand important dimensions -- accuracy, cost, maintainability, interpretability -- and trade-offs between them. Second, several technical challenges present themselves when deploying data science experiments into production environments. I will share some lessons learned while building ML products serving billions of predictions to live customers -- and hopefully provide some take-aways for anyone in the audience looking to indeed put machine learning into production.

Life and Death Decisions: Testing Data Science
We live in a world where decisions are being made by software. From mortgage applications to driverless vehicles, the results can be life-changing. But the benefits of automation are clear: if businesses use data science to automate decisions they will become more productive and more profitable. So the question becomes: how can we be sure that these algorithms make the best decisions? How can we prove that an autonomous vehicle will make the right decision when life depends on it? How can we prove that data science works? In this presentation, you will discover how to test the models produced by the application of Data Science. We will discuss the common problems that are encountered, and I will show you how to overcome them. You will learn how to evaluate models both quantitatively and visually, and I will explain the differences between technical measures of performance and measures that are better suited to business use. I will provide context by showing both disastrous and hilarious examples from industry. This talk is designed to be both entertaining and informative. It is primarily aimed at people with some exposure to data science, due to the terminology used, but both beginners and those interested in technology will enjoy the talk because the content is thoroughly explained and fun!
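As a small, hedged illustration of the quantitative side of this, the sketch below evaluates a toy classifier with both a technical measure (accuracy) and a business-style measure (an expected cost of errors); the dataset, the model and the cost figures are invented for the example and are not taken from the talk.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Toy imbalanced dataset and model, purely for illustration.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy:", accuracy_score(y_te, pred))   # technical measure of performance
print("cost of errors:", fp * 10 + fn * 500)     # business-style measure: a miss costs far more
```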

Inextricably Linked: Reproducibility and Productivity in Data Science and AI
Because it is more complex and has far more moving parts, Data Science & AI is where Software Development was in 1999: people are emailing and Slacking notebooks to each other due to a lack of appropriate tooling. There are few CI/CD pipelines, model health monitoring is scarce, a lot that could be automated is still manual, and teams are siloed. This causes problems both for productivity, because it is hard to collaborate, and for reproducibility, which impacts governance and compliance. In this talk, Mark shares his team’s research comparing the evolution of Software Development & DevOps with that of Data Science & AI. He then presents a proposal for an architecture and a set of open-source tools to solve both the collaboration and the governance problems in Data Science & AI. With live demos!
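This is not the architecture or tooling proposed in the talk, just a minimal sketch of one ingredient of reproducibility: recording a fingerprint of the training data and the parameters alongside every run so a model can later be traced and re-trained. The file names and parameter values are assumptions for illustration.

```python
import hashlib
import json
import pathlib

# Hypothetical parameters and training file, purely for illustration.
params = {"model": "logistic_regression", "C": 1.0, "seed": 42}
data_path = pathlib.Path("train.csv")

record = {
    "params": params,
    # Fingerprint of the exact bytes the model was trained on.
    "data_sha256": hashlib.sha256(data_path.read_bytes()).hexdigest(),
}
pathlib.Path("run_manifest.json").write_text(json.dumps(record, indent=2))
```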

Deliver Results, Not Just Releases: Control & Observability in CD
How do companies like Netflix, LinkedIn, and Booking.com crush it year after year? Yes, they release early and often. But they also build control and observability into their CD pipeline to turn releases into results. Progressive delivery and the statistical observation of real users (sometimes known as “shift right testing” or “feature experimentation”) are essential CD practices. They free teams to move fast, control risk and focus engineering cycles on work that delivers results, not just releases. Learn implementation strategies and best practices for adding control and observability to your CD pipeline:
- Where should you implement progressive delivery controls: front-end or back-end?
- Why does balancing centralization/consistency and local team autonomy in your implementation increase the odds of achieving results you can trust and observations your teams will act upon?
- What two pieces of data make it possible to attribute system and user behavior changes to any deployment?
- How can “guardrail” metrics automate observability of unintended consequences of deployments, without adding overhead to teams making changes or tasking your exploratory testers and data scientists to go looking for them? (see the sketch below)
This talk is from our partner.
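A hedged sketch of the guardrail idea in the last point: compare an unintended-consequence metric (here, an error rate) between users on the old and new code paths and flag the rollout when the difference is unlikely to be noise. The two-proportion z-test, the threshold and the numbers are illustrative choices, not the speaker's prescribed method.

```python
from statistics import NormalDist

def guardrail_breached(errors_ctrl, n_ctrl, errors_new, n_new, alpha=0.05):
    """Flag the rollout if the new version's error rate looks genuinely worse."""
    p_ctrl, p_new = errors_ctrl / n_ctrl, errors_new / n_new
    p_pool = (errors_ctrl + errors_new) / (n_ctrl + n_new)
    se = (p_pool * (1 - p_pool) * (1 / n_ctrl + 1 / n_new)) ** 0.5
    z = (p_new - p_ctrl) / se
    p_value = 1 - NormalDist().cdf(z)    # one-sided: is the new version worse?
    return p_new > p_ctrl and p_value < alpha

# Example: 0.4% errors on the old path vs 0.8% on the new path.
print(guardrail_breached(errors_ctrl=40, n_ctrl=10_000, errors_new=80, n_new=10_000))
```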

Robot DJs: Better Spotify Playlists through Music Theory and Discrete Optimization
I am a Spotify addict, former DJ, amateur musician, and professional software engineer. I take special pride in making expertly-curated playlists for myself and friends. It takes a lot of time and energy to set the right mood and tone, and even more time and energy to transition smoothly from one song to another in a way that makes sense and is pleasing to the ear. Through many years of practice, I've observed that making a good playlist is a lot like solving a puzzle; and just like puzzles, there are rules and patterns to follow if you want to produce a cohesive output. In this talk, we'll explore the notion of teaching these rules to a computer, building a planning & optimization algorithm that follows these rules, and letting it loose on a set of tracks to generate delightful playlists on Spotify. We'll also cover the basics of music theory and why certain songs sound better together. There will likely also be fast talking, live keyboard playing, and some unrehearsed demos against a random sample of Spotify playlists submitted by the audience. This talk is from our partner.
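Not the speaker's algorithm, just a minimal sketch of the underlying idea: score each song-to-song transition (related keys, similar tempo) and greedily pick the next track. The track data, the key-compatibility rule and the cost weights are made-up assumptions.

```python
# Made-up example tracks; key is a pitch class 0-11, bpm is tempo.
TRACKS = [
    {"name": "A", "key": 0, "bpm": 120},
    {"name": "B", "key": 7, "bpm": 122},   # a fifth away from A: a "compatible" key
    {"name": "C", "key": 1, "bpm": 90},
    {"name": "D", "key": 0, "bpm": 124},
]

def transition_cost(a, b):
    """Lower is smoother: penalize distant keys and large tempo jumps."""
    key_dist = min((a["key"] - b["key"]) % 12, (b["key"] - a["key"]) % 12)
    key_penalty = 0 if key_dist in (0, 5) else key_dist   # same key, or a fourth/fifth apart
    return key_penalty + abs(a["bpm"] - b["bpm"]) / 10

def greedy_playlist(tracks):
    """Start from the first track and always pick the cheapest next transition."""
    ordered, remaining = [tracks[0]], list(tracks[1:])
    while remaining:
        nxt = min(remaining, key=lambda t: transition_cost(ordered[-1], t))
        ordered.append(nxt)
        remaining.remove(nxt)
    return [t["name"] for t in ordered]

print(greedy_playlist(TRACKS))   # e.g. ['A', 'B', 'D', 'C']
```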
