Kafka
Kafka in Action
Kafka has been on developers’ radars for quite a while now. Viktor Gamov’s co-authored book “Kafka in Action” ensures that you have a list of recipes to dive into. Joined by Tim Berglund, VP of DevRel at StarTree, Viktor explores the fundamentals of Apache Kafka. Learn what Kafka can help you achieve, what Viktor’s favorite MCU film is, and what “Highway to Mars” by Beast In Black has to do with all of this.

Kafka at the heart of a large corporation
Learn how Nordea challenged their death-star architecture by moving to Kafka. While Kafka has been widely adopted across IT companies, it is perhaps less common in critical applications. Kafka offers a unique pub/sub pattern that simplifies application integration, and its modern fault-tolerant architecture ensures high availability. In Nordea Markets, the investigation and adoption of Kafka started three years ago, aiming to replace a home-grown integration application that was full of unmaintainable point-to-point integrations with format conversion and business logic. Learn what Nordea did to offer Kafka as a compliant, low-touch/no-touch internal service suitable for an enterprise with a very large and diverse IT landscape. The challenges Nordea faced and solved:

* Never lose a message
* Very large messages on Kafka
* Dealing with legacy
* Security and compliance
* Data discovery and governance
* Low-touch self-service
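The pub/sub pattern the abstract credits with simplifying integration can be sketched in a few lines of Python. This is an illustrative in-memory broker, not Kafka itself; the class name and topic are hypothetical, but the decoupling it shows (producers never know who their consumers are) is exactly what Kafka provides at scale:

```python
from collections import defaultdict

class InMemoryBroker:
    """Toy pub/sub broker: producers publish to topics, consumers subscribe."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber to the topic receives the message; the
        # producer is fully decoupled from its consumers.
        for callback in self.subscribers[topic]:
            callback(message)

broker = InMemoryBroker()
received = []

# Two independent consumers of the same (hypothetical) topic.
broker.subscribe("payments", received.append)
broker.subscribe("payments", lambda m: print("audit:", m))

broker.publish("payments", {"id": 1, "amount": 100})
```

Adding a third consumer requires no change to the producer, which is the point-to-point-integration problem Nordea was trying to escape.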

Processing Streaming Data with KSQL
Apache Kafka is a de facto standard streaming data processing platform, being widely deployed as a messaging system, and having a robust data integration framework (Kafka Connect) and stream processing API (Kafka Streams) to meet the needs that commonly attend real-time message processing. But there’s more! Kafka now offers KSQL, a declarative, SQL-like stream processing language that lets you define powerful stream-processing applications easily. What once took some moderately sophisticated Java code can now be done at the command line with a familiar and eminently approachable syntax. Come to this talk for an overview of KSQL with live coding on live streaming data. **What will the audience learn from this talk?** The audience will learn the very basics of Apache Kafka, why a stream processing framework is necessary, and how to perform common stream-processing operations with KSQL. **Does it feature code examples and/or live coding?** Yes, it does. **Prerequisite attendee experience level:** This is a [level 200](https://gotocph.com/2019/pages/experience-level) talk.
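To make the “SQL-like stream processing” idea concrete, here is a hedged sketch in plain Python of the kind of continuous filter a KSQL query expresses declaratively. The stream and field names are illustrative, not from the talk, and the generator stands in for an unbounded Kafka topic:

```python
# Roughly what a KSQL continuous query like
#   SELECT * FROM pageviews WHERE userid = 'alice';
# does: filter an unbounded event stream as records arrive.
# The stream contents and field names here are illustrative.
def pageviews():
    yield {"userid": "alice", "page": "/home"}
    yield {"userid": "bob", "page": "/pricing"}
    yield {"userid": "alice", "page": "/docs"}

def filter_stream(events, userid):
    # A generator, so events flow through one at a time,
    # mirroring record-at-a-time stream processing.
    for event in events:
        if event["userid"] == userid:
            yield event

alice_views = list(filter_stream(pageviews(), "alice"))
```

The appeal of KSQL is that the one-line SQL statement replaces all of the plumbing above, and keeps running against new records as they arrive.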

The Database Unbundled: Commit Logs in an Age of Microservices
When you examine the write path of nearly any kind of database, the first thing you find is a commit log: mutations enter the database and are stored as immutable events in a queue, only some hundreds of microseconds later to be organized into the various views that the data model demands. Those views can be quite handy (graphs, documents, triples, tables), but they are always derived interpretations of a stream of changes.

Zoom out to systems in the modern enterprise, and you find a suite of microservices, often built on top of a relational database, each reading from some centralized schema, only some thousands of microseconds later to be organized into the various views that the application data model demands. Those views can be quite handy, but they are always derived interpretations of a centralized database. Wait a minute. It seems like we are repeating ourselves.

Microservice architectures pose a robust challenge to the traditional centralized database we have come to understand. In this talk, we’ll explore the notion of unbundling that database and putting a distributed commit log at the center of our information architecture. As events impinge on our system, we store them in a durable, immutable log (happily provided by Apache Kafka), allowing each microservice to create a derived view of the data according to the needs of its clients.

Event-based integration avoids the now-well-known problems of RPC and database-based service integration, and allows the information architecture of the future to take advantage of the growing functionality of stream processing systems like Kafka, letting us build systems that more easily adapt to the changing needs of the enterprise and deliver the real-time results we are increasingly asked to provide.
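The core idea (an immutable log from which each consumer derives its own view) can be sketched in a few lines. This is an illustrative toy with hypothetical event shapes and view names, not code from the talk; in the architecture described, Kafka plays the role of the `log` list:

```python
# An append-only commit log. Each "service" folds the same event
# stream into its own derived view; neither view is the source of
# truth -- the log is. Event shapes here are illustrative.
log = []

def append(event):
    log.append(event)  # events are immutable once written

def balance_view(events):
    """One service's view: current account balances derived from the log."""
    balances = {}
    for e in events:
        balances[e["account"]] = balances.get(e["account"], 0) + e["amount"]
    return balances

def history_view(events):
    """Another service's view: per-account transaction history."""
    history = {}
    for e in events:
        history.setdefault(e["account"], []).append(e["amount"])
    return history

append({"account": "a", "amount": 100})
append({"account": "a", "amount": -30})
append({"account": "b", "amount": 50})

# Both views are derived interpretations of the same stream of changes,
# and a new service can build a third view by replaying the log.
balances = balance_view(log)
history = history_view(log)
```

Because views are recomputable from the log, adding a new microservice with a new data model never requires touching the services that already exist.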

Cloud Native Event Streaming with Kafka and Open Standards
Modern software applications heavily rely on analyzing large volumes of sequences of events, or ‘event streams’, that are continuously generated by different sources in real time to capture actionable insights and immediately respond to business challenges. As modern business applications need to ingest, collect, store, and process terabytes of data that arrive in the form of event streams, you need to choose an event streaming platform that is performant, interoperable, scalable, reliable, secure, and cost-efficient. The popularity and adoption of event streaming platforms such as Apache Kafka, Azure Event Hubs, AWS Kinesis, etc. are increasing. In this session, we’ll take a closer look at the key characteristics of cloud native event streaming platforms. These characteristics include:

* Multi-protocol: the ability to ingest and consume event streams with a wide array of protocols such as AMQP, Kafka, HTTP, WebSockets, and so on
* High-performance data streaming: low end-to-end latency, high throughput
* Dynamic scaling: scale event stream ingestion capacity dynamically
* Multi-tenanted PaaS with workload isolation
* Elasticity
* High availability and resiliency: replicas, availability zones
* Geo disaster recovery with data and state replication
* Security and compliance
* Stream governance: using schema-driven formats, fine-grained resource governance
