Apache
Expert Talk: Unlocking the Power of Real-Time Analytics
Adi Polak and Tim Berglund explore the concept of analytics and what it truly means in the software development world. They delve into the benefits of real-time analytics for product development, highlighting the trade-offs between compute and storage and the technical requirements for achieving effective real-time analytics. They also discuss applications of real-time analytics through the lens of Apache Pinot and StarTree Cloud, including use cases such as LinkedIn's popular "Who's Viewed Your Profile" feature, which is powered by Apache Pinot.
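At its core, the "Who's Viewed Your Profile" pattern is a low-latency aggregation query over a continuously ingested event table. The snippet below is a minimal sketch using Pinot's Java client; the broker address, the profileViews table, and its viewerId/viewedProfileId columns are illustrative assumptions, not the actual LinkedIn schema.

```java
import org.apache.pinot.client.Connection;
import org.apache.pinot.client.ConnectionFactory;
import org.apache.pinot.client.ResultSet;
import org.apache.pinot.client.ResultSetGroup;

public class ProfileViewsQuery {
    public static void main(String[] args) {
        // Hypothetical broker address; in a real deployment this points at your Pinot brokers.
        Connection connection = ConnectionFactory.fromHostList("localhost:8099");

        // Hypothetical table and columns: top recent viewers for one profile.
        ResultSetGroup results = connection.execute(
            "SELECT viewerId, COUNT(*) AS views "
          + "FROM profileViews "
          + "WHERE viewedProfileId = 'member-123' "
          + "GROUP BY viewerId ORDER BY views DESC LIMIT 10");

        ResultSet rs = results.getResultSet(0);
        for (int row = 0; row < rs.getRowCount(); row++) {
            System.out.println(rs.getString(row, 0) + " -> " + rs.getLong(row, 1));
        }
        connection.close();
    }
}
```

The point of the sketch is the shape of the workload: a filtered group-by over fresh events that must return in milliseconds, which is exactly the compute/storage trade-off the conversation circles around.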

Modern Stream Processing With Apache Flink
In our fast-moving world, it is becoming ever more important for companies to gain near real-time insights from their data in order to make faster decisions. These insights not only provide a competitive edge over rivals but also enable a company to create completely new services and products. Predictive user interfaces and online recommendations, among others, can only be implemented when large amounts of data can be processed in real-time. Apache Flink, one of the most advanced open source distributed stream processing platforms, allows you to extract business intelligence from your data in near real-time. With Apache Flink it is possible to process billions of messages with millisecond latency. Moreover, its expressive APIs allow you to quickly solve your problems, ranging from classical analytical workloads to distributed event-driven applications. In this talk, I will introduce Apache Flink and explain how it enables users to develop distributed applications and process analytical workloads alike. Starting with Flink's basic concepts of fault tolerance, statefulness and event-time-aware processing, we will take a look at the different APIs and what they allow us to do. The talk will conclude with a demonstration of how Flink's higher-level abstractions, such as FlinkCEP and StreamSQL, can be used for declarative stream processing.
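As a concrete taste of the DataStream API the abstract refers to, here is a minimal, self-contained word-count job, the "hello world" of Flink: it keys a socket text stream and maintains running counts as keyed state, which Flink checkpoints for fault tolerance. The host and port are placeholders, not anything from the talk.

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class StreamingWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)  // placeholder source; feed it with e.g. `nc -lk 9999`
           .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
               @Override
               public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                   // Split each line into words and emit (word, 1) pairs.
                   for (String word : line.toLowerCase().split("\\W+")) {
                       if (!word.isEmpty()) {
                           out.collect(Tuple2.of(word, 1));
                       }
                   }
               }
           })
           .keyBy(t -> t.f0)  // running counts live in Flink's fault-tolerant keyed state
           .sum(1)
           .print();

        env.execute("Streaming Word Count");
    }
}
```

The same pipeline could be expressed declaratively in Flink's SQL layer, which is the higher-level abstraction the talk closes with.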

Cloud Native Event Streaming with Kafka and Open Standards
Modern software applications rely heavily on analyzing large volumes of sequences of events, or 'event streams', that are continuously generated by different sources in real-time, in order to capture actionable insights and respond immediately to business challenges. As modern business applications need to ingest, collect, store and process terabytes of data arriving in the form of event streams, you need to choose an event streaming platform that is performant, interoperable, scalable, reliable, secure, and cost-efficient. The popularity and adoption of event streaming platforms such as Apache Kafka, Azure Event Hubs, AWS Kinesis, etc. are increasing. In this session, we'll take a closer look at the key characteristics of cloud native event streaming platforms:

* Multi-protocol: ability to ingest and consume event streams with a wide array of protocols such as AMQP, Kafka, HTTP, WebSockets, and so on (a Kafka producer example is sketched after this list)
* High-performance data streaming: low end-to-end latency, high throughput
* Dynamic scaling: scale event stream ingestion capacity dynamically
* Multi-tenanted PaaS with workload isolation
* Elasticity
* High availability and resiliency: replicas, availability zones
* Geo disaster recovery with data and state replication
* Security and compliance
* Stream governance: schema-driven formats, fine-grained resource governance
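To make the ingestion side concrete, below is a minimal sketch of a Kafka producer writing a single click event; the broker address, the click-events topic, and the JSON payload are illustrative assumptions rather than anything prescribed by the talk.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ClickEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Durability and throughput knobs relevant to the performance themes above.
        props.put("acks", "all");
        props.put("linger.ms", "5");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical topic and event; keying by user keeps a user's events ordered.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("click-events", "user-42", "{\"page\":\"/home\"}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("wrote to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```

In a multi-protocol platform of the kind described above, the same stream could also be populated over AMQP or HTTP; the Kafka wire protocol is just one of the supported ingestion paths.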
