New Confluent Cloud for Apache Flink capabilities simplify real-time AI development


Confluent has introduced new Confluent Cloud for Apache Flink capabilities that simplify real-time AI application development. Flink Native Inference enables teams to run open-source AI models directly in Confluent Cloud, without managing separate model-serving infrastructure. Flink search provides a unified interface for retrieving real-time data across vector databases, while built-in ML functions bring AI-driven forecasting and anomaly detection into Flink SQL, putting advanced analytics within reach of any developer. These innovations help businesses enhance customer engagement and decision-making with real-time AI.

“Building real-time AI applications has long been too complex, requiring multiple tools and deep expertise,” said Shaun Clowes, Chief Product Officer, Confluent. “With our latest advancements, we remove these barriers—bringing AI-powered streaming intelligence within reach of any team, with built-in security and cost efficiency.”

The AI boom is here—92% of companies plan to increase AI investments in the next three years (McKinsey). However, building real-time AI apps remains challenging: fragmented workflows, multiple point tools, and inefficient data processing slow development and feed models stale context, a common cause of AI hallucinations.

Simplifying AI Adoption
“Confluent accelerates copilot adoption by providing teams with real-time organizational knowledge,” said Steffen Hoellinger, CEO, Airy. “Flink AI Model Inference simplified our tech stack, enabling seamless work with LLMs and vector databases for retrieval-augmented generation (RAG). As a result, our customers achieve greater efficiency and productivity.”

As the only serverless stream processing solution unifying real-time and batch workloads, Confluent Cloud for Apache Flink removes the need for separate processing solutions. Its new AI, ML, and analytics features enhance efficiency, streamline workflows, and reduce operational overhead. These capabilities are now available through an early access program for Confluent Cloud customers.

Key AI Innovations
Flink Native Inference: Run AI models in Confluent Cloud without extra infrastructure, ensuring greater security and cost savings.

Flink search: Retrieve data from multiple vector databases (MongoDB, Elasticsearch, Pinecone, etc.) through a single interface, eliminating complex ETL processes.

Built-in ML functions: Enable real-time forecasting and anomaly detection directly in Flink SQL, making AI more accessible to developers.
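For teams evaluating the early access program, the workflow the list above describes can be sketched in Flink SQL. This is an illustrative sketch only: the model name, table names, columns, and option values below are placeholders, and the exact statements and function signatures may differ in the early access release.

```sql
-- Register an AI model for native inference in Confluent Cloud.
-- (Illustrative: model name, columns, and option values are placeholders.)
CREATE MODEL support_classifier
  INPUT (ticket_text STRING)
  OUTPUT (label STRING)
  WITH (
    'task' = 'classification'   -- hypothetical option value
  );

-- Invoke the model on a stream with ML_PREDICT.
SELECT t.ticket_id, p.label
FROM support_tickets AS t,
     LATERAL TABLE(ML_PREDICT('support_classifier', t.ticket_text)) AS p;
```

The built-in ML functions follow the same pattern: the announced forecasting and anomaly-detection capabilities would be called directly from a Flink SQL query over a stream (for example, forecasting a per-minute order count), rather than exporting data to a separate ML system. Exact function names and signatures are defined by the early access program documentation.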

“Integrating real-time, contextualized, and trustworthy data into AI will give businesses a competitive edge,” said Stewart Bond, VP, Data Intelligence & Integration, IDC. “Flink’s ability to orchestrate inference and vector search for RAG within a cloud-native, fully managed platform will make real-time AI more accessible for the future of generative and agentic AI.”
