Akka: enabling the cloud to edge continuum

At Akka, our mission is to reduce cloud complexity, empower developers to achieve more with less, and accelerate progress with predictability by climbing the ladder of abstraction.

In October of last year, we announced Akka 22.10, shared our ambitious plans for the future of Akka and our Edge roadmap for a unified platform for the Cloud-to-Edge Continuum, and introduced the first module towards this goal.

Today, we are thrilled to announce that, with Akka 23.05, we have delivered the next step for Akka at the Edge: Akka Distributed Cluster. This feature set lets you accelerate data delivery to your users, maintain availability even in the event of a cloud provider outage, cut costs by minimizing data storage expenses and reducing server data traffic, and save developer time because these capabilities are built into Akka, the most powerful tool for distributed computing. As a result, you can expand your application across multiple data centers, whether multi-region, multi-cloud, hybrid cloud, or on-premises, while maintaining performance, consistency, and reliability.

Akka Distributed Cluster comprises the following:

  • Brokerless Pub/Sub: Low-latency, high-performance Publish-Subscribe over gRPC with guaranteed delivery, eliminating the need to run and manage a message broker and operating efficiently in memory-constrained environments, from the central Cloud to the Far Edge. This feature was introduced in Akka 22.10; to learn more, read the introduction article and the benchmark comparison with Kafka. A minimal producer-side sketch follows this list.
  • Brokerless Pub/Sub Event Filtering: Building on Brokerless Pub/Sub, this enhancement lets you define dynamic event filters on the producer or consumer side so that unnecessary data is never sent or processed. Only the required data is transferred, and only when needed, which reduces network costs and frees up hardware resources. This is particularly crucial at the Edge.
  • Active-Active Event Sourcing: Distributed event journal replication that runs over gRPC, leveraging CRDT techniques and the new Brokerless Pub/Sub for ultra-efficient, low-latency replication. It guarantees strong eventual consistency of event-sourced actors/entities across different data centers or Points of Presence at the Far Edge. A sketch of the entity-side API follows this list.
  • Durable State Queries: Search durable state on multiple fields without additional read models, reducing the cost and time spent storing and keeping duplicate data in sync. A sketch of the durable state model follows this list.
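
To make the first two items concrete, here is a minimal producer-side sketch, assuming the Akka Projection gRPC module (akka-projection-grpc) and Scala; the "ShoppingCart" entity type, the "cart-events" stream id, and the port are illustrative assumptions, not part of the announcement. Consumers subscribe by running an ordinary Akka Projection over the matching gRPC read journal, and the dynamic filters from the second item are applied so that excluded events never leave the producer.

```scala
import scala.concurrent.Future

import akka.actor.typed.ActorSystem
import akka.grpc.scaladsl.ServiceHandler
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.projection.grpc.producer.EventProducerSettings
import akka.projection.grpc.producer.scaladsl.EventProducer
import akka.projection.grpc.producer.scaladsl.EventProducer.{EventProducerSource, Transformation}

object PublishCartEvents {

  // Expose the event journal of the "ShoppingCart" entities as a gRPC stream
  // that consumers subscribe to directly - no message broker in between.
  def init(implicit system: ActorSystem[_]): Unit = {
    val source = EventProducerSource(
      "ShoppingCart",          // entity type whose journal is published (assumption)
      "cart-events",           // public stream id that consumers refer to (assumption)
      Transformation.identity, // publish the stored events as-is
      EventProducerSettings(system))

    // gRPC request handler for the event producer service
    val service: HttpRequest => Future[HttpResponse] =
      ServiceHandler.concatOrNotFound(EventProducer.grpcServiceHandler(source))

    // Bind it like any other Akka HTTP/gRPC endpoint (port is an assumption)
    Http().newServerAt("0.0.0.0", 8101).bind(service)
  }
}
```

For Active-Active Event Sourcing, the entity-side programming model is Replicated Event Sourcing. The sketch below is a minimal illustration using the existing shared-journal flavor of that API; the counter entity, replica ids, and query plugin id are assumptions, and in Akka Distributed Cluster the replication between replicas is instead carried over the gRPC Brokerless Pub/Sub transport, configured as described in the documentation. Because increments are commutative, replicas applying each other's events converge to the same total, which is the strong eventual consistency described above.

```scala
import akka.persistence.typed.{ReplicaId, ReplicationId}
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior, ReplicatedEventSourcing}

object ReplicatedCounter {
  sealed trait Command
  final case class Increment(amount: Int) extends Command

  final case class Incremented(amount: Int)
  final case class State(total: Int)

  // One replica per region or PoP; the ids and the query plugin id are assumptions.
  val AllReplicas: Set[ReplicaId] = Set(ReplicaId("us-east-1"), ReplicaId("eu-west-1"))

  def apply(entityId: String, replica: ReplicaId, queryPluginId: String) =
    ReplicatedEventSourcing.commonJournalConfig(
      ReplicationId("ReplicatedCounter", entityId, replica),
      AllReplicas,
      queryPluginId) { replicationContext =>
      EventSourcedBehavior[Command, Incremented, State](
        replicationContext.persistenceId,
        emptyState = State(0),
        commandHandler = (_, command) =>
          command match {
            case Increment(amount) => Effect.persist(Incremented(amount))
          },
        // Events from all replicas are applied here; addition is commutative,
        // so every replica converges to the same total.
        eventHandler = (state, event) => State(state.total + event.amount))
    }
}
```
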
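Durable State Queries operate on entities written with the durable state programming model, sketched below; the Vehicle entity and its fields are purely illustrative assumptions. The multi-field search itself is provided by the persistence plugin, for example by mapping selected state fields to indexed database columns as described in the Durable State Queries documentation, so no separate read model has to be built and kept in sync.

```scala
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.state.scaladsl.{DurableStateBehavior, Effect}

object Vehicle {
  // Illustrative commands and state; the entity and its fields are assumptions.
  sealed trait Command
  final case class UpdatePosition(lat: Double, lon: Double, updatedAt: Long) extends Command

  final case class State(lat: Double, lon: Double, updatedAt: Long)

  def apply(vehicleId: String): DurableStateBehavior[Command, State] =
    DurableStateBehavior[Command, State](
      persistenceId = PersistenceId("Vehicle", vehicleId),
      emptyState = State(0.0, 0.0, 0L),
      commandHandler = (_, command) =>
        command match {
          case UpdatePosition(lat, lon, at) =>
            // Each update overwrites the durable state record for this vehicle;
            // queryable fields can then be searched directly, without a read model.
            Effect.persist(State(lat, lon, at))
        })
}
```
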
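These sketches are starting points under the stated assumptions, not the definitive setup; the Akka Distributed Cluster documentation linked below includes complete, end-to-end samples for each feature.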

Akka Distributed Cluster maximizes Cloud capabilities, ensuring efficiency, correctness, and resilience. But this is just the beginning. For more information, reference docs, use cases, and sample code, see this page. Akka Distributed Cluster sets the stage for our next step in creating the ultimate platform for the Cloud-to-Edge Continuum: Akka Edge.

Enterprises are increasingly turning to Edge Computing to capitalize on new opportunities and use cases that let them process more data and serve more users faster and more reliably. However, Edge Computing introduces challenges, such as the cost and increased latency of transferring data to the Cloud for processing. To maintain profitability, enterprises must retain as much data as possible at the Edge, closer to users, while shifting processing to the Edge. Successfully achieving this yields numerous benefits, including real-time processing, reduced latency, faster responses, enhanced resilience and availability, and improved resource and cost efficiency.

Akka Edge aims to help you overcome these challenges without adding complexity or creating new issues, providing a seamless extension of an Akka-based application from the Cloud to the Far Edge and every point in between. In practical terms, this means replicating data between data centers, between data centers and Points of Presence (PoPs), and between different PoPs (enabling services to collaborate directly at the Edge in a Local-First manner).

With Akka Edge, we will realize our vision for the Cloud-to-Edge Continuum, embodying the following principles:

  1. Seamless integration—The choice between Cloud or Edge should not dictate design, development, or deployment decisions. Instead, services should be flexible to move within the continuum as needed, optimizing for locality, availability, and performance.
  2. Adaptive data availability—Data should always be accessible wherever and whenever required, but only for the needed duration.
  3. Co-location of data and processing—Data should always be physically co-located with processing and the end-user, ensuring ultra-low latency, high throughput, and resilience.
  4. Dynamic data and compute movement—Data and compute resources should adaptively move with the end-user.
  5. Unified programming model—A cohesive programming model and Developer Experience (DX) should be maintained, preserving semantics and guarantees throughout the continuum.

We are very enthusiastic about Akka's future and plan to launch Akka Edge later this year. In the meantime, we encourage you to try out Akka Distributed Cluster and share your feedback via Twitter or Forum.

Far Edge refers to the computing infrastructure and devices located at the outermost limits of a network, often in remote or harsh environments. These edge devices are typically responsible for collecting, processing, and transmitting data to centralized or cloud-based systems. In contrast, Near Edge devices are located closer to the center of a network, typically within an enterprise or data center environment. Near Edge devices are often more powerful and capable of handling complex computations than Far Edge devices.
