“Cloud native”? Check! Apply the same principles at the edge? Hmmm! How do I operate apps across 1000s of locations, often hidden behind layers of NAT? How do I run AI apps on nodes that are too small to fit the AI model? How do I make it all operationally simple? Let’s discuss and demo!
We’re all familiar with “cloud native,” but once we start operating applications at the edge, we have to adopt a new set of principles and evolve our cloud-native paradigms. We deploy apps at the edge to achieve lower latency or higher performance, to comply with data-sovereignty regulations, to reduce transit costs, or to perform near-real-time decision making on local data sources.
Developing and operating edge apps requires us to answer questions like:
How do I operate apps across thousands of locations that are often hidden behind layers of NAT and have spotty cloud connectivity?
How do I run computation-heavy workloads, such as AI apps, on a set of nodes where no single node has sufficient CPU and memory to run the entire model?
How do I deal with a heterogeneous environment of x86- and ARM-based devices?
Which additional tools do I need to ensure compliance with data-privacy rules, run AI models that just don’t fit on a single compute element, or perform federated learning efficiently?
This session will address those questions and introduce the “edge native” paradigm, providing an understanding of how to design and operate edge apps. A set of demos and example use cases complements the conceptual discussion, making “edge native” a reality for the audience.