This applies broadly to external and internal clients: if your users demand speed (as they often do), then latency is your enemy. Bringing traditional infrastructure resources as close as possible to where applications are running and/or data is processed can help reduce latency.
“Companies can deploy applications at the edge to deliver low latency, a key consumer demand for applications,” says Shahed Mazumder, global director of telecom solutions at Aerospike. “Latency depends on the distance traveled over the network, the compute load, and the volume of database transactions/processing time required.”
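To make that claim concrete, here is a minimal back-of-the-envelope sketch of those three latency components. The function name and all numbers are illustrative assumptions, not figures from Aerospike or this article.

```python
# Rough latency model (illustrative only; values are assumed, not sourced).

def estimate_round_trip_ms(distance_km: float,
                           compute_ms: float,
                           db_transactions: int,
                           ms_per_transaction: float) -> float:
    """Approximate end-to-end latency: network propagation + compute + database work."""
    # Light travels through fiber at roughly 200,000 km/s, so each km of one-way
    # distance adds about 0.005 ms; doubled for the round trip.
    propagation_ms = (distance_km / 200_000) * 1000 * 2
    db_ms = db_transactions * ms_per_transaction
    return propagation_ms + compute_ms + db_ms

# A distant regional data center vs. a nearby edge site (assumed values).
print(estimate_round_trip_ms(distance_km=2000, compute_ms=2.0,
                             db_transactions=3, ms_per_transaction=0.5))  # ~23.5 ms
print(estimate_round_trip_ms(distance_km=50, compute_ms=2.0,
                             db_transactions=3, ms_per_transaction=0.5))  # ~4.0 ms
```

The point of the sketch is simply that the propagation term scales with distance, which is the term edge deployments shrink; the compute and database terms stay wherever the workload runs.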
Consumer expectations—how long it takes for a website to load or for a video to start playing—may be the most obvious example, but they easily carry over into everyday business environments. In fact, the shift to remote/hybrid workspaces highlights the need for high-performance applications.
“As remote work continues, video conferencing and productivity/collaboration tools are some of the most common and important edge tasks,” Mazumder notes.
When time is of the essence—especially time measured in tiny increments—edge computing can be very useful. Mazumder says that fraud detection, for example, will be a critical area for edge computing: “Fraud detection typically requires sub-milliseconds.”
Like other experts, he expects growing overlap between AI/ML and edge computing: “Object/video recognition scenarios benefit from low latency and are increasingly gaining traction in telecoms.”
Bottom line: Edge computing is useful in many scenarios where low latency is an important requirement. And it should rarely be considered as a standalone technology or strategy — it will increasingly go hand in hand with other core components of an IT leader’s portfolio, including hybrid/multi-cloud, AI/ML, and IoT.