Edge computing is the practice of performing computation on low-powered devices sitting at the edge of network connectivity. It brings compute as close as possible to the source of the data being generated and consumed, removing the need to send all data back to a cloud database system. As edge hardware continues to get more powerful and energy efficient, these devices can do increasingly more. In some cases, the extra compute power enables the devices themselves to filter out unnecessary data or run ML and AI workloads locally, provided they have a fast enough way to access and store their data.
To get the most rapid response from data, it must be consumed at the source or by something directly connected to it, particularly in cases where AI is mission critical. We are seeing many use cases in autonomous driving where the delay of sending data over a network to a powerful cloud compute engine and back again would simply be too slow. A self-driving application running on a car must be able to see and process all incoming data from multiple cameras, LIDAR, RADAR, and ultrasonic sensors, and quickly combine that with relevant map data and possible user input. This compute must occur on the car itself, as it is entirely possible for a car to be in a location with no cell data access, or where the necessary 5G networks have not yet been set up. Without 5G, the round-trip delay between cloud and car would be too great for the required immediate response. In healthcare, medical devices now have onboard processing power and can make relevant, important information immediately available to the device user by filtering and processing the data they generate.
The edge device must have some compute power and a supporting way of storing data, whether that is pure RAM or a connected physical storage medium. From there, a connection from the edge device to other edge devices or a cloud backend is ideal, though the connection does not have to be constant. This enables the device to use the data it generates locally while also sending important feedback and information up the chain for further analysis or human response.
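As a minimal sketch of this intermittent-connectivity pattern (the buffer class, capacity, and `send_upstream` callback here are hypothetical illustrations, not any particular product's API), a device might queue records locally and flush them upstream only when a link is available:

```python
from collections import deque

class EdgeBuffer:
    """Buffers records on-device; flushes upstream when a link is available."""

    def __init__(self, capacity=1000):
        # Bounded queue: the oldest records are dropped if the device
        # stays offline long enough to fill local storage.
        self.pending = deque(maxlen=capacity)

    def record(self, reading):
        """Store a reading locally, connected or not."""
        self.pending.append(reading)

    def flush(self, send_upstream, link_up):
        """Send buffered records while connected; keep the rest for later."""
        sent = 0
        while link_up() and self.pending:
            send_upstream(self.pending.popleft())
            sent += 1
        return sent

# Usage: record while offline, then flush once a connection appears.
buf = EdgeBuffer()
for t in range(5):
    buf.record({"t": t, "temp_c": 20 + t})

uplink = []                                  # stands in for a cloud endpoint
sent = buf.flush(uplink.append, lambda: True)
```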
The rest of the data is consumed and used for compute on the edge device itself, avoiding the transfer to the cloud. As data arrives, an edge database allows it to be stored and then filtered, so that actions can be taken based on it. From there, only the important information is sent up to the cloud data center, and the rest is discarded. It can be safely discarded because the necessary actions have already been taken, thanks to the edge compute capability of the device.
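The store-filter-forward flow above can be sketched as follows (the threshold, field names, and split logic are illustrative assumptions, not a real device protocol):

```python
def filter_readings(readings, threshold=75.0):
    """Split incoming readings: out-of-range values are forwarded
    upstream, the rest are handled (and then discarded) on-device."""
    upstream, local = [], []
    for r in readings:
        if r["value"] > threshold:   # hypothetical "important" condition
            upstream.append(r)
        else:
            local.append(r)
    return upstream, local

# Usage: only the readings that need human or cloud attention leave the device.
readings = [{"id": i, "value": v}
            for i, v in enumerate([70.1, 80.5, 72.3, 90.0])]
to_cloud, handled_locally = filter_readings(readings)
```

The design choice here is that the expensive step (network transfer) happens only for the small "important" subset, while the bulk of the data never leaves the device.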
Raima is a high-performance edge database system. It helps free up the processor and RAM in an edge compute device: Raima requires only minimal CPU processing power and as little as 200 KB of RAM to store data efficiently and compactly, while still allowing fast lookup and retrieval of that data. With the freed-up CPU and RAM, the application developer can dedicate part of the edge device's compute power to acting on the important data.
With the massive amounts of data now being gathered in the world, cloud compute alone will not be able to keep up. Edge compute is necessary to act as a first filter on this flood of incoming data, so that only the important information is sent upstream.