We ensure privacy by not sending the video stream itself but a transformed version of it: a small, locally trained piece of the model, uploaded if and when the network becomes available. The system uses the network only to share these model updates, which in turn facilitate better inferences on real-time data. Further, the SQS can run on existing hardware with little or no modification. The proposed system is therefore easily accessible and implementable, has very low deployment costs, consumes less power, places little load on the network and offers low latency. This is particularly helpful in retail or transportation scenarios where low application latency is required. Performing the computation on the device itself reduces latency, since the algorithms no longer depend on a round trip to a server (Zhang 2017), and it removes the cost of maintaining a server and ensuring its up-time. Because the system no longer has to stream video data to a cloud-based server, power consumption also drops substantially. Queue management strategies are helpful in many such scenarios (see Appendix A for use case requirements): they can make manufacturing in factories more efficient, reduce waiting times in public transportation and, beyond these scenarios, help allot workers efficiently in a factory environment.
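The upload-when-available behaviour described above can be sketched in a few lines. This is a minimal illustration only, not the authors' implementation; the `UpdateUploader` class, the update format and the `network_available` flag are all assumptions introduced here:

```python
import json


class UpdateUploader:
    """Buffers small locally trained model updates and uploads them
    only when the network becomes available (hypothetical sketch)."""

    def __init__(self):
        self.pending = []   # compact updates buffered on the device
        self.uploaded = []  # stands in for the remote server

    def add_update(self, weights):
        # Only the small model update is ever queued for upload,
        # never the raw video stream.
        self.pending.append(json.dumps(weights))

    def flush(self, network_available):
        # Upload opportunistically; return how many updates were sent.
        if not network_available:
            return 0
        sent = len(self.pending)
        self.uploaded.extend(self.pending)
        self.pending.clear()
        return sent
```

With no connectivity, `flush(False)` sends nothing and the update stays buffered; once `flush(True)` is called, the buffered updates move to the server side and the local buffer empties.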
Queue management ideas are equally applicable to transportation scenarios. Retail stores with many customers find it especially difficult to manage queues efficiently, which drastically increases wait times and creates a bad customer experience; in both retail and transportation, people lose time to unwanted waiting behind a billing counter because of improper or manual queue management. Queue management systems (QMSs) are the go-to solution for handling queues, allowing easy management and a streamlined experience while reducing wait times and increasing efficiency. The purpose of queuing is to obtain the intended service in a fair and organized manner. In daily life, queues arise whenever several people want the same service at the same time and there is insufficient capacity to serve all of them at once, for example people at ticket windows or vehicles waiting in line at a toll. A queue can be defined as an arrangement of people or vehicles waiting in line for their turn to receive a service or move forward in an activity, while queuing is the act of taking a place in such an arrangement (Lee 2019). Experimental results show that deploying an SQS on the edge is very promising. We validate our results on multiple edge devices, namely CPU, integrated graphics processing unit (iGPU), vision processing unit (VPU) and field-programmable gate arrays (FPGAs). The SQS demonstrates how to build a video AI solution on the edge, which makes it possible to run the queuing system's deep learning algorithms on pre-existing computers that a retail store, public transportation facility or factory may already possess, considerably reducing the cost of deploying such a system.
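One simple policy a smart queuing system can apply with the per-queue counts it detects is to direct each arriving customer to the least-loaded queue. The sketch below is illustrative only; the paper does not specify this policy, and the function name and input format are assumptions:

```python
def assign_to_shortest_queue(queue_lengths):
    """Return the index of the queue with the fewest people.

    queue_lengths: current per-counter person counts, e.g. as produced
    by a person-detection model run on each camera region.
    Ties go to the lowest index.
    """
    if not queue_lengths:
        raise ValueError("no queues available")
    return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])
```

For example, with counts `[4, 2, 5]` the customer is sent to queue index 1.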
In this paper, we focus on edge deployments to make the smart queuing system (SQS) accessible to all, including the ability to run it on inexpensive devices. Smart queue management can be key to the success of any sector. OpenVINO is a toolkit, built around convolutional neural networks, that fast-tracks the development of computer vision algorithms and deep learning neural networks into vision applications and enables their easy heterogeneous execution across hardware platforms. Recent increases in computational power and the development of specialized architectures have made it possible to perform machine learning, especially inference, on the edge.
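OpenVINO addresses the hardware targets evaluated above through device strings such as `"CPU"`, `"GPU"`, `"MYRIAD"` (VPU) and heterogeneous combinations like `"HETERO:FPGA,CPU"`. A deployment script can pick the best available target from a preference list; the sketch below shows that selection logic in plain Python, without importing OpenVINO itself. The preference order and the availability-probing interface are assumptions made for illustration:

```python
def pick_device(available, preference=("HETERO:FPGA,CPU", "MYRIAD", "GPU", "CPU")):
    """Return the first preferred inference target whose underlying
    devices the host reports as available; fall back to CPU.

    available: device names reported by the host
    (in OpenVINO this would come from the runtime's device query).
    """
    for device in preference:
        if device.startswith("HETERO"):
            # HETERO targets need every listed device to be present.
            parts = device.split(":", 1)[-1].split(",")
        else:
            parts = [device]
        if all(p in available for p in parts):
            return device
    return "CPU"
```

With only a CPU present this returns `"CPU"`; a host that also reports `"FPGA"` gets the heterogeneous FPGA-plus-CPU target, which lets one codebase cover all four device classes benchmarked in the paper.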