An Efficient Intrusion Detecting Method Using Multiple Sensors and Edge Computing
- Eun-Seok Lee1 , Yun-Im Lee1 , Young-Cheol Kim1 , Eun-Kyung Jo2 , and Byeong-Seok Shin2,*
Human-centric Computing and Information Sciences volume 13, Article number: 15 (2023)
https://doi.org/10.22967/HCIS.2023.13.015
Abstract
Critical facilities need a security system to protect against intruders. Most security systems use CCTV cameras to monitor the boundary areas of critical facilities. We propose an edge computing system that uses multiple sensors to detect intruders over a wider area. The proposed system uses light detection and ranging (LiDAR) and radar sensors to detect objects intruding into critical facilities. These sensors are widely used to scan distant objects in real time and can monitor the outside of a facility from the inside. They can monitor a wider area in more detail than existing video management systems that use video cameras. However, their output data are 3D point clouds, whose volume is much larger than that of video data. The larger the area to be monitored, the more sensors are required, which creates a bottleneck in the network of the monitoring system. We solve this problem through edge computing. The proposed edge computing service compresses point cloud data so that all data can be centralized on the main server without bottlenecks. This makes it possible to monitor a larger area in real time. It also distributes most of the computation to edge computers, allowing the existing GPU-based high-performance server to be replaced with a low-end cloud server.
Keywords
Edge Computing, Security System, Radar, LiDAR, Data Transformation, 3D Object Detection
Introduction
As industry continues to develop, the number of important facilities, such as military units, factories, and smart cities, is increasing. To ensure the security of these facilities, it is necessary to be vigilant against intruders. In the early days of security systems, security guards patrolled the boundaries of a facility. This method has advanced so that intruders can be monitored more efficiently by a small number of people in one place with a video management system (VMS) [1] using closed-circuit television (CCTV) cameras. Recently, algorithms such as deep learning [2, 3] and generative adversarial networks (GANs) [4] have made it possible to perform object recognition tasks, such as face recognition [5], more accurately from camera data. These methods enable the automatic detection of human intrusion from CCTV images.
However, because video cameras have a short monitoring distance, in most cases they must be installed outside the facility to monitor intruders. Therefore, the larger the security area, the more cameras a VMS needs.
Radar (radio detection and ranging) is a sensor system widely used to detect distant objects [6]; it detects objects surrounding airplanes and submarines using radio waves. A radar sensor can detect objects at distances of more than 100 m. It can also detect objects regardless of the weather or lighting.
However, radar sensors detect objects inaccurately due to high noise. This problem can be solved by using a light detection and ranging (LiDAR) sensor [7]. A LiDAR sensor measures distance using a laser and can take measurements faster than radar. As a result, more points can be measured in the same amount of time; thus, the shape of an object can be scanned in more detail than with radar.
LiDAR technology is being developed in various types of sensors. Mechanical scanning LiDAR uses a motor to physically rotate the LiDAR sensor. This rotation makes it suitable for scanning the surroundings of a sensor; therefore, this type has a wide horizontal field of view compared to other types. Solid-state LiDAR is stationary, without a rotating device. Micro-electromechanical system (MEMS) [8, 9] and optical phased array (OPA) [7, 10] methods are mainly used for solid-state LiDAR. Compared to radar or CCTV, LiDAR has the advantage of obtaining high-resolution and accurate 3D information. Therefore, it is advantageous for a security system that detects objects at a distance of 10–50 m. However, as LiDAR sensors provide high-resolution 3D data compared to other sensors, a considerable amount of data must be transmitted from the sensor to the server.
A LiDAR sensor provides a 3D point cloud as its output. For example, a simple rotating LiDAR sensor rotates 20–30 times per second, providing a scanned 3D dataset on each rotation. This means that if the sensor scans 50,000 points at a time, it will scan up to 1.5 million points per second. Transferring all these raw data to the server computer is inefficient. Such large amounts of data require real-time compression and decompression methods, such as video codecs [11].
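To see why raw transmission does not scale, consider a rough bandwidth estimate. The sketch below assumes the figures above plus an illustrative 12 bytes per point (three 32-bit floats); it is not taken from any specific sensor's datasheet.

```python
# Back-of-the-envelope estimate of raw LiDAR bandwidth (illustrative figures).
points_per_scan = 50_000      # points per rotation, as in the text
scans_per_second = 30         # upper end of 20-30 rotations per second
bytes_per_point = 12          # x, y, z as 32-bit floats (assumed layout)

raw_rate = points_per_scan * scans_per_second * bytes_per_point
print(f"{raw_rate / 1e6:.1f} MB/s per sensor")  # -> 18.0 MB/s
# Ten such sensors would approach 180 MB/s before protocol overhead,
# which is why the data must be compressed at the edge.
```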
A VMS is a system that detects intrusion through video collected from cameras. It therefore analyzes images taken in various spaces and determines whether there is an intruder. On the other hand, LiDAR and radar are mainly used to detect whether another object is approaching a moving object, such as an aircraft or a car. Existing methods mainly perform all calculations on the VMS server to detect intruders using these devices. It is advantageous to identify an intruder by exploiting the strengths of each device; however, the network bottleneck caused by large-capacity 3D point cloud data and video must be solved.
In general, network bottlenecks are always a problem because 3D point cloud data are large. The main idea of the proposed method is to connect each sensor group with an edge computer and reduce the amount of data through preprocessing. The edge computer transforms each device's local coordinate system into the server's world coordinate system and filters out unnecessary data so that the data can be easily managed on the central server. This effectively reduces data transmission and additionally distributes the server's computational load. Therefore, the proposed method makes it possible to use a low-cost cloud server without a graphics processing unit (GPU) instead of the existing security server that uses a high-performance GPU for data processing.
Therefore, we propose an edge computer-based security system that monitors intruders using a combination of cameras, radar, and LiDAR. Fig. 1 shows the overall proposed method. The system consists of three main parts: a hardware part composed of devices and edge computers, a server part built in the cloud, and a user interface (UI) part made of web clients. The hardware part collects and refines the data from the sensors. The server part processes the collected data to determine whether there are intruders. The UI part lets users control the sensors manually through a web UI. The raw data of the LiDAR and radar sensors are compressed on an edge computer and sent to the server. The transmitted data are restored as a 3D point cloud in the integrated space to be monitored to determine whether an object is an intruder. The system is designed to move a CCTV camera to the relevant area and transmit video information about the intruder to the user.

Fig. 1. System architecture and overall procedure of the proposed method.
The additional contributions of this system are as follows: unlike current VMSs, there is no need to install additional sensors or cameras outside the facility. In addition, long-distance monitoring using radar and accurate object recognition at short distance using LiDAR and a camera are possible. Finally, the bottlenecks that may occur in data transmission are resolved by using a dedicated edge computer to process LiDAR data in real time.
To detect intruders at critical facilities, static point data and dynamic point data must be separated. In general, non-moving objects, such as trees, streetlights, and rocks, do not change their position, whereas moving objects, such as animals, people, and vehicles, can break into key facilities. The proposed method uses a compression method in which an edge computer extracts these dynamic points and keeps only meaningful data. The data can be restored to the original raw data at any time by combining them with the initially scanned static points. By refining the numerous points collected by the sensors with edge computing, the amount of data can be reduced efficiently. This means that as the number of sensors used in the system increases, the proposed method becomes more efficient.
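A minimal sketch of this static/dynamic split is shown below. It assumes a mechanical-scanning LiDAR whose laser fires at fixed angular steps, so a given index always corresponds to the same angle; the function name and tolerance are illustrative, not taken from the paper.

```python
import numpy as np

def split_dynamic(baseline: np.ndarray, scan: np.ndarray, tol: float = 0.2):
    """baseline: per-index distances of the initial static scan (meters).
    scan: per-index distances of the current scan.
    Returns the indices and distances that differ from the static background."""
    moved = np.abs(scan - baseline) > tol
    return np.flatnonzero(moved), scan[moved]

# Only the (index, distance) pairs of dynamic points are transmitted; the
# server can rebuild a full scan by overlaying them on the stored baseline.
```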
Related work is described in Section 2, and the main idea of the proposed system is described in Section 3. The user application for monitoring the system and the data visualization method are introduced in Section 4. The experimental results of the proposed method are presented in Section 5, and the paper is concluded in Section 6.
Related Work
VMSs are widely used to monitor intruders in facilities [1]. A VMS collects video from cameras and other sources and records that video to a storage device. The images and logs handled by the VMS are then encrypted [12] for data security. An interface provides safe access to live and recorded images, and monitoring services are offered through it. To monitor important facilities efficiently, various Internet of Things (IoT) sensors have been utilized, and various types of security systems have been developed.
Radar, also known as radio direction finder (RDF) or high-frequency direction finder (HFDF), emits strong electromagnetic waves and measures the distance to an object by analyzing the waves that hit and bounce off the object. When a radar sensor uses a low-frequency wave, the attenuation of the radio wave is small, so objects can be detected from a distance, but accurate measurement is impossible. Conversely, high-frequency waves (short wavelengths) are easily absorbed or reflected by water vapor, snow, and rain in the air.

Fig. 2. Principle of the radar sensor using radio waves.
The principle of a radar sensor is shown in Fig. 2 [13, 14]. Radar can calculate the angle and distance of an object from the time it takes for the electromagnetic wave, generated continuously by the transmitter, to be reflected off the object and arrive at the receiver. Radar is divided into various methods, such as the pulse method, the Doppler method [15], and frequency-modulated continuous-wave (FMCW) radar [16]. Depending on the signal used, it can serve short ranges, as with the Doppler method, or long ranges, as with the pulse method. Depending on the measurement period, the movement of an object can also be detected; the faster the object, the easier it is to detect.
However, radar sensors need a long measurement cycle for distant objects: if the cycle is too short, a signal reflected from a distant object may not return to the sensor before the next signal is transmitted.
A LiDAR sensor measures distance by calculating the round-trip time of a high-power pulsed laser between the sensor's current location and an object, as shown in Fig. 3. When distance measurements using the laser are performed continuously at various angles, the location of the reflecting object can be computed from the distance information and the target angle. This cluster of data is called a point cloud and is used to calculate coordinates that predict the surface of an object.
Fig. 3. Principle of a LiDAR sensor using a laser.
Because LiDAR uses a 905–1,550 nm laser, it can detect objects more accurately than radar [17], which uses low-frequency electromagnetic waves. A detection system using a LiDAR sensor [7, 8, 18] therefore has the advantage of detecting the shape of an object more accurately than a radar sensor. For example, LiDAR is used to triangulate a large-capacity point cloud so that the ground around an aircraft can be scanned and visualized [6]. As point clouds created in this way are large-capacity data that cannot be processed by a single PC, methods for accelerating triangulation offline using cloud computing [19, 20] or general-purpose GPUs (GPGPU) [21, 22] have been investigated.
As LiDAR has become popular, various studies have been conducted to utilize LiDAR data in real time in various industries. Unlike solid-state LiDAR, which is mainly aimed at scanning objects, mechanical scanning LiDAR [23], which combines a motor and a laser transceiver, has been widely adopted. Autonomous vehicles scan surrounding objects using rotating LiDAR [24, 25]; faster motor rotation and multiplexing of the laser projection channels allow a large area to be scanned quickly. Recently, LiDAR sensors have also been installed in mobile phones, and algorithms that distinguish between people and cars [26], as well as algorithms that model a scanned interior as 3D objects in real time [27], are being investigated.
As LiDAR scans a wide area, various objects are detected. Research that classifies these objects [28–32] sorts them using neural networks, such as convolutional neural networks (CNNs). Such classification can be utilized in applications that require precise analysis as well as real-time detection.
The proposed method aims to automate a real-time security task that detects an object evaluated as a risk factor by the security system and immediately reports its location to the user. Because the security field must detect intrusions precisely, radar alone lacks accuracy for intrusion detection, and methods that use weight or temperature sensors [33] cover only a very small range. The proposed system, which uses mechanical scanning LiDAR and radar, can check a wider area and detect intrusions by exploiting the characteristics of both types of sensors.
Security System for Critical Facilities Using Edge Computing
When monitoring a wide area, such as a military base or a smart city, it is very important to combine various sensors according to the monitoring environment. Generally, radar, LiDAR, and CCTV cameras are widely used in current security systems [34]. These devices generate different types of data as outputs. The security system should be designed to unify and manage these various types of data so that users can employ them.
In the proposed method, radar is used to detect distant objects, a rotating four-channel LiDAR sensor is used to detect objects at close range, and CCTV cameras are then used to accurately identify objects. In general, when object detection is performed with a CCTV camera, feature detection is based on the color values of a 2D image. This often produces false detections because 3D information must be inferred from 2D data. LiDAR and radar sensors, however, provide 3D data directly. Detecting objects this way is simpler and more accurate than using a camera, because no dimensional transformation is needed. Therefore, the proposed method first checks whether radar or LiDAR has detected an object; when a detection occurs, the CCTV camera is aimed at the corresponding position and verifies it.
In security systems, radar and LiDAR sensors can detect moving objects. When an object is detected, the goal is to point the CCTV camera at the detected location and notify the user. Therefore, when the central management service receives the LiDAR and radar data and detects a moving object, it must issue a command to rotate the camera toward the object. The three sensors described above must be able to convert input/output data into a single coordinate system, and they must transmit data continuously. Transmission control protocol (TCP) can suffer from packet accumulation when the bandwidth is small, as in a cloud environment, unless the device provides a congestion control algorithm [35]. However, hardware with a built-in congestion control module is rare, and most devices use the user datagram protocol (UDP).
With UDP, datagrams can be lost or arrive out of order. These problems usually depend on the network status of the device and the server computer. When such devices are used, processing can be performed more reliably on a local server than in the cloud computing environment. We propose to solve these problems by using edge computing. As shown in Fig. 4, an edge computing server that can receive and process data directly from the LiDAR and radar sensors is used. Edge computing aims to improve response times and save bandwidth by bringing computation and data storage closer to the data source. Therefore, the proposed method utilizes edge computing to collect data quickly and performs data transformation on the edge server before transmitting the data to the main server in the cloud.
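The sketch below illustrates how an edge computer close to the sensor can reorder UDP datagrams before further processing. It assumes each datagram carries a 32-bit sequence number at the start of its payload; real sensor packet layouts are vendor-specific, and the port and handler are placeholders.

```python
import socket
import struct

def process_scan_fragment(payload: bytes) -> None:
    ...  # hand off to the transformation/compression stage (placeholder)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2368))  # a port commonly used by LiDAR sensors (assumed)

expected, pending = None, {}
while True:
    data, _ = sock.recvfrom(65535)
    (seq,) = struct.unpack_from("<I", data, 0)  # assumed header layout
    pending[seq] = data[4:]
    if expected is None:
        expected = seq
    if len(pending) > 64:        # a datagram was likely lost; skip the gap,
        expected = min(pending)  # since the next rotation refreshes the scan
    while expected in pending:   # flush the contiguous run in order
        process_scan_fragment(pending.pop(expected))
        expected += 1
```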

Fig. 4. Three-dimensional view of the proposed method: (a) the monitored area is visualized in the top view and (b) the monitored area is visualized with a quarter view.
To transform the data, the radar and LiDAR sensor data must be simplified. The proposed method scans static objects before the system operates and transmits the data to the cloud server. Because the static-object data transmitted in advance consist only of non-moving objects, the same points are detected at the same locations every time the scene is scanned. Therefore, the amount of data transmitted per second can be reduced by removing these points in advance. The reduced data are transmitted to the cloud server and converted to unified 3D spatial coordinates, making it possible to view the values of different sensors in a single 3D world coordinate system. Among the x, y, and z coordinates, the proposed method uses UTM coordinates, widely used in maps, for x and z, and the geoid height model for y, which represents height.
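The following sketch shows one way to place a sensor in this unified frame, assuming each sensor's installation point and height are surveyed in advance. The pyproj library is one common way to obtain UTM coordinates; the UTM zone (52, covering Korea) and all names are illustrative assumptions.

```python
import numpy as np
from pyproj import Proj

to_utm = Proj(proj="utm", zone=52, ellps="WGS84")  # assumed zone

def sensor_origin(lon_deg: float, lat_deg: float, geoid_height_m: float) -> np.ndarray:
    """World origin of a sensor: x/z from UTM easting/northing, y from geoid height."""
    easting, northing = to_utm(lon_deg, lat_deg)
    return np.array([easting, geoid_height_m, northing])

def to_world(origin: np.ndarray, local_xyz: np.ndarray) -> np.ndarray:
    # local_xyz must already be rotated into the world orientation
    # (see Equations (1)-(2) below); only a translation remains.
    return origin + local_xyz
```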
LiDAR and radar sensors use the location of the site where the sensor is installed as the origin, and output the angle and distance at which an object is detected. For 2D coordinates (x, y), the distance d between the origin O and the object and the angle θ satisfy

$$x^2 + y^2 = d^2. \qquad (1)$$
Given the distance d and the angle θ, coordinates satisfying Equation (1) can be calculated through the rotation matrix [36]:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \qquad (2)$$
After substituting 0 for x and d for y, the desired coordinate value can be obtained using Equation (2). When the ground is the xy plane in the top view, the results are shown in Fig. 4(a). To monitor these data more realistically, the proposed method unifies the coordinates in a 3D coordinate system, so the z-axis is added. When LiDAR or radar is installed on site, the angle of inclination with respect to the three axes x, y, and z must be measured, and the vector of the rotation axis must be calculated. The x, y, and z values of the points are obtained by applying the rotation matrix about this axis. The 3D data drawn on the screen are shown in Fig. 4(b).
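As a worked example of Equations (1) and (2), the sketch below converts a detection at distance d and bearing θ into plane coordinates, then applies an axis-angle rotation for the tilted-installation case described above. SciPy's Rotation is used for the axis-angle step; the tilt axis and angle are measured at installation time and are illustrative here.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def polar_to_plane(d: float, theta_rad: float) -> np.ndarray:
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    rot = np.array([[c, -s], [s, c]])   # rotation matrix of Equation (2)
    return rot @ np.array([0.0, d])     # substitute (x, y) = (0, d)

def tilt_correct(point_xyz: np.ndarray, axis: np.ndarray, angle_rad: float) -> np.ndarray:
    # Rotate about the installation's measured tilt axis so the point is
    # world-aligned before translating by the sensor origin.
    rotvec = angle_rad * axis / np.linalg.norm(axis)
    return Rotation.from_rotvec(rotvec).apply(point_xyz)
```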
In monitored facilities, as the area detected by radar is generally wider than that detected by LiDAR, the proposed system requires more LiDAR sensors than radar sensors, as shown in Fig. 5. LiDAR sensors can scan the same area faster and more precisely than radar sensors. Therefore, as the number of LiDAR sensors increases, the number of point data to be processed by one LiDAR edge server gradually increases. This can cause network bottlenecks. The proposed method can solve this problem through the data compression method.
Fig. 5. An example of a system that uses the proposed method.
The compression method is performed through a similarity check between current coordinates and data previously collected by the sensors. The multi-channel LiDAR mainly used in security systems has a wide field of view (FOV) and a long detection range. Because the motor rotates 360°, the LiDAR sensor supports up to 360° of FOV, and because it has multiple channels, it can detect as many vertical layers as it has channels. However, as a single motor is used, each channel senses only a 2D plane: as the motor rotates, the detection range of each channel traces a 2D circle. The proposed method detects a moving object in this 2D space using the Hausdorff distance [37], which measures how far two subsets of a metric space are from each other. The method uses the difference between scans: if the points are kept in the order of the datagrams sent from the LiDAR sensor, the difference can be obtained easily from the distance values of a specific index group. This method is shown in Fig. 6.

Fig. 6. A method for detecting only moving objects using the Hausdorff distance.
In Fig. 6, the blue line represents the distance data for the initially detected topographical features. When the red object appears, the detected distance value corresponding to the red arrow is transmitted in the datagram instead of the original distance measured in blue. As the motor of the LiDAR sensor rotates 360° with a constant laser measurement cycle, the distance value is always measured at the same angles. Let O be the set of detected points o, and let O' be the subset of O whose indices surround the newly detected point p.
We can obtain H, the Hausdorff distance to the element of O' farthest from the point p at which the red object is detected, using the following equation:

$$H(p, O') = \max_{o \in O'} \lVert p_d - o_d \rVert \qquad (3)$$

Here the subscript d denotes a point's measured distance, so p_d and o_d are the distances of p and o. The elements o of O' are those whose indices lie within τ degrees above and below the index of p. If the H value obtained in this way is larger than the size of the object to be recognized, a new object is detected.
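A minimal sketch of this per-point check follows. It assumes one scan is an array of distances indexed by angle, with `deg_per_index` converting the half-width τ (in degrees) into an index range; the threshold in the usage comment is illustrative.

```python
import numpy as np

def hausdorff_h(baseline: np.ndarray, scan: np.ndarray, i: int,
                tau_deg: float, deg_per_index: float) -> float:
    """H of Equation (3) for point p = scan[i] against the neighborhood O'."""
    w = int(round(tau_deg / deg_per_index))
    lo, hi = max(0, i - w), min(len(baseline), i + w + 1)
    neighborhood = baseline[lo:hi]                    # O': surrounding indices
    return float(np.max(np.abs(scan[i] - neighborhood)))

# Illustrative usage: flag a new object when H exceeds a size threshold.
# if hausdorff_h(static_scan, scan, i, tau_deg=2.0, deg_per_index=0.25) > 0.5: ...
```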
However, in the case of a LiDAR sensor, noise may occur, or an object may not be detected when there is interference with the laser, such as rain or snow, or when a reflector is used. We found two solutions to this problem.
The first method recognizes objects by clustering the detected points p. It sets the minimum detectable area and point count according to the size of the object and forms a cluster around the first detected p. Whenever a new p is detected, the method checks whether it lies within the range corresponding to the object size around the cluster center; if not, a new cluster is created. Clustering is performed with the k-means algorithm, where k is not fixed to the number of objects. When a cluster cannot gather the number of points that the LiDAR should detect for an object of that size, the proposed method treats the data as a false detection and discards them. This easily solves the noise problem. As this operation can be performed in parallel for each point, objects can be detected much faster with GPGPU than with only the CPU in the cloud.
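The sketch below gives a simplified version of this cluster-and-filter step: points are grouped incrementally, cluster centers are updated k-means-style, and clusters that remain too small are dropped as noise. The radius and minimum count are illustrative parameters.

```python
import numpy as np

def cluster_points(points: np.ndarray, radius: float = 0.5, min_pts: int = 5):
    centers, members = [], []
    for p in points:
        for k, c in enumerate(centers):
            if np.linalg.norm(p - c) <= radius:           # fits an existing cluster
                members[k].append(p)
                centers[k] = np.mean(members[k], axis=0)  # k-means-style update
                break
        else:
            centers.append(p.copy())                      # open a new cluster
            members.append([p])
    # Clusters below the minimum point count are treated as noise and dropped.
    return [np.array(m) for m in members if len(m) >= min_pts]
```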
The second method detects an object with a non-LiDAR sensor and compares it with the object detected by the LiDAR sensor to determine whether they were found at similar points. In research fields such as quality of service (QoS) [38], it is very important to determine the priority of a service, and the proposed method establishes its object recognition process with a similar concept of prioritizing important tasks. Devices differ in the time needed to recognize an object and in their tolerance ranges; for this reason, the execution cycle of the algorithm is set to the processing speed of the slowest device. In the proposed method, the error is reduced by comparing the camera's object recognition result and the radar result (obtained with the same method as LiDAR) against the LiDAR result, because the LiDAR sensor provides the most accurate object recognition data.
Fig. 7. An example of a system that monitors facilities with the proposed method.
The second method is shown in Fig. 7. An edge computer is responsible for extracting the data of each device. The first method, clustering, is performed on the edge device: after the noise is filtered out, the points are grouped by object to distinguish objects from each other. Radar, which is used for long-range sensing, detects objects later than LiDAR because its scanning period is longer. When LiDAR or radar detects an object, the CCTV camera rotates toward it; thus, among the three devices, the CCTV camera recognizes the object last.
Because of the time differences between sensors, the same object may be detected at different locations. The proposed method therefore checks the position of a detected object at the same instant through time synchronization. An object detected close to the facility has the highest priority, and data farthest from the monitoring area have the lowest priority. Therefore, if the processing order is determined by the distance of the data, detection events are processed starting from the highest priority.
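A distance-keyed priority queue is one simple way to realize this ordering; the sketch below assumes illustrative event fields, with the timestamp only breaking ties between equally distant detections.

```python
import heapq
import time

events = []  # min-heap keyed by distance to the protected boundary

def push_detection(distance_m: float, sensor_id: str, position) -> None:
    heapq.heappush(events, (distance_m, time.monotonic(), sensor_id, position))

def next_event():
    # Closest, i.e., highest-priority, detection is processed first.
    return heapq.heappop(events) if events else None
```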
The proposed method defines risk in three stages: warning, D1, and D2. When an object is detected at a distance that only radar can cover, it is treated as a warning. Within the LiDAR detection area, when an object is detected by only one of LiDAR and radar, a warning is sent; when it is detected by both, a D1 alert is sent. The D1 alert rotates the CCTV camera to the corresponding position and logs the current time. When the camera, turned toward the object, also detects it, a D2 alert is generated, which actively informs the user, for example via a pop-up window. This scheme utilizes all three devices to actively inform the user about danger and can effectively resolve the false detections that inconvenience existing systems [38].
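The decision logic reduces to a small function of which sensors currently agree; the sketch below is a minimal reading of the three stages, with the per-sensor boolean inputs being an assumption about how detections are fed in.

```python
from enum import Enum

class Alert(Enum):
    NONE = 0
    WARNING = 1
    D1 = 2
    D2 = 3

def classify(radar_hit: bool, lidar_hit: bool, camera_hit: bool) -> Alert:
    if radar_hit and lidar_hit:
        # Both range sensors agree: D1 aims the camera; if the camera then
        # also detects the object, the event escalates to D2.
        return Alert.D2 if camera_hit else Alert.D1
    if radar_hit or lidar_hit:
        return Alert.WARNING  # single-sensor detection (e.g., radar-only range)
    return Alert.NONE
```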
Visualization and Management Service
A security system should alert users when it detects a dangerous situation, and users need a management system through which they can inspect the site when warning or alert events occur. This system should be able to monitor the area where danger is detected by controlling the CCTV camera, and it should control the sensors and monitor the data they output. This management system is shown in the lower left corner of Fig. 3. It provides a 3D Viewer based on the integrated world coordinate system, a Camera Viewer to check camera information, and a Settings Page to adjust sensor settings.
For visualization, existing systems mainly access the device and receive data from the sensor directly or from the cloud server through the bypass service. This may cause a delay, but it has the advantage of sending data stably and safely in the form of a content delivery network (CDN) [32], when needed. This is effective for data transmission when monitoring is performed from a remote location away from critical facilities.
The proposed method reuses data in the user's web client application to solve problems related to real-time rendering, which had previously been a bottleneck. The security system uses equipment (LiDAR, radar, and CCTV cameras) that collects a substantial amount of data in real time. As explained in Section 3, the proposed method dramatically reduces the server's network traffic. The same applies to client applications that need to visualize the data.
The client application of this system is based on a web service and provides monitoring in the form of software-as-a-service (SaaS) in cloud computing. The proposed monitoring service provides a screen as shown in Fig. 8. Through cloud services, the system can directly control devices or adjust their options while communicating with the edge servers, and it monitors the collected data in real time. In Fig. 8, the image on the right is a 3D view that visualizes the data received from LiDAR or radar in the unified spatial coordinate system. The white and red squares are the main surveillance areas specified by the user through the UI: areas where monitoring is active are marked in red, and inactive areas are marked in white. If an object is detected there, a pop-up alert is issued, and the camera view can be monitored in a new window.
Similar to the proposed data compression algorithm, camera software reduces data volume with compression codecs, such as H.264 [39]. The proposed method was likewise designed around the concept of restoring a P-frame, which records only the pixel changes between the original I-frames. The video is therefore stored directly in the cloud, and the client can view it through the bypass service. However, for radar and LiDAR there is no real-time compression codec like those for video, so the proposed system compresses, transmits, and restores point cloud data in real time in the manner of a video codec.
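In this analogy, the pre-scanned static points play the role of an I-frame and the transmitted dynamic points that of a P-frame. A minimal decoding sketch, with assumed array-based inputs, is shown below.

```python
import numpy as np

def decode_scan(static_baseline: np.ndarray,
                dyn_idx: np.ndarray, dyn_dist: np.ndarray) -> np.ndarray:
    """Rebuild a full scan from the static baseline and dynamic points."""
    scan = static_baseline.copy()  # "I-frame": pre-scanned static points
    scan[dyn_idx] = dyn_dist       # "P-frame": only the points that changed
    return scan
```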
Point cloud data are divided into static point data and dynamic object data. The static data, prepared in advance on the server, consist of the coordinates and indices of static points on a device-by-device basis. As the number of points that each LiDAR and radar sensor can scan at one time is fixed, the maximum index value is also fixed. Therefore, by creating a vertex buffer per device for visualization and managing the vertex count by index on the client side, only the dynamic data need to be processed even when only detected objects are downloaded, so performance is not significantly affected.

Fig. 8. User interface, where the device options and the detection area for checking whether an object is intruding can be set.
Experimental Results
An experiment was designed to measure whether the proposed system detects an intruder’s approach in an empty space. The security system shown in Fig. 5 was used.
Radar is used for long-distance measurement instead of LiDAR because it is not affected by environmental conditions such as weather, whereas LiDAR can scan quickly and accurately at short distances. Therefore, the system requires more LiDAR sensors than radar sensors. A radar sensor with a sensing range of 100 m and a four-channel mechanical scanning LiDAR sensor with a sensing range of 25 m were used. Both types of sensors transmit data using UDP. The CCTV setup uses a remote rotary motor that can rotate 360° and an iPhone camera. One radar sensor, 10 LiDAR sensors, and two cameras were used. Three edge computers were used because the sensors were divided into three groups. For the world coordinates, the installation location of each device was registered in UTM coordinates; the 3D point data were converted through these coordinates, and the camera identified the position of a detected object through depth and angle.
Amazon Cloud was used for the main system, and the LattePanda Alpha model was used as the edge computer. An edge computer provides a data transformation service that compresses and transforms the data received directly from the LiDAR and radar sensors. The main server in the cloud draws the objects in a 3D view and sends events to the client application when a moving object is detected in the system.
The system was installed on an outdoor single-row barbed wire fence, and an intruder moved into the detection area to determine whether it was detected. The camera used the ML Kit framework [34] on iOS to detect moving objects, and the radar was set up to detect objects in the same way as LiDAR. The resulting image when an object is detected is shown in Fig. 9, where a level-2 alert is displayed. The yellow vertices are point cloud data scanned by the sensors, and the red area is the active sensing area. The active detection area turns green when an object is simultaneously detected by radar and LiDAR, as shown in Fig. 9(b).

Fig. 9. The resulting image when an object is detected in the monitored area. The green square indicates that an object has been detected.
The effectiveness of the proposed method is demonstrated with two experiments. The first compares the amount of data transmitted by the proposed method with that of existing methods, demonstrating that it solves the network traffic problem that arises in existing VMS servers when different sensors are combined. The second examines load distribution, demonstrating that the system can run on a cloud server without an expensive GPU.
For the experiment, a method that fuses radar and camera [40] and a method that fuses LiDAR and camera [38] were used as baselines. Both are systems that use radar or LiDAR sensors together with cameras and detect intrusions on the server. By comparing the traffic and computational cost of these methods with those of the proposed method, the effectiveness of using an edge computer can be demonstrated.
Tables 1 and 2 numerically compare network traffic and computational load. The experiment was conducted in an environment in which five people walked freely 10–20 m from the sensors. In all methods, LiDAR and radar data were compressed with Huffman encoding. As shown in Table 1, the proposed method is more efficient than the existing methods that perform all calculations on the server, because all detections are processed by the edge computer and only events are transmitted. Also, as shown in Table 2, since no GPU operation is required, server costs can be saved effectively. However, there are also drawbacks: as more sensors or cameras are used, the number of edge computers increases, which adds cost. The proposed method is therefore most advantageous when building a system in a cloud environment.
The accumulated amounts of data transmitted to the cloud's main server over 20 seconds are compared in Fig. 10. The system transmitted 29,031,473 bytes when the edge computer was not used and 2,031,458 bytes when it was used; the proposed edge computing service thus reduced the transmitted data by about 93%.
Table 1. Comparison of average data transfer rate per second between the proposed method and existing methods

| Method | Average data transfer rate per second | Ratio (%) |
| --- | --- | --- |
| Radar+Camera | 0.8 MB | 800 |
| LiDAR+Camera | 1.8 MB | 1,800 |
| Radar+LiDAR+Camera | 2.2 MB | 2,200 |
| Proposed method | 101 kB | 100 |
Table 2. Average computation comparison between the proposed and existing methods

| Method | Average GPU usage (%) | Average CPU usage (%) |
| --- | --- | --- |
| Radar+Camera | 21 | 22 |
| LiDAR+Camera | 42 | 47 |
| Radar+LiDAR+Camera | 68 | 54 |
| Proposed method | 0 | 10 |
Fig. 10. Comparison of the accumulated amount of data transmitted to the cloud's main server over 20 seconds. The red line shows the total data transmitted by the security system when the server is used without the edge computer, and the blue line shows the total compressed data when the proposed edge computing method is used.
When an edge computer was not used, data loss occurred over UDP as the data to be transmitted accumulated. This occurred mainly for the LiDAR sensor, which produces many point clouds, and caused 11 detection errors out of a total of 400 scanned point clouds: datagrams whose transmission is delayed are lost as the accumulating backlog increases the network load. When an edge computer was used, all data arrived correctly, and there were no detection errors.
A comparison of the speed of rendering the LiDAR and radar data in real time is shown in Fig. 11. In the experimental environment, the main server was built in a private cloud, so it was impossible to connect directly to a device on a public network. This experiment compared the method in which the client collects data from the devices and renders them directly with the method using an edge computer. By reusing the frame buffer, the proposed method visualized the data at more than twice the average speed.
Fig. 11. Experimental results for the rendering speed of the 3D view.
The two experiments showed that the proposed method is more accurate and faster, as it reduces the datagram loss rate and the network load. In addition, the monitoring system effectively reduces the amount of data required for visualization, decreasing the computation on the client PC and reducing network bottlenecks.
Conclusion and Future Work
Security systems at critical facilities use various sensors to detect moving objects in real time and prevent accidents caused by unauthorized intrusions. To perform such pre-sensing, the proposed method uses LiDAR sensors, radar sensors, and cameras. Radar sensors are excellent for long-distance sensing, and LiDAR sensors are fast and accurate. By fusing the characteristics of these two sensors with a camera's object detection method, intrusions can be monitored more efficiently. However, the volume of point cloud data scanned by the LiDAR and radar sensors is too large for the network bandwidth. The main idea of the proposed method is to connect each group of sensors to an edge computer, reduce the amount of data through preprocessing, and distribute the network and computational load by converting the data to the integrated coordinate system. In addition, the client application can render the detection screen faster using the compressed data than by receiving raw data directly from the sensors. This method has advantages in performance and cost over a camera-based system such as a conventional VMS. The cameras used in a VMS have various features, such as depth and infrared, so the cost is high when many cameras are used. The proposed system can reduce the number of camera installations thanks to the long detection ranges of LiDAR and radar. In addition, when object recognition fails due to a camera's blind spot or the weather, the sensors can still be used. However, LiDAR and radar sensors cannot detect areas occluded by objects, a problem that arises in spaces where various objects are arranged; it is therefore advantageous to keep the monitoring area free of objects.
It is very important for security systems to control access rights to data. In future work, we will investigate methods that use a distributed server [41] or a serverless model [42] to manage the authority and security of data in various systems. By applying the latest blockchain technology, fake data will be identified through the flow of sensed data, user information, and log information. Performance improvement through a weight function using these data will also be an important issue in future work [43, 44].
Author’s Contributions
Conceptualization, BSS. Project administration, YIL. Writing of the original draft, ESL, BSS. Writing of the review and editing, YCK. Formal analysis, EKJ.
Funding
This work was supported by the Technology Development Program (No. S3166785) funded by the Ministry of SMEs and Startups (MSS, Korea).
Competing Interests
The authors declare that they have no competing interests.
Author Biography

Name: Eun-Seok Lee
Affiliation: Dept. of VR Games & Apps, Yuhan University
Biography: Eun-Seok Lee is a professor in the Department of VR Games & Applications at Yuhan University, Korea. He received his B.S., M.S., and Ph.D. in Computer and Information Engineering from Inha University, Korea. His research interests include human-computer interaction, virtual reality, real-time rendering, game engines, middleware, and dynamic data organization for optimal graphics hardware usage.

Name: Yoon-Yim Lee
Affiliation: Dept. of VR Games & Apps, Yuhan University
Biography: Yoon-Yim Lee is a professor in the Department of VR Games & Applications at Yuhan University, Korea. She received her M.S. and completed her doctoral coursework in Game Design at Sangmyung University, Korea. Her research interests include HCI, quality assurance, big data, and log data analysis.

Name: Young-Cheol Kim
Affiliation: Dept. of VR Games & Apps, Yuhan University
Biography: Young-Chul Kim is a professor in the Department of VR Games & Applications at Yuhan University, Korea. He received his M.S. and Ph.D. in Computer and System Software from Soongsil University, Korea. His research interests include human-computer interaction, compilers, and digital plagiarism research.

Name: Eun-Kyung Jo
Affiliation: Dept. of Computer Science and Engineering, Inha University
Biography: Eun-Kyung Jo is an M.S. candidate in Computer Science at Inha University, Korea. She received her B.S. degree in Law from Sangmyung University, Korea. Her research interests include human-computer interaction, data science, and deep learning.

Name: Byeong-Seok Shin
Affiliation: Dept. of Computer Science and Engineering, Inha University
Biography: Byeong-Seok Shin is a professor in the School of Computer and Information Engineering, Inha University, Korea. He received his B.S., M.S., and Ph.D. in Computer Engineering from Seoul National University, Korea. His current research interests include human-computer interaction, volume rendering, real-time graphics, and medical imaging.
References
[1]
O. B. Kwon and M. K. Park, “Design and implementation of video management system using smart grouping,” IAENG International Journal of Computer Science, vol. 45, no. 1, pp. 22-26, 2018.
[2]
M. T. H. Fuad, A. A. Fime, D. Sikder, M. A. R. Iftee, J. Rabbi, M. S. Al-Rakhami, et al. “Recent advances in deep learning techniques for face recognition,” IEEE Access, vol. 9, pp. 99112-99142, 2021.
[3]
S. Agrawal, S. Sarkar, O. Aouedi, G. Yenduri, K. Piamrat, M. Alazab, S. Bhattacharya, P. K. R. Maddikunta, and T. R. Gadekallu, “Federated learning for intrusion detection system: concepts, challenges and future directions,” Computer Communications, vol. 195, pp. 346-361, 2022.
[4]
M. Menke, T. Wenzel, and A. Schwung, “Improving GAN-based domain adaptation for object detection,” in Proceedings of 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 2022, pp. 3880-3885.
[5]
A. A. Khan, A. A. Shaikh, Z. A. Shaikh, A. A. Laghari, and S. Karim, “IPM-Model: AI and metaheuristic-enabled face recognition using image partial matching for multimedia forensics investigation with genetic algorithm,” Multimedia Tools and Applications, vol. 81, no. 17, pp. 23533-23549, 2022.
[6]
M. E. Lee and K. H. Ryu, “Design and implementation of flying-object tracking management system by using radar data,” The KIPS Transactions: Part D, vol. 13, no. 2, pp. 175-182, 2006.
[7]
C. V. Poulton, A. Yaacobi, D. B. Cole, M. J. Byrd, M. Raval, D. Vermeulen, and M. R. Watts, “Coherent solid-state LIDAR with silicon photonic optical phased arrays,” Optics Letters, vol. 42, no. 20, pp. 4091-4094, 2017.
[8]
H. W. Yoo, N. Druml, D. Brunner, C. Schwarzl, T. Thurner, M. Hennecke, and G. Schitter, “MEMS-based lidar for autonomous driving,” Elektrotechnik & Informationstechnik, vol. 135, pp. 408-415, 2018.
https://doi.org/10.1007/s00502-018-0635-2
[9]
D. Wang, H. Xie, L. Thomas, and S. J. Koppal, “A miniature LiDAR with a detached MEMS scanner for micro-robotics,” IEEE Sensors Journal, vol. 21, no. 19, pp. 21941-21946, 2021.
[10]
M. S. Pak, K. T. Kim, M. S. Koo, Y. J. Ko, and S. H. Kim, “Design of robot arm for service using deep learning and sensors,” KIPS Transactions on Software and Data Engineering, vol. 11, no. 5, pp. 221-228, 2022.
[11]
F. Nardo, D. Peressoni, P. Testolina, M. Giordani, and A. Zanella, “Point cloud compression for efficient data broadcasting: a performance comparison,” in Proceedings of 2022 IEEE Wireless Communications and Networking Conference (WCNC), Austin, TX, 2022, pp. 2732-2737.
[12]
A. A. Khan, A. A. Laghari, A. A. Shaikh, M. A. Dootio, V. V. Estrela, and R. T. Lopes, “A blockchain security module for brain-computer interface (BCI) with multimedia life cycle framework (MLCF),” Neuroscience Informatics, vol. 2, no. 1, article no. 100030, 2022.
https://doi.org/10.1016/j.neuri.2021.100030
[13]
Q. Liu, Y. He, and C. Jiang, “Localization of subsurface targets based on symmetric sub-array MIMO radar,” Journal of Information Processing Systems, vol. 16, no. 4, pp. 774-783, 2020.
[14]
F. Engels, P. Heidenreich, M. Wintermantel, L. Stacker, M. Al Kadi, and A. M. Zoubir, “Automotive radar signal processing: research directions and practical challenges,” IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 4, pp. 865-878, 2021.
[15]
E. Yavari, H. Jou, V. Lubecke, and O. Boric-Lubecke, “Doppler radar sensor for occupancy monitoring,” in Proceedings of 2013 IEEE Topical Conference on Power Amplifiers for Wireless and Radio Applications, Austin, TX, 2013, pp. 145-147.
[16]
G. L. Charvat and L. C. Kempel, “Synthetic aperture radar imaging using a unique approach to frequency-modulated continuous-wave radar design,” IEEE Antennas and Propagation Magazine, vol. 48, no. 1, pp. 171-177, 2006.
[17]
M. Zhao, A. Mammeri, and A. Boukerche, “Distance measurement system for smart vehicles,” in Proceedings of 2015 7th International Conference on New Technologies, Mobility and Security (NTMS), Paris, France, 2015, pp. 1-5.
[18]
P. M. Chu, S. Cho, J. Park, S. Fong, and K. Cho, “Enhanced ground segmentation method for Lidar point clouds in human-centric autonomous robot systems,” Human-centric Computing and Information Sciences, vol. 9, article no. 17, 2019.
https://doi.org/10.1186/s13673-019-0178-5
[19]
H. Ma and Z. Wang, “Distributed data organization and parallel data retrieval methods for huge laser scanner point clouds,” Computers & Geosciences, vol. 37, no. 2, pp. 193-201, 2011.
[20]
C. K. Tsung, C. T. Yang, R. Ranjan, Y. L. Chen, and J. H. Ou, “Performance evaluation of the vSAN application: a case study on the 3D and AI virtual application cloud service,” Human-centric Computing and Information Sciences, vol. 11, article no. 9, 2021.
https://doi.org/10.22967/HCIS.2021.11.009
[21]
D. Oryspayev, R. Sugumaran, J. DeGroote, and P. Gray, “LiDAR data reduction using vertex decimation and processing with GPGPU and multicore CPU technology,” Computers & Geosciences, vol. 43, pp. 118-125, 2012.
[22]
M. Schutz and M. Wimmer, “Rendering point clouds with compute shaders,” in Proceedings of the SIGGRAPH Asia 2019 Posters, Brisbane, Australia, 2019.
[23]
J. Huang, S. Ran, W. Wei, and Q. Yu, “Digital integration of LiDAR system implemented in a low-cost FPGA,” Symmetry, vol. 14, no. 6, article no. 1256, 2022.
https://doi.org/10.3390/sym14061256
[24]
M. E. Warren, “Automotive LIDAR technology,” in Proceedings of 2019 Symposium on VLSI Circuits, Kyoto, Japan, 2019, pp. C254-C255.
[26]
B. Lv, H. Xu, J. Wu, Y. Tian, Y. Zhang, Y. Zheng, C. Yuan, and S. Tian, “LiDAR-enhanced connected infrastructures sensing and broadcasting high-resolution traffic information serving smart cities,” IEEE Access, vol. 7, pp. 79895-79907, 2019.
[27]
W. Zhang and D. Yang, “Lidar-based fast 3D stockpile modeling,” in Proceedings of 2019 International Conference on Intelligent Computing, Automation and Systems (ICICAS), Chongqing, China, 2019, pp. 703-707.
[28]
W. Song, L. Zhang, Y. Tian, S. Fong, J. Liu, and A. Gozho, “CNN-based 3D object classification using Hough space of LiDAR point clouds,” Human-centric Computing and Information Sciences, vol. 10, article no. 19, 2020.
https://doi.org/10.1186/s13673-020-00228-8
[29]
W. Song, S. Zou, Y. Tian, S. Fong, and K. Cho, “Classifying 3D objects in LiDAR point clouds with a back-propagation neural network,” Human-centric Computing and Information Sciences, vol. 8, article no. 29, 2018.
https://doi.org/10.1186/s13673-018-0152-7
[30]
X. Zhang, L. Bai, Z. Zhang, and Y. Li, “Multi-scale keypoints feature fusion network for 3D object detection from point clouds,” Human-centric Computing and Information Sciences, vol. 12, article no. 29, 2022.
https://doi.org/10.22967/HCIS.2022.12.029
[31]
W. Song, Z. Liu, Y. Tian, and S. Fong, “Pointwise CNN for 3D object classification on point cloud,” Journal of Information Processing Systems, vol. 17, no. 4, pp. 787-800, 2021.
[32]
E. Felemban, “Advanced border intrusion detection and surveillance using wireless sensor network technology,” International Journal of Communications, Network and System Sciences, vol. 6, pp. 251-259, 2013.
[33]
R. Sabatini, A. Gardi, and M. Richardson, “LIDAR obstacle warning and avoidance system for unmanned aircraft,” International Journal of Mechanical, Aerospace, Industrial and Mechatronics Engineering, vol. 8, no. 4, pp. 718-729, 2014.
[34]
F. Chiariotti, A. A. Deshpande, M. Giordani, K. Antonakoglou, T. Mahmoodi, and A. Zanella, “QUIC-EST: a QUIC-enabled scheduling and transmission scheme to maximize VoI with correlated data flows,” IEEE Communications Magazine, vol. 59, no. 4, pp. 30-36, 2021.
[35]
E. S. Kim and S. Y. Park, “Extrinsic calibration between camera and LiDAR sensors by matching multiple 3D planes,” Sensors, vol. 20, no. 1, article no. 52, 2019.
https://doi.org/10.3390/s20010052
[36]
A. Singh, A. Kamireddypalli, V. Gandhi, and K. M. Krishna, “Lidar guided small obstacle segmentation,” in Proceedings of 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, 2020, pp. 8513-8520.
[37]
A. A. Khan, Z. A. Shaikh, L. Baitenova, L. Mutaliyeva, N. Moiseev, A. Mikhaylov, A. A. Laghari, S. A. Idris, and H. Alshazly, “QoS-ledger: smart contracts and metaheuristic for secure quality-of-service and cost-efficient scheduling of medical-data processing,” Electronics, vol. 10, no. 24, article no. 3083, 2021.
https://doi.org/10.3390/electronics10243083
[38]
T. Ojanperä, J. Makela, O. Mammela, M. Majanen, and O. Martikainen, “Use cases and communications architecture for 5G-enabled road safety services,” in Proceedings of 2018 European Conference on Networks and Communications (EuCNC), Ljubljana, Slovenia, 2018, pp. 335-340.
[39]
T. Ojanperä, J. Makela, O. Mammela, M. Majanen, and O. Martikainen, “Use cases and communications architecture for 5G-enabled road safety services,” in Proceedings of 2018 European Conference on Networks and Communications (EuCNC), Ljubljana, Slovenia, 2018, pp. 335-340.
[40]
A. A. Khan, Z. A. Shaikh, L. Belinskaja, L. Baitenova, Y. Vlasova, Z. Gerzelieva, A. A. Laghari, A. A. Abro, and S. Barykin, “A blockchain and metaheuristic-enabled distributed architecture for smart agricultural analysis and ledger preservation solution: a collaborative approach,” Applied Sciences, vol. 12, no. 3, article no. 1487, 2022.
https://doi.org/10.3390/app12031487
[41]
T. Sproull and D. Shook, “Machine learning on the move: teaching ML kit for firebase in a mobile apps course,” in Proceedings of the 54th ACM Technical Symposium on Computer Science Education, Toronto, Canada, 2022, pp. 1182-1182.
[42]
S. Agrawal, A. Chowdhuri, S. Sarkar, R. Selvanambi, and T. R. Gadekallu, “Temporal weighted averaging for asynchronous federated intrusion detection systems,” Computational Intelligence and Neuroscience, vol. 2021, article no. 5844728, 2021.
[43]
A. A. Khan, A. A. Wagan, A. A. Laghari, A. R. Gilal, I. A. Aziz, and B. A. Talpur, “BIoMT: a state-of-the-art consortium serverless network architecture for healthcare system using blockchain smart contracts,” IEEE Access, vol. 10, pp. 78887-78898, 2022.
- Received: 18 September 2022
- Accepted: 28 November 2022
- Published: 15 April 2023