Adaptive ECG Signal Compression Method Based on Look-Ahead Linear Approximation for Ultra Long-Term Operating of Healthcare IoT Devices
Abstract
Signal compression is an important study in electrocardiogram (ECG) signal analysis since ECG signals require long-term measurement. Linear approximation achieves a high signal compression rate and is efficient in detecting ambiguous fiducial points. Existing research improved the execution time to enable real-time linear approximation, but the existing algorithm selects the number of vertices arbitrarily. Thus, the existing linear approximation does not guarantee that the conditions of compression ratio (CR) or reconstruction error, measured by percentage root-mean-square difference (PRD), will be satisfied. In this study, we improve the algorithm to enable a linear approximation based on the specified CR or PRD. We propose a quantitative approach to determine the optimal number of vertices that satisfies the specified CR through inverse computation. Additionally, we extend the cost matrix in advance and select the optimal number of vertices in a look-ahead method, thereby performing signal compression according to the PRD. From the experimental results, we confirmed an average PRD of 0.78% at the given CR of 10:1, and an average CR of 12.7:1 at the given PRD of 2%.
Keywords
ECG, Signal Compression, Linear Approximation, Signal Reconstruction, Compression Ratio, Percentage Root-Mean-Square Difference
Introduction
Bio-signals are widely used for healthcare, security, human activity recognition, and so on [1,2]. A health monitoring system acquires and transmits signals, detects abnormal signals, and performs real-time health checks on the user [3,4]. The electrocardiogram (ECG) signal is one of the representative bio-signals used for early heart disease diagnosis, and bio-signal monitoring research on the ECG signal is being actively conducted as the mortality rate from heart disease rises [5–8].
ECG signal analysis examines the types and frequency of rarely occurring abnormal beats. Therefore, long-term measurement is necessary for accurate ECG signal analysis, and accordingly, a signal compression method is needed to store the signal effectively. In particular, an ECG signal compression method suitable for embedded devices is required for real-time transmission of the obtained ECG signals.
Linear approximation is a time-domain signal compression method that approximates a signal with a few vertices [9]. In the linear approximation, a curvature-based linear approximation step [10] obtains the initial vertices for each signal separated by RR interval, and a sequential linear approximation step [11] obtains additional vertices between the initial vertices. Further, dynamic programming [12] globally optimizes the approximation error (or reconstruction error). The linear approximation has a problem in that its complexity increases exponentially with the signal length and the number of vertices in the optimization step. Lee and his colleagues [13,14] optimized the size and operation area of the cost and base matrices used in dynamic programming and improved the algorithm to enable real-time processing. These methods enable effective linear approximation, but cannot achieve signal compression that satisfies a given compression ratio (CR) or reconstruction error, measured by the percentage root-mean-square difference (PRD), because the number of vertices is selected arbitrarily.
This study investigates the optimal number of vertices for signal compression based on a specified CR or PRD in linear approximation. The minimum number of vertices that satisfies the specified CR can be obtained by inverse calculation since the CR depends on the number of vertices. Therefore, we can skip the initial and additional vertex selection steps of the existing linear approximation and proceed directly with the linear approximation based on the obtained minimum number of vertices using dynamic programming.
The PRD can be measured only after the approximation is completed. Therefore, we must repeat the process of measuring the PRD while increasing the number of vertices until the measured PRD satisfies the specified PRD. The number of computations can be reduced since dynamic programming minimizes redundant computation via memoization. However, the cost matrix grows gradually during the iteration. Additionally, dynamic programming calculates the cost matrix by column-wise operation, but the number of rows of the cost matrix increases during the iteration. In this study, we prevent dynamic allocation of the cost matrix and perform a stable column-wise operation by generating an expanded cost matrix in a look-ahead method.
Fig. 1 shows the proposed algorithm flow compared with the existing linear approximation.
This paper is organized as follows. Section 2 introduces the composition of ECG signals. Section 3 introduces the process of linear approximation in detail. Section 4 introduces improved linear approximation based on the specified CR or PRD. In Section 5, we confirm the performance of the proposed method through an experiment, and the conclusion is presented in Section 6.

Fig. 1. Proposed algorithm flow.
Electrocardiogram Signal
During atrial and ventricular depolarization and repolarization, the P-wave, QRS complex, and T-wave are measured in the ECG signal [15]. The feature values of each waveform are obtained using the intervals, segments, and amplitude differences between the fiducial points.
Fig. 2 shows the composition of the fiducial points and feature values of the ECG signal.
Fig. 2. The composition of fiducial points and features of the ECG signal.
Among the fiducial points, the R-peak has the highest amplitude value. Therefore, R-peak detection is relatively easy and can be accomplished accurately through post-processing correction [16,17]. Based on these characteristics, the R-peak becomes a criterion for classifying beats and is used for detecting other fiducial points.
Each beat is classified through the R-peak, and two methods are used to separate the beats. The first method divides the beats into the region of 275 ms before and 375 ms after the R-peak [18]. This region includes the P-wave, QRS complex, and T-wave, which are mainly used for ECG signal analysis; however, some data samples are missing. The second method uses the RR interval to divide the beats. This minimizes the approximation error by approximating all data samples. Fig. 3 shows the two beat-separation methods.
In this study, we separate the beats according to the RR interval, as shown in Fig. 3(b).
Fig. 3. The two beat-separation methods: (a) centering on the R-peak and (b) RR interval.
Existing Linear Approximation
Fig. 4 shows the concept for linear approximation.
Fig. 4. Illustration of linear approximation: (a) the existing method, (b) linear approximation.
Detecting fiducial points such as an onset or offset is difficult because these points do not have a maximum or minimum amplitude (Fig. 4(a)). Additionally, selecting a fiducial point is ambiguous because the samples around fiducial points have similar feature values.
However, because linear approximation represents a signal with few vertices (Fig. 4(b)), it has the advantage of representing as a vertex the fiducial point corresponding to the boundary between a waveform region with a large amplitude change and a baseline region with a small amplitude change. Additionally, representing the signal with a few vertices emphasizes the feature values at those vertices.
The input signal is divided into RR intervals based on R-peak detection, and an independent linear approximation is performed for each beat. The linear approximation proposed by Lee et al. [9] proceeds in three stages: initial vertex selection, additional vertex selection, and error optimization. Lee and colleagues [13,14] also improved the dynamic programming used in the optimization step based on the ECG signal's characteristics.
Initial Vertex Selection
First, points with a large curvature are obtained as initial vertices using curvature-based linear approximation [10]. The curvature is the amount by which a curve deviates from a straight line, and it is calculated using the included angle between three points as (1):
$\delta(Y;X,Z)=\frac{\overrightarrow{a}\times \overrightarrow{c}}{\lVert\overrightarrow{a}\rVert} =\frac{(x_1-z_1)(x_2-y_2)-(x_2-z_2)(x_1-y_1)}{\sqrt{(x_1-z_1)^2+(x_2-z_2)^2}}, \quad x_1>y_1>z_1$ (1)
Fig. 5 shows the concept of the curvature calculation process and an example of the approximated signal.
In dynamic programming, complexity increases exponentially with the signal length and the number of vertices. The initial vertex selection can reduce the complexity by dividing the RR interval into several sub-intervals.
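For illustration, the following Python sketch scores each sample with the measure in (1) and keeps the highest-curvature points as initial vertices. The window size, vertex count, and function names are our assumptions, a simplified stand-in for the curvature scale-space method of [10], not the authors' implementation.

```python
import numpy as np

def curvature(X, Y, Z):
    """Curvature measure of Eq. (1): the perpendicular distance from the
    middle point Y to the chord through X and Z. Each point is a
    (time, amplitude) pair with time order x1 > y1 > z1."""
    (x1, x2), (y1, y2), (z1, z2) = X, Y, Z
    num = (x1 - z1) * (x2 - y2) - (x2 - z2) * (x1 - y1)
    return abs(num) / np.sqrt((x1 - z1) ** 2 + (x2 - z2) ** 2)

def initial_vertices(signal, half_window=5, n_vertices=8):
    """Keep the highest-curvature samples (plus both endpoints) as the
    initial vertices."""
    signal = np.asarray(signal, dtype=float)
    L = len(signal)
    scores = np.zeros(L)
    for y in range(half_window, L - half_window):
        scores[y] = curvature((y + half_window, signal[y + half_window]),
                              (y, signal[y]),
                              (y - half_window, signal[y - half_window]))
    picks = np.argsort(scores)[-n_vertices:]      # n highest-curvature samples
    return np.unique(np.concatenate(([0], picks, [L - 1])))
```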
Additional Vertex Selection
Curvature-based linear approximation expresses a fiducial point well as an initial vertex when it is a point with a large curvature. However, if the waveform's amplitude change is small, the point may not be captured as an initial vertex, as shown for the QRS-onset and P-offset in Fig. 5(b). To correct this, additional vertices are obtained by applying sequential linear approximation between each pair of initial vertices [11].
Fig. 5. The curvature-based linear approximation: (a) concept of curvature and (b) an example of the method.
Fig. 6 shows the concept of the sequential linear approximation process and an example of the approximated signal of Fig. 5(b).
Fig. 6(a) shows that a vertex is added when the error exceeds the threshold $D_{max}$, and the signal is effectively approximated as shown in Fig. 6(b).
Fig. 6. Sequential linear approximation: (a) concept of method and (b) approximation result of Fig. 5(b).
The approximation result depends on the number of vertices because dynamic programming optimizes the vertices globally. Thus, the additional vertex selection appropriately selects the number of vertices for dynamic programming.
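As an illustration of this step, the sketch below repeatedly splits each segment at the sample with the largest perpendicular error until every segment is within the threshold; the $D_{max}$ value and helper names are assumptions.

```python
import numpy as np

def perp_error(signal, i, j):
    """Perpendicular distance from each sample i..j to the straight line
    through (i, signal[i]) and (j, signal[j])."""
    n = np.arange(i, j + 1)
    num = np.abs((j - i) * (signal[n] - signal[i])
                 - (n - i) * (signal[j] - signal[i]))
    return num / np.hypot(j - i, signal[j] - signal[i])

def sequential_vertices(signal, vertices, d_max=0.05):
    """Sequential linear approximation: between each pair of consecutive
    vertices, insert a vertex at the worst sample whenever the error
    exceeds d_max, repeating until every segment is within tolerance."""
    signal = np.asarray(signal, dtype=float)
    vs = sorted(int(v) for v in vertices)
    k = 0
    while k < len(vs) - 1:
        i, j = vs[k], vs[k + 1]
        err = perp_error(signal, i, j)
        if j - i > 1 and err.max() > d_max:
            vs.insert(k + 1, i + int(err.argmax()))  # split at the worst sample
        else:
            k += 1                                   # segment is within D_max
    return vs
```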
Error Optimization
Global error optimization calculates the costs for all cases and selects the minimum-cost case as the optimization result. Therefore, the problem of optimizing the positions of $N$ vertices over $L$ samples has a complexity of O($L^N$). Dynamic programming [12] is a global optimization technique in which the optimal path between any two points on the global path is itself globally optimal, following Bellman's principle of optimality. The recursive approach simplifies and optimizes the problem, particularly using memoization to remember computational results, which eliminates redundant operations. Dynamic programming enables high-speed global optimization but requires additional memory for memoization, which consists of the cost and base matrices. The size of the cost and base matrices required for memoization is O($L^2N$).
Fig. 7 shows the cost and base matrices used for memoization of dynamic programming.
Fig. 7. The cost and base matrices, which are used for the memoization of dynamic programming.
For a signal of length $L$, the optimization of the partial signal from $i$ to $j$, including $k$ vertices, is recursively calculated as (2):
$C_k(i,j)=\min\limits_{v_k \in [1,\cdots,L]} \left(C_{k-1}(i,v_k) + C_0(v_k,j)\right),$ (2)
where $v_k$ denotes the position of the $k_{th}$ vertex, and when $k$ is 0, $C_0(i,j)$ is calculated as the error between the input signal and the line connecting the $i_{th}$ and $j_{th}$ samples. In this study, we calculate the error as the length of the perpendicular line, as shown in Fig. 6(a), because it is effective in approximating the QRS complex with its rapid amplitude change.
Fig. 8 shows the results of optimization using the dynamic programming shown in Fig. 6(b).
The signal is well represented with few vertices, and dynamic programming minimizes the reconstruction error.
Fig. 8. Optimization results of Fig. 6(b).
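For reference, recursion (2) can be sketched with memoization as follows. The squared vertical error is used as $C_0$ for brevity, whereas the paper uses the perpendicular distance; function names are illustrative and the sketch assumes `n_vertices < len(signal) - 1`.

```python
from functools import lru_cache
import numpy as np

def optimize(signal, n_vertices):
    """Place n_vertices free vertices between the fixed endpoints so that
    the summed segment error of Eq. (2) is minimal (memoized recursion)."""
    signal = np.asarray(signal, dtype=float)
    L = len(signal)

    def seg_cost(i, j):  # C_0(i, j): squared error of the chord from i to j
        n = np.arange(i, j + 1)
        line = signal[i] + (signal[j] - signal[i]) * (n - i) / max(j - i, 1)
        return float(np.sum((signal[n] - line) ** 2))

    @lru_cache(maxsize=None)
    def C(k, j):  # C_k(1, j) of Eq. (2): best cost and chosen vertices
        if k == 0:
            return seg_cost(0, j), ()
        cost, v = min((C(k - 1, v)[0] + seg_cost(v, j), v) for v in range(k, j))
        return cost, C(k - 1, v)[1] + (v,)

    cost, verts = C(n_vertices, L - 1)
    return cost, (0,) + verts + (L - 1,)

# e.g., approximate one period of a sine wave with 3 free vertices
print(optimize(np.sin(np.linspace(0, 6.28, 50)), 3))
```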
Optimization of Dynamic Programming
Lee et al. [13] optimized the calculation of the cost and base matrices based on the ECG signal's characteristics, thereby improving the performance and enabling real-time operation even on embedded devices.
Fig. 9 shows the improvements in the spatial and time complexity of the cost and base matrices used in dynamic programming.
The following is a detailed description of each step of Fig. 9.
Fig. 9. Summary of the dynamic programming improvement process.
Improvement based on characteristics of ECG signals
Characteristic 1
The cost and base matrices are symmetric because the approximation error of the ECG signal is not affected by the measurement direction.
Characteristic 2
The vertices’ time information has a monotonic increasing property because each vertex of the ECG signal is selected according to the passage of time.
Improvements
The first and last vertices are fixed as initial vertices corresponding to both ends of the input signal. Therefore, $i$ is always 1 in $C_k(i,j)$ of (2), and the existing cost matrix $C_k(1,j)$ can be expressed as $C(k,j)$, as shown in (3):
$C(k,j)=\min\limits_{1 < v_k < j}\left(C(k-1,v_k)+C_0(v_k,j)\right)$ (3)
By expressing $C_k(1,j)$ as $C(k,j)$, the cost matrix of size $O(L^2N)$ is reduced to size $O(NL)$. Thus, $C(N,L)$ is the optimized error value when $N$ additional vertices exist between the first sample and the last ($L_{th}$) sample, and it represents the dynamic programming result. Additionally, the existing dynamic programming method optimizes by performing calculations in a recursive, top-down manner. However, the improved cost matrix fixes the area required for the calculation; thus, it can be optimized through a bottom-up operation without a recursive approach.
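A minimal sketch of the reduced recursion (3), assuming a user-supplied segment-error function `seg_cost` that plays the role of the base matrix $C_0$: the cost matrix shrinks to $O(NL)$ and each row is filled bottom-up from the previous one, with an auxiliary arg-min table for backtracking.

```python
import numpy as np

def dp_bottom_up(signal, N, seg_cost):
    """Bottom-up evaluation of Eq. (3). C[k, j] holds the optimal error of
    approximating samples 0..j with k free vertices; `back` stores the
    arg-min vertex for backtracking. seg_cost(i, j) plays the role of the
    base matrix C_0. Memory for C is O(NL) instead of O(L^2 N)."""
    L = len(signal)
    C = np.full((N + 1, L), np.inf)
    back = np.zeros((N + 1, L), dtype=int)
    C[0, :] = [seg_cost(0, j) for j in range(L)]  # row k = 0 straight from C_0
    for k in range(1, N + 1):          # bottom-up: row k only needs row k-1
        for j in range(k + 1, L):
            costs = [C[k - 1, v] + seg_cost(v, j) for v in range(1, j)]
            v = 1 + int(np.argmin(costs))
            C[k, j], back[k, j] = costs[v - 1], v
    verts, j = [L - 1], L - 1          # backtrack from C(N, L-1)
    for k in range(N, 0, -1):
        j = back[k, j]
        verts.append(j)
    return C[N, L - 1], [0] + verts[::-1]
```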
Limiting the time difference between vertices
Problem
Many data bits are required to record the vertices’ time information because the ECG signal is measured for a long time.
Improvement
The amount of information is minimized by representing each vertex's time information as the time difference from the previous vertex. The maximum interval between vertices is limited to $N_{Bit}$ ($2^5 = 32$ in this study), determined by the number of bits used to represent the time difference between vertices. The cost matrix $C(k,j)$ is calculated only when the interval between the $k_{th}$ and $(k+1)_{th}$ vertices does not exceed $N_{Bit}$, as expressed by (4):
$C(k,j) = \min\limits_{v_{k+1}-N_{Bit} \le v_k < v_{k+1}} \left(C(k-1,v_k)+C_0(v_k,j)\right)$ (4)
The computation for the base matrix is reduced because the time difference between the two vertices is limited by $N_{Bit}$.
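As a sketch of the resulting vertex encoding (the mapping of deltas 1..32 onto 5 stored bits is our assumption, not specified by the paper):

```python
def encode_vertices(vertices, amplitudes, bit_t=5):
    """Encode vertices as (time-delta, amplitude) pairs. Each delta is
    stored in bit_t bits, so consecutive vertices may be at most
    2**bit_t samples apart -- the N_Bit constraint of Eq. (4)."""
    n_bit = 2 ** bit_t
    deltas = []
    prev = vertices[0]
    for v in vertices[1:]:
        d = v - prev
        assert 0 < d <= n_bit, "DP must keep vertex gaps within N_Bit"
        deltas.append(d - 1)  # store deltas 1..32 as 0..31 in 5 bits
        prev = v
    # absolute time of the first vertex, then compact delta/amplitude pairs
    return vertices[0], list(zip(deltas, amplitudes[1:]))
```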
Row-wise operation
Problem
The first row of the cost matrix is calculated from the result of the base matrix operation. Similarly, the $(k+1)_{th}$ row of the cost matrix is sequentially calculated using the $k_{th}$ row of the cost matrix and the corresponding components of the base matrix. This row-wise operation requires calls to a large area of the base matrix, making it difficult to improve the base matrix.
Improvement
When each component of the cost matrix is computed in a column-wise operation, the base matrix is referenced sequentially, column by column. That is, the $j_{th}$ column of the base matrix is used only in the calculation of the cost matrix's $j_{th}$ column. Thus, the base matrix $C_0(v_k,j)$ in (4) can be expressed as (5):
$C(k,j) = \min\limits_{v_{k+1}-N_{Bit} \le v_k < v_{k+1}} \left(C(k-1,v_k)+C_0^j(v_k-(v_{k+1}-N_{Bit})+1)\right)$ (5)
As shown in (5), the memory used by the base matrix column $C_0^j$ can be overwritten by $C_0^{j+1}$. Accordingly, the size of the base matrix can be reduced from $O(L^2)$ to $O(N_{Bit})$, a single column vector.
Improved Linear Approximation
Compression Ratio-Based Approximation
For the input signal with length $L$, CR of linear approximation with $N_V$ vertices can be calculated as (6):
$CR=\frac{\text{total bits of vertices}}{\text{total bits of samples}} = \frac{Bit_{F_V}+N_V \times (Bit_T+Bit_A)}{L\times Bit_A}$ (6)
where $Bit_{F_V}$ denotes the number of bits for the first vertex's time information (64 bits in this study), $Bit_T$ denotes the number of bits for the time difference between vertices (5 bits in this study), and $Bit_A$ denotes the resolution of the sample's amplitude (11 bits in this study).
Here, $N_{Bit}$ ($2^{Bit_T}=32$) restricts the time difference between vertices, and the minimum number of vertices ($N_{Min}$) can be obtained as (7):
$N_{Min}= \lceil\frac{L}{N_{Bit}}\rceil $(7)
If the CR at $N_V = N_{Min}$ is less than the specified CR, the linear approximation is guaranteed to satisfy the given CR. Otherwise, the maximum achievable compression is obtained at $N_V = N_{Min}$.
As shown in (6), we can estimate the CR from the number of vertices since the other variables are predefined constants. Thus, we can obtain the number of vertices ($N_V$) that satisfies a specified CR by inverse calculation as (8):
$N_V=\max\left(\left\lceil\frac{CR\times L\times Bit_A-Bit_{F_V}}{Bit_T+Bit_A}\right\rceil,\left\lceil\frac{L}{N_{Bit}}\right\rceil\right)$ (8)
Additionally, the first vertex's time information is necessary only for the first interval's linear approximation, since each subsequent interval's first vertex coincides with the previous interval's last vertex. Thus, $N_V$ after the first interval is calculated as (9):
$N_V=\max\left(\left\lfloor\frac{CR\times L\times Bit_A}{Bit_T+Bit_A}+1\right\rfloor,\left\lceil\frac{L}{N_{Bit}}\right\rceil\right)$ (9)
The existing linear approximation proceeds through initial vertex selection using curvature-based linear approximation, additional vertex selection using sequential linear approximation, and optimization using dynamic programming. The initial vertex selection divides the RR interval to reduce the number of computations, whereas the additional vertex selection determines the number of vertices. However, the improved dynamic programming enables real-time calculation without splitting the RR interval via initial vertex selection, and the additional vertex selection is unnecessary because the number of vertices is determined by inverse calculation, as shown in (9).
Therefore, we can simplify the process of linear approximation according to CR in two steps: (i) selecting the number of vertices using (9) and (ii) optimizing the vertices using dynamic programming.
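Step (i) reduces to a closed-form vertex count. A minimal sketch of (7)-(9), treating the CR as the bit fraction of (6) (e.g., 0.1 for 10:1 compression); constant names are illustrative:

```python
import math

BIT_FV, BIT_T, BIT_A, N_BIT = 64, 5, 11, 2 ** 5  # constants used in this study

def vertices_for_cr(L, cr, first_interval=True):
    """Number of vertices for a target CR, by inverse calculation.
    `cr` is the bit fraction of Eq. (6), e.g. 0.1 for 10:1 compression.
    The second term of the max enforces the N_Min floor of Eq. (7)."""
    n_min = math.ceil(L / N_BIT)
    if first_interval:   # Eq. (8): the first vertex carries absolute time
        n_v = math.ceil((cr * L * BIT_A - BIT_FV) / (BIT_T + BIT_A))
    else:                # Eq. (9): first vertex shared with previous interval
        n_v = math.floor(cr * L * BIT_A / (BIT_T + BIT_A) + 1)
    return max(n_v, n_min)

# e.g., a one-second beat at 360 Hz compressed to 10:1 -> 25 vertices
print(vertices_for_cr(360, 0.1, first_interval=False))
```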
PRD-Based Approximation
Unlike CR, calculating the number of vertices required to satisfy the PRD is challenging. The number of vertices used for linear approximation should be increased until the approximation satisfies the PRD. In the iteration process, the cost matrix size gradually increases according to the number of vertices.
Fig. 10 shows the cost matrices according to the number of vertices.
As shown in (7), we can obtain the minimum number of vertices ($N_{Min}$). Starting from $N_V=N_{Min}$, $N_V$ increases by 1 until the PRD of the $N_V$ optimized vertices satisfies the PRD condition (2% in this study).
However, this iteration causes problems when implemented on low-capacity embedded devices because the cost matrix's size continuously increases, requiring dynamic memory allocation. Additionally, calculating a new row component of the extended cost matrix is inefficient because Lee et al. [13] optimized the execution time through a column-wise bottom-up operation.
Therefore, instead of expanding and recalculating the cost matrix repeatedly, the cost matrix's row size is expanded to $N_{Max}$ in advance, and all essential components are calculated as $N_V$ increases from $N_{Min}$ to $N_{Max}$ (Fig. 10(d)). Then, the minimum number of vertices that satisfies the given PRD is determined.
The average CR of beats is about 10:1 to 12:1, but it decreases to about 3:1 to 5:1 when a sudden amplitude change occurs due to an abnormal beat or noise. Thus, we set the size of the extended cost matrix, $N_{Max}$, to five times $N_{Min}$.
Fig. 10. The size and calculation area of the cost matrix according to the number of vertices:
(a) $N_{Min}$ vertices, (b) $N_{Min}$+1 vertices, (c) $N_{Max}$ vertices, and (d) the calculation area required from $N_{Min}$ to $N_{Max}$ vertices.
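A minimal sketch of the look-ahead search, assuming helpers `fill_row` (computes row k of the cost matrix per (3)-(5)) and `prd_of` (reconstructs with k vertices and measures (13)); both names are illustrative:

```python
import math
import numpy as np

def prd_search(signal, fill_row, prd_of, target_prd=2.0):
    """Look-ahead PRD-based compression. The cost matrix is allocated once
    with N_Max = 5 * N_Min rows (no dynamic reallocation), rows are filled
    as N_V grows, and the smallest N_V meeting the target PRD is returned."""
    L = len(signal)
    n_min = math.ceil(L / 32)            # Eq. (7) with N_Bit = 32
    n_max = 5 * n_min                    # worst case: CR drops ~5x on noisy beats
    C = np.full((n_max + 1, L), np.inf)  # fixed-size, look-ahead allocation
    for k in range(n_max + 1):
        fill_row(C, k)                   # assumed helper: row k per Eq. (3)-(5)
        if k >= n_min and prd_of(C, k) <= target_prd:
            return k                     # minimum vertex count satisfying PRD
    return n_max                         # fall back to the densest approximation
```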
Experiment
We experimented using the Massachusetts Institute of Technology-Beth Israel Hospital Arrhythmia Database (MIT-BIH ADB) provided by PhysioNet [19]. The MIT-BIH ADB is composed of 48 records, each about 30 minutes long, with a 360 Hz sampling frequency and 11-bit amplitude resolution. In the experiment, linear approximations according to CR and PRD were performed, respectively. We excluded records 102, 104, 107, 113, and 217, which were measured using a pacemaker, and the intervals in which R-peak detection was performed abnormally. Additionally, we used a 1–25 Hz band-pass filter for noise suppression, such as low-frequency baseline fluctuation and power supply noise of 30 and 60 Hz.
PRD for Linear Approximation
To evaluate the experimental performance, we used PRD as (10):
$PRD=\sqrt{\frac{\sum_{n=1}^L E[n]^2}{\sum_{n=1}^L O[n]^2}}\times 100\%, \quad E[n] = O[n] - C[n]$ (10)
where $O[n]$ and $C[n]$ denote the input signal and the signal reconstructed from the compressed signal, respectively, and $E[n]$ denotes the error between the two signals.
However, the baseline movement or DC component must be removed from the input signal for a reliable measurement because the average amplitude of the input signal strongly influences the PRD. Therefore, we subtract the average value $\overline{O}$ from the input signal $O[n]$ as (11):
$PRD=\sqrt{\frac{\sum_{n=1}^L E[n]^2}{\sum_{n=1}^L (O[n]-\overline{O})^2}}\times 100\%, \quad E[n] = O[n] - C[n]$ (11)
This is important for the ECG signal because the average of the entire signal can be reduced to 0 through preprocessing, but the local average of each beat may not be 0. Further, we propose correcting the measurement of the error $E[n]$. A large error is frequently measured in regions where the amplitude changes rapidly, such as the QRS complex, even when the signal is sufficiently approximated, because $E[n]$ measures the amplitude difference between the two signals at the same time instant. This results in unnecessary compression iterations, thereby decreasing the CR.
In this study, we calculate the error as the length of the perpendicular line. The time scale of the x-axis and the amplitude scale of the y-axis are expressed in seconds and mV, respectively, based on the average R-peak amplitude and RR interval of about 1 mV and 1 second. Because $E[n]$, the amplitude difference from $C[n]$, has low reliability as an error measure in the QRS complex with its large amplitude changes, we propose using the length of the perpendicular line as the error $E'[n]$, as expressed in (12):
$E'[n]=\frac{(O_x[n] - V_x[k]) \times (V_y[k+1] - O_y[n]) - (V_x[k+1] - O_x[n]) \times (O_y[n] - V_y[k])}{\sqrt{(V_x[k+1]-V_x[k])^2+(V_y[k+1]-V_y[k])^2}}, \quad O_x[n] \in [V_x[k],V_x[k+1]]$ (12)
where $O[n]=(O_x[n],O_y[n])$ denotes the $n_{th}$ sample of the input signal, lying in the interval between the $k_{th}$ and $(k+1)_{th}$ vertices, which are represented as $V[k]=(V_x[k],V_y[k])$ and $V[k+1]=(V_x[k+1],V_y[k+1])$, respectively.
Thus, we can obtain the PRD as (13):
$PRD=\sqrt{\frac{\sum_{n=1}^L E'[n]^2}{\sum_{n=1}^L (O[n]-\overline{O})^2}}\times 100\%$ (13)
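For reference, (11)-(13) can be computed as follows; the sampling rate and amplitude gain used to scale the axes to seconds and mV are assumed parameters, and vertex amplitudes are assumed to be given in mV.

```python
import numpy as np

def prd_perpendicular(original, vx, vy, fs=360.0, gain=1.0):
    """PRD of Eq. (13): mean-removed denominator of Eq. (11) and the
    perpendicular error E'[n] of Eq. (12). vx holds vertex sample indices,
    vy the vertex amplitudes in mV; fs and gain are assumed scale factors."""
    t = np.arange(len(original)) / fs          # x-axis in seconds
    o = np.asarray(original, dtype=float) / gain  # y-axis in mV
    err2 = 0.0
    for k in range(len(vx) - 1):               # samples between vertex k and k+1
        x1, y1, x2, y2 = t[vx[k]], vy[k], t[vx[k + 1]], vy[k + 1]
        n = np.arange(vx[k], vx[k + 1] + 1)    # shared endpoints counted twice;
        num = (t[n] - x1) * (y2 - o[n]) - (x2 - t[n]) * (o[n] - y1)
        e = num / np.hypot(x2 - x1, y2 - y1)   # E'[n], Eq. (12)
        err2 += float(np.sum(e ** 2))          # negligible for this sketch
    denom = float(np.sum((o - o.mean()) ** 2)) # mean-removed term of Eq. (11)
    return np.sqrt(err2 / denom) * 100.0
```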
The quality of the PRD is rated as very good in the 0%–2% range and good in the 2%–9% range [20]. Compressing with a PRD threshold of 9%, the upper limit of the good range, would be sufficient because similar rhythms or shapes repeat in the ECG signal. However, this study limits the PRD threshold to 2% to provide more accurate diagnostic information, since abnormal beats contain important information for arrhythmia diagnosis.
Compression Ratio-Based Approximation
We selected the number of vertices according to the given CR using (9). Fig. 11 shows the result of the linear approximation of the MIT-BIH ADB records when the given CR is 10:1.
Fig. 11 presents the average and standard deviation of the PRD over the intervals of each record. The overall average PRD is 0.78%, which falls in the very good range.
Fig. 11. PRD distribution of linear approximation for each record of MIT-BIH ADB.
PRD-Based Approximation
In the proposed method, we generate the expanded cost matrix according to the specified PRD and then search for the number of vertices that satisfies it. In this study, the specified PRD is 2%, the upper limit of the very good range. Fig. 12 shows the result of the linear approximation when the specified PRD is 2%.
As shown in Fig. 12, we confirmed a high CR on average (12.7:1). In particular, the simpler the signal shape and the wider the RR interval, the higher the signal compression efficiency (20.73:1 maximum). In contrast, the lower the signal-to-noise ratio (SNR) and the shorter the RR interval, the lower the signal compression efficiency (2.98:1 minimum).
Fig. 12. CR distribution of linear approximation for each record of MIT-BIH ADB.
Fig. 13 compares record 214, with the highest average CR (16.6:1), and record 222, with the lowest average CR (6.9:1).
As shown in Fig. 13(a), the simpler the signal, the higher the CR, owing to sufficient approximation with few vertices. In contrast, in Fig. 13(b), numerous vertices are required due to the heavy noise (low SNR), resulting in a sharp decrease in the CR.
Table 1 shows the comparison of CR and PRD with conventional algorithms.
By expressing the signal with few vertices, the proposed method not only facilitates the detection of fiducial points but also achieves a CR that is not inferior to those of conventional methods.
Fig. 13. The comparison of CR: (a) datum 214 and (b) datum 222.
Table 1. Comparison of CR and PRD with conventional algorithms
Algorithm | CR | PRD (%)
AZTEC [21] | 10 | 28
Turning point [22] | 2 | 5.3
Coordinate-reduction-time-encoding system [23] | 4.8 | 7
Fourier descriptors [24] | 7.4 | 7
Wavelet transform [25] | 8 | 2.6
Wavelet & SPIHT [26] | 8 | 1.8
B-spline approximation [27] | 9.29 | 1.86
Proposed method (PRD-based) | 12.7 | 2
Proposed method (CR-based) | 10 | 0.78
Conclusion
In this study, we improved the linear approximation algorithm to enable signal compression based on a specified CR or PRD. For compression according to the specified CR of 10:1, the average PRD was 0.78%, which is in the very good PRD range. Further, compression based on the specified PRD of 2%, the upper limit of the very good range, exhibited a high average CR of 12.7:1.
The proposed method simplifies the algorithm because it removes the initial and additional vertex selection steps, either by inversely calculating the number of vertices according to the given CR or by sufficiently expanding the cost matrix and searching for the minimum number of vertices that satisfies the given PRD. In particular, stable real-time processing is possible because CR and PRD are measured independently for each interval. However, in the case of an interval with heavy noise (low SNR), unnecessary vertices are required, which significantly lowers the CR. A local noise suppression filter applied per interval may further improve the CR in future studies. Further, we expect that the CR can be improved by studying a method that compresses the normal beats, which occupy most of the signal and are repeated periodically.
Acknowledgements
Not applicable.
Author’s Contributions
Conceptualization, SL, DP. Funding acquisition, DP. Investigation and methodology, SL, DP. Project administration, SL. Resources, SL. Supervision, DP. Writing of the original draft, SL. Writing of the review and editing, SL, DP. Software, SL. Validation, SL, DP. Formal Analysis, SL. Data Curation, SL, DP. Visualization, SL.
Funding
This study was supported by the BK21 FOUR project funded by the Ministry of Education, Korea (4199990113966, 10%), the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2019R1A2C2005099, 10%) and the Ministry of Education (NRF-2018R1A6A1A03025109, 20%; NRF-2020R1I1A1A01072343, 20%). This work was partly supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00944, Metamorphic approach of unstructured validation/verification for analyzing binary code, 40%).
Competing Interests
The authors declare that they have no competing interests.
Author Biography
Name : Seungmin Lee
Affiliation : School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, Korea
Biography : He received the B.S. and M.S. degrees in mathematics and the Ph.D. degree in electronics engineering from Kyungpook National University (KNU) in 2010, 2012, and 2018, respectively. He has expanded his research topics to bio-inspired signal processing algorithms and electronic systems. He holds a postdoctoral position at KNU. His research interests include signal processing, image processing, bio-inspired signal processing, and compact system implementation. He is focusing his research on bio-signal processing as the research director of a project supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education.

Name : Daejin Park
Affiliation : School of Electronic and Electrical Engineering, Kyungpook National University, Daegu, Korea
Biography : He received the B.S. degree in electronics engineering from Kyungpook National University (KNU) in 2001, and the M.S. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2003 and 2014, respectively. He was a research engineer at SK Hynix Semiconductor and Samsung Electronics for over 12 years, where he worked on designing low-power embedded processor architectures and implementing fully AI-integrated systems-on-chip. Prof. Park has been a full-time assistant professor with the School of Electronic and Electrical Engineering, KNU, since 2014. He has published over 180 technical papers and holds 40 patents.
References
[1] T. M. Li, H. C. Chao, and J. Zhang, “Emotion classification based on brain wave: a survey,” Human-centric Computing and Information Sciences, vol. 9, article no. 42, 2019.
https://doi.org/10.1186/s13673-019-0201-x
[2] S. Lee, J. Kim, and N. Moon, “Random forest and WiFi fingerprint-based indoor location recognition system using smart watch,” Human-centric Computing and Information Sciences, vol. 9, article no. 1-14, 2019.
https://doi.org/10.1186/s13673-019-0168-7
[3] Y. Meng, S. H. Yi, and H. C. Kim, “Health and wellness monitoring using intelligent sensing technique,” Journal of Information Processing Systems, vol. 15, no. 3, pp. 478-491, 2019.
[4] W. Lee, N. Kim, and B. D. Lee, “An adaptive transmission power control algorithm for wearable healthcare systems based on variations in the body conditions,” Journal of Information Processing Systems, vol. 15, no. 3, pp. 593-603, 2019.
[5] A. Arshad, S. Khan, A. Z. Alam, R. Tasnim, and R. I. Boby, “Health and wellness monitoring of elderly people using intelligent sensing technique,” in Proceedings of 2016 International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, 2016, pp. 231-235.
[6] S. Lee, D. Park, and K. H. Park, “QRS complex detection based on primitive,” Journal of Communications and Networks, vol. 19, no. 5, pp. 442-450, 2017.
[7] S. Lee and D. Park, “Improved dynamic programming in local linear approximation based on a template in a lightweight ECG signal-processing edge device,” Journal of Information Processing Systems, to be published.
[8] S. Lee and D. Park, “A real-time abnormal beat detection method using template cluster for ECG diagnosis on IoT devices,” Human-centric Computing and Information Sciences, vol. 11, article no. 4, 2021.
https://doi.org/10.22967/HCIS.2021.11.004
[9] S. Lee, Y. Jeong, J. Kwak, D. Park, and K. H. Park, “Efficient communication overhead reduction using polygonal approximation-based ECG signal compression,” in Proceedings of 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Okinawa, Japan, 2019, pp. 058-061.
[10] F. Mokhtarian and R. Suomela, “Robust image corner detection through curvature scale space,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1376-1381, 1998.
[11] K. J. O'Connell, “Object-adaptive vertex-based shape coding method,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 1, pp. 251-255, 1997.
[12] R. E. Bellman and S. E. Dreyfus, Applied Dynamic Programming. Princeton, NJ: Princeton University Press, 2015.
[13] S. Lee, Y. Jeong, J. Kwak, D. Park, and K. H. Park, “Advanced real-time dynamic programming in the polygonal approximation of ECG signals for a lightweight embedded device,” IEEE Access, vol. 7, pp. 162850-162861, 2019.
[14] S. Lee and D. Park, “Enhanced dynamic programming for polygonal approximation of ECG signals,” in Proceedings of 2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech), Kyoto, Japan, 2020, pp. 121-122.
[15] R. J. Huszar, Basic Dysrhythmias: Interpretation & Management. St. Louis, MO: Mosby/Elsevier, 2007.
[16] J. Pan and W. J. Tompkins, “A real-time QRS detection algorithm,” IEEE Transactions on Biomedical Engineering, vol. 32, no. 3, pp. 230-236, 1985.
[18] A. Li, S. Wang, H. Zheng, L. Ji, and J. Wu, “A novel abnormal ECG beats detection method,” in Proceedings of 2010 The 2nd International Conference on Computer and Automation Engineering (ICCAE), Singapore, 2010, pp. 47-51.
[19] G. B. Moody and R. G. Mark, “The MIT-BIH arrhythmia database on CD-ROM and software for use with it,” in Proceedings Computers in Cardiology (Cat. No.90CH3011-4), Chicago, IL, 1990, pp. 185-188.
[20] Y. Zigel, A. Cohen, and A. Katz, “The weighted diagnostic distortion (WDD) measure for ECG signal compression,” IEEE Transactions on Biomedical Engineering, vol. 47, no. 11, pp. 1422-1430, 2000.
[21] J. R. Cox, F. M. Nolle, H. A. Fozzard, and G. C. Oliver, “AZTEC, a preprocessing program for real-time ECG rhythm analysis,” IEEE Transactions on Biomedical Engineering, vol. 15, no. 2, pp. 128-129, 1968.
[22] W. C. Mueller, “Arrhythmia detection program for an ambulatory ECG monitor,” Biomedical Sciences Instrumentation, vol. 14, pp. 81-85, 1978.
[23] J. P. Abenstein and W. J. Tompkins, “A new data-reduction algorithm for real-time ECG analysis,” IEEE Transactions on Biomedical Engineering, vol. 29, no. 1, pp. 43-48, 1982.
[24] B. S. Reddy and I. S. N. Murthy, “ECG data compression using Fourier descriptors,” IEEE Transactions on Biomedical Engineering, vol. 33, no. 4, pp. 428-434, 1986.
[25] M. L. Hilton, “Wavelet and wavelet packet compression of electrocardiograms,” IEEE Transactions on Biomedical Engineering, vol. 44, no. 5, pp. 394-402, 1997.
[26] Z. Lu, D. Y. Kim, and W. A. Pearlman, “Wavelet compression of ECG signals by the set partitioning in hierarchical trees algorithm,” IEEE Transactions on Biomedical Engineering, vol. 47, no. 7, pp. 849-856, 2000.
[27] C. Ryu, “Adaptive ECG signal compression based on detection of unusual heartbeat,” Ph.D. dissertation, Kyungpook National University, Daegu, Korea, 2014.
About this article
Cite this article
Seungmin Lee and Daejin Park, Adaptive ECG Signal Compression Method Based on Look-Ahead Linear Approximation for Ultra Long-Term Operating of Healthcare IoT Devices, Article number: 11:30 (2021)
- Received: 29 January 2021
- Accepted: 19 July 2021
- Published: 30 July 2021