
CHAPTER 3

HUFFMAN QUANTIZATION APPROACH FOR OPTIMIZED EEG SIGNAL COMPRESSION WITH TRANSFORMATION TECHNIQUE

 

3.1. Prologue

The aim of this dissertation is to analyze Electroencephalogram (EEG) signals efficiently. The extracted EEG signals are compressed and transmitted through a Wireless Body Area Sensor Network (WBASN) to establish an efficient transmission process. This chapter presents a complete overview of EEG transmission through WBASN, together with its applications, limitations, and the proposed solutions.

3.2. Research Objectives

The main objectives of this work are as follows:

  1. To increase the maximum compression rate of the input signal without loss of information.
  2. To evaluate information security and protection in the Internet of things applications.
  3. To increase the performance of lossless EEG signal decomposition.

3.3. Overview

Before providing a detailed understanding of the proposed compression technique, an overview of EEG signal analysis is given. Section 3.4 discusses the background information related to EEG, including signal generation, the recording process, frequency bands, and EEG applications. The next section elaborates on EEG data transmission over WBASN and the shortcomings of existing approaches. The proposed model is then discussed, and finally a summary of the chapter is provided.

 

3.4. Background study

In this section, the background of EEG signal transmission is analyzed. Before examining the proposed methodology, it is essential to know the baseline ideas behind EEG signals: how they are generated, their frequency bands, how they are recorded, and their applications. The subsections below discuss the terminology and information related to this research work.

3.4.1. Electroencephalogram (EEG) signals

The human brain is composed of billions of neurons that play an essential role in regulating the body's behavior associated with motor, sensory, internal, and external stimulation. They act as information carriers between the brain and the rest of the body. The cognitive nature of the brain is analyzed through brain images or signals. EEG is an influential physiological measure that evaluates the brain's electrical activity and has been regarded as the gold standard of neurological and neuro-physiological research for many years. In general, EEG is used by physicians and researchers to measure brain function and predict neurological disorders.

3.4.2. EEG generation

Current flows are produced during the stimulation of brain cells (neurons). EEG essentially measures the current that flows in the cerebral cortex during synaptic excitation of the neurons' dendrites. A sufficient number of activated neurons generate enough electrical activity to yield a recordable signal, which must then be amplified and processed. EEG records both abnormal and regular brain signals, and is therefore regarded as an important tool in medical science.

3.4.3. EEG Recording

EEG signal recording involves electrodes, filters, amplifiers, and computers/monitors. The electrodes pick up the brain's electrical signal; in general, electrode pairs are connected to the amplifier. EEG is a microvolt-level signal (1 to 100 μV amplitude) and has to be amplified before digitization. Finally, the recorded EEG is displayed as a continuous graphical representation of the brain's electrical activity on the computer screen.

The typical EEG system is shown in Fig 3.1. EEG signals are recorded in two ways, depending on where on the head the signals are captured. The first method is non-invasive, or scalp, EEG, in which electrodes are placed on the scalp with sufficient electrical and mechanical contact. The second, termed Electrocorticography (ECoG) or intracranial EEG, implants electrodes directly on the cerebral cortex during brain surgery; this type of recording is also known as invasive EEG. There are diverse kinds of electrodes, such as needle electrodes, cap electrodes, reusable disc electrodes (gold, silver, or tin compositions), disposable conductive-gel (Ag-Cl) electrodes, and headband electrodes.

Fig 3.1 Estimating activation patterns of the brain using EEG

An EEG system can have 128 or 256 electrodes depending on its use; such a configuration is known as a multi-channel EEG system. In general, each electrode pair specifies a single channel that provides one signal from the EEG system. The placement of the electrodes for non-invasive EEG recording follows the international 10-20 standard. Here, '10' and '20' specify that the distance between adjacent electrodes is either 10% or 20% of the total front-to-back or right-to-left distance of the skull, as shown in Fig 3.1. The front-to-back measurement is taken from the nasion reference point over the head to the other reference point, the inion, while the transverse measurement runs from one ear over the top of the head to the other. The electrodes carry letter marks that specify the brain lobe position: O - Occipital, P - Parietal, C - Central, T - Temporal, and F - Frontal. Electrodes positioned over the mid-line are marked 'Z.' Numbers define the electrode's hemisphere: even numbers specify the right hemisphere and odd numbers the left. For instance, F8 is located over the brain's right frontal lobe.
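The naming convention above can be sketched as a small helper. The lobe table and the parity rule follow the text, while the function name and output format are purely illustrative.

```python
# Hypothetical helper illustrating the 10-20 naming convention described
# above: the letter gives the lobe, 'Z' marks the midline, and the number's
# parity gives the hemisphere (even = right, odd = left).
LOBES = {"F": "frontal", "C": "central", "T": "temporal",
         "P": "parietal", "O": "occipital"}

def describe_electrode(label: str) -> str:
    letter, rest = label[0].upper(), label[1:]
    lobe = LOBES.get(letter, "unknown lobe")
    if rest.upper() == "Z":
        side = "midline"
    elif rest.isdigit():
        side = "right hemisphere" if int(rest) % 2 == 0 else "left hemisphere"
    else:
        side = "unknown position"
    return f"{lobe}, {side}"

print(describe_electrode("F8"))   # frontal, right hemisphere
print(describe_electrode("Cz"))   # central, midline
```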

 

Fig 3.2 EEG signal band power

3.4.4. EEG signal nature

The brain signal frequency is an essential factor that assists in predicting neurological disorders. EEG signal frequency varies with the human state, such as physical condition or being awake or asleep. Based on frequency band and the associated mental state, the EEG rhythms are categorized into specific groups known as delta (0.1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (>30 Hz). Fig 3.2 depicts the waveforms of the five types of EEG rhythms; the frequency bands and related mental states are listed in Table 3.1.

Table 3.1 EEG rhythms types, frequency band, and mental states

EEG rhythm	Frequency Band	Mental state
Delta (δ)	0.1 Hz – 4 Hz	Awake condition, brain disorder, profound sleep
Theta (θ)	4 Hz – 8 Hz	Frustration, disappointment, unconscious state, creative thought, profound meditation
Alpha (α)	8 Hz – 13 Hz	Relaxation, sub-conscious, eyes closed
Beta (β)	13 Hz – 30 Hz	Concentration, active thinking, problem-solving, conscious
Gamma (γ)	30 Hz – 100 Hz	Motor and cognitive functions, hyper-alertness
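The band boundaries in Table 3.1 can be turned into a simple lookup. This sketch takes the band edges from the table; the half-open interval handling at the edges is an assumption made here.

```python
# Classify a dominant frequency (Hz) into the rhythm bands of Table 3.1.
# Band edges are taken from the table; edge handling (half-open intervals)
# is an implementation choice.
BANDS = [("delta", 0.1, 4.0), ("theta", 4.0, 8.0), ("alpha", 8.0, 13.0),
         ("beta", 13.0, 30.0), ("gamma", 30.0, 100.0)]

def rhythm_of(freq_hz: float) -> str:
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(rhythm_of(10.0))   # alpha (relaxation, eyes closed)
print(rhythm_of(2.0))    # delta
```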

 

3.4.5. EEG signal applications

EEG is utilized to predict neurological disorders, physical abnormalities, and brain diseases, and is also used for research purposes. The clinical EEG applications are given below:

  • Epilepsy prediction and seizure-region localization
  • Drowsiness and sleep-disorder investigation
  • Anesthesia monitoring
  • Coma, brain death, and consciousness monitoring
  • Predicting the damaged region after stroke, tumor, brain injury, and so on

The research applications are given below:

  • Neuroscience, psychophysiological, and cognitive-science research

These applications show that the EEG signal's scope depends on analysis, transmission, and processing with advanced techniques, and such processing makes EEG interpretation easier for clinicians.

3.4.6. WBAN-based EEG transmission

With typical (wired) EEG systems, patients have to remain very close to the monitoring device and the medical expert. This interrupts the patients' daily activities and is also considered a hurdle for EEG-application-based research. For instance, in real-time epileptic seizure detection, the patient's EEG has to be monitored continuously for a long time, which is an expensive and time-consuming effort, as shown in Fig 3.3. It also consumes physicians' time and hospital resources. In addition, patients are separated from their regular routines, and the related variables can themselves influence their epilepsy.

 

Fig 3.3 Intelligent systems for data transmission in WBASN

In recent times, EEG telemonitoring via WBAN has become an emerging model for home-based e-health monitoring. It captures patients' EEG signals continuously in an outpatient environment with portable devices that patients can carry without any interruption of their regular activities. A WBAN-based EEG system is composed of several EEG sensor devices, which are either placed on the patient's scalp to obtain a non-invasive EEG signal or implanted directly on the brain to obtain an ECoG signal.

The sensor node also compresses the EEG signal, which is transmitted to a nearby computer or mobile phone through Bluetooth and then transferred over the Internet to the receiver/medical server, where the original EEG is reconstructed. This system therefore lets patients keep track of their medical status without visiting the hospital frequently. The evolution of WSN has enabled real-time patient-monitoring systems and healthcare applications, and the rapid development of wireless communication technologies assists healthcare and medical procedures in enhancing treatment quality while reducing the related expenses. WBAN is an emerging technology that performs remote monitoring of patients' health with specific embodied sensors and collects the patients' health information. Fig 3.3 depicts the generic architecture of WBAN.

The WBAN system architecture includes various sensors placed either on the surface of the patient's body or implanted inside it. The SNs collect essential information from the human body. There are diverse kinds of sensor nodes depending on the user's requirements: an EEG sensor examines the patient's brain activity, integrated ECG SNs gather information about the patient's heart activity, and other SNs measure body temperature, blood pressure, and so on. As this research concentrates on EEG signal transmission, EEG SNs are considered in the WBASN architecture. When the EEG SNs are placed on the patient's head, they begin collecting EEG signals from the brain. The acquired EEG signal has a massive data volume that has to be compressed before transmission, and WBASN allows various compression techniques for efficiently compressing such physiological signals. The compressed EEG signals are transmitted to a nearby server via ultra-low-power short-haul radios such as Zigbee, Bluetooth, Ultra Wide Band (UWB), Medical Implant Communication Service (MICS), Wireless Fidelity (WiFi), and so on. Computers and mobile phones act as personal servers that manage the WBASN and transfer the physiological information to a remote location through Internet connectivity.

The remote location is considered an emergency medical service (EMS), the ultimate target of the transferred signals for observing the patient's physical condition. At the medical server, the original signals are retrieved for further processing. The medical database preserves all registered patients' health information and offers various services based on the user's requirements; for instance, when a patient needs immediate observation, the emergency service is provided. Thanks to portable monitoring devices, patients need not visit healthcare centers frequently, and wireless network connectivity ensures location-independent healthcare services. For example, while users are at home or at work performing their regular activities, their health is monitored continuously and seamlessly.

The physiological data obtained from these wireless networks are also used for research purposes. The black box specifies the research database, where the collected data undergo further analysis free of noise and the medical server's decision is transmitted.

3.4.7. Advantages of WBASN

WBASN has enormous advantages over the prevailing wired systems, as discussed below:

  • WBASN gives real-time data acquisition, transmission, processing, and monitoring of patients’ conditions from remote locations.

 

  • The major disadvantage of existing wired systems is that they are composed of various location-specific SNs, which are cumbersome. WBASN, by contrast, provides location-independent services.

 

  • It also supports user mobility through its wearable and portable devices. Users can perform their routine activities while carrying mobile WBASN devices; the devices continuously gather and transfer health information to the healthcare center, where it is monitored ubiquitously.

3.4.8. Challenges in WBASN based EEG signal transmission

Despite WBASN's enormous advantages, specific constraints need to be considered during network design, of which energy consumption is the primary one. WBASN devices are battery-driven, which restricts how long they can stay connected. Hence, it is essential to save energy and extend battery life as much as possible while maintaining the system's overall efficiency.

The next challenge with a WBASN-based EEG system is the vast data volume of the EEG signal, which has to be compressed to a certain extent before transmission. The foremost reason for this high-level compression is that transmission links such as Bluetooth have limited capacity. Additionally, mobile phones are used as personal servers that store the data initially; the data volume must therefore not overwhelm the phone's capacity or interrupt its primary tasks such as texting, calling, and browsing.

Similarly, hardware expense is another constraint. Minimal equipment cost makes portable online-monitoring devices more feasible and economical, and hence more readily accepted by users. Moreover, reasonably economical equipment implies that the data processing and reconstruction processes are significantly less complicated. The proposed model helps overcome the limitations encountered in available WBASNs through the design of an improved model.

3.5. Proposed Methodology

This section discusses the compression techniques used for data transmission in WBASNs. The block diagram of the proposed compression model is shown in Fig 3.4.

3.5.1. Improving WBASN efficiency

Consider a scenario involving a set of people wearing three sensors: temperature, blood-glucose monitoring, and heartbeat sensors. The heartbeat sensor uses a non-invasive technique, taking measurements from the skin surface: optical methods identify blood volume with a photodetector. An IR source illuminates one side of the finger, and photodetection evaluates variations in light intensity as light is absorbed by the blood vessels; the reflected or transmitted light is used to compute the heart rate. An LM35 sensor assesses body temperature, operates from -55 °C to +150 °C, is calibrated directly in Celsius, and is well suited to remote applications.

Fig 3.4 Block diagram of the proposed compression model

 

Similarly, a Dexcom G5 sensor is used for continuous glucose monitoring in real time; it observes the blood glucose level and reports it. The data sensed at the Central Base Unit (CBU) is forwarded to the server.

The temperature sensor is placed under the armpit to evaluate each person's temperature. The glucose-monitoring sensor is placed on a fingertip to monitor blood glucose periodically. The heartbeat sensor is placed on the fingertip, on the ear lobe, or on the wrist as a wearable band, and is used to acquire sensor values constantly and transmit that information to remote servers.

The experimentation involves 40 employees, chosen with age and sex in mind: 20 women and 20 men, classified into four age groups of 20-30, 31-40, 41-50, and 51-60 years. Five persons are chosen randomly from every category, and each selected individual wears the three sensors mentioned above. The initial (average) stamina level of a person is taken to be 2500 joules and is assumed maximal before the work begins; as time progresses, it changes with the body's metabolism. This work concerns changes in glucose level, body temperature, and heartbeat measured with body sensors.

Resting Energy Expenditure (REE) is the amount of energy an individual needs at rest; it is essential for the proper functioning of the human organs. The average stamina level of an individual is considered to be 2500 joules, and this minimal energy level serves as a threshold value: when it reaches 1500 joules, the individual has a low stamina level. For the experiments, this work uses an energy-consumption model with reception and transmission energies given in Eq. (3.1) and Eq. (3.2):

  E_TX(n, d) = n · E_elec + n · ε_amp · d^η  (3.1)
  E_RX(n) = n · E_elec  (3.2)

 

Here, n is the number of bits transmitted: the temperature and blood glucose sensors use 240 bits, while the heartbeat sensor uses a different number of bits. E_elec is the per-bit electronics energy, ε_amp is the amplifier energy coefficient, and η is the path-loss exponent: for Line-of-Sight (LOS) communication its value is taken as 3.3778, and for non-LOS as 5.778.
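Eq. (3.1) and Eq. (3.2) can be sketched as a first-order radio energy model. The path-loss exponents are the LOS/non-LOS values quoted above; the per-bit electronics energy `E_ELEC` and amplifier coefficient `EPS_AMP` are illustrative placeholders, not values given in the text.

```python
# First-order radio energy model for Eq. (3.1)/(3.2). The path-loss
# exponents are the LOS / non-LOS values quoted above; E_ELEC and EPS_AMP
# are illustrative placeholder constants (assumed, not from the text).
E_ELEC = 50e-9       # J per bit, electronics energy (assumed)
EPS_AMP = 100e-12    # J per bit per m^eta, amplifier energy (assumed)

def e_tx(n_bits: int, d_m: float, los: bool = True) -> float:
    """Transmission energy: n * E_elec + n * eps_amp * d^eta."""
    eta = 3.3778 if los else 5.778
    return n_bits * E_ELEC + n_bits * EPS_AMP * d_m ** eta

def e_rx(n_bits: int) -> float:
    """Reception energy: n * E_elec."""
    return n_bits * E_ELEC

# A 240-bit temperature reading sent over 2 m line-of-sight:
print(e_tx(240, 2.0), e_rx(240))
```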

  • Network coding techniques

The primary challenge in modeling the data compression algorithm is adapting to the changing correlation of the sensed data. Prevailing data compression algorithms for sensor networks are based on in-network processing and are more appropriate for highly correlated data. The basic idea behind the coding technique used here is to adapt to the variation in sensor data by constructing a tree model; the depth of the constructed tree depends on the coding model. This section explains the flow of the proposed model and the Huffman coding used to enhance data reliability without packet loss.

  • Pre-processing

Pre-processing plays an essential role in every investigation of the EEG dataset; without an appropriate representation, the scalp signals are lost. The EEG data contains considerable noise alongside some weak EEG signals (see Fig 3.5). Therefore, it is essential to separate the original signals from the recorded EEG, and this pre-processing step removes the unwanted noise.

Fig 3.5 Flow diagram of data pre-processing and compression process

 

Fig 3.6 EEG system (Source: Adel et al., 2018)

This pre-processing unit reads the EEG data records, standardizes them, and segments the signal (see Fig 3.6). The standardization process puts the EEG data on a standard distribution scale and lets the compression unit achieve a higher compression ratio. It shifts the mean of the EEG data so that it is centered at zero and scaled by the standard deviation, as in Algorithm 3.1. Consider x as the EEG data vector and x̃ as the standardized EEG data, expressed as in Eq. (3.1):

  x̃ = (x − μ_x) / σ_x  (3.1)

 

Here, T_lossy is the running time of the lossy compression algorithm, T_th is the thresholding time, T_inv-lossless is the time of the inverse lossless algorithm, and T_inv-lossy is the time of the inverse lossy algorithm. Thus, the minimal sampling time T_min is expressed as in Eq. (3.2):

  T_min = T_lossy + T_th + T_inv-lossless + T_inv-lossy  (3.2)

The compression time is the sum of the lossy-compression and thresholding times, and the reconstruction time is the sum of the inverse lossless and inverse lossy times:

  T_comp = T_lossy + T_th
  T_recon = T_inv-lossless + T_inv-lossy

Algorithm 3.1
Input: EEG data

Output: pre-processed EEG data

Initialization: ;

Process:

1. while  do

2. if then

3. EEG data

4. else

5. initially processed signal

6. signal attained

7. EEG data

8. end if

9. if then

10. vector mapping

11. Data

12. end if

13.

14. end while

15. end process

 

The compression algorithm is then applied; the higher redundancy of the transformed data yields a higher compression ratio.
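The standardization step of Algorithm 3.1 amounts to a z-score transform. A minimal sketch, assuming numpy is available:

```python
import numpy as np

# Standardize an EEG vector: shift to zero mean and scale by the standard
# deviation, as in the pre-processing unit described above.
def standardize(x: np.ndarray) -> np.ndarray:
    return (x - x.mean()) / x.std()

x = np.array([2.0, 4.0, 6.0, 8.0])
y = standardize(x)
print(float(y.mean()))   # ~0.0
print(float(y.std()))    # ~1.0
```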

 

  • Sampling

Sampling is the process of choosing appropriate data points at which signals are converted into numerical form. The ultimate target is to reduce working time and cost. The sampled data has to be analyzed with various measures to enhance model performance. Sampling allows data to be transmitted effectively without loss, as the data size becomes manageable. Fig 3.7 shows an EEG-based time-series measurement.

 

Fig 3.7 EEG Time series

  • Discrete cosine transform (DCT)

DCT is a transformation process that converts a time-series signal into its elementary frequency components. It is successfully used for dataset reduction and feature extraction, and exhibits strong energy compaction for correlated signals. Most transform coefficients are small or zero, while a minimal number of coefficients are large, as in Fig 3.8; the data is compressed using these first coefficients. The EEG signal input is acquired from diverse electrodes and merged column by column into a matrix. DCT represents the signal information with fewer coefficients. The time series of the EEG signals is given in Fig 3.7. The significant advantages of DCT are given below:

  • Resolves the data filtering issues
  • Reducing data size
  • Reducing the time needed for classification and training

In the DCT algorithm, the input is the set of signal samples and the output is a set of transform coefficients, denoted x(n) and X(k), respectively; the transform converts the time series into frequency components. Encoding is performed in a prioritized manner: the input signal's correlation concentrates the DCT energy into a minimal number of transform coefficients, and the subsequent coefficients are eliminated.

The main feature of DCT is its capability to concentrate the input signal's energy in the initial coefficients of the output signal, a property extensively exploited in data compression. Consider x as the input EEG signal of the DCT, composed of N EEG data samples, and X as the DCT output signal, composed of N coefficients. The 1D-DCT is expressed as in Eq. (3.3) and Eq. (3.4):

  X(k) = w(k) Σ_{n=0}^{N−1} x(n) cos[π(2n+1)k / (2N)], k = 0, 1, …, N−1  (3.3)
  w(k) = 1/√N for k = 0; w(k) = √(2/N) for k = 1, …, N−1  (3.4)

The coefficient X(0) is the DC component, and the remaining coefficients are AC components. The DC component is proportional to the mean value of the original signal x(n), while the AC components specify frequency content independent of the average. The inverse DCT takes the coefficients X(k) as input and transforms them back into x(n), as expressed in Eq. (3.5):

  x(n) = Σ_{k=0}^{N−1} w(k) X(k) cos[π(2n+1)k / (2N)], n = 0, 1, …, N−1  (3.5)

Many of the DCT coefficient values are very small, and some are approximately zero. Fig 3.8 depicts the DCT coefficient magnitudes.

 

Fig 3.8 DCT coefficient
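Eq. (3.3)–(3.5) can be sketched directly as an orthonormal DCT-II and its inverse. This is a direct O(N²) sketch assuming numpy; a real system would use a fast transform.

```python
import numpy as np

# Orthonormal 1D DCT-II (Eq. 3.3/3.4) and its inverse (Eq. 3.5),
# implemented directly from the definitions.
def dct(x):
    N = len(x)
    n = np.arange(N)
    X = np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2.0 * N)))
                  for k in range(N)])
    w = np.full(N, np.sqrt(2.0 / N)); w[0] = np.sqrt(1.0 / N)
    return w * X

def idct(X):
    N = len(X)
    w = np.full(N, np.sqrt(2.0 / N)); w[0] = np.sqrt(1.0 / N)
    k = np.arange(N)
    return np.array([np.sum(w * X * np.cos(np.pi * (2 * n + 1) * k / (2.0 * N)))
                     for n in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(idct(dct(x)), x))   # True: perfect reconstruction
```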

  • Compression

Data compression reduces the space needed for signal storage and the transmission time while limiting data loss. Here, Huffman coding is adopted for the compression process. The method works in conjunction with the transform stage to attain a higher compression ratio with low computational complexity, keeping encoding and decoding simple and fast.

  • Lossless Compression

With lossless compression techniques, a higher compression rate can be attained without discarding any information. The significant advantages of lossless compression are:

  • No loss of information
  • The original file can be restored during the decompression process.

 

  • Compression unit

 

The compression unit includes both lossy and lossless algorithms. Here, DCT is applied as the lossy compression algorithm, followed by a thresholding process that increases the redundancy of the transformed data: transformed values below the threshold are set to 0. Varying the threshold value therefore decreases or increases the number of zero coefficients, and the accuracy of the compression system is governed by the threshold value.
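The thresholding step can be sketched as follows; the example threshold of 0.5 is arbitrary.

```python
import numpy as np

# Zero out transform coefficients whose magnitude falls below the threshold,
# increasing redundancy for the subsequent entropy coder. Raising the
# threshold gives more zeros (better compression, lower accuracy).
def hard_threshold(coeffs: np.ndarray, thr: float) -> np.ndarray:
    out = coeffs.copy()
    out[np.abs(out) < thr] = 0.0
    return out

c = np.array([5.0, 0.2, -0.1, 3.0, -0.4])
print(hard_threshold(c, 0.5))   # [5. 0. 0. 3. 0.]
```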

 

  • Need for Huffman coding

 

Generally, lossless compression builds a statistical model of the data and maps data to bit strings, while lossy compression often transforms the data into a new space with a suitable kernel transformation. In such transform domains, the signal energy or information is generally concentrated in specific coefficients, so compression can be attained with quantization and entropy coding. Huffman coding is an entropy-based encoding algorithm used for lossless data compression. It builds a variable-length code table for encoding source symbols, derived from the probability of occurrence of the symbol values. As described by Khalid Sayood (2004), Huffman coding selects a codeword for every source symbol such that the result is a prefix code. Huffman coding needs prior knowledge of the source symbol statistics; the procedure is based on two properties of optimum prefix codes:

 

  1. In an optimum code, symbols that occur frequently (with higher probability) have shorter codewords than symbols that occur less frequently.
  2. In an optimum code, the two least frequent symbols have codewords of the same, longest length.

3.5.10. Huffman coding

Huffman coding is a variable-length source coding technique proposed by Huffman, extensively used in the compression and communication fields due to its high efficiency. It is a prefix code that minimizes the average coding length: symbols with higher probability are assigned short codewords, while symbols with lower probability are assigned long codewords. The technique is therefore driven by the probability of symbol occurrence; when the symbol probabilities are close to one another, the average coding length is higher. The process is given in Algorithm 3.2.

Algorithm 3.2
1. Count the probability p_i of each symbol over the symbol sequence.

2. Construct a single-node binary tree for each symbol based on its probability; each tree initially has only a root node with probability p_i and empty sub-trees.

3. Choose the two trees with the minimal root-node probabilities to construct a new binary tree.

4. The root-node probability of the new binary tree is the sum of the probabilities of its two sub-trees (right and left nodes).

5. Delete the two trees with lesser probability and include the new binary tree.

6. Repeat steps 3 to 5 until only one binary tree remains.

7. After constructing the Huffman tree, code each left branch as 0 and each right branch as 1.

8. Then, tracing from the root of the final binary tree to each leaf gives the set of codewords for every symbol.

9. The average coding length is obtained from the codeword length of every symbol:

  L = Σ_i p_i · l_i

Here, l_i is the number of bits (0s and 1s) in the codeword of symbol i.

 

A probe sequence is used to explain Huffman coding more clearly, as in Fig 3.9.

 

Fig 3.9 Huffman tree

The average code length is computed from the probability sequence as above; the average code length of the probe sequence is 2.2.
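Algorithm 3.2 can be sketched with a heap of partial trees. The left-branch-0 / right-branch-1 labeling and the tie-breaking counter are implementation choices; the probe string is an illustrative example, not the sequence from the figure.

```python
import heapq
from collections import Counter

# Huffman coding sketch following Algorithm 3.2: repeatedly merge the two
# lowest-probability trees, then label left branches '0' and right '1'.
def huffman_codes(symbols):
    freq = Counter(symbols)
    # heap entries: (count, tie-breaker, tree); leaves are bare symbols
    heap = [(n, i, sym) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)   # two least frequent trees
        n2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, tick, (t1, t2)))
        tick += 1
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")   # left branch -> 0
            walk(tree[1], prefix + "1")   # right branch -> 1
        else:
            codes[tree] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2])
    return codes

codes = huffman_codes("aaaabbc")
print(codes)   # more frequent symbols get shorter codewords
avg_len = sum(len(codes[s]) * n for s, n in Counter("aaaabbc").items()) / 7
print(avg_len)   # (4*1 + 2*2 + 1*2) / 7 ≈ 1.43 bits per symbol
```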

 

Huffman coding typically achieves effective compression of between 20% and 90%. The compression technique encodes messages as binary bits, and decoding is performed after encoding by tracing the Huffman tree. It has low computational complexity and reduces the average code length needed to specify the alphabet symbols: it substitutes every character with a variable-length code depending on relative character frequency. This work applies Huffman coding to the EEG data; the data is quantized and coded across entire data files. The flow diagram of Huffman coding is shown in Fig 3.10.

Fig 3.10 Flow diagram of the Huffman compression process

  • Huffman Encoding

Huffman encoding reduces the number of bits needed for the data values. The input is a sequence of symbols that requires encoding; each symbol position carries a certain index number. Encoding is based on letter frequency and produces an unreadable (compressed) data format. The encoder substitutes every character with a variable-length code depending on relative character frequency, thereby reducing the average code length.

  • Inverse Discrete Cosine Transforms (IDCT)

IDCT takes the set of transform coefficients as input and converts it back into a time series. Both DCT and IDCT are computationally intensive. The inverse process of the compression unit is applied to reconstruct the original EEG data completely: the inverse DCT recovers the original EEG data, as expressed in Algorithm 3.3.

Algorithm 3.3
Input: Pre-processed EEG data

Output: Reconstructed EEG data

//Lossy compression

1. If DCT is chosen, then

2. DCT (pre-processed data) → transformed data

3. Else

4. Only pre-processed data

5. End if

6. Compute threshold value

7.

8.

9. For data length, do

10. If then

11.

12. Else

13. Break

14. End if

15. End for

16. Transform the data

//Lossless compression

1. If Huffman coding is needed, then

2. Compressed data

3. Else

4. Data

5. End if

// Data reconstruction

1. If Huffman coding is used, then

2. Decode EEG data

3. Else

4. Decode EEG data

5. End if

6. If DCT is applied for encoding, then

7. EEG data reconstruction

8. Else

9. EEG data reconstruction

10. End if

Output: Reconstructed EEG data with better quality
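Under stated assumptions, the chain of Algorithm 3.3 can be sketched end to end: DCT as the lossy stage, hard thresholding, uniform quantization, and a symbol-frequency stage standing in for the Huffman coder. The threshold and the quantization step `Q` are illustrative parameters, not values from the text.

```python
import numpy as np
from collections import Counter

# End-to-end sketch of the compression unit: DCT (lossy) -> threshold ->
# uniform quantization -> symbol statistics for an entropy (Huffman) coder,
# and the inverse chain for reconstruction.
def dct_mat(N):
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    M = np.cos(np.pi * (2 * n + 1) * k / (2.0 * N)) * np.sqrt(2.0 / N)
    M[0] /= np.sqrt(2.0)
    return M                          # orthonormal: M @ M.T == identity

def compress(x, thr=0.05, Q=0.01):
    M = dct_mat(len(x))
    c = M @ x                         # lossy stage: DCT
    c[np.abs(c) < thr] = 0.0          # thresholding
    q = np.round(c / Q).astype(int)   # uniform quantization
    freq = Counter(q.tolist())        # stats a Huffman coder would use
    return q, freq

def reconstruct(q, Q=0.01):
    M = dct_mat(len(q))
    return M.T @ (q * Q)              # inverse quantization + inverse DCT

x = np.sin(np.linspace(0.0, np.pi, 32))   # toy EEG-like segment
q, freq = compress(x)
x_hat = reconstruct(q)
print(float(np.max(np.abs(x - x_hat))))   # small reconstruction error
```

The reconstruction error is bounded by the threshold and the quantization step, matching the text's point that the threshold value governs the compression system's accuracy.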

 

  • Numerical results and discussion

This work proposes an algorithm based on the discrete cosine transform (DCT) and Huffman-coding-based entropy coding. No other multivariate decomposition techniques, such as independent or principal component analysis, are used to decompose the EEG signals, since they can discard parts of the EEG signal and their artificial discrimination is time-consuming and not very reliable. By evaluating the entropy of every EEG signal, defects in the decomposed signals can be found: for instance, an intrinsic mode function (IMF) generated at low frequency may not be ideal, or the frequency band of the obtained intrinsic mode may be significantly wider than desired. Decomposing the EEG signal with DCT resolves this issue, acquiring narrowband signals and then using this decomposition to obtain signals concentrated in the appropriate frequency bands. The entropy is then computed, which offers an alternative basis for discrimination and enhances the compression process.

 

Fig 3.11 Packets sent by men

 

Fig 3.12 Packets sent by women

 

The readings of the five people in each category are taken and averaged to reduce error, attaining a 95% confidence level. Specific performance metrics, namely PSNR, MSE, and CR, are considered for validating the performance. The proposed compression techniques are executed on the EEG dataset obtained from (https://archive.ics.uci.edu/ml/datasets/eeg?database). These recordings are taken from seizure subjects. The constraints of security risks and conventional models in EEG dataset compression are addressed by the proposed approach.

 

Fig 3.13 Stamina level of men

 

Fig 3.14 Stamina level of women

Fig 3.11 and Fig 3.12 depict the packets generated by men and women and transmitted to the remote location for monitoring the stamina level. Fig 3.13 and Fig 3.14 show the stamina level of men and women and the energy they consume to transmit the data, measured in Joules.

  • Compression Ratio (CR)

CR expresses the compression power of an algorithm. This metric quantifies the data-size reduction achieved by the DCT algorithm and also reflects its complexity. CR is expressed as in Eq. (3.6):

  CR = (Size of original EEG data) / (Size of compressed EEG data)    (3.6)
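Read literally, Eq. (3.6) is a size ratio; measuring the sizes in bits is an assumed convention here:

```python
def compression_ratio(original_bits, compressed_bits):
    """Eq. (3.6): ratio of original to compressed data size (same units)."""
    return original_bits / compressed_bits
```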
  • Peak Signal-to-Noise Ratio (PSNR)

PSNR measures the ratio between the maximum possible signal power and the power of the corrupting noise. The expression for PSNR is given in Eq. (3.7):

  PSNR = 10 log10( MAX² / MSE )    (3.7)

where MAX is the peak amplitude of the original signal and MSE is defined in Eq. (3.9).
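Eq. (3.7) can be computed directly; taking MAX as the peak absolute amplitude of the original signal is an assumption consistent with the definition above.

```python
import numpy as np

def psnr(x, x_hat):
    """Eq. (3.7): 10*log10(MAX^2 / MSE), with MAX the peak of the original signal."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(np.max(np.abs(x)) ** 2 / mse)
```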
  • Percentage Root-mean-square Difference (PRD)

PRD measures the distortion between the source and the reconstructed EEG waveforms. It is specified by Eq. (3.8):

  PRD = 100 × sqrt( Σ_n (x(n) − x̂(n))² / Σ_n x(n)² )    (3.8)

Here, x(n) and x̂(n) are the original and reconstructed EEG data, respectively.
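A direct NumPy sketch of Eq. (3.8), assuming x and x̂ are equal-length sample arrays:

```python
import numpy as np

def prd(x, x_hat):
    """Eq. (3.8): percentage root-mean-square difference between x and x_hat."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))
```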

  • Quality score (QS)

It is defined as the ratio of CR to PRD. The quality score is an essential performance metric that supports the selection of an appropriate compression configuration by weighing the compression gain against the reconstruction error.
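Since QS is simply CR divided by PRD, a higher score means more compression per unit of distortion:

```python
def quality_score(cr, prd_value):
    """QS = CR / PRD: higher is better compression per unit reconstruction error."""
    return cr / prd_value
```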

  • Mean Squared Error

The mean squared error is the average squared difference between the estimated values and the expected values, and thus measures estimation quality. It is always non-negative, and it is zero only for a perfect reconstruction. MSE is computed with Eq. (3.9):

  MSE = (1/N) Σ_{n=1..N} (x(n) − x̂(n))²    (3.9)
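Eq. (3.9) in NumPy form, for equal-length original and reconstructed sample arrays:

```python
import numpy as np

def mse(x, x_hat):
    """Eq. (3.9): mean of squared reconstruction errors (always >= 0)."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.mean((x - x_hat) ** 2))
```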

From the above analysis, it is seen that signal quality depends on the compression ratio, while the compression-reconstruction time remains constant. The compression ratio observed for the given dataset is 11.27%. The peak-to-peak amplitude is a distinctive signal property: a higher amplitude reduces the correlation among adjacent samples, which in turn drastically reduces the compression time. The compression ratio of 11.27% is 1.71% higher than the PRD, leading to reduced reconstruction time.

From Table 3.2, it is observed that a higher compression ratio is attained at the cost of a higher PRD when a complex compression algorithm is used. The DCT-based model, by contrast, provides quality outcomes with an adequate compression ratio. The results validate that DCT-based compression can be practically executed on low-cost, portable devices to reduce the wireless communication bandwidth.

Table 3.2 Overall performance of the proposed model

Performance Metrics   Outcomes   Time (Sec)
CR                    11.27      4.18
MSE                   0.2        —
PSNR                  37.89      —
PRD                   9.56       2.54

 

 

 

Table 3.3 PRD comparison

Methods Values
HC-DCT 9.56
DCT 11.09
Chen DCT 15.17
Loeffler DCT 11.07
BinDCT 11.08

 

Table 3.4 Construction time comparison

Methods Time (Sec)
HC-DCT 0.86
DCT 2.62
Chen DCT 0.93
Loeffler DCT 1.00
BinDCT 0.94

 

 

Fig 3.15 PRD comparison

 

Fig 3.16 Construction Time comparison

Table 3.3 and Table 3.4 depict the comparison of the proposed HC-DCT with other methods, namely DCT, Chen DCT, Loeffler DCT, and BinDCT. The PRD value of HC-DCT is 9.56, which is lower than that of the other models (see Fig 3.15). Similarly, the signal construction time is 0.86 seconds, which is lower than that of the other models (see Fig 3.16). A higher PRD value implies that more time is consumed for signal reconstruction, increasing the time complexity.

 

Fig 3.17 Performance metrics comparison

 

Fig 3.18 Performance metrics based on records

 

Fig 3.19 CR comparison

 

Fig 3.20 MSE comparison

 

Fig 3.21 RMSE comparison

 

Fig 3.22 Overall performance measure

Fig 3.17 compares the performance metrics of the proposed model with prevailing approaches such as the SPC system and Fast DCT. The proposed Huffman-coding-based compression gives superior results with respect to CR, PRD, and PSNR. Fig 3.18 shows the performance metrics for the monitoring of a particular record, and Fig 3.19 compares the CR of the proposed model with existing models. Fig 3.20 and Fig 3.21 depict the mean squared error and root-mean-squared error of the proposed model; the error rate is lower than that of the other approaches. Table 3.4 and Fig 3.22 show the overall comparison of the performance metrics. From these results, it is evident that the proposed model gives a better compression ratio without spoiling data reliability over WBASN and reduces the error in measuring the EEG signals. The proposed Huffman-coding-based compression is therefore well suited for compressing medical data and provides better reliability without affecting the source data.

  • Summary

This chapter discussed a method for effectual compression and reconstruction of EEG signals for transmission over WBASN, where the vanishing moment and compressive sensing need to be considered simultaneously. DCT/IDCT is used for the reconstruction of EEG signals, while Huffman coding is applied to attain a better compression ratio. The overall performance is measured in terms of CR, MSE, PSNR, and PRD, and the proposed HC-DCT works efficiently compared to other models. It is therefore well suited for compressing related medical data and transmitting it over WBASN. The complexity of compression is reduced, and the method is proven to work effectually for various clinical purposes.

 

 

 

 

 
