
Biomedical Engineering - Biomedical Signal Processing and Medical Images

Notes of Signal Theory


SYSTEM THEORY

A system is defined by a function T{·} which transforms the input sequence x[n] into the output sequence y[n].

LTI systems: Linear Time-Invariant systems.

Z-Transform
The Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation.
Definition: representing the discrete-time signal as the sequence x(n), we obtain:
$X(z) = \sum_{n=-\infty}^{\infty} x(n)\,z^{-n}$
Properties:
• Linearity: $Z[a\,x_1(n) + b\,x_2(n)] = a\,X_1(z) + b\,X_2(z)$
• Time shifting: $Z[x(n-k)] = z^{-k}\,X(z)$
• Multiplication by an exponential: $Z[a^n x(n)] = X(z/a)$
• Convolution: $Z[x_1(n) * x_2(n)] = X_1(z)\,X_2(z)$
• Differentiation: $Z[n\,x(n)] = -z\,\frac{dX(z)}{dz}$
• Complex conjugation: $Z[x^*(n)] = X^*(z^*)$
• Time reversal: $Z[x(-n)] = X(z^{-1})$

Z-Transform and Laplace Transform
The Z-transform can be considered the discrete-time equivalent of the Laplace transform (s-domain): where $s = \sigma + j\omega$, then $z = e^{sT} = e^{\sigma T} e^{j\omega T}$, hence $|z| = e^{\sigma T}$ and $\angle z = \omega T$.
The continuous-time Fourier transform is evaluated on the imaginary axis of the Laplace s-domain, and a system is stable if its poles lie in the left half-plane. Each frequency component of the signal is mapped as a point on the imaginary axis.
The discrete-time Fourier transform is evaluated over the unit circle of the z-domain, and a system is stable if its poles lie within the unit circle. Each frequency component of the signal is mapped as a point on the unit circle: $\omega = 2\pi f \Rightarrow f = \omega/2\pi$, and $T = 1/f_s$, so $\omega T = 2\pi f/f_s$.

SIGNAL THEORY
Signals are representations of quantitative measurements as a function of an independent variable.
1-D SIGNAL defined in time: an ordered sequence of numbers that describes the variations and trends of a quantity in time.
➔ the order of the numbers is often determined by the order of measurements (or events) in «time»
➔ time is the axis that identifies the order (= ordering axis)
MULTIDIMENSIONAL SIGNALS: multidimensional sequences of numbers ordered in all dimensions
➔ IMAGE: a 2-D sequence of data where the numbers are ordered in all (2) dimensions = in space (for both dimensions)
Main task of SIGNAL PROCESSING: to extract important knowledge that may not be clearly visible/understandable/interpretable to the human eye.

Statistical moments
In mathematics, the moments of a function are quantitative measures related to the shape of the function. If the function is a stochastic signal (= defined in terms of a probability distribution), then:
I. the first moment is the expected value,
II. the second central moment is the variance,
III. the third standardized moment is the skewness,
IV. the fourth standardized moment is the kurtosis.
For a probability distribution on a bounded interval, the collection of all the moments (of all orders, j = 0 to ∞) uniquely determines the distribution.
Skewness is a measure of the lack of symmetry. A distribution, or data set, is symmetric if it looks the same to the left and right of the centre point.
Kurtosis is a measure of whether the data are heavy-tailed or light-tailed relative to a normal distribution: data sets with high kurtosis tend to have heavy tails, or outliers; data sets with low kurtosis tend to have light tails, or lack of outliers. (NB: a uniform distribution is the extreme case.)
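As a quick numerical illustration, a minimal MATLAB sketch (base MATLAB only; the data vector x is hypothetical) estimating the four quantities from a single realization:

% Sketch: sample estimates of the first four (standardized) moments
x = randn(1, 1000);                      % example realization (hypothetical data)
mu = mean(x);                            % first moment: expected value
sigma2 = mean((x - mu).^2);              % second central moment: variance
sk = mean((x - mu).^3) / sigma2^(3/2);   % third standardized moment: skewness (~0 here)
ku = mean((x - mu).^4) / sigma2^2;       % fourth standardized moment: kurtosis (~3 for Gaussian)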
Many classical statistical tests and intervals depend on normality assumptions. → Significant skewness and kurtosis clearly indicate that data do not follow a normal distribution. If a data set exhibits significant skewness or kurtosis (as indicated by a histogram or by the numerical measures), what can we do about it?
One approach is to apply some type of transformation to try to make the data normal, or nearer to normality. → The Box-Cox transformation is a useful technique for trying to normalize a data set. Taking the log or the square root of a data set is often useful for data that exhibit moderate right skewness.

Classification of signals
➔ these properties exist in their pure version only in artificial systems (mathematical models). In biological systems it is possible to verify that strictly deterministic or strictly stochastic systems do not generally exist: biological (and living) systems are mainly deterministic (as ECG and ABP) or mainly stochastic (as EEG). The procedure to process and analyse mainly deterministic and mainly stochastic signals can be learned for a specific signal and then applied to any other signal or image from the same category.

CONTINUOUS vs DISCRETE SIGNALS
• Continuous: the independent variable is defined in the continuous domain.
• Discrete: the independent variable is a set of ordered samples identifying a set of values of the independent variable.
Discrete signals are often obtained from the continuous signal, either because of the acquisition system or because of the higher efficiency of numerical elaborations. If we call T the time interval between two successive measures of the continuous signal (defined as the sampling interval), we can denote the corresponding discrete signal as a function of nT, where n is an integer number (x(nT)). However, it is often preferred to represent a discrete-time signal simply as a sequence of numbers xn = x(nT).

REAL vs COMPLEX
• Real: if the signal's values are real numbers for any t
• Complex: if the signal's values are complex numbers for any t

PERIODIC vs NON-PERIODIC
• Periodic: if the signal repeats itself at every interval t = T0 (x(t + T0) = x(t))
 T0 is called the period
 f0 = 1/T0 is called the fundamental frequency
 if the signal is discrete, we define it periodic of period M [defined in samples instead of in time] if it is identical to itself every M samples.

TRANSIENTS
Transient signals have fixed durations and finite energy.

DETERMINISTIC vs STOCHASTIC
• Deterministic: if the signal's values are precisely known for any t; the parameters of interest are a priori known, and it is only necessary to measure them.
> Hence, a process is defined as deterministic when it is possible to predict its future evolution based on the previous events.
• Stochastic: if only the probability distribution of the signal's values is known; parameters are defined by means of statistical moments.
> A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called a sample function or realization.

Stochastic signals are defined in terms of probability functions.
PROBABILITY DISTRIBUTION FUNCTION: $F(x) = P[X \le x]$
So F(x) represents the probability that the random variable X assumes values equal to or less than x.
➔ the function is bounded: $F(x) \in [0, 1]$
➔ the function is continuous; it admits at most discontinuities of the first type (jumps)
➔ $P[a < X \le b] = F(b) - F(a)$
➔ if the variable is continuous, we can define the distribution function as
$F(x) = \int_{-\infty}^{x} f(u)\,du \;\Rightarrow\; f(x) = \frac{dF(x)}{dx}$
where f is called the probability density function.
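A minimal MATLAB sketch of this relation (base MATLAB; data and grid are illustrative): the distribution function can be estimated empirically by counting, and the density by differentiating it.

% Sketch: empirical estimates of F(x) and f(x) from a realization
x  = randn(1, 5000);                    % example data (hypothetical)
xi = linspace(-4, 4, 81);               % evaluation grid
F  = arrayfun(@(v) mean(x <= v), xi);   % F(xi) = P[X <= xi], estimated by counting
f  = diff(F) ./ diff(xi);               % f ~ dF/dx (finite-difference approximation)
plot(xi, F); hold on; plot(xi(1:end-1), f)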
Statistical moments: however, these distribution functions are not exhaustive; the process (the signal) can be characterized by other properties or functions (see Stochastic signals below).

STATIONARITY
A stochastic process is called stationary in the strict sense if all the statistical moments are constant and do not depend on time.
A stochastic process is called stationary in the weak sense if the mean value is constant and does not depend on time, and the autocorrelation function depends only on the delay m = n2 − n1 and not on the considered temporal instant.

ERGODICITY
A process is defined as ERGODIC if all the statistical properties can be estimated using a single realization. Then, from a single realization, the mean value and the autocorrelation function can be estimated using the expressions given below under Statistical moments estimation.
The hypothesis of ergodicity (in the weak sense) allows estimating the first- and second-order statistical moments of a stationary process from a single realization of the same process.
➔ this hypothesis has a great importance because we usually have a unique realization of the process.

Fiducial Points
A fiducial point in a specific class of signals is a recognizable and approximately stable point, usually related to a specific event, that can be identified (thanks to some peculiarity) in each cycle of an almost periodic signal. Examples are:
- ECG: R peak (QRS complex), P wave peak, T wave peak (from the strongest to the weakest)
- ABP: systolic and diastolic peaks

Energy and power
Energy of a signal x is defined as:
Continuous signal: $E = \int_{-\infty}^{\infty} |x(t)|^2\,dt$
Discrete signal: $E = \sum_{n=-\infty}^{\infty} |x(n)|^2$
Power of a signal x is defined as:
Continuous signal: $P = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2\,dt$
Discrete signal: $P = \lim_{N\to\infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |x(n)|^2$
If the signal has finite energy, it is said to be defined in energy and has null power. If the signal has infinite energy but finite power, it is said to be defined in power.
➔ PERIODIC SIGNALS: if the signal is periodic, power is defined over the period:
Continuous signal, with period T0: $P = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} |x(t)|^2\,dt$
Discrete signal, with period M: $P = \frac{1}{M} \sum_{n=0}^{M-1} |x(n)|^2$

STOCHASTIC SIGNALS

AUTO-CORRELATION and AUTO-COVARIANCE
NB: the definition is the same for both continuous and discrete signals. "Auto" because it evaluates the correlation of the process with itself.
Auto-correlation: $R_x(t_1, t_2) = E[x(t_1)\,x(t_2)] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} u\,v\,f_{x_{t_1} x_{t_2}}(u,v)\,du\,dv$
Auto-covariance: $C_x(t_1, t_2) = E[(x(t_1) - \mu_{x_{t_1}})(x(t_2) - \mu_{x_{t_2}})] = E[x(t_1)x(t_2)] - \mu_{x_{t_1}}\mu_{x_{t_2}} = R_x(t_1, t_2) - \mu_{x_{t_1}}\mu_{x_{t_2}}$
Therefore, the auto-correlation function (ACF) coincides with the auto-covariance function once the product of the means of the signal at the two instants is subtracted.
STATIONARITY: if the signal is stationary, then:
Auto-correlation: $R_x(\tau) = E[x(t)\,x(t-\tau)]$
Auto-covariance: $C_x(\tau) = R_x(\tau) - \mu_x^2$
The autocorrelation function (ACF) of a stationary process (at least in the weak sense) represents the correlation between two instants of the process as a function of the delay τ that divides them.
REAL SIGNALS: the ACF is symmetric and centred in zero, where its value at zero delay is always the variance:
- $R_x(t,t) = R_x(0) = \sigma_x^2$ (the ACF at zero delay is the variance of the signal)
- $R_x(\tau) = R_x(-\tau)$ (the ACF is an even function)
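Both properties are easy to check numerically; a minimal MATLAB sketch (base MATLAB; the loop-based estimator and the data are illustrative):

% Sketch: verify R(0) = variance and the evenness of the ACF
x = randn(1, 2000);  x = x - mean(x);   % zero-mean realization (hypothetical)
N = length(x);  maxlag = 20;
R = zeros(1, maxlag + 1);
for m = 0:maxlag                        % estimate at non-negative lags only:
    R(m+1) = sum(x(1:N-m) .* x(1+m:N)) / N;
end
R0_vs_var = [R(1), var(x, 1)]           % R(0) ~ variance (population normalization)
% For a real signal R(-m) = R(m), so negative lags need not be computed.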
CROSS-CORRELATION and COVARIANCE
NB: the definition is the same for both continuous and discrete signals. They are statistical metrics used to evaluate the similarity between two signals.
Cross-correlation: $R_{xy}(t_1, t_2) = E[x(t_1)\,y(t_2)]$
Covariance: $C_{xy}(t_1, t_2) = E[(x(t_1) - \mu_{x_{t_1}})(y(t_2) - \mu_{y_{t_2}})] = R_{xy}(t_1, t_2) - \mu_{x_{t_1}}\mu_{y_{t_2}}$

INDEPENDENCE AND UNCORRELATION
NB: the definition is the same for both continuous and discrete signals.
From probability theory, two random variables are said to be independent if their joint probability density function is equal to the product of their density functions. Therefore, the cross-correlation of two independent variables x(n1) and y(n2) is equal to the product of their mean values:
$R_{xy}(n_1, n_2) = \mu_{x_{n_1}}\,\mu_{y_{n_2}}$
It is important to notice that this is only a necessary condition, not a sufficient one, to state the independence of the two variables. In fact, if the covariance is null the variables are said to be uncorrelated, but it is not guaranteed that they are independent as well.

Statistical moments estimation
If a process is ergodic, then given a realization of N samples the first and second statistical moments can be estimated. Estimators are sample variables (denoted with a hat ^) and depend on the considered realization. Considering an N big enough to cover enough typical oscillations of the phenomenon, the sample mean estimates the real mean with uncertainty $\sigma_x / \sqrt{N}$. This means that the standard deviation of the sample mean (= the estimate of the real mean) reduces by a factor of √N as the number of samples of the realization (N) increases. Therefore, looking at the signal for a sufficiently long time ($T \to \infty$), we are supposed to have an exact statistical representation of the values and value combinations of the process.

• MEAN
Continuous signal: $\hat\mu_x = \frac{1}{T}\int_0^T x(t)\,dt$ (sample mean); if ergodic in the mean: $\mu_x = \lim_{T\to\infty} \hat\mu_x$ (real mean)
Discrete signal: $\hat\mu_x = \frac{1}{N}\sum_{n=0}^{N-1} x(n)$ (sample mean); if ergodic in the mean: $\mu_x = \lim_{N\to\infty} \hat\mu_x$ (real mean)

• VARIANCE
Continuous signal: $\hat\sigma_x^2 = \frac{1}{T}\int_0^T |x(t) - \hat\mu_x|^2\,dt$ (sample variance); if ergodic in the variance: $\sigma_x^2 = \lim_{T\to\infty} \hat\sigma_x^2$ (real variance)
Discrete signal: $\hat\sigma_x^2 = \frac{1}{N}\sum_{n=0}^{N-1} |x(n) - \hat\mu_x|^2$ (sample variance); if ergodic in the variance: $\sigma_x^2 = \lim_{N\to\infty} \hat\sigma_x^2$ (real variance)

• ACF
Continuous signal: $\hat R_x[\tau] + \mu_x^2 = \frac{1}{T}\int_0^T x(t)\,x(t+\tau)\,dt$; if ergodic in the ACF: $R_x[\tau] = \lim_{T\to\infty} \hat R_x[\tau]$
Discrete signal: $\hat R_x[m] + \mu_x^2 = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\,x(n+m)$; if ergodic in the ACF: $R_x[m] = \lim_{N\to\infty} \hat R_x[m]$

Actually, the ACF can be estimated in two ways:
• Biased (or polarized) estimate:
$\hat R_x[m] = \frac{1}{N} \sum_{n=0}^{N-m-1} (x(n) - \hat\mu_x)(x(n+m) - \hat\mu_x)$
• Unbiased (or non-polarized) estimate:
$\hat R_x[m] = \frac{1}{N-m} \sum_{n=0}^{N-m-1} (x(n) - \hat\mu_x)(x(n+m) - \hat\mu_x)$
Dividing the sum of delayed products by the actual number of available products (N − m), you remove from the final estimate the weight of the samples already used to compute the previous values; basically, you do not count the same values multiple times.

White and colored processes
• WHITE NOISE: signals with complete uncorrelation between any two different samples:
$\hat R_x[m] = \sigma_x^2$ for $m = 0$, and $\hat R_x[m] = 0$ for all $m \ne 0$
• COLORED NOISE: signals with some correlation between samples. In the case of ideal sinusoids, this correlation shows up as periodic peaks (along the delay values).
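The two ACF estimators are easy to compare numerically; a minimal MATLAB sketch (base MATLAB; names are illustrative) applied to white noise, whose ACF should be close to zero at every non-zero lag:

% Sketch: biased vs unbiased ACF estimates of white noise
x = randn(1, 1000);  N = length(x);  mu = mean(x);
maxlag = 50;
Rb = zeros(1, maxlag+1);  Ru = zeros(1, maxlag+1);
for m = 0:maxlag
    s = sum((x(1:N-m) - mu) .* (x(1+m:N) - mu));
    Rb(m+1) = s / N;        % biased: divide by N
    Ru(m+1) = s / (N - m);  % unbiased: divide by the number of products
end
stem(0:maxlag, Rb); hold on; stem(0:maxlag, Ru)
% Both should be ~sigma^2 at lag 0 and ~0 elsewhere for white noise.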
ANALOG TO DIGITAL CONVERSION
1. not all systems are analysed by directly measuring an electrical signal (as for ECG, EEG); all signals are first transformed into electrical signals by means of transducers → continuous signals
2. sampling: we multiply the signal by a Dirac delta in every period → we obtain the discrete-time signal
3. quantization: amplitude discretization and encoding into digital values
SAMPLING and QUANTIZATION are often reduced to a single A/D block.

Sampling process
Multiply the signal by a Dirac comb (also known as impulse train or sampling function): a set of Dirac pulses at each instant nT, n = 1, 2, ... (T = sampling interval):
$x[n] = x(t) \cdot \sum_{n=-\infty}^{\infty} \delta(t - nT)$
➔ we obtain the sequence of the signal's values at each instant nT.

Sampling interval (T)
How should T be chosen to properly sample the signal, i.e., to maintain all the information of the signal in a discrete sequence, without distortion? → Shannon's theorem

Sampling theorem: Shannon theorem
Given a signal that is:
• continuous
• bandlimited, with maximum frequency component fc
REMARK: this must be the highest frequency actually present in the signal (NOISE INCLUDED)
then it can be completely recovered without distortion if it is sampled at a rate:
$f_s \ge 2 f_c$
Given the sampling rate fs, we can identify the Nyquist frequency as:
$f_n = \frac{f_s}{2}$ (= π in normalized angular frequency)
NB: the Nyquist frequency EXISTS ONLY once we have defined the sampling frequency (it is defined as its half), and once we have the discrete signal the Nyquist frequency is basically the highest frequency content representable in the signal.
From the spectrum graph: the spectrum of a sampled signal is symmetric with respect to f = 0 and periodic with period $2\pi = f_s$; therefore it can be univocally defined in the interval $f \in [0, \pi]$, where the highest frequency is the Nyquist frequency. Example: Nyquist freq. = 100 Hz → sampling freq. = 2 · 100 Hz.
➔ it is good practice to select round numbers as sampling frequency (and thus as Nyquist frequency), to simplify calculations and make it easier to find the corresponding points on the unit circle.

Anti-aliasing analog filter
To be sure that we are working with a sampling frequency higher than twice the real bandwidth, we must use an analogue low-pass filter before sampling. Being fc the cut-off frequency of this filter, we are sure that the signal we will deal with has no components above this frequency → fc is the maximal frequency. In practice, since the transition band of a real filter is less sharp than in the ideal case, it is good practice to choose fs > 2·fc, to be sure not to introduce aliasing artifacts.
Example: ECG: fmax = 200 Hz; anti-aliasing filter: fc = 210 Hz; 2·fc = 420 Hz → fs = 500 Hz.
Transition band: the set of frequencies around the cut-off frequency that are only partially attenuated.

Aliasing
Sampling at too low a frequency (too long a sampling period, fs < 2·fc) generates spurious spectral components at low frequencies, due to the overlap of higher frequencies. If we set the sampling rate larger than 2·fc, the original signal can be recovered by placing a low-pass filter at the output of the sampler. If the sampling rate is too low (fs < 2·fc), spectral components overlap in frequency:
> signals in the overlap areas are corrupted and cannot be recovered
> high frequencies are reflected into lower frequencies
This is known as ALIASING.
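A minimal MATLAB sketch of aliasing (illustrative values): a 90 Hz sine sampled at fs = 100 Hz (fs < 2·fc) shows up as a 10 Hz component, because 90 Hz is reflected about the 50 Hz Nyquist frequency.

% Sketch: a 90 Hz tone sampled at fs = 100 Hz aliases to |fs - 90| = 10 Hz
fs = 100;  t = 0:1/fs:1-1/fs;           % 1 s of samples
x  = sin(2*pi*90*t);                    % 90 Hz > fs/2 = 50 Hz -> aliasing
X  = abs(fft(x));
f  = (0:length(x)-1) * fs / length(x);
plot(f(1:end/2), X(1:end/2))            % the peak appears at 10 Hz, not 90 Hz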
Multiplexing
A multiplexer is a device able to acquire more than one channel (and more than one signal) at the same time. It switches from one channel to the next, obtaining as output a single string of data containing the sequence of the samples of the different channels.
 Each input channel is sampled at the same frequency, which is exactly $f_s = \frac{f_{multiplex}}{N}$
 The frequency of the multiplexer must therefore be set according to the sampling frequency required for the input signals:
i. given N input channels, each one coming from a different process and having a different bandwidth → different sampling frequencies according to the Shannon theorem
ii. the multiplexer frequency is determined by the maximal sampling frequency among the N ones: $f_{multiplex} = N \cdot f_{s,max}$
iii. once $f_{multiplex}$ is defined, you are sure that the Nyquist frequency of all N input signals is $f_n = \frac{f_{multiplex}}{2N}$

Quantization
After time discretization, apply amplitude discretization to fit the signal into a digital representation (= representable with L bits in binary representation).
The sampled input signal is transformed into an output signal that can only assume values equal to the quantization levels:
$L$ = number of bits used
$n = 2^L$ = number of levels obtained
$\Delta V$ = amplitude range of the signal
$q = \frac{\Delta V}{2^L}$ = quantization interval
Example: we have a signal ranging in $\Delta V = (-10\ V, +10\ V)$. By using $L = 8$ bits we obtain $n = 2^L = 256$ levels, of about $q = \frac{20\ V}{256} \approx 78\ mV$ (roughly the resolution of the quantization).
> q is inversely proportional to the resolution and (exponentially) to L
> the resolution is directly proportional to L

Quantization error
Hypothesis: signal and quantization error are uncorrelated, and the probability density function of the error is uniformly distributed. Then we have:
ROUNDING (error uniform in $[-q/2, q/2]$):
$\mu_e = \int x f(x)\,dx = \frac{1}{q}\int_{-q/2}^{q/2} x\,dx = \frac{1}{q}\left.\frac{x^2}{2}\right|_{-q/2}^{q/2} = 0$
$\sigma_e^2 = \int x^2 f(x)\,dx = \frac{1}{q}\int_{-q/2}^{q/2} x^2\,dx = \frac{1}{q}\left.\frac{x^3}{3}\right|_{-q/2}^{q/2} = \frac{q^2}{12}$
TRUNCATION (error uniform in $[0, q]$):
$\mu_e = \frac{1}{q}\int_0^q x\,dx = \frac{1}{q}\left.\frac{x^2}{2}\right|_0^q = \frac{q}{2}$
$\sigma_e^2 = \frac{1}{q}\int_0^q x^2\,dx = \frac{1}{q}\left.\frac{x^3}{3}\right|_0^q = \frac{q^2}{3}$
➔ truncation is not used.

Quantization effect
After quantization, the autocorrelation of the quantization noise is summed to the autocorrelation of the signal, since the signal variance increases by a factor of Var[e_q] once the signal has been quantized.
> this can be modelled as an additional quantization noise superimposed on the signal.
To have the real equivalence it would be necessary to know exactly the noise e_q(n). As this is not known, a statistical model is used, under the following hypotheses:
1) the series e_q(n) is an uncorrelated stationary random process (that is, a white noise);
2) the series e_q(n) is uncorrelated with the signal x(n);
3) the probability density of the quantization error is uniform.
When are the three hypotheses met? Heuristically, when the quantization intervals are small enough that the signal samples cross different quantization intervals with high probability.
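A minimal MATLAB sketch (illustrative parameters) that quantizes a signal with L bits by rounding and checks the error statistics against the $q^2/12$ model:

% Sketch: uniform quantization by rounding and the q^2/12 error model
L  = 8;  dV = 20;  q = dV / 2^L;         % 8 bits over a (-10 V, +10 V) range
x  = 9 * sin(2*pi*linspace(0, 5, 1e4));  % example signal within the range
xq = q * round(x / q);                   % quantized signal (rounding)
e  = xq - x;                             % quantization error
[mean(e), var(e), q^2/12]                % mean ~0, variance ~q^2/12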
FILTERING
A filter is a system that changes the frequency-domain characteristics of the input signal by selecting only a specific bandwidth.
> the output of the system is part of the input signal (only the components within a specific bandwidth appear in the output).
The objectives of filtering may include (regardless of the specific application in the biomedical field):
> noise suppression (= enhance the information content of the signal = increase the SNR)
> enhancement of selected frequency ranges or edges in images
> bandwidth limiting (e.g., to prevent aliasing of digital signals or to reduce interference of neighbouring channels in wireless communications)
> removal or attenuation of specific frequencies
> special operations like integration, differentiation, delay addition and several others...

ANALOG vs DIGITAL FILTERING

Analog filters
They are made of elementary electric components (such as resistors, inductors, capacitors, etc.) embedded in devices. They give an «analog» representation of physical phenomena in the reality.
Advantages:
- low cost
- standard components (R, L, C and Op. Amp. as active filters)
- standard design methodologies («filter families»)
- real-time applications
Disadvantages:
- performance influenced by environmental factors (temperature, humidity, pressure, etc.)
- calibration issues
- low potentiality (standard filters are easily feasible, but it is hard to realize more complex filters such as derivative filters, frequency transformations, etc.)
ISSUE: analog filters belonging to the Butterworth, Chebyshev and Cauer families are affected by the problem of a non-linear phase response
> they modify the signal's morphology (NOT WANTED)
> the only filters allowed are the ones that assure linearity in phase.
SOLUTIONS:
• use Bessel filters: these filters have linear phase
• use the filter only in its range of linearity
• double filtering in reverse direction (= inverted coefficients), obtaining an overall zero-phase filter (the analog version of filtfilt)

Digital filters
A digital filter is any device (or mathematical expression) able to obtain a discrete series y(k) as output from a discrete series u(k) (discrete-time signal) as input.
Advantages:
- highly immune to noise because of the way it is implemented (software/digital circuits)
- accuracy dependent only on round-off error, directly determined by the number of bits
- easy and inexpensive to change a filter's operating characteristics (e.g., cutoff frequency)
- performance not a function of component aging, temperature variation or power supply voltage
------------------------------ we will consider only digital filters in this course -----------------------------

DIGITAL FILTERS
A digital filter is any device (or mathematical expression) able to obtain a discrete series y(k) as output from a discrete series u(k) as input. Any kind of digital filter is an ARMA (AutoRegressive Moving Average) model, which in its general form has two components of arbitrary degree:
> one depending on the previous values of the output itself: AR(N) → infinite-duration impulse response (IIR)
> one depending on the input signal: MA(M) → finite-duration impulse response (FIR)
$y(n) = -\sum_{k=1}^{N} a_k\,y(n-k) + \sum_{m=0}^{M} b_m\,x(n-m)$

Digital filters classification (FIR/IIR and RECURSION)
Digital filters are distinguished into:
1) recursive and non-recursive
2) FIR and IIR filters
NON-RECURSIVE: all {an} coefficients are identically null (all poles are implicit, thus placed in the origin); therefore the bm coefficients coincide with the coefficients of the impulse response.
RECURSIVE: recursive filters have at least one coefficient an ≠ 0. There are explicit poles, thus there is an AutoRegressive part.
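The general ARMA difference equation maps directly onto MATLAB's filter(b, a, x); a minimal sketch (illustrative coefficients and input) contrasting a non-recursive and a recursive realization:

% Sketch: non-recursive (MA only) vs recursive (AR part present) filtering
x = randn(1, 500);                 % example input (hypothetical)
b_fir = [0.25 0.25 0.25 0.25];     % non-recursive: all a_k = 0 (a = 1)
y_fir = filter(b_fir, 1, x);
b_iir = 0.1;  a_iir = [1 -0.9];    % recursive: y(n) = 0.9*y(n-1) + 0.1*x(n)
y_iir = filter(b_iir, a_iir, x);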
FIR filters vs IIR filters

FIR filters:
- always stable (no explicit poles)
- ARMA(0, M); order: M; number of coefficients: M+1
- PRO: flexibility; always stable (all poles are in the origin)
- CONS: slow (higher computational complexity); wider transition band (less precision in fcut)

IIR filters:
- stable only if all poles are within the unit circle
- ARMA(N, M); order: max(M, N); number of coefficients: M+N+2
- PRO: closer to the ideal filter at lower order; faster (lower computational complexity)
- CONS: can be unstable (if poles lie outside the unit circle); lower flexibility wrt FIR (only the classical families of filters: Butterworth, Chebyshev, ...)

Generally, it is easier to realize non-recursive filters with FIR techniques and recursive filters with IIR techniques, BUT recursive FIR filters or non-recursive IIR filters can be implemented anyway.

REMARKS:
- The ARMA system equation allows us to design an IIR filter using a finite number of filter coefficients (to simplify the computation of the system output).
- An IIR filter of order max(M, N) has N+1 more degrees of freedom for fitting a desired frequency response than an FIR filter of order M. Consequently, an IIR filter of a particular order can have a sharper frequency response than an FIR filter of the same order (= an IIR filter requires fewer coefficients / a lower order to match the performance of an FIR filter, i.e., to be closer to the ideal filter).

Filter's order
The order of a filter is the degree of the polynomial(s) that represent the filter. The performance of a filter (= how close the real filter gets to the corresponding ideal filter) is directly proportional to its order: the higher the order, usually the narrower the transition band and the less pronounced the ripples. The ideal filter (or "brickwall") can be modelled as a filter of infinite order.

Filter design and effect
Given a filter defined by its H(z) transfer function, by moving poles and zeros around the unit circle you can design its effect on signals (= roughly estimate the modulus of $H(\omega)$):
> to cancel a certain frequency, place a zero in correspondence of the desired frequency; the closer the zero is to 1 (the border of the unit circle), the higher the frequency attenuation
> to amplify a certain frequency, place a pole in correspondence of the desired frequency; the closer the pole is to 1 (the border of the unit circle), the higher the frequency amplification
NB:
> poles and zeros in the origin do not contribute to $H(\omega)$
> conversely, the closer they are to the unit circle, the higher their effect
> the number of poles (N) and zeros (M) is always equal; if needed, we enforce this balance by introducing poles and zeros in the origin (which have no effect) → (N−M are in the origin)
> poles and zeros work in complex-conjugate pairs, because the frequency distribution is symmetric with respect to the origin: both the positive and the negative intervals ($[0, f_n]$ and $[-f_n, 0]$) carry the same frequency content
> only poles affect the system's stability; zeros don't

Example 1: a filter with:
2 real zeros: q = ±1
2 complex-conjugate poles: p = 0.5 ± 0.5j
x(n), X(z) → H(z) → y(n), Y(z), with Y(z) = H(z)X(z) and |Y(z)| = |H(z)||X(z)|
> recursive filter
> stable
> the 2 poles perform an amplification weaker than the maximal effect (they are not so close to the unit circle)
> the 2 zeros cancel the Nyquist frequency and the zero/sampling frequency
> (knowing that the spectrum is defined over $[0, 0.5] = [0, f_n] = [0, f_s/2]$ in normalized frequency) by analysing the spectrum in the normalized frequency domain in the positive part, we can see that only frequencies around $f_n/4$ are amplified by the filter, while all others are attenuated

Example 2: the closer the poles to the unit circle, the higher their effect on the spectral components (= the higher the peaks on the PSD), while zeros attenuate proportionally to how close they are to the unit circle (= 1 on the circle gives 0 on the PSD).

Example 3: given a causal system with q = 0 and p = 0.9:
1. it is an ARMA(1,0) → the AR part is present → hence it is an IIR filter
2. we can compute the transfer function of the given system by applying the z-transform to both sides of the equation and dividing by X(z): $H(z) = \frac{1}{1 - 0.9\,z^{-1}}$
3. we can express the transfer function in positive powers, $H(z) = \frac{z}{z - 0.9}$, to highlight zeros and poles and to handle it in MATLAB
4. plot them in MATLAB: q = 0; p = 0.9
5. plot the transfer function in modulus and phase:

b = 1; a = [1 -0.9];   % H(z) = 1/(1 - 0.9 z^-1): zero q = 0, pole p = 0.9
[H1,w1] = freqz(b,a,100);
magH1 = abs(H1); phaH1 = angle(H1);
figure, subplot(2,1,1); plot(w1/pi,magH1); grid
xlabel('frequency in \pi units'); ylabel('Magnitude'); title('Magnitude Response')
subplot(2,1,2); plot(w1/pi,phaH1/pi); grid
xlabel('frequency in \pi units'); ylabel('Phase in \pi units'); title('Phase Response')

Just for the sake of completeness, MATLAB also offers the possibility to get the frequency response over all angles [0, 2π] instead of [0, π] by using this instruction:
[H2,w2] = freqz(b,a,200,'whole');
But this is quite useless in practice, since the part over [π, 2π] is the symmetric version of the previous half and doesn't add any additional information.

IDEAL AND REALISTIC FILTERS
(In the figures: red → ideal filters; black → realistic filters.) The ideal filter, sometimes called a "brickwall" filter, can be approached by making the order of the filter higher and higher.
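A minimal sketch of this idea (assuming the Signal Processing Toolbox's fir1 is available; orders and cut-off are illustrative): comparing low-pass FIR filters of increasing order shows the transition band shrinking towards the brickwall response.

% Sketch: higher order -> closer to the ideal "brickwall" low-pass
wc = 0.3;                      % normalized cut-off (1 = Nyquist frequency)
for M = [10 40 160]            % increasing filter orders
    h = fir1(M, wc);           % windowed FIR low-pass of order M
    [H, w] = freqz(h, 1, 512);
    plot(w/pi, abs(H)); hold on
end
legend('M = 10', 'M = 40', 'M = 160')   % transition band narrows as M grows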
Passband and frequency of cut
Since in a real filter there is a transition band, a conventional meaning for the FREQUENCY OF CUT is required. A usual notion is that the frequency of cut corresponds to the frequency whose attenuation is 3 dB, or in other words the frequency at which the squared gain of the filter is 0.5 of the value it has in the passband:
> PASSBAND: the band $[0, \omega_p]$ is called passband, and $\delta_1$ is the tolerance (or ripple) that we are willing to accept with respect to the ideal passband response.
$|H(\omega)|^2 \approx 1 \approx 20\log_{10}(1) = 0\ dB$ (the filter doesn't affect the signal in the passband)
> STOPBAND: the band $[\omega_s, \pi]$ is called stopband, and $\delta_2$ is the corresponding tolerance (or ripple).
At the cut-off: $|H(\omega_c)|^2 = 0.5 \rightarrow |H(\omega_c)| \approx 20\log_{10}(1/\sqrt{2})\ dB \approx -3\ dB$ (frequencies higher than the cut-off frequency pass with less than half of their power)
> TRANSITION BAND: the band $[\omega_p, \omega_s]$ is called transition band, and there are no restrictions on the magnitude response in this band.

Phase of filters
In most filter applications, the magnitude response $|H(e^{j\omega})|$ is of primary concern. However, the phase response may also be important. The phase delay $\Theta(\omega)$ of a filter is the relative delay imposed on a particular frequency component of the input signal:
$H(\omega) = |H(\omega)|\,e^{j\angle H(\omega)} = M(\omega)\,e^{-j\omega\Theta(\omega)} \;\Rightarrow\; M(\omega) = |H(\omega)|,\quad \Theta(\omega) = -\angle H(\omega)/\omega$

Linear phase
If a filter has a phase that is linear in frequency, which means:
$\angle H(\omega) = -\alpha\,\omega \;\Rightarrow\; \Theta(\omega) = \alpha \;\Rightarrow\;$ linear-phase filter
then all frequencies have uniform delay: all frequency components are delayed by α samples, which means that the output of the filter is shifted in time but not distorted (if the first condition of non-distortion, uniform magnitude, is also satisfied). → DISTORTION-LESS FILTER
Conversely, a NON-LINEAR-phase filter modifies the morphology of the input signal by affecting the harmonic components in a non-homogeneous manner, thus not preserving the morphology.
NB: using filtfilt in MATLAB we can correct the phase distortion of whatever filter.
REMARK (Property 3): FIR filters have linear phase (constant delay) with $\alpha = \frac{M-1}{2}$, where M is the number of coefficients (order = M−1).

Distortionless transmission
If a signal is transmitted through a system (filter), the system is said to provide distortionless transmission if the signal form remains unaffected (= the filter doesn't affect the signal morphology). In biomedical signals it is very important to preserve the morphology of the signal, since it is usually what we perform the diagnosis on. Therefore, we cannot accept a filter that distorts the signal while removing the noise.
The conditions for DISTORTIONLESS TRANSMISSION are two:
1. the system must amplify (or attenuate) each frequency component uniformly (the magnitude response must be uniform within the signal frequency band);
2. the system must delay each frequency component by the same discrete-time value (number of samples) → linear-phase system: $\Theta(\omega) = \alpha = const$, that is, all frequency components are delayed by α samples.
(Compare: ZERO PHASE, LINEAR PHASE, NON-LINEAR PHASE.)

Different types of real filters

MOVING AVERAGE FILTER
In MATLAB:
M = 11;
B = ones(M,1)/M;
A = 1;
out = filter(B, A, Signal);

SMOOTHING FILTER
Real types of low-pass FIR filters that manage to attenuate only specific frequencies. Their main purpose is to smooth data, removing high-frequency noise (including 60-Hz interference, movement artifacts and quantization error). They are usually employed when the sources of disturbance are unclear. They are distortionless (= with linear phase)!
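As a side note on phase, a minimal sketch (illustrative signal; filtfilt assumes the Signal Processing Toolbox) contrasting filter, which delays the moving-average output by (M−1)/2 samples, with filtfilt, which applies the filter forward and backward to obtain zero phase:

% Sketch: causal filtering (linear-phase delay) vs zero-phase filtfilt
M = 11;  B = ones(M,1)/M;  A = 1;
t = (0:499)/100;
Signal = sin(2*pi*2*t) + 0.3*randn(size(t));  % noisy 2 Hz sine (hypothetical)
y1 = filter(B, A, Signal);     % delayed by (M-1)/2 = 5 samples
y2 = filtfilt(B, A, Signal);   % zero phase: no delay, squared magnitude response
plot(t, Signal, t, y1, t, y2)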
An example of a smoothing filter of order 4 (3 complex-conjugate zeros and 1 real zero): by increasing the order of the filter, we obtain a narrower frequency removal.

HANNING FILTER
One of the simplest smoothing filters is the Hanning moving average filter. It computes a weighted moving average, since the central data point has twice the weight of the other two:
$y(n) = \frac{1}{4}\left(x(n) + 2x(n-1) + x(n-2)\right)$
$H(z) = \frac{1}{4}\left(1 + 2z^{-1} + z^{-2}\right)$

NOTCH (BAND-REJECT) FILTER
A common biomedical signal processing problem involves the removal of noise of a particular frequency or frequency range from a signal, while passing higher and/or lower frequencies without attenuation, as in the case of power-line interference (at 60 Hz in the US and 50 Hz in the EU). One simple method of completely removing noise of a specific frequency from the signal is to place a zero on the unit circle at the location corresponding to that frequency.
> For example: $f_s = 180\ Hz \Rightarrow f_n = \frac{f_s}{2} = 90\ Hz$; in the US we must remove 60 Hz → $\omega_c = \frac{2}{3} f_n = \frac{2}{3}\pi$
NB: keep in mind that the notch filter also tends to attenuate the signal as a whole.
> to have a deeper removal we can increase the order of the FIR filter
> to have a narrower notch band we can use an IIR notch filter: by adding 4 more explicit poles near the zeros, we manage to preserve the frequencies near (both before and after) the frequency to remove without attenuation, and to obtain a narrow and deep notch band to remove the disturbance. But notice that in this case the linear phase response is lost; the only thing we can do is to be aware of this aspect and prevent distortion by using the filter in its range of linearity (or use filtfilt).

% IIR notch: double zeros on the unit circle at +/-120 deg (60 Hz at fs = 180 Hz)
% and 4 poles at radius 0.9 near the zeros (at 110 and 130 deg)
z = [-(1/2)+1i*(sqrt(3)/2); -(1/2)+1i*(sqrt(3)/2); -(1/2)-1i*(sqrt(3)/2); -(1/2)-1i*(sqrt(3)/2)];
p = [0.9*(cosd(130)+1i*sind(130)); 0.9*(cosd(130)-1i*sind(130)); 0.9*(cosd(110)+1i*sind(110)); 0.9*(cosd(110)-1i*sind(110))];
[b,a] = zp2tf(z,p,1);
sys = tf(b,a,1/180);    % sample time consistent with fs = 180 Hz
zplane(z,p);
freqz(b,a,512);

FIR FILTERS = B(z)
Properties:
1) Finite impulse response
A finite impulse response implies that the effect of transients or initial conditions on the filter output will eventually die away.
NB: the output of a FIR filter of order N is the weighted sum of the values in the storage registers of the delay line.
2) Linear phase
In many biomedical signal processing applications, it is important to preserve certain characteristics of a signal throughout the filtering operation. A filter with linear phase has a pure time delay as its phase response, so phase distortion is minimized. FIR filters can easily be designed to have a linear phase characteristic. Linear phase can be obtained in four ways, as combinations of even or odd symmetry (of the coefficients) with even or odd length:
> for odd values of N, even (+) or odd (−) symmetry
> for even values of N, even (+) or odd (−) symmetry.
In any case: $\Theta(\omega) = \frac{N-1}{2}$ = constant delay.
For odd values of N the delay is an integer number of samples; for even values of N it also includes half a sample.
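A minimal sketch (freqz assumes the Signal Processing Toolbox) checking the linear phase of a symmetric FIR such as the Hanning filter above, whose constant delay should be (N−1)/2 = 1 sample:

% Sketch: the symmetric Hanning filter h = [1 2 1]/4 has linear phase
h = [1 2 1] / 4;                     % H(z) = (1 + 2z^-1 + z^-2)/4
[H, w] = freqz(h, 1, 512);
phase = unwrap(angle(H));            % phase = -w*(N-1)/2 = -w (a straight line)
delay = -phase(2:end) ./ w(2:end);   % ~1 sample at every frequency (skip w = 0)
plot(w/pi, phase)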
Design workflow
Given some requirements for the filter (such as passband, stopband, ripple tolerance in passband and stopband, etc.):
1. Define the design method:
a. window design technique
b. frequency sampling technique
c. optimal equiripple design technique
2. Define the hyperparameters
3. Define the filter coefficients → get the transfer function and the impulse response (H(z) and h(n))

1 - WINDOW DESIGN TECHNIQUE
Design an ideal frequency-selective filter with a finite-duration impulse response (= low-pass / high-pass / bandpass filter).
The basic idea behind the WINDOW DESIGN is to choose a proper ideal frequency-selective filter (which always has a noncausal, infinite-duration impulse response) and then to truncate (or window) its impulse response to obtain a linear-phase and causal FIR filter.
DESIGN PROCESS:
1. choose the ideal frequency-selective filter, hence select the cut-off frequency (ωc) and the sample delay (α) → $H_d(\omega_c, \alpha)$. Remember that such a filter responds to an impulse input with the impulse response $h_d(n) = Z^{-1}[H_d(\omega_c, \alpha)]$ (for a LOW-PASS filter, for instance, a shifted sinc). If the objective is to minimize the mean-squared error between the actual frequency response and the desired frequency response, then it can be shown by Parseval's theorem that the error is minimized by directly truncating $h_d(n)$. In other words, the impulse response h(n) is chosen to be the first M terms of $h_d(n)$.
2. select a finite time window of the infinite-duration impulse response. Given M−1 the order of the wanted FIR, consider only M samples of the ideal impulse response (M is the number of coefficients of the filter = the length of the FIR impulse response). In mathematical terms this consists in:
• (in the time domain) multiplying the impulse response $h_d(n)$ by the rectangular time-window function w(n) of M samples;
• (in the frequency domain) convolving the transfer functions of the ideal filter and of the window: $W(e^{j\omega}) = TF[w(n)]$.
The wider the main lobe of $W(e^{j\omega})$, the wider the transition bandwidth.
NB: the choice of M affects the width of the transition band of the obtained FIR filter, since it affects the width of the main lobe of $W(e^{j\omega})$, but it doesn't affect the ripples' amplitude.

TRANSITION BAND
It is affected by the chosen order of the FIR: the width of the main lobe is proportional to 1/M. The higher M, the sharper the main lobe of $W(e^{j\omega})$, and the closer the obtained filter is to the ideal one in terms of transition band (= sharper transition = smaller transition bandwidth).

RIPPLES (Gibbs phenomenon)
Conversely, the width of the window doesn't affect the relative amplitude of the side lobes of $W(e^{j\omega})$, which are the cause of the ripples both in the passband and in the stopband. The only possibility to reduce the ripples is to choose a smoother window (one not performing a sharp truncation of the ideal impulse response $h_d(n)$).

Gibbs phenomenon
The Gibbs phenomenon is the mechanism that affects direct truncation of signals, which is the same as applying a rectangular window w(n) of M samples. If we increase M, the width of each side lobe will decrease, but the area under each lobe will remain constant; therefore the RELATIVE AMPLITUDES of the side lobes will remain constant.
> This implies that all ripples will bunch up near the band edges.
The only technique to reduce the ripples is to change the shape of the window!
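A minimal sketch (hamming assumes the Signal Processing Toolbox; lengths are illustrative) comparing the spectra of a rectangular and a Hamming window of equal length — the rectangular window has the narrower main lobe but much higher side lobes:

% Sketch: main lobe vs side lobes for rectangular and Hamming windows
M = 64;
w_rect = ones(M,1);  w_hamm = hamming(M);
W1 = 20*log10((abs(fft(w_rect, 1024)) + eps) / M);
W2 = 20*log10((abs(fft(w_hamm, 1024)) + eps) / sum(w_hamm));
f  = (0:1023)/1024;
plot(f(1:512), W1(1:512), f(1:512), W2(1:512))
% rectangular: narrow main lobe, side lobes at ~ -13 dB
% Hamming: wider main lobe, side lobes below ~ -40 dB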
WHICH IS THE MORE SUITABLE WINDOW?
The rectangular window, among common window types, has the narrowest main lobe, which produces a fairly sharp transition region in the achieved frequency response. However, the rectangular window has the largest side-lobe magnitudes (these side lobes are the cause of the non-zero passband ripple and of the limited stopband attenuation in the truncation method).
> Consequently, we may wish to use a window with smaller-magnitude side lobes to reduce the passband ripple and increase the stopband attenuation, at the expense of a less sharp transition region.

Practice - Window design technique
Design a digital FIR lowpass filter with the following specifications: $\omega_p = 0.2\pi$, passband ripple $R_p = A_p = 0.25\ dB$, $\omega_s = 0.3\pi$, minimum stopband attenuation $A_s = 50\ dB$. Transition band = $\omega_s - \omega_p$.
1. we already know the ideal frequency-selective filter of reference: a low-pass with cut-off frequency within 0.2π–0.3π, for instance $\omega_c = (\omega_s + \omega_p)/2$
2. we must choose a time window in compliance with the requested specifications: a window whose stopband attenuation (As) is > 50 dB (from the standard table of window characteristics)
 both Hamming and Blackman are suitable, but Hamming provides the sharpest transition for an equal number of samples (choose the filter of smallest order) (= Hamming needs fewer samples to provide the required transition bandwidth)
 best choice: the HAMMING WINDOW
3. Time-window length? We need M samples, where M can be computed as the inverse function of the transition bandwidth: Hamming_transBW = 6.6π/M → M = 6.6π/required_transBW = 6.6π/(ωs − ωp)
NB: we do not use the passband ripple value $R_p = A_p = 0.25\ dB$ in the design; we will have to check a posteriori the actual ripple and verify that it is indeed within the given tolerance.

clear all; close all; clc
% Definition of parameters
wp = 0.2*pi; ws = 0.3*pi;
tr_width = ws - wp;
% Definition of number of samples M
%% ceil() rounds the result to the next int
%% to consider M samples we must span from 0 to M-1
M = ceil(6.6*pi/tr_width) + 1;
n = [0:1:M-1];
% Definition of cutoff frequency
wc = (ws+wp)/2;
% Impulse response of the ideal filter
%% obtaining only the first M observations
hd = ideal_lp(wc, M);
% Definition of the chosen window
w_ham = (hamming(M))';
% Computation of the impulse response of the real filter
%% using element-by-element multiplication
h = hd .* w_ham;
% Estimation of the frequency response in decibel
%% the b coefficients of an FIR filter are exactly the samples of the impulse response
%% FIR filters have no denominator (a = 1)
[db,mag,pha,w] = freqz_m(h,[1]);
% Frequency sampling interval
%% delta_w is always 2*pi / number of samples chosen to plot in the frequency domain
delta_w = 2*pi/1000;
% Passband ripple
%% take the maximum value in abs (i.e., the minimum, since db <= 0)
%% of |H(z)| in dB within the interval of frequencies <= wp (passband)
Ap = -(min(db(1:1:wp/delta_w+1)))
% Stopband ripple: min stopband attenuation
%% take the minimum value in abs (i.e., the maximum, since db <= 0)
%% of |H(z)| in dB from the end of the transition band onwards
As = -round(max(db(ws/delta_w+1:1:501)))
% Plots
subplot(2,2,1); stem(n,hd); title('Ideal Impulse Response')
axis([0 M-1 -0.1 0.3]); xlabel('n'); ylabel('h_d(n)')
subplot(2,2,2); stem(n,w_ham); title('Hamming Window')
axis([0 M-1 0 1.1]); xlabel('n'); ylabel('w(n)')
subplot(2,2,3); stem(n,h); title('Actual Impulse Response')
axis([0 M-1 -0.1 0.3]); xlabel('n'); ylabel('h(n)')
subplot(2,2,4); plot(w/pi,db); title('Magnitude Response in dB'); grid
axis([0 1 -100 10]); xlabel('frequency in \pi units'); ylabel('Decibels')
set(gca,'XTickMode','manual','XTick',[0,0.2,0.3,1])
set(gca,'YTickMode','manual','YTick',[-50,0])
set(gca,'YTickLabelMode','manual','YTickLabels',['50';' 0'])
function hd = ideal_lp(wc,M)
% Ideal LowPass filter computation
% --------------------------------
% [hd] = ideal_lp(wc,M)
% hd = ideal impulse response between 0 and M-1
% wc = cutoff frequency in radians
% M  = length of the ideal filter
alpha = (M-1)/2;
n = [0:1:(M-1)];
m = n - alpha;
fc = wc/pi;
hd = fc*sinc(fc*m);

function [db,mag,pha,w] = freqz_m(b,a)
% Modified version of the freqz subroutine
% ----------------------------------------
% [db,mag,pha,w] = freqz_m(b,a);
% db  = relative magnitude in dB computed over 0 to pi radians
% mag = absolute magnitude computed over 0 to pi radians
% pha = phase response in radians over 0 to pi radians
% w   = 501 frequency samples between 0 and pi radians
% b   = numerator polynomial of H(z) (for FIR: b = h)
% a   = denominator polynomial of H(z) (for FIR: a = [1])
[H,w] = freqz(b,a,1000,'whole');
H = (H(1:1:501))';
w = (w(1:1:501))';
mag = abs(H);
db  = 20*log10((mag+eps)/max(mag));
pha = angle(H);

RESULTS:
M = 67
As = 52 dB   % > 50 dB
Ap = 0.0394 dB   % < 0.25 dB
% note that the values comply with the specifications
GAIN = 1 (= 0 dB), with 0.2π < ωc < 0.3π

2 - FREQUENCY SAMPLING TECHNIQUE
Obtain the real filter from the ideal one by directly specifying that $H_{REAL}(\omega)$ must fit a set of values obtained by sampling $H_{IDEAL}(\omega)$. The frequency sampling method of design is more straightforward than the window design method, since it circumvents the transformations between the time domain and the frequency domain.
1. Given a set of specifications, define the appropriate ideal frequency-selective filter → $H_{IDEAL}(\omega)$ (example: a low-pass filter, in frequency and in discrete time).
2. Sample the ideal frequency-selective filter at N values:
$H_{IDEAL}(\omega) \xrightarrow{\ \text{sampling}\ (N)\ } H_{IDEAL}(k) = H_{IDEAL}\!\left(k\,\frac{2\pi}{N}\right),\quad k = 1, \dots, N$
$H_{REAL}(k) = H_{IDEAL}(k)\ \ \forall k = 1, \dots, N$, sampling at $\Delta\omega = \frac{2\pi}{N}$
3. Obtain the finite impulse response of the real FIR filter by performing the inverse transform of the discrete transfer function $H_{REAL}(k)$:
$H_{REAL}(k) \xrightarrow{\ \text{IDFT}\ } h_{REAL}(n)$
$h_{REAL}(n)$ is the finite impulse response of a filter whose $H_{REAL}(\omega) = H_{IDEAL}(\omega)$ for $\omega = k\frac{2\pi}{N}$, but $H_{REAL}(\omega) \ne H_{IDEAL}(\omega)$ for $\omega \ne k\frac{2\pi}{N}$.
The aim is to find the $H_{REAL}(\omega)$ that minimizes $MSE(H_{REAL}(\omega), H_{IDEAL}(\omega))$.
NB: the higher N, the denser the frequency sampling, the more the values at zero error, the lower the overall error.
There are two design approaches:
1) Naive design method: one can use the basic idea literally and impose no constraints on the approximation error.
2) Optimum design method: one can try to find the filter configuration that minimizes the error. The calculation of the filter coefficients is reduced to an OPTIMIZATION problem: find the M that minimizes the overall error (or that minimizes the maximum error over ω, or that maximizes the minimal attenuation in the stopband).
REMARKS:
1. The approximation error (the difference between the ideal and the actual response) is zero at the sampled frequencies.
2. The approximation error at all other frequencies depends on how sharp the discrete ideal response is. We know that sharp transitions result in overshoot and ripples; thus, the sharper the ideal response, the larger the approximation error.
3. The error is larger near the band edges and smaller within the band.
NB: the frequency sampling method generally yields more efficient filters than the window method.
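A minimal sketch of the naive approach (base MATLAB plus freqz; N and the band edges are illustrative): sample an ideal low-pass response at N points with the appropriate linear-phase term, take the inverse DFT, and compare the achieved response with the ideal one — the fit is exact only at the sampled frequencies.

% Sketch: naive frequency-sampling design of a low-pass FIR (N odd)
N  = 33;  k = 0:N-1;
Hk = double((k <= 4) | (k >= N-4));    % ideal low-pass samples (conjugate-symmetric)
Hk = Hk .* exp(-1i*pi*k*(N-1)/N);      % linear-phase term for a delay of (N-1)/2
h  = real(ifft(Hk));                   % real impulse response of length N
[H, w] = freqz(h, 1, 512);
plot(w/pi, abs(H))                     % exact fit at the N sampled points only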
ISSUE: the exact value of the cut-off frequency can't be predicted with certainty.
Problems of methods 1 and 2:
> we cannot specify the band frequencies ωp and ωs precisely in the design; we just accept whatever values we obtain after the design
> we cannot specify both the δ1 and δ2 ripple factors simultaneously
> the error is maximal near the transition zone, whereas it is minimal at the edges of the band.
➔ METHOD 3: introduction of more powerful filters, able to distribute the error uniformly over the entire band: higher performance (at equal filter order).

3 - OPTIMAL EQUIRIPPLE DESIGN TECHNIQUES
Find the optimal filter order, namely the one that maximizes a figure of merit (which measures the filter performance). Given a set of constraints, find the optimal filter (the filter's order and parameters) that provides the best performance. This can be done iteratively by:
1. fixing some variables
2. changing the others until a figure of merit is optimized (usually by means of non-linear functions).
Usually, the maximum approximation error is used as the figure of merit, and in this case the optimization is done by minimizing it (sometimes called the minimax or Chebyshev error).
The parameters to be set are 5:
▪ M (number of coefficients)
▪ δ1, δ2 (ripple tolerances)
▪ ωp (passband frequency)
▪ ωs (stopband frequency)
There are different algorithms that can be used:
• Parks-McClellan method: M, ωp, ωs are given, and δ1, δ2 are computed by optimization
• Hofstetter-Oppenheim method: M, δ1, δ2 are given, and ωp, ωs are computed by optimization

Method 3 vs 1, 2
The equiripple filter design techniques allow uniform ripples both in the passband and in the stopband.
+ with respect to techniques 1 and 2, given the same number of coefficients, they can achieve better performance (LOWER ERROR) or, equivalently, they can reach the same specifications with a lower-order filter
+ with this method we can also implement MULTIBAND FILTERS
BUT:
- non-linear functions
- higher complexity (which increases as M increases)
- the optimization algorithm may NOT converge (this is the case in which the constraints must be relaxed, since the requirements are probably too strict, such as a very narrow transition band together with small ripples, which might not be attainable by any real filter)
Multiband filters are useful to perform sub-band selection, as in the case of brainwaves.
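A minimal sketch of an equiripple design (assuming the Signal Processing Toolbox's firpm, MATLAB's Parks-McClellan implementation; the specifications are illustrative):

% Sketch: Parks-McClellan equiripple low-pass (M, wp, ws given)
M = 30;                 % filter order (number of coefficients - 1)
f = [0 0.2 0.3 1];      % band edges, normalized to the Nyquist frequency
a = [1 1 0 0];          % desired amplitude: passband 1, stopband 0
h = firpm(M, f, a);     % delta1, delta2 follow from the optimization
freqz(h, 1, 512)        % equal-amplitude ripples in both bands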
IIR FILTERS
An infinite-duration impulse response filter has a system function of the form:
$H(z) = \frac{\sum_{m=0}^{M} b_m z^{-m}}{1 + \sum_{n=1}^{N} a_n z^{-n}}$
where $a_n$ and $b_n$ are the coefficients of the filter. We have assumed, without loss of generality, that $a_0 = 1$. The order of such an IIR filter is N if $a_N$ is the highest-order coefficient that is ≠ 0. Stability is assured if all poles are inside the unit circle.
The introduction of poles (of the AR part of the filter = the A(z) coefficients) results in the amplification of certain frequencies, an additional mechanism with respect to the attenuation effect produced by zeros. This enables sharp variations in frequency (= high selectivity for frequencies; see the notch filter case above). However, this is obtained at the expense of the linearity of the phase: IIR filters have non-linear phase, and so they can be used only in their range of linearity (or with double filtering, with filtfilt).
Compared to FIR filters, IIR filters perform better and are computationally easier to implement, but they are less flexible, limited to the traditional filter families, and suffer from non-linearity in phase.

Design workflow
Given some requirements for the filter to implement:
1. Design the analog filter that would fit (the «starting» filter: $H_a(j\omega)$)
2. Define the corresponding digital filter using one of the 3 possible methods to convert a filter from analog to digital:
• invariance of the impulse response
• direct z-transform (pole-zero plot)
• bilinear z-transform

1 - INVARIANCE OF THE IMPULSE RESPONSE
Given $H_a(j\omega) \leftrightarrow h_a(t)$, perform the Impulse Invariant Transform (IIT) to get $H(e^{j\omega}) \leftrightarrow h(n)$.
Impulse Invariant Transform: expand $H_a(s)$, with $s = j\omega$, in partial fractions and convert its poles into their discrete equivalents ($z_k = e^{p_k T}$, where T is the sampling interval) → $H(z = e^{j\omega})$. By doing so