Computer Engineering - Model Identification and Data Analysis
Full exam
MODEL IDENTIFICATION AND DATA ANALYSIS – Module 1, A.Y. 2024/2025
Prof. Luigi Piroddi, Prof. Simone Formentin – February 12th, 2026

1. Multiple choice questions

Check with an × the correct answer (wrong answers are penalized with a negative score; missing answers yield 0 points).

Consider the stochastic process y(t) represented by the following block diagram:

[Block diagram: η(t) enters a transfer-function block (z+2)/z and is summed with constant signals; the remaining diagram details are not recoverable from this transcription.]

where η(·) ~ WN(2, 1).

Compute the mean of the process.

1.1) E[y(t)] = ?
a) 2    b) 5 ×    c) 1    d) 3

The system can be reformulated as an MA(1) plus an additive constant equal to 5:

y(t) = (1 + 2z^-1)·e(t) + 5,    e(·) ~ WN(0, 1).

Since the MA(1) term has zero mean, it follows that E[y(t)] = 5.

Compute the variance of the process.

1.2) Var[y(t)] = ?
a) 1    b) 1/2    c) 5 ×    d) 4

The variance of the process equals that of the MA(1) term, which amounts to Var[y(t)] = (1 + 2²)·1 = 5.

Compute the (non-centered) correlation function γ̃(τ) for τ = 0.

1.3) γ̃(0) = ?
a) 5    b) 30 ×    c) 10    d) 25

Since γ(τ) = γ̃(τ) − E[y(t)]², and γ(0) = Var[y(t)] = 5, it follows that γ̃(0) = 5 + 5² = 30.

The power spectral density has the form Γ(ω) = a + b·cos(ω) + c·cos(2ω) + d·δ(ω), where δ(ω) is a Dirac function. Determine the values of parameters a, b, c, and d.

1.4) a = ?
a) 25    b) 0    c) 10    d) 5 ×

1.5) b = ?
a) 1    b) 2    c) 5    d) 4 ×

1.6) c = ?
a) 0 ×    b) 1    c) 2    d) −1

1.7) d = ?
a) 25 ×    b) 5    c) 4    d) 10

Consider first the MA(1) term. Its spectrum can be easily calculated as follows:

Γ(z) = (1 + 2z^-1)(1 + 2z)·1 = 5 + 2(z + z^-1)
Γ(ω) = 5 + 4·cos(ω)

An impulse at the origin (with amplitude equal to 25) must be added, due to the non-zero expected value, resulting in the following expression:

Γ(ω) = 5 + 4·cos(ω) + 25·δ(ω)

Consider the stochastic process y(t) given by the following equation:

y(t) = y(t−1) − 0.25·y(t−2) + η(t−1) + 4·η(t−2),    η(·) ~ WN(0, 1/16).

The canonical representation of the process has the following form:

y(t) = a1·y(t−1) + a2·y(t−2) + e(t) + c1·e(t−1),    e(·) ~ WN(0, λ²).

Determine the values of parameters a1, a2, c1, and λ².

1.8) a1 = ?
a) −1    b) 1 ×    c) 0.75    d) 0.25

1.9) a2 = ?
a) −4    b) 4    c) 0.5    d) −0.25 ×

1.10) c1 = ?
a) 0.25 ×    b) 4    c) 1    d) −1

1.11) λ² = ?
a) 1/4    b) 1/8    c) 1 ×    d) 1/16

The given process is stationary, since the transfer function is asymptotically stable (check the roots of the polynomial z² − z + 0.25) and the input is itself a stationary process. The MA part z^-1·(1 + 4z^-1)·η(t) has a root in z = −4, outside the unit circle; replacing it with its reciprocal gives (1 + 0.25z^-1), with the noise variance rescaled by 4² = 16, so λ² = 16·(1/16) = 1. The canonical form is therefore

y(t) = y(t−1) − 0.25·y(t−2) + e(t) + 0.25·e(t−1),    e(·) ~ WN(0, 1).

Find the optimal one-step-ahead predictor of y(t), in the form:

ŷ(t+1|t) = a·ŷ(t|t−1) + b·y(t) + c·y(t−1)

1.12) a = ?
a) 0.75    b) −0.25 ×    c) 4    d) 0.25

1.13) b = ?
a) 0.25    b) 1    c) 0.75    d) 1.25 ×

1.14) c = ?
a) 1.25    b) 0.75    c) −0.25 ×    d) 0.5

A(z) = 1 − z^-1 + 0.25z^-2,    C(z) = 1 + 0.25z^-1

ŷ(t+1|t) = [C(z) − A(z)]/C(z) · y(t+1)
         = (1.25z^-1 − 0.25z^-2)/(1 + 0.25z^-1) · y(t+1)
         = (1.25 − 0.25z^-1)/(1 + 0.25z^-1) · y(t)

ŷ(t+1|t) = −0.25·ŷ(t|t−1) + 1.25·y(t) − 0.25·y(t−1)

Consider the stationary stochastic process y(t) generated by:

y(t) = ξ(t) + 0.5·ξ(t−1),    ξ(·) ~ WN(0, 1).

Compute the covariance function γ(τ) of the process for τ = 0, 1, and 2.

1.15) γ(0) = ?
a) 1.25 ×    b) 0.75    c) 1    d) 0.25

1.16) γ(1) = ?
a) 0.5 ×    b) 1    c) 0.25    d) 1.5

1.17) γ(2) = ?
a) 0.25    b) 0.5    c) 0 ×    d) 1

Since the process is an MA(1), the covariance function can be calculated as follows:

γ(0) = (1² + 0.5²)·1 = 1.25,    γ(1) = (1·0.5)·1 = 0.5,    γ(τ) = 0 for τ ≥ 2.

Suppose one wants to identify the best model of the process in the family:

M(θ): y(t) = (1/a)·y(t−1) + b·y(t−2) + e(t),    e(·) ~ WN(0, λ²)

based on a realization of the process y(·) consisting of N samples. Calculate the values to which the estimates of parameters a and b tend as the number N of data points tends to infinity.

1.18) a = ?
a) 0.4762    b) 1.3752    c) 2.2981    d) 2.1000 ×

1.19) b = ?
a) 0.5026    b) −0.1905 ×    c) 0.3562    d) 0.7758

For N → ∞ the estimated parameters â and b̂ tend to the values of a and b that minimize the cost function

J(θ) = E[(y(t) − ŷ(t|t−1))²].

Prediction-form model: ŷ(t|t−1) = c·y(t−1) + b·y(t−2), where c = 1/a.
J(θ) = E[(y(t) − c·y(t−1) − b·y(t−2))²]
     = (1 + c² + b²)·γ(0) + 2(bc − c)·γ(1) − 2b·γ(2)

Setting the gradient to zero:

∂J/∂c = 0:  c·γ(0) + (b − 1)·γ(1) = 0
∂J/∂b = 0:  b·γ(0) + c·γ(1) − γ(2) = 0

With γ(0) = 1.25, γ(1) = 0.5, γ(2) = 0:

1.25·c + 0.5·(b − 1) = 0
1.25·b + 0.5·c = 0

c = 10/21 = 0.4762,    b = −4/21 = −0.1905    ⇒    a = 1/c = 2.1

Calculate the variance of the prediction error of y(·) based on the identified model.

1.20) Var[ε(t)] = ?
a) 0.8102    b) 1.9842    c) 1.0119 ×    d) 1.2605

Var[ε(t)] = J(θ̂) = (1 + ĉ² + b̂²)·γ(0) + 2(b̂ĉ − ĉ)·γ(1) − 2b̂·γ(2) = 1.0119

2. Open-ended questions

Write the answers in the allotted spaces on this and the next page.

2.1) Explain why it can be stated that an AR(1) process is equivalent to an MA(∞) process.

The AR(1) model is defined by the following equation:

v(t) = a·v(t−1) + η(t),    η(·) ~ WN(0, λ²),

where |a| < 1 to guarantee stationarity. Iterating the equation from an initial time point, say t0, where the value of v equals v0, one obtains:

v(t) = Σ_{i=t0}^{t−1} a^(t−1−i)·η(i+1) + a^(t−t0)·v0,    t > t0.

Now, if |a| < 1, the last term vanishes as t0 → −∞, and the process becomes:

v(t) = Σ_{j=0}^{∞} a^j·η(t−j)    (with j = t − i − 1).

In other words, v(t) is an infinite linear combination of current and past noise values, configuring an MA(∞) process:

v(t) = η(t) + a·η(t−1) + a²·η(t−2) + …

The expected value is trivially verified to be E[v(t)] = 0. The variance of v(t) equals Var[v(t)] = (1 + a² + a⁴ + …)·λ², which is a geometric series with common ratio a². Now, since |a| < 1, the series converges:

Var[v(t)] = Σ_{j=0}^{∞} a^(2j)·λ² = λ²/(1 − a²) < ∞.

Then, the variance of the process is finite (γ(τ) is also finite, since |γ(τ)| ≤ γ(0)) and indeed v(t) represents a well-defined (stationary) MA(∞) process with coefficients cj = a^j.

2.2) Explain how you would approach the identification of the following model:

M(θ): y(t) = b·u(t−1) + 1/(1 + d·z^-1)·η(t),    η(·) ~ WN(0, λ²)

The model is equivalent to an ARX(1, 2):

y(t) = −d·y(t−1) + b·u(t−1) + b·d·u(t−2) + η(t)

Notice that the model is nonlinear in the parameters (the coefficient of u(t−2) is the product b·d), which prevents the direct application of least squares.
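The equivalence between the two model forms above can be checked numerically. The following sketch (with hypothetical values b = 2.0, d = 0.5 and N = 200 samples, not part of the exam) simulates the model both in its original output-error form and in its ARX(1, 2) form, and verifies that, with matching initial conditions, the two recursions generate the same realization:

```python
import numpy as np

rng = np.random.default_rng(0)
b, d = 2.0, 0.5                      # hypothetical parameter values
N = 200
u = rng.standard_normal(N)           # input signal
eta = rng.standard_normal(N)         # white noise eta ~ WN(0, 1)

# Output-error form: y(t) = b*u(t-1) + w(t), with w(t) = -d*w(t-1) + eta(t)
w = np.zeros(N)
y1 = np.zeros(N)
for t in range(1, N):
    w[t] = -d * w[t - 1] + eta[t]
    y1[t] = b * u[t - 1] + w[t]

# ARX(1,2) form: y(t) = -d*y(t-1) + b*u(t-1) + b*d*u(t-2) + eta(t)
y2 = np.zeros(N)
y2[1] = y1[1]                        # match initial conditions
for t in range(2, N):
    y2[t] = -d * y2[t - 1] + b * u[t - 1] + b * d * u[t - 2] + eta[t]

assert np.allclose(y1, y2)           # the two forms generate the same process
```

The nonlinearity remains, of course: an unconstrained least-squares fit would estimate three coefficients (for y(t−1), u(t−1), u(t−2)) while the model has only two free parameters.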
One can then apply the ML method:

M(θ): ŷ(t|t−1) = −d·y(t−1) + b·u(t−1) + b·d·u(t−2)

ε(t) = y(t) − ŷ(t|t−1) = y(t) + d·y(t−1) − b·u(t−1) − b·d·u(t−2)

φ(t) = [−∂ε(t)/∂b; −∂ε(t)/∂d] = [u(t−1) + d·u(t−2); −y(t−1) + b·u(t−2)]

Then, the parameters can be updated iteratively using φ(t) and ε(t), as follows:

[b^(i+1); d^(i+1)] = [b^(i); d^(i)] + [ (1/N)·Σ_t φ(t)·φ(t)^T ]^(−1) · (1/N)·Σ_t φ(t)·ε(t)

2.3) Explain what is the canonical representation of a stationary process with a rational spectrum, and what is its role in the prediction problem.

The spectral factorization theorem states that there exists a unique representation

v(t) = W(z)·e(t),    W(z) = C(z)/A(z)

of a stationary stochastic process v(t) with rational spectrum, such that A(z) and C(z) are monic and coprime polynomials of the same degree, with all roots in the open and closed unit circle, respectively. This representation is denoted as canonical.

If the model is represented in canonical form and C(z) has no roots on |z| = 1, then the inverse system W(z)^(−1) = A(z)/C(z) is itself a legitimate asymptotically stable system. Such a system is denoted as the whitening filter, since it allows one to recover e(t) from v(t), which is crucial in order to obtain a predictor based on past data, as opposed to a predictor based on past noise samples (which are generally not accessible).
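The role of the whitening filter can be made concrete with a minimal numerical sketch, reusing the canonical ARMA(2, 1) model found in questions 1.8–1.11 (the noise seed and sample size below are arbitrary). Running the inverse system A(z)/C(z) on the simulated output recovers the driving white noise, up to a transient that decays geometrically because the root of C(z) = 1 + 0.25z^-1, at z = −0.25, lies strictly inside the unit circle:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500                               # arbitrary sample size
e = rng.standard_normal(N)            # e ~ WN(0, 1)

# Canonical ARMA(2,1) from questions 1.8-1.11:
# y(t) = y(t-1) - 0.25*y(t-2) + e(t) + 0.25*e(t-1)
y = np.zeros(N)
for t in range(2, N):
    y[t] = y[t - 1] - 0.25 * y[t - 2] + e[t] + 0.25 * e[t - 1]

# Whitening filter A(z)/C(z), i.e. the inverse recursion:
# e_hat(t) = y(t) - y(t-1) + 0.25*y(t-2) - 0.25*e_hat(t-1)
e_hat = np.zeros(N)
for t in range(2, N):
    e_hat[t] = y[t] - y[t - 1] + 0.25 * y[t - 2] - 0.25 * e_hat[t - 1]

# The wrong initialization e_hat = 0 causes an error that decays as
# (-0.25)^t, so after a short transient the noise is recovered exactly.
assert np.max(np.abs(e_hat[50:] - e[50:])) < 1e-8
```

Had the recursion for e_hat been built from a non-canonical factor (e.g. with the C(z) root at z = −4), the error would instead grow as (−4)^t: this is precisely why the predictor must be derived from the canonical representation.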