Definition 1: We now reformulate the least-squares model using matrix notation (see Basic Concepts of Matrices and Matrix Operations for more details about matrices and how to operate with matrices in Excel). We start with a sample y_1, …, y_n for the dependent variable y and a sample for each of the independent variables x_j for j = 1, 2, …, k.

Let Y be the n × 1 column vector with entries y_1, …, y_n, and let X be the n × (k+1) matrix (called the design matrix):

X = [1  x_11  …  x_1k
     1  x_21  …  x_2k
     ⋮   ⋮        ⋮
     1  x_n1  …  x_nk]

The regression equations y_i = β_0 + β_1 x_i1 + … + β_k x_ik + ε_i can now be expressed as the single matrix equation

Y = Xβ + ε

where β is the (k+1) × 1 column vector with entries β_0, β_1, …, β_k and ε is the n × 1 column vector with entries ε_1, …, ε_n.

Let B be the (k+1) × 1 column vector consisting of the coefficients b_0, b_1, …, b_k, and let Ŷ be the n × 1 column vector consisting of the entries ŷ_1, …, ŷ_n. Then the least-squares model can be expressed as

Ŷ = XB

Furthermore, we define the n × n hat matrix H as

H = X(XᵀX)⁻¹Xᵀ

so that Ŷ = HY.

Definition 2: We can extend the definition of expectation to vectors and matrices as follows. If X is an n × 1 column vector, then E[X] is the n × 1 column vector whose elements are E[x_i]. Similarly, if A is an m × n matrix, then the expectation of A is the m × n matrix whose elements are E[a_ij]. The covariance matrix of an n × 1 column vector X is the n × n matrix

cov(X) = E[(X − E[X])(X − E[X])ᵀ]

Observation: The linearity assumption for multiple linear regression can be restated in matrix terminology as

E[ε] = 0

and from the independence and homogeneity of variances assumptions, we know that the n × n covariance matrix can be expressed as

cov(ε) = σ²I
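The matrix formulation above can be sketched numerically with NumPy. This is a minimal illustration, not part of the original article: the sample data below is made up, and B is computed by solving the normal equations XᵀXB = XᵀY rather than forming the inverse explicitly, which is the standard numerically safer equivalent of B = (XᵀX)⁻¹XᵀY.

```python
import numpy as np

# Hypothetical sample: n = 6 observations, k = 2 independent variables.
rng = np.random.default_rng(0)
n, k = 6, 2
samples = rng.normal(size=(n, k))
y = 3.0 + 2.0 * samples[:, 0] - 1.0 * samples[:, 1] + rng.normal(scale=0.1, size=n)

# Design matrix X: an n x (k+1) matrix whose first column is all ones.
X = np.column_stack([np.ones(n), samples])
Y = y.reshape(-1, 1)  # n x 1 column vector

# Coefficient vector B solves the normal equations (X^T X) B = X^T Y,
# equivalent to B = (X^T X)^{-1} X^T Y.
B = np.linalg.solve(X.T @ X, X.T @ Y)

# Hat matrix H = X (X^T X)^{-1} X^T, so that Y-hat = H Y.
H = X @ np.linalg.solve(X.T @ X, X.T)
Y_hat = X @ B

# The fitted values from the model Y-hat = XB agree with H Y.
print(np.allclose(H @ Y, Y_hat))
```

Note that H is idempotent (HH = H), which is why applying it to Y "puts the hat on" Y exactly once.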