Note 1 Multiplying a matrix times a vector is the crucial step. If students have seen Ax before, it was row times column. In examples they are free to compute that way (as I do). “Dot product with rows” gives the same answer as “combination of columns”. When the combination x1 a1 + x2 a2 + x3 a3 is computed one component at a time, we are using the rows.
The example illustrates how the same Ax arrives both ways. Differences like x2 − x1 come from row times column. Combining the columns of A is probably new to the class: good. The relation of the rows to the columns is truly at the heart of linear algebra.
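A minimal numerical sketch of the two computations. The matrix below is an assumption: a difference matrix chosen so that components like x2 − x1 appear, not necessarily the specific example in the text.

```python
import numpy as np

# Assumed example: a 3-by-3 difference matrix, so that Ax has
# components x1, x2 - x1, x3 - x2 (the "differences" in the text).
A = np.array([[ 1.0,  0.0, 0.0],
              [-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0]])
x = np.array([2.0, 5.0, 9.0])

# Row picture: each component of Ax is a dot product, row of A with x.
by_rows = np.array([A[i, :] @ x for i in range(3)])

# Column picture: Ax is the combination x1*a1 + x2*a2 + x3*a3.
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(by_rows)     # [2. 3. 4.]
print(by_columns)  # [2. 3. 4.] -- the same Ax, both ways
```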
Note 2 Three basic questions in linear algebra, and their answers, show why the column description of Ax is so essential (a short computational sketch follows the three questions):
When does a linear system Ax = b have a solution?
Ax = b asks us to express b as a combination of the columns of A. So there is a solution exactly when b is in the column space of A.
When are vectors a1, ..., an linearly independent?
The combinations of a1, ..., an are the vectors Ax. For independence, Ax = 0 must have only the zero solution. The nullspace of A must contain only the vector x = 0.
How do you express b as a combination of basis vectors?
Put those basis vectors into the columns of A. Solve Ax = b.
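All three answers can be checked by computation. A hedged sketch, with A and b assumed for illustration; rank tests stand in for the subspace language:

```python
import numpy as np

# Assumed data for illustration; not the example from the text.
A = np.array([[ 1.0,  0.0, 0.0],
              [-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0]])
b = np.array([2.0, 3.0, 4.0])

# 1. Solvability: Ax = b has a solution exactly when b is in the
#    column space, i.e. appending b to A does not raise the rank.
solvable = (np.linalg.matrix_rank(np.column_stack([A, b]))
            == np.linalg.matrix_rank(A))

# 2. Independence: Ax = 0 has only x = 0 exactly when the rank
#    equals the number of columns (trivial nullspace).
independent = np.linalg.matrix_rank(A) == A.shape[1]

# 3. Coordinates in a basis: put the basis vectors into the columns
#    of A and solve Ax = b.
x = np.linalg.solve(A, b)

print(solvable, independent, x)  # True True [2. 5. 9.]
```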
Note 3 The reader may object that we have only answered questions by introducing new words. My response is that those ideas of column space and nullspace and basis are crucial definitions in this subject. The student moves to a higher level—a subspace level—by understanding these words. We are constantly putting vectors into the columns of a matrix, and then working with that matrix.
I don’t accept that inevitably “The fog rolls in” when linear independence is defined [1]. The concrete way to see dependence vs. independence is through Ax = 0: many solutions or only the solution x = 0. This comes immediately on returning to the example of specific a1, a2, a3.
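To make that concrete in the same spirit, here is a sketch with three assumed vectors, chosen so that a3 = a1 + a2; dependence then shows up as a nonzero solution of Ax = 0:

```python
import numpy as np

# Assumed dependent set: a3 = a1 + a2, so a1 + a2 - a3 = 0.
a1 = np.array([1.0, -1.0, 0.0])
a2 = np.array([0.0, 1.0, -1.0])
a3 = a1 + a2
A = np.column_stack([a1, a2, a3])

# x = (1, 1, -1) is a nonzero solution of Ax = 0: dependence.
x = np.array([1.0, 1.0, -1.0])
print(A @ x)                     # [0. 0. 0.]
print(np.linalg.matrix_rank(A))  # 2 < 3 columns, so the nullspace
                                 # contains more than x = 0
```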