Linear Algebra and Linear Models

We can even say that linear algebra is the language of modern mathematical economics. To solve the system is to answer these questions. As will be shown in one of the following sections, a system of m simultaneous linear equations in n unknowns has either no solution, exactly one solution, or infinitely many solutions. The coefficient matrix A with the column vector of constants b set alongside it, separated by a line or bar, is called the augmented matrix and is denoted by [A | b] or Ā.

Vectors x and b will always be column vectors. The idea is not to solve the system of linear equations but rather to use some general theoretical considerations. A system in which all the constant terms are zero is called homogeneous. As we shall see later, homogeneous systems play an especially important role in the study of linear equations. Theorem on a homogeneous system: a homogeneous system always has the trivial (zero) solution; other solutions, if they exist, are called non-trivial.

At least one entry of a non-trivial solution is non-zero. Theorem on a system with a regular diagonal matrix. In fact this is the idea that leads to an effective computational algorithm of linear algebra, the Gauss–Jordan method.

A similar result can be obtained for a system with a regular triangular matrix. Theorem on a system with a regular triangular matrix. The variables corresponding to basic columns are called basic variables; the variables corresponding to non-basic columns are called free variables. In our example the basic (dependent) variables are x1, x3, x6, and x7, and the free (independent) variables are x2, x4, x5, and x8. Sometimes, usually in economic applications, the basic variables are also called dependent or endogenous, and the free variables independent or exogenous.

Theorem on a system with an echelon matrix. By moving all the terms containing free variables (in our example x2, x4, x5, and x8) from the left-hand side to the right-hand side, choosing values for all free variables (in our example c2, c4, c5, and c8), and plugging them into the system, we get a new system with a regular triangular matrix. Two systems of linear equations are said to be equivalent if they have identical solution sets. An operation that transforms a system into a new system is called equivalent if the original and transformed systems are equivalent.

An equivalent operation is reversible; in other words, one can recover the original system by applying the inverse operation. For example, one can interchange two equations; we recover the original system by interchanging the same equations in the transformed system. Let us give the full list of equivalent operations. We will call the fifth elementary equation operation the Gauss elimination operation, or simply the Gauss operation.

Since equals are always added to or subtracted from equals, or multiplied by the same scalar, the set of xi's which solve the original system will also solve the transformed system. In fact, since these three operations are reversible, any solution of the transformed system will also be a solution of the original system. Consequently, both systems will have exactly the same set of solutions. A rigorous proof can be provided on the basis of the definition of equivalent systems.

For example, let us prove that the Gauss elimination operation (5) is equivalent. Theorem on the Gauss elimination operation: the Gauss operation is an equivalent operation. Transforming the augmented matrix in the process of solving a system of linear equations, we can (1) interchange two rows; (2) interchange two columns of the coefficient matrix, not touching the column b; (3) delete a null row (a row consisting of zeros); (4) multiply a row by a nonzero scalar; (5) change a row by adding to it a multiple of another row. The fifth elementary matrix operation we will again call the Gauss elimination operation, or simply the Gauss operation.

Substitution method. Substitution is the method usually taught in beginning algebra classes. To use this method, solve one equation of the system for one variable, say xn, in terms of the other variables in that equation. Substitute this expression for xn into the other m − 1 equations. Proceed until you reach a system with just a single equation, which is easily solved.

Deficiencies of the substitution method. The substitution method is straightforward, but it can be cumbersome. Furthermore, it does not provide much insight into the nature of the general solution of systems like (3).

It is not a method around which one can build a general theory of linear systems. However, it is the most direct method for solving certain systems with a special, very simple form.

As such, it will play a role in the general solution technique we now develop. Elimination of variables (Gaussian elimination). The method most conducive to theoretical analysis is elimination of variables, another technique that should be familiar from high school algebra. First consider the simple case of a system with a unique solution. The method here consists of three steps. Instead of transforming equations, one can equivalently operate with matrices.

To solve a general system of m equations by elimination of variables, use the coefficient of x1 in the first equation to eliminate the x1 term from all the equations below it. To do this, add proper multiples of the first equation to each of the succeeding equations. Now disregard the first equation and eliminate the next variable, usually x2, from the last m − 1 equations just as before, that is, by adding proper multiples of the second equation to each of the succeeding equations.

If the second equation does not contain an x2 but a lower equation does, you will have to interchange the order of these two equations before proceeding. Continue eliminating variables until you reach the last equation. The resulting simplified system can then easily be solved by substitution. This method of reducing a given system of equations, by adding a multiple of one equation to another or by interchanging equations, until one reaches a system with a regular triangular or echelon matrix, and then solving it via back substitution, is called Gaussian elimination.
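
As a concrete illustration of the procedure just described, here is a small sketch in Python with NumPy (my own code, not the lecture's; the matrix is an arbitrary example). It performs the forward elimination, interchanging rows when a pivot is zero, and then back substitution, assuming the system is square with a unique solution.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b for a square, nonsingular A by forward elimination
    and back substitution (a sketch of the method described above)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: make the coefficient matrix upper triangular.
    for k in range(n - 1):
        # If the pivot is zero, interchange with a lower row that has a
        # nonzero entry in this column (0 can never be a pivot).
        if A[k, k] == 0:
            swap = next(i for i in range(k + 1, n) if A[i, k] != 0)
            A[[k, swap]], b[[k, swap]] = A[[swap, k]], b[[swap, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiple of row k to subtract
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b = np.array([8.0, -11, -3])
print(gaussian_elimination(A, b))          # [ 2.  3. -1.]
print(np.linalg.solve(A, b))               # same answer
```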

The important characteristic of the resulting system is that each equation contains fewer variables than the previous one. At each stage of the Gaussian elimination process, we want to change some coefficient of our linear system to 0 by adding a multiple of an earlier equation to the given one.

In our example, the coefficient 1 of the unknown x1 in the first equation is the pivot; at the next stage the coefficient 1 of the unknown x2 in the second equation is the pivot (both pivots are double-underlined). Note that 0 can never be a pivot in this process.

If you want to eliminate xj from a subsystem of equations and the coefficient of xj is zero in the first equation of this subsystem but nonzero in a subsequent equation, you will have to reverse the order of these two equations before proceeding. There is a variant of Gaussian elimination, called Gauss–Jordan elimination, which does not use back substitution at all but uses some additional elementary operations.

This method starts like Gaussian elimination, by reducing the system to one with a regular triangular or echelon matrix. After reaching that form, instead of using back substitution, one applies the same elimination operations from the bottom equation to the top to eliminate all but the first term on the left-hand side of each equation. For example, one can "eliminate" the variable x3 from equations 1 and 2 of this system by subtracting the third equation from the first one, and by multiplying the third equation by −2 and adding the result to the second one.
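
A minimal sketch of the Gauss–Jordan variant follows (again my own illustration, assuming a square nonsingular system). Here each pivot is scaled to 1 and its column is cleared both below and above, which amounts to the same forward-then-backward elimination described in the text; the solution is then read off directly.

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce [A | b] so that A becomes the identity; the solution
    is then read off from the transformed right-hand side."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = A.shape[0]
    for k in range(n):
        if M[k, k] == 0:                      # bring a nonzero pivot up
            swap = next(i for i in range(k + 1, n) if M[i, k] != 0)
            M[[k, swap]] = M[[swap, k]]
        M[k] /= M[k, k]                       # make the pivot equal to 1
        for i in range(n):                    # clear the column below and above
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]                           # solution column

A = np.array([[2.0, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b = np.array([8.0, -11, -3])
print(gauss_jordan(A, b))                     # [ 2.  3. -1.]
```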

Gauss–Jordan elimination is particularly useful in developing the theory of linear systems; Gaussian elimination is usually more efficient in solving actual linear systems. Let us sum up the Gauss–Jordan technique: working along the principal diagonal, transform the coefficient matrix into an identity matrix; the solution of the system can then be read from the remaining elements of the column vector b.

Matrix methods. Earlier we mentioned a third method for solving linear systems, namely matrix methods. We will study these methods in the next lectures, when we discuss matrix inversion and Cramer's rule.

For now, it suffices to note that some of these more advanced methods derive from Gaussian elimination. Understanding this technique will provide a solid base on which to build your knowledge of linear algebra and of further applied courses such as quantitative methods for economists. It is essential that, applying the Gauss elimination method, we can detect the case of many solutions and handle it.
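
To illustrate how the three cases can be detected, here is a hedged sketch that uses NumPy's rank computation instead of an explicit echelon reduction; comparing the rank of the coefficient matrix with the rank of the augmented matrix is equivalent to reading the echelon form as in the theorem below.

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b as having no, one, or infinitely many solutions."""
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_Ab > rank_A:
        return "no solution"
    return "unique solution" if rank_A == A.shape[1] else "infinitely many solutions"

A = np.array([[1.0, 2], [2, 4]])
print(classify_system(A, np.array([3.0, 6])))   # infinitely many solutions
print(classify_system(A, np.array([3.0, 7])))   # no solution
print(classify_system(np.array([[1.0, 2], [0, 1]]), np.array([3.0, 1])))  # unique solution
```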

Let us now generalise the results of our analysis. Lemma on the Gauss elimination iteration. Any matrix having at least one nonzero element can be transformed by elementary matrix operations into a form in which every other row has more leading zeros than the first row. Comparing rows, one can find a row containing fewer leading zeros than the others. Interchanging rows, we can make this row the first one. Using Gauss elimination operations, one can then make all other rows have more leading zeros than the first row.

Theorem. The augmented matrix of a system of linear equations having at least one nonzero coefficient of an unknown can be reduced by elementary matrix operations to echelon form.

After this process is done: (1) the system has no solution if and only if the last row of the echelon form of the augmented matrix contains only one nonzero element and it is in the column of the constant terms (the last column of the augmented matrix); in all other cases the system has at least one solution; (2) if a solution exists, it is unique if and only if in the echelon form the coefficient matrix (all columns except the last column of constant terms) has regular triangular form. Proof.

One can find a nonzero coefficient and, interchanging rows and columns, move it to the first place in the first row. Then, applying the lemma on the Gauss elimination iteration, transform the matrix into a form in which every other row has more leading zeros than the first row. Now set aside the first row and consider the matrix consisting of all the other rows. If all elements of the new matrix are equal to zero, the process is finished: after deleting the zero rows we get an echelon matrix consisting of the first row only, and the system has at least one solution.

If there is a coefficient not equal to zero, we can repeat the process. After some iterations an echelon matrix will be obtained.

The definition of the determinant. A square coefficient matrix is exactly the case in which a system of linear equations can have a unique solution. Moreover, the question of the existence and uniqueness of a solution is the central question of most applications of linear algebra. For any square matrix we will define a number, called the determinant, which answers this question.

Many mathematical models in economics are based on constrained maximisation or minimisation problems. Determinants are important here too, because the second order condition for such problems requires that one check the signs of determinants of certain matrices of second derivatives.

The terms alternate in sign; the term containing a11 and a22 receives a plus sign, and the term containing a12 and a21 receives a minus sign. In general, consider a permutation of the indices 1, 2, …, n: it can be written as j1, j2, …, jn. We will say that the permutation j1, j2, …, jn is even or odd according to whether the number of pairs of indices standing in inverse order is even or odd. Now we are ready to give the general definition of the determinant: the determinant of an n × n matrix A is the sum, over all permutations j1, j2, …, jn of the column indices, of the products a1j1 a2j2 … anjn, each taken with a plus sign if the permutation is even and a minus sign if it is odd. Computing a determinant directly from this definition is a somewhat tedious exercise. Efficient computational methods will be given in the next lectures. The determinant of a lower-triangular, upper-triangular, or diagonal matrix is simply the product of its diagonal entries.
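
The general definition can be spelled out numerically. The sketch below (illustrative code, not part of the lectures) sums the signed products over all permutations of the column indices and confirms the statement about triangular matrices on a small example.

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Parity of a permutation: +1 if the number of inversions is even, -1 if odd."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_by_definition(A):
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

U = np.array([[2.0, 5, 1], [0, 3, 4], [0, 0, -1]])   # upper triangular
print(det_by_definition(U))        # -6.0, the product of the diagonal entries
print(np.linalg.det(U))            # same value (up to rounding)
```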

So we come to the idea of how to calculate determinants effectively: using Gauss transformations we try to obtain a triangular or diagonal matrix, for which the determinant is calculated as shown in the previous theorem. Properties of determinants. In the term a11 a22, which is calculated as the product of the two diagonal elements of the 2 × 2 matrix, the permutation of the column indices, (1, 2), is even; note that the line connecting these elements has a negative slope.

The other term, a12 a21, which is calculated as the product of the two off-diagonal elements, corresponds to the permutation of the column indices (2, 1), which is odd; the line connecting these elements has a positive slope.

So we can conclude that an odd permutation (inverse order of the column indices) corresponds to a line with positive slope, while an even permutation (direct order of the column indices) corresponds to a line with negative slope. Some useful notation. Let us now introduce some useful symbols. The determinant of a matrix A is equal to the determinant of its transpose A^T.

The rows of A are the columns of A^T and vice versa. A term receives a plus sign if the permutation of the column indices is even, and a minus sign if the permutation of the column indices is odd.

Permutation parity is determined by the number of pairs of elements standing in inverse order. But, as we know from the preliminary notes, inverse order of the column indices corresponds to a line with positive slope, while direct order of the column indices corresponds to a line with negative slope.

After transposition the element aij occupies the position of aji, which is symmetric about the diagonal of the matrix. Using the geometric meaning of the direct and inverse order of the elements, one can see that after transposition a positive slope of the connecting line remains positive, while a negative slope remains negative.

The corresponding properties for rows are then automatically true. The determinant changes sign when two columns are exchanged. If the columns A and B stand side by side in the matrix of the determinant, with no other columns between them, interchanging the two columns changes the order of only two elements in each term of the determinant.

If n other columns stand between A and B, the interchange can be reduced to an odd number (2n + 1) of interchanges of adjacent columns, so the determinant again changes sign. If a matrix has two identical columns, then exchanging the j-th and k-th columns changes the sign of the determinant; on the other hand, the determinant does not change, because the two columns are identical, so such a determinant must be zero. The determinant is a linear function of the j-th column, for all j. Consider any term of the determinant. To evaluate a determinant in practice, reduce the original matrix by Gaussian elimination, applied to columns or rows, to echelon form and then evaluate the determinant as the product of all the diagonal elements.
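
Before the worked example that follows, here is a short sketch of this computational rule (my own code, with an arbitrary matrix): reduce to triangular form by Gauss operations, flip the sign once for every row interchange, and multiply the diagonal entries.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via reduction to upper triangular form.
    Row interchanges flip the sign; adding a multiple of one row
    to another leaves the determinant unchanged."""
    A = A.astype(float).copy()
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        if A[k, k] == 0:
            nonzero = [i for i in range(k + 1, n) if A[i, k] != 0]
            if not nonzero:
                return 0.0                    # column of zeros below the diagonal
            A[[k, nonzero[0]]] = A[[nonzero[0], k]]
            sign = -sign
        for i in range(k + 1, n):
            A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
    return sign * np.prod(np.diag(A))

M = np.array([[0.0, 2, 1], [1, 1, 1], [2, 0, 3]])
print(det_by_elimination(M), np.linalg.det(M))   # both give the same value (-4)
```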

We now multiply the first row of (b) by −2 and add the result to the second row to get the matrix (c). The last three rows and three columns of (c) form the sub-matrix to which the same rules should be applied. The element a22 is equal to 1. By multiplying the second row of (c) by −3 and −6 and adding the results to the third and fourth rows, respectively, we get the matrix (d).

Using its element a33, which is 9, we can transform the matrix (d) into triangular form. The determinant of (e) is equal to the product of its diagonal elements; consequently, the determinant of the original matrix (a) is equal to −19 (we had to interchange rows only once). Using determinants. We now build up to another way of evaluating determinants: expansion by cofactors. According to the definition of the determinant, each term contains exactly one element from every row and every column; collecting the terms that contain a given element aij and factoring it out, we denote the sum of all the other products corresponding to the common factor aij by Aij.

The multipliers Aij of the factors aij are called cofactors. We can formulate this as a theorem: the determinant can be expanded into a sum of the elements of a column multiplied by their cofactors. Example. In the example from Lecture 6 we use the factors from column 2. The sum of the elements of one column multiplied by the cofactors of another column is zero.
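
Both statements can be checked numerically; the sketch below (an illustration with an arbitrary 3 × 3 matrix) expands the determinant along a column using cofactors and also shows that pairing a column's entries with the cofactors of a different column gives zero.

```python
import numpy as np

def cofactor(A, i, j):
    """Cofactor A_ij: (-1)**(i+j) times the minor obtained by
    deleting row i and column j."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[2.0, -1, 3], [0, 4, 1], [5, 2, -2]])
n = A.shape[0]

# Expansion along column j reproduces det(A).
j = 1
print(sum(A[i, j] * cofactor(A, i, j) for i in range(n)), np.linalg.det(A))

# Elements of column 0 times the cofactors of column 1 sum to zero.
print(round(sum(A[i, 0] * cofactor(A, i, 1) for i in range(n)), 10))   # 0.0
```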

Let us first consider the case of the (1, 1)-th minor. Neither A11 nor M11 contains a11; all their terms are products of elements from columns and rows with numbers from 2 to n.

Due to the position of the element a11, all pairs of elements containing a11 can have only the direct order of the column indices. So A11 and M11 must have the same sign. In the case of an arbitrary Aij one can change the position of the element aij, moving it to the first place in the first row by making i − 1 exchanges of rows and j − 1 exchanges of columns. The determinant of the matrix A can be expanded along its column j. Similarly, we can show that each ci is zero.

Therefore, the vectors are linearly independent. We now describe the Gram–Schmidt procedure, which produces an orthonormal basis starting with a given basis x1, …, xn. Having defined y1, …, yi, the next vector is obtained by subtracting from xi+1 its projections on y1, …, yi and normalizing the result. Note that the linear span of z1, …, zi is the same as that of x1, …, xi. We remark that, given a set of linearly independent vectors x1, …, xk, the procedure produces an orthonormal set with the same linear span; this fact is used in the proof of the next result. Let W be a set (not necessarily a subspace) of vectors in a vector space S. If x = u + v, where u belongs to S and v is orthogonal to every vector in S, then the vector u is called the orthogonal projection of x on the vector space S. Otherwise, let x1, …, xm be a basis of S.

Use the Gram–Schmidt process on the set x1, …, xm to obtain an orthonormal basis y1, …, ym. Since v is perpendicular to each yi, and since the linear span of y1, …, ym is S, v is perpendicular to every vector in S. It remains to show the uniqueness. Thus the decomposition is unique. Let W be a subset of the vector space T and let S be the linear span of W. Since xi, yj are orthogonal for each i, j, u and v are orthogonal. Let M be the linear span of x1, …, xm. This contradicts the fact that z is linearly independent of x1, …, xm.
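
A compact sketch of the Gram–Schmidt procedure in NumPy follows (my own illustration, assuming the input vectors are linearly independent): each new vector is stripped of its projections on the vectors already produced and then normalized.

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of the given
    linearly independent vectors."""
    basis = []
    for x in vectors:
        z = x.astype(float).copy()
        for y in basis:
            z -= (y @ x) * y        # subtract the projection of x on y
        basis.append(z / np.linalg.norm(z))
    return np.array(basis)

X = [np.array([1.0, 1, 0]), np.array([1.0, 0, 1]), np.array([0.0, 1, 1])]
Q = gram_schmidt(X)
print(np.round(Q @ Q.T, 10))        # identity: the rows of Q are orthonormal
```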

This subspace is called the null space of A, and we denote it by N(A). This completes the proof. Thus 5. Which of the following functions define an inner product on R^3? Show that the following vectors form a basis for R^3. Use the Gram–Schmidt procedure to convert it into an orthonormal basis. Then the following conditions are equivalent: (i) A is nonsingular, i.e. … This proves the uniqueness. Suppose (iii) holds. Thus R(A), which by definition is dim C(A), must be n.

In particular, the product of two nonsingular matrices is nonsingular. We will denote by Aij the submatrix of A obtained by deleting row i and column j. We have therefore proved the following result: a square matrix is nonsingular if and only if its determinant is nonzero. Since the columns j1, …, jr are linearly independent … Conversely, if A is of rank r, then A has r linearly independent rows, say the rows i1, …, ir.

Let B be the submatrix formed by these r rows. Then B has rank r, and hence B has column rank r. As remarked earlier, the rank is zero if and only if A is the zero matrix. The columns of B are linearly independent. Note that a left inverse or a right inverse is not unique, unless the matrix is square and nonsingular. The proof is the same as that of 7.
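
As a small numerical aside (my own sketch, not from the text): when B has full column rank, B^T B is nonsingular and (B^T B)^{-1} B^T is one possible left inverse of B; a matrix with full row rank has the analogous right inverse.

```python
import numpy as np

B = np.array([[1.0, 0], [2, 1], [0, 3]])        # 3 x 2, full column rank
L = np.linalg.inv(B.T @ B) @ B.T                # one possible left inverse
print(np.round(L @ B, 10))                      # the 2 x 2 identity

C = B.T                                         # 2 x 3, full row rank
R = C.T @ np.linalg.inv(C @ C.T)                # one possible right inverse
print(np.round(C @ R, 10))                      # again the identity
```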

These two results and the rank factorization (see 4.) … The second part follows similarly. Problems.

By the fundamental theorem of algebra, the characteristic equation has n roots (counting multiplicities), and these roots are called the eigenvalues of A. The eigenvalues may not all be distinct. The number of times an eigenvalue occurs as a root of the characteristic equation is called the algebraic multiplicity of the eigenvalue. A principal submatrix of a square matrix is a submatrix formed by a set of rows and the corresponding set of columns.

A principal minor of A is the determinant of a principal submatrix. The identity matrix is clearly positive definite and so is a diagonal matrix with only positive entries along the diagonal.

If A is positive definite, then it is nonsingular. Therefore, A must be nonsingular. If A is positive definite, then any principal submatrix of A is positive definite. Apply this condition to the set of vectors that have zeros in the coordinates j1, … It follows that B, and similarly any principal submatrix of A, is positive definite. If A is a symmetric matrix, then the eigenvalues of A are all real.

… an orthonormal basis for R^n. The identity matrix is clearly orthogonal. The product of orthogonal matrices is easily seen to be orthogonal. Let Q be an orthogonal matrix with x as its first column (such a Q exists: first extend x to a basis for R^n and then apply the Gram–Schmidt process). This is known as the spectral decomposition of A. Then A is positive definite if and only if the eigenvalues of A are all positive. There exists an orthogonal matrix P such that (6) holds.
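
A short numerical sketch of these facts (illustrative, with an arbitrary symmetric matrix): numpy.linalg.eigh returns real eigenvalues together with an orthogonal matrix of eigenvectors, the spectral decomposition can be reassembled from them, and positive definiteness corresponds to all eigenvalues being positive.

```python
import numpy as np

A = np.array([[4.0, 1, 0], [1, 3, 1], [0, 1, 2]])      # symmetric

eigenvalues, P = np.linalg.eigh(A)                     # real eigenvalues, orthogonal P
print(np.round(P.T @ P, 10))                           # identity: P is orthogonal
print(np.allclose(P @ np.diag(eigenvalues) @ P.T, A))  # spectral decomposition holds

print(eigenvalues)                                     # all positive here ...
print(np.all(eigenvalues > 0))                         # ... so A is positive definite
```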

If A is idempotent, then each eigenvalue of A is either 0 or 1. This follows by an application of the spectral theorem. We say that a matrix has full row or column rank if its rank equals the number of rows or columns.

Since B has full column rank, it admits a left inverse by 7. Similarly, C admits a right inverse. If A is a symmetric matrix, then show that the algebraic multiplicity of any eigenvalue of A equals its geometric multiplicity. If A is a symmetric matrix, what would be a natural way to define matrices sinA and cosA? Let A be a symmetric, nonsingular matrix. Show that A is positive semidefinite.

What can you say about the rank of A? Show that the set is a vector space and find a basis for the space. Show that the set is a vector space and find its dimension. Can we conclude that A must be the zero matrix? If rows i1, … Let A be a square matrix. Let A be a square matrix with all row sums equal to 1.

Now obtain a relationship between the characteristic polynomials of AB and BA. Conclude that the nonzero eigenvalues of AB and BA are the same. The Cayley–Hamilton theorem asserts that A satisfies its characteristic equation, i.e., if p(λ) = det(λI − A) is the characteristic polynomial of A, then p(A) = 0.
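
Here is a quick numerical check of the Cayley–Hamilton theorem (illustrative only, with an arbitrary 2 × 2 matrix): numpy.poly returns the coefficients of the characteristic polynomial, and evaluating that polynomial at A by Horner's rule gives the zero matrix up to rounding.

```python
import numpy as np

A = np.array([[2.0, 1], [0, 3]])

coeffs = np.poly(A)            # coefficients of det(lambda*I - A), leading term first

# Evaluate p(A) with matrix powers via Horner's rule.
pA = np.zeros_like(A)
for c in coeffs:
    pA = pA @ A + c * np.eye(2)

print(np.round(pA, 10))        # the zero matrix: A satisfies its characteristic equation
```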

Prove the theorem for a diagonal matrix. Then prove the theorem for any symmetric matrix. Further, A is positive definite if B has full column rank. Then A is positive definite if and only if it is nonsingular. Then prove that A is nonsingular. If A is symmetric, then show that R(A) equals the number of nonzero eigenvalues of A, counting multiplicity. If A is a symmetric matrix of rank r, then prove that A has a principal submatrix of order r that is nonsingular.

Show that AB has only real eigenvalues. If A is positive semidefinite, then show that AB has only nonnegative eigenvalues. Let S, T be subspaces of R^n. Show that C is positive semidefinite. Show that Y must be positive semidefinite. The first one follows similarly. Section 5, Problem 1. Answer: only (ii) defines an inner product.

Section 7, Problem 1. Now deduce the result from the Frobenius inequality. Section 8, Problem 1. Section 9, Problem 4. Let B be the submatrix of A formed by rows i1, …, ir. Then the corresponding column of A is not a linear combination of the columns of C. This is a contradiction, since the columns of C form a basis for the column space of A.

If A, B, C, D are all singular, the result is trivial. So assume, without loss of generality, that A is nonsingular. Then the rank of A as well as that of the partitioned matrix being n, the last n columns are linear combinations of the first n. The general case is proved by induction. We now give a proof. Thus AB, BA have the same characteristic polynomial and hence the same eigenvalues.

This follows from the more general fact that the roots of a polynomial are continuous functions of its coefficients.

Now, if A is singular, we may construct a sequence of nonsingular matrices with limit A and use a continuity argument. See the next exercise for a different proof. Hint: Use the spectral theorem to deduce the general case from the diagonal case.

It follows that A is nonsingular. Hint: use the two preceding exercises. If A is positive semidefinite, then so is B^{1/2} A B^{1/2}, and it has only nonnegative eigenvalues. Then there exists a basis x1, …, xn. We show that x1, … We now show that the set is linearly independent. Hence the set is linearly independent and the proof is complete. The first r rows of B are identical to rows i1, …, ir. Hint: first suppose B has rank one. Then there exist u1, … The general case is obtained using the spectral decomposition of B.

Now use the preceding two exercises. Now use the preceding exercise. Otherwise, A has infinitely many g-inverses, as we will see shortly. Then the following conditions are equivalent: (i) G is a g-inverse of A. In particular, if we let z be the i-th column of the identity matrix, then we see that the i-th columns of AGA and A are identical. Alternatively, if A has rank r, then by 7. … This also shows that any matrix that is not a square nonsingular matrix admits infinitely many g-inverses.

Another method that is particularly suitable for computing a g-inverse is as follows. Let A be of rank r. Just multiply AGA out. Let G be a g-inverse of A. Furthermore, equality holds if and only if G is reflexive. That completes the proof. The reason for this terminology will be clear from the next result.

Then the following conditions are equivalent: (i) G is a minimum norm g-inverse of A. In view of (1) … Inserting this in (2) we get (1). Then the following conditions are equivalent: (i) G is a least squares g-inverse of A. Inserting this in (5) we get (4). Then, according to (1) … If G is a reflexive g-inverse of A that is both minimum norm and least squares, then it is called a Moore–Penrose inverse of A.
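
For computation, numpy.linalg.pinv returns the Moore–Penrose inverse. The sketch below (my own, with an arbitrary rank-deficient matrix) verifies the defining properties: G is a reflexive g-inverse of A, and both AG and GA are symmetric, which is what makes G simultaneously a least squares and a minimum norm g-inverse.

```python
import numpy as np

A = np.array([[1.0, 2, 0], [2, 4, 0]])        # rank 1, so not invertible
G = np.linalg.pinv(A)                         # Moore-Penrose inverse

print(np.allclose(A @ G @ A, A))              # G is a g-inverse of A
print(np.allclose(G @ A @ G, G))              # reflexive
print(np.allclose((A @ G).T, A @ G))          # AG symmetric (least squares property)
print(np.allclose((G @ A).T, G @ A))          # GA symmetric (minimum norm property)
```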

We first show uniqueness. Suppose G1, G2 both satisfy (6). Each of the following steps follows by applying (6). The terms that are underlined are to be reinterpreted to get the next step each time. We now show the existence. Exercise: find the Moore–Penrose inverse of a given matrix. We call y a random vector if each yi is a random variable. The expected value E(y) is the vector whose i-th component is E(yi). The dispersion matrix, or the variance-covariance matrix of y, denoted by D(y), is defined to be cov(y, y).

The dispersion matrix is obviously symmetric. Since variance is nonnegative, we conclude that D(x) is positive semidefinite. We now introduce the concept of a linear model. Suppose we conduct an experiment that gives rise to the random variables y1, …, yn. We make the assumption that the distribution of the random variables is controlled by some, usually small, number of unknown parameters.

We also assume that y1, …, yn … We do not make any further assumptions about the distribution of y at present. Determine their variances and the covariance between the two. This is seen as follows. This implies that the spaces must be equal. Note that the matrices U, V are not necessarily unique. The statement has a converse, which we will establish in Chapter 6.

For such models the following results can easily be verified. Parts (iii) and (iv) constitute the Gauss–Markov theorem. Describe the estimable functions.
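
A hedged sketch of the standard setting (a simulated example of my own, not the text's data): in the model y = Xβ + ε with zero-mean, uncorrelated, equal-variance errors and X of full column rank, the least squares estimator is the BLUE by the Gauss–Markov theorem; here it is computed with numpy.linalg.lstsq.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta = np.array([2.0, -1.5])                             # "true" parameters (simulated)
y = X @ beta + rng.normal(scale=0.3, size=n)             # y = X beta + error

# Least squares estimate of beta; by the Gauss-Markov theorem it is the BLUE
# under the standard assumptions stated above.
b, rss, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(b)                     # close to [2.0, -1.5]
print(rss)                   # residual sum of squares (RSS)
```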

Furthermore, if A is positive definite, then equality holds in the above inequality if and only if A is a diagonal matrix. So suppose A is nonsingular. If A is positive definite and equality holds in the inequality, then it must hold in the arithmetic mean–geometric mean inequality in the proof above. Then A must be diagonal. The result follows by 4. Suppose four objects are to be weighed using an ordinary chemical balance without bias, with two pans.

We are allowed four weighings. In each weighing we may put some of the objects in the right pan and some in the left pan. Any procedure that specifies this allocation is called a weighing design. Let yi denote the weight needed to achieve balance in the ith weighing. If the sign of yi is positive, then the weight is required in the left pan, otherwise in the right pan. This is the D-optimality criterion, which we will encounter again in Chapter 5 in the context of block designs.
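
A small sketch of such a design (my own construction by the Sylvester doubling trick, which may differ from the matrix labelled (8) in the text): each row of the design matrix records one weighing, with +1 for one pan and −1 for the other. For this matrix X^T X = 4I, and by the Hadamard inequality this maximizes det(X^T X) among 4 × 4 matrices with entries ±1, which is the D-optimality property mentioned above.

```python
import numpy as np

# Sylvester construction: H_{2n} = [[H_n, H_n], [H_n, -H_n]].
H2 = np.array([[1, 1], [1, -1]])
H4 = np.block([[H2, H2], [H2, -H2]])        # a 4 x 4 Hadamard matrix

print(H4)
print(H4.T @ H4)                            # 4 * identity: the columns are orthogonal
print(np.linalg.det(H4.T @ H4))             # 256 = 4**4, the largest value attainable
```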

The matrix (8) is a Hadamard matrix. These properties will be useful. Find the RSS. Is it correct to say that the grand mean y.. … Then by 6. We find the RSS. This model arises when we want to compare k treatments. We have ni observations on the i-th treatment. Instead of writing the model in standard form, we follow a different approach. This is easily proved using calculus.

However, again there is a more elementary way. This will be achieved in the next chapter. Let A be a matrix and let G be a g-inverse of A. Show that A has a g-inverse of rank k. Conclude that a square matrix has a nonsingular g-inverse. Find the g-inverse of x that is closest to the origin. Is it true that any positive semidefinite matrix is the dispersion matrix of a random vector? Prove that the BLUE of an estimable function is unique.

Consider the data in Table 2. In Example 6. … In the standard linear model setup, suppose the error space (the space of linear functions of y with expectation zero) is one-dimensional, and let z, a linear function of the observations, span the error space. Let X1, …

What can you conclude about cov(Xi, Xj) for any i, j? Section 6, Problem 1. Using 7. … Then G is a g-inverse of A of rank k. Hint: by a suitable transformation, reduce the problem to the case where A11, A22 are both diagonal matrices. Then use the Hadamard inequality. Therefore, n is even. A similar analysis using the third row, which must be orthogonal to the first as well as the second row, shows that n must be divisible by 4.

It is strongly believed that, conversely, when n is divisible by 4 there exists a Hadamard matrix of order n; however, no proof has been found. We now prove the converse. The following result will be used; its proof follows by expanding the determinant along a column several times. Thus A cannot have a nonpositive eigenvalue, and therefore A is positive definite. Combining this observation with 8. …

Then A is positive definite if and only if all principal minors of A are positive. Similarly, a symmetric matrix is positive semidefinite if and only if all its principal minors are nonnegative. If a principal submatrix of A and its Schur complement in A are positive definite, then A is positive definite. Then the right-hand side of (3) is positive definite, and it follows that A is positive definite, since S defined in (i) is nonsingular. We are ready to obtain yet another characterization of positive definite matrices.

Then A is positive definite if and only if all leading principal minors of A are positive. Clearly, if A is positive definite, then all its leading principal minors are positive. We prove the converse by induction.

Let A be partitioned as in 4. Since any leading principal minor of B must be positive, by the induction assumption B is positive definite.

Thus by 1. Thus each eigenvalue of A is greater than or equal to 1. Let B be the Schur complement of a11 in A. Show that any real eigenvalue of A must be positive. Let A be a symmetric matrix. If every leading principal minor of A is nonnegative, can we conclude that A is positive semidefinite?
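
A numerical sketch of both points (my own examples): the first matrix has all leading principal minors positive and is positive definite, while the second has all leading principal minors nonnegative (both equal to 0) yet is not positive semidefinite, which answers the last question in the negative.

```python
import numpy as np

def leading_principal_minors(A):
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A = np.array([[2.0, -1, 0], [-1, 2, -1], [0, -1, 2]])
print(leading_principal_minors(A))           # approximately [2, 3, 4]: all positive
print(np.all(np.linalg.eigvalsh(A) > 0))     # True: A is positive definite

B = np.array([[0.0, 0], [0, -1]])
print(leading_principal_minors(B))           # [0.0, -0.0]: all nonnegative
print(np.linalg.eigvalsh(B))                 # contains -1: B is not positive semidefinite
```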

Give a proof of the Hadamard inequality (see 4). The Jacobian is clearly 1. Then it reduces to the integral of a standard normal density and therefore equals 1. Hint: find the characteristic function of By. We now show that the converse is also true. Then Ay, By are independent. Observe that by 2. So by 2. Substitute these expressions in the density function of y given in (7) and then divide by the marginal density of y1.

Let X1, X2 be a random sample from a standard normal distribution. Proof. Therefore, the right-hand side also must have the same roots. Therefore, A is idempotent with rank r. Then by 2. Again, by 3. The necessity can also be shown to be true without this assumption. The proof employs characteristic functions and is more complicated.
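
As a hedged illustration of the role idempotent matrices play in this chapter (my own simulation, assuming y has independent standard normal components): if A is symmetric and idempotent with rank r, the quadratic form y'Ay is distributed as chi-square with r degrees of freedom, so its sample mean is close to r.

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric idempotent matrix of rank 2: projection onto the column space of X.
X = rng.normal(size=(5, 2))
A = X @ np.linalg.inv(X.T @ X) @ X.T
print(np.allclose(A @ A, A), np.linalg.matrix_rank(A))   # True, 2

# Monte Carlo check: y'Ay for standard normal y has mean r = 2 (chi-square, 2 df).
y = rng.normal(size=(100_000, 5))
q = np.einsum('ij,jk,ik->i', y, A, y)
print(q.mean())                                          # close to 2
```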

Linear Algebra and Linear Models. Provides a concise and rigorous introduction to linear algebra from the matrix theory viewpoint, well suited for statistical applications. Offers a compact introduction to estimation and testing in linear models, covering the basic results required for further studies in linear models, multivariate analysis, and design of experiments. Contains a large number of exercises, including over seventy-five problems on rank, with hints and solutions.

Vector Spaces and Subspaces. Rank, Inner Product and Nonsingularity. Eigenvalues and Positive Definite Matrices. Generalized Inverses. Inequalities for Eigenvalues and Singular Values. Rank Additivity and Matrix Partial Orders.

Linear Estimation. Tests of Linear Hypotheses.



