# The strong Popov form of nonlinear input--output equations

1. INTRODUCTION

The representation of nonlinear input--output (i/o) equations in the strong Popov form is a good starting point both for theoretical studies and for implementation, since such a representation makes the computation of shift operators explicit. The strong Popov form simplifies the definition of the field of functions associated with the nonlinear control system, and it also simplifies software development.

This paper builds on two earlier papers [9] and [1] that address the transformation of the set of equations into the row-reduced and the strong row-reduced form, respectively. These transformations are equivalence transformations that, by definition, do not change the zeros (solutions) of the system equations. Note that in [9] only linear transformations, defined over a field of certain functions, are applied; the field elements are meromorphic functions of the system output and input variables and their shifts. Unfortunately, though linear transformations suffice to transform the system equations into the row-reduced form, they do not always result in the strong row-reduced form. The reason is that the concept of the row-reduced form in the nonlinear case relates to the linearized system equations rather than to the original equations themselves. The paper [1] extends the results of [9] by allowing nonlinear equivalence transformations.

In this paper the same tools as in [9,1] are applied to transform the system equations into the (strong) Popov form. First, the linearized system description $P(Z)\mathrm{d}y + Q(Z)\mathrm{d}u = 0$ in terms of two polynomial matrices $P$ and $Q$ will be found. Under the assumption that the matrix $\bar P$, closely related to $P$, is in the row-reduced form (and the respective i/o equations are in the strong row-reduced form), $\bar P$ will be transformed into the Popov form by multiplying it from the left by a certain unimodular matrix. Then the obtained unimodular transformation matrix is applied to the original system equations. This approach works well in many situations. However, there exist numerous examples where this method leads to equations in the Popov form that contain higher-order shifts of the output variables than allowed by the definition of the strong Popov form. The reason is that the Popov form is actually a property of the linearized equations and not of the equations themselves, whereas the strong Popov form is a property of the equations. In particular, the set of i/o equations is said to be in the Popov form if and only if its linearization is in the Popov form. However, in general, the Popov form of the linearized equations cannot be easily translated back to the system equations, i.e. to the strong Popov form. When the resulting equations are not in the strong Popov form, one has to look for nonlinear transformations, as in [1].

For linear time-varying systems the transformation of the system equations into the Popov form has been addressed in numerous papers (see for instance [4,6,10] and the references therein). Though the general ideas in our approach are similar to those used in the theory of linear time-varying systems, the application of results from the linear time-varying theory requires special attention and is not, in most cases, directly realizable. A few issues need additional attention. First, whereas the coefficients of the polynomials in the linear time-varying case belong to the field $\mathbb{R}(t)$, those in the nonlinear case are from a field of meromorphic functions in the independent system variables. Moreover, as a result of the computations the dependent variables may show up, and these have to be expressed via the independent ones; otherwise the presented approach does not yield correct results. Second, in the construction of the respective field in the nonlinear case, a multiplicative set $S$ has been introduced, depending on the system variables. This means that certain expressions in the system variables, belonging to this set, are not allowed to be equal to zero. Third, transforming an arbitrary nonlinear system into the strong Popov form may require nonlinear transformations, as mentioned earlier. In this case it can happen that one is able to transform the system equations into the strong Popov form only locally, i.e. in different domains the equations may take different forms.

This paper is an extension of the conference paper [2]. Compared to [2], the following additions are made. (i) The algorithm for the Popov form has been improved: it now also constructs a set $S$ of inequations, ensuring that certain expressions are nonzero. (ii) The motivation behind the strong Popov form has been discussed in more detail; a comparison of the Popov and strong Popov forms has been added, as well as the algorithm transforming the set of i/o equations into the strong Popov form. (iii) The examples have been elaborated.

The paper is organized as follows. Section 2 introduces essential algebraic structures, related to the set of i/o equations. In Section 3 the Popov form and the strong Popov form of the set of i/o equations are discussed, whereas two examples are presented in Subsection 3.4. Section 4 draws the conclusions.

2. PRELIMINARIES

2.1. System of implicit i/o equations

We first recall the concepts and the language introduced in [9]. Consider the infinite sequences $Y = (\ldots, y(-1), y(0), y(1), y(2), \ldots)$ and $U = (\ldots, u(-1), u(0), u(1), u(2), \ldots)$, where $y(k) \in \mathbb{R}^p$ and $u(k) \in \mathbb{R}^m$ for $k \in \mathbb{Z}$. We think of the components of $Y$ and $U$ as independent variables. Let $A$ be the set of all real-valued analytic functions depending on finitely many elements of $Y$ and $U$; each function may depend on different elements of $Y$ and $U$. Then $A$ is a ring with respect to addition and multiplication. Let $\delta : A \to A$ be defined as follows: $\delta y_i(k) = y_i(k+1)$, $\delta u_i(k) = u_i(k+1)$, and for $\varphi \in A$, $(\delta\varphi)(Y,U) := \varphi(\delta Y, \delta U)$, where the $k$th entries of the sequences $\delta Y$ and $\delta U$ are $y(k+1)$ and $u(k+1)$, respectively. Then $A$ is a difference ring with the difference operator $\delta$. Observe that $\delta$ is injective and onto, so it is an automorphism. Moreover, the $k$th entries of $\delta^{-1}Y$ and $\delta^{-1}U$ are $y(k-1)$ and $u(k-1)$, respectively.
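As a quick illustration (not part of the paper's development), the action of $\delta$ on elements of $A$ can be modelled in a few lines of code. In this hypothetical sketch a polynomial function of finitely many variables $y_i(k)$, $u_j(k)$ is stored as a dictionary mapping monomials to real coefficients, and $\delta^s$ simply adds $s$ to every time index:

```python
# Illustrative sketch (not from the paper): the difference operator delta acting
# on polynomial elements of the ring A. A monomial is a tuple of
# ((name, component, time_index), exponent) pairs; a "function" maps monomials
# to real coefficients. delta^s shifts every time index by s, s = -1 giving
# the inverse operator delta^{-1}.

def shift(func, s=1):
    """Apply delta^s to func; coefficients are unchanged, time indices move by s."""
    out = {}
    for mono, coeff in func.items():
        shifted = tuple(((name, comp, k + s), e) for (name, comp, k), e in mono)
        out[shifted] = coeff
    return out

# Example: phi = 2 * y_1(0) * u_1(2), so delta(phi) = 2 * y_1(1) * u_1(3)
phi = {((('y', 1, 0), 1), (('u', 1, 2), 1)): 2.0}
assert shift(phi) == {((('y', 1, 1), 1), (('u', 1, 3), 1)): 2.0}
# delta is an automorphism: delta^{-1}(delta(phi)) = phi
assert shift(shift(phi), -1) == phi
```

The dictionary model is only a convenience; the point is that $\delta$ relabels indices and never touches real coefficients, which is why it is bijective on $A$.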

Let $S$ be a multiplicative subset of the ring $A$; that is, $1 \in S$, $0 \notin S$, and if $a$ and $b$ belong to $S$, so does $ab$. We shall assume that $S$ is invariant with respect to both $\delta$ and $\delta^{-1}$. Then $S^{-1}A$ denotes the localization of the ring $A$ with respect to $S$; it consists of meromorphic functions whose denominators belong to $S$. Observe that $S^{-1}A$ is an inversive difference ring with the difference operator $\delta$ and, via the natural injection $\alpha \mapsto \alpha/1$, $S$ may be interpreted as a subset of $S^{-1}A$.

Consider a discrete-time multi-input multi-output nonlinear system, described by a set of implicit higher-order difference equations relating the inputs $u_k$, $k = 1,\ldots,m$, the outputs $y_i$, $i = 1,\ldots,p$, and a finite number of their time shifts:

$\varphi_i\bigl(y_j(t), y_j(t+1), \ldots, y_j(t+n_{ij}),\; u_k(t+1), \ldots, u_k(t+s_{ik})\bigr) = 0, \qquad (1)$

where $i,j = 1,\ldots,p$, $k = 1,\ldots,m$, and $\varphi_i$ is a real meromorphic function belonging to $S^{-1}A$, defined on an open and dense subset of $\mathbb{R}^{(n+1)(p+m)}$. If the $i$th equation does not depend on $y_j(t+l)$, $l = 0,1,\ldots$, we set $n_{ij} := -\infty$. Let $n_i$ be the highest output shift in the $i$th equation, i.e. $n_i := \max_j n_{ij}$, and let $n$ be the highest output shift of the system, i.e. $n := \max_{i,j} n_{ij}$. Hereinafter we write $\xi$ for a variable $\xi(t)$ and $\xi^{[k]}$ for its $k$-step time shift $\xi(t+k)$, $k \in \mathbb{Z}$.

For system (1) we define the matrix

[mathematical expression not reproducible]

and the integers $m_i := n - n_i$ for $i = 1,\ldots,p$. By $\mathrm{diag}\{\delta^{m_1},\ldots,\delta^{m_p}\}$ we mean the diagonal operator matrix with the elements $\delta^{m_1},\ldots,\delta^{m_p}$ on the main diagonal. The matrix $M$, obtained by applying this diagonal operator matrix to the matrix above, is called the leading coefficient matrix of system (1).

Definition 1. The set of i/o equations (1) is said to be in the strong row-reduced form if its leading coefficient matrix has full rank.

Assumption 1. In this paper it is assumed that system (1) is in the strong row-reduced form.

Note that an arbitrary implicit system of the form (1) can be transformed into the strong row-reduced form as shown in [9,1], though an analytic transformation may not always exist.

Let us recall that an ideal I of a commutative ring A is a subset of A with the property that if a, b [member of] I, then a + b [member of] I and if c [member of] A and a [member of] I, then ca [member of] I. If A is a difference ring, then an ideal I of A is a difference ideal of A if it is closed with respect to the difference operator.

Let $\Phi = \{\varphi_1,\ldots,\varphi_p\}$ be a finite subset of $S^{-1}A$; $\Phi$ may be interpreted as a system of implicit i/o equations. Let $\langle\langle\Phi\rangle\rangle_S$ denote the smallest ideal of $S^{-1}A$ that contains all the shifts $\delta^k(\varphi_i)$ for $i = 1,\ldots,p$ and $k \in \mathbb{Z}$, i.e. the forward and backward shifts of the $\varphi_i$. Observe that $\langle\langle\Phi\rangle\rangle_S$ is a difference ideal of $S^{-1}A$ and $\delta(\langle\langle\Phi\rangle\rangle_S) = \langle\langle\Phi\rangle\rangle_S = \delta^{-1}(\langle\langle\Phi\rangle\rangle_S)$. Note that $\Phi$ may also be considered as a subset of $\tilde S^{-1}A$ for some other multiplicative set $\tilde S$; for that reason we keep $S$ in the notation of the ideal $\langle\langle\Phi\rangle\rangle_S$. We make the following assumption:

Assumption 2. The ideal $\langle\langle\Phi\rangle\rangle_S$ is prime, i.e. if $\alpha,\beta \in S^{-1}A$ and $\alpha\beta \in \langle\langle\Phi\rangle\rangle_S$, then $\alpha \in \langle\langle\Phi\rangle\rangle_S$ or $\beta \in \langle\langle\Phi\rangle\rangle_S$, and proper, i.e. different from the entire ring.

The properness of the ideal $\langle\langle\Phi\rangle\rangle_S$ is equivalent to the condition $S \cap \langle\langle\Phi\rangle\rangle_S = \emptyset$. In particular, the numerators of the $\varphi_i$ do not belong to $S$. Let $S^{-1}A/\langle\langle\Phi\rangle\rangle_S$ be the quotient ring; it consists of the cosets $\bar\varphi = \varphi + \langle\langle\Phi\rangle\rangle_S$ for $\varphi \in S^{-1}A$. Addition and multiplication in this new ring are defined by $\bar\varphi + \bar\psi := \overline{\varphi+\psi}$ and $\bar\varphi \cdot \bar\psi := \overline{\varphi\psi}$; these definitions do not depend on the choice of a representative in a coset. In particular, $\bar\varphi_i = 0$ for $i = 1,\ldots,p$. Since, by Assumption 2, $\langle\langle\Phi\rangle\rangle_S$ is a prime ideal, $S^{-1}A/\langle\langle\Phi\rangle\rangle_S$ is an integral domain. Now we can redefine $\delta$ on $S^{-1}A/\langle\langle\Phi\rangle\rangle_S$ (denoted now by $\delta_\Phi$ to indicate its dependence on $\Phi$) as $\delta_\Phi\bar\varphi := \overline{\delta\varphi}$. This again is well defined: if $\bar\varphi = \bar\psi$, then $\varphi + \langle\langle\Phi\rangle\rangle_S = \psi + \langle\langle\Phi\rangle\rangle_S$, and since $\delta(\langle\langle\Phi\rangle\rangle_S) \subset \langle\langle\Phi\rangle\rangle_S$, it follows that $\overline{\delta\varphi} = \overline{\delta\psi}$. Moreover, the operator $\delta_\Phi$ is bijective, so $\delta_\Phi^{-1}$ is well defined on $S^{-1}A/\langle\langle\Phi\rangle\rangle_S$. Let $\mathcal{K}$ denote the field of fractions of the ring $S^{-1}A/\langle\langle\Phi\rangle\rangle_S$. As $\delta_\Phi$ can be naturally extended to the field of fractions, $\mathcal{K}$ is an inversive difference field with the difference operator $\delta_\Phi$.

Proposition 1. [9] Assume that $S$ and $\tilde S$ are multiplicative subsets of $A$ with $S \subset \tilde S$, both invariant with respect to $\delta$ and $\delta^{-1}$. Let $\Phi \subset S^{-1}A$ and let the ideal $\langle\langle\Phi\rangle\rangle_S$ be prime and proper. Let $\tilde S \cap \langle\langle\Phi\rangle\rangle_S = \emptyset$. Then (a) $S^{-1}A \subset \tilde S^{-1}A$, (b) the ideal $\langle\langle\Phi\rangle\rangle_{\tilde S}$ of $\tilde S^{-1}A$ is prime and proper, (c) there is a natural monomorphism of difference rings $\tau : S^{-1}A/\langle\langle\Phi\rangle\rangle_S \to \tilde S^{-1}A/\langle\langle\Phi\rangle\rangle_{\tilde S}$, and (d) $\tau$ may be extended to a monomorphism of the corresponding difference fields.

2.2. Non-commutative polynomials

The field of fractions $\mathcal{K}$ of $S^{-1}A/\langle\langle\Phi\rangle\rangle_S$, constructed above, and the shift operator $\delta_\Phi$ induce the ring of polynomials in a variable $Z$ over $\mathcal{K}$, denoted by $\mathcal{K}[Z;\delta_\Phi]$. A polynomial $a \in \mathcal{K}[Z;\delta_\Phi]$ is written as $a = a_\mu Z^\mu + \cdots + a_1 Z + a_0$, where $a_i \in \mathcal{K}$ for $0 \le i \le \mu$. The addition of polynomials from $\mathcal{K}[Z;\delta_\Phi]$ is standard. The multiplication is defined by the linear extension of the rules $Z \cdot a := (\delta_\Phi a)Z$ and $a \cdot Z := aZ$, where $a \in \mathcal{K}$ and $\delta_\Phi a$ means $\delta_\Phi$ evaluated at $a$ (so, for example, $Z \cdot y = y^{[1]}Z$). Observe that an element $a \in \mathcal{K}$ does not commute with $Z$ in general, so the ring $\mathcal{K}[Z;\delta_\Phi]$ is non-commutative. It is called the twisted polynomial ring and it satisfies both the left and right Ore conditions, i.e. it is an Ore ring [7]. Moreover, for polynomials $a$ and $b$ it holds that $\deg(a \cdot b) = \deg a + \deg b$. Finally, the action of the ring $\mathcal{K}[Z;\delta_\Phi]$ on the field $\mathcal{K}$ is defined by the linear extension of the formula $(aZ^k)(\varphi) := a\,\delta_\Phi^k(\varphi)$, where $a, \varphi \in \mathcal{K}$.
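The multiplication rule $Z \cdot a = (\delta_\Phi a)Z$ can be illustrated with a small sketch. Here, purely for illustration, the coefficients are taken from the commutative polynomial ring in $t$ with the shift $t \mapsto t+1$ playing the role of the automorphism, as in the linear time-varying case mentioned in the Introduction; the representation and function names are ours, not the paper's:

```python
# Illustrative sketch (not from the paper): twisted polynomial multiplication
# with the rule Z*a = sigma(a)*Z. Coefficients are polynomials in t, stored as
# dense lists (c[i] = coefficient of t^i); sigma is the shift t -> t+1. In the
# nonlinear case of the paper, sigma would be the automorphism delta_Phi.
from math import comb

def sigma(c):
    """Substitute t -> t+1 in the coefficient list c (binomial expansion)."""
    out = [0.0] * len(c)
    for i, a in enumerate(c):
        for j in range(i + 1):
            out[j] += a * comb(i, j)
    return out

def c_mul(a, b):
    """Ordinary product of two coefficient lists (polynomials in t)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def c_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0.0] * (n - len(a)), b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def skew_mul(p, q):
    """Multiply skew polynomials p, q (p[i] is the coefficient of Z^i),
    using (a Z^i)(b Z^j) = a * sigma^i(b) * Z^(i+j)."""
    out = [[0.0] for _ in range(len(p) + len(q) - 1)]
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            for _ in range(i):        # push Z^i through the coefficient b
                b = sigma(b)
            out[i + j] = c_add(out[i + j], c_mul(a, b))
    return out

Z = [[0.0], [1.0]]       # the polynomial Z
t = [[0.0, 1.0]]         # the degree-0 polynomial with coefficient t
assert skew_mul(Z, t)[1] == [1.0, 1.0]   # Z*t = (t+1)*Z: non-commutativity
assert skew_mul(t, Z)[1] == [0.0, 1.0]   # t*Z = t*Z
```

Since the coefficient field is a domain and $\sigma$ is an automorphism, leading coefficients cannot cancel in `skew_mul`, which is exactly why $\deg(a \cdot b) = \deg a + \deg b$ holds.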

Let $\mathcal{K}[Z;\delta_\Phi]^{p\times q}$ be the set of $p \times q$-dimensional matrices whose entries are polynomials in $\mathcal{K}[Z;\delta_\Phi]$.

Definition 2. For matrices from $\mathcal{K}[Z;\delta_\Phi]^{p\times q}$ we define the following elementary row operations: (1) interchange of rows $i$ and $j$; (2) multiplication of row $i$ by a non-zero scalar from $\mathcal{K}$; (3) replacement of row $i$ by itself plus any other row $j$ premultiplied by an arbitrary polynomial.

Observe that all elementary row operations are invertible and any elementary row operation on a matrix $W \in \mathcal{K}[Z;\delta_\Phi]^{p\times q}$ is equivalent to premultiplication (left multiplication) of $W$ by an appropriate invertible matrix from $\mathcal{K}[Z;\delta_\Phi]^{p\times p}$. A matrix $U \in \mathcal{K}[Z;\delta_\Phi]^{p\times p}$ is called unimodular if it has an inverse matrix $U^{-1} \in \mathcal{K}[Z;\delta_\Phi]^{p\times p}$ such that $UU^{-1} = U^{-1}U = I_p$, where $I_p$ is the identity matrix.

2.3. Linearized i/o equations

Our goal is to represent system (1) in terms of polynomials from $\mathcal{K}[Z;\delta_\Phi]$. For that purpose we apply the differential operator $\mathrm{d}$ to (1) to obtain $\sum_{j,l} (\partial\varphi_i/\partial y_j^{[l]})\,\mathrm{d}y_j^{[l]} + \sum_{k,l} (\partial\varphi_i/\partial u_k^{[l]})\,\mathrm{d}u_k^{[l]} = 0$ for $i = 1,\ldots,p$. Defining the polynomial matrices $P$ and $Q$ as in [9] enables us to rewrite (1) as

P(Z)dy + Q(Z)du = 0, (2)

where $P \in \mathcal{K}[Z;\delta_\Phi]^{p\times p}$ and $Q \in \mathcal{K}[Z;\delta_\Phi]^{p\times m}$ are polynomial matrices with entries $p_{ij} = \sum_l (\partial\varphi_i/\partial y_j^{[l]})Z^l$ and $q_{ik} = \sum_l (\partial\varphi_i/\partial u_k^{[l]})Z^l$. Equation (2) describes the (globally) linearized system associated with equations (1).

Let $\varphi \mapsto \bar\varphi$ denote the map $S^{-1}A \to S^{-1}A/\langle\langle\Phi\rangle\rangle_S$ (later we usually skip the denominator). If $p = \sum_i p_i Z^i$ with $p_i \in S^{-1}A$, then we define $\bar p := \sum_i \bar p_i Z^i$; this is a polynomial in $\mathcal{K}[Z;\delta_\Phi]$. For a matrix $P$ with elements $p_{ij}$ in $S^{-1}A[Z;\delta]$, we define the matrix $\bar P$ with entries $\bar P_{ij} := \bar p_{ij}$.

3. POPOV FORM

3.1. Linearized equations in the Popov form

Let us denote the $i$th row of a matrix $W \in \mathcal{K}[Z;\delta_\Phi]^{p\times q}$ by $w_{i*}$. For a non-zero row $w_{i*}$ we define its degree $\deg w_{i*} \equiv \sigma_i$ as the exponent of the highest power of $Z$ present in $w_{i*}$, $i = 1,\ldots,p$. If $w_{i*} \equiv 0$, we set $\sigma_i = -\infty$. The vector of row degrees is denoted by $\sigma := [\sigma_1,\ldots,\sigma_p]$. The column degrees $\tau_1,\ldots,\tau_q$ are defined as the row degrees of the matrix $W^T$. The degree of the matrix $W$ is defined as $\deg W := \max\{\sigma_1,\ldots,\sigma_p\}$. Let $N = \deg W$, $e = [1,\ldots,1]$, and $M = [m_1,\ldots,m_p] := Ne - \sigma$. By $Z^M$ we denote the diagonal $p \times p$ matrix with the diagonal elements $Z^{m_1},\ldots,Z^{m_p}$.

Definition 3. The matrix $L_{\mathrm{row}}(W)$ such that $Z^M W = L_{\mathrm{row}}(W)\,Z^N + \text{lower degree terms}$ is called the leading row coefficient matrix of $W$.

A matrix $W \in \mathcal{K}^{p\times q}$ is said to have full rank if $\operatorname{rank} W = \min(p,q)$.

Definition 4. [3,9,10] A polynomial matrix $W \in \mathcal{K}[Z;\delta_\Phi]^{p\times q}$ with non-zero rows is called row-reduced if its leading row coefficient matrix $L_{\mathrm{row}}(W)$ has full row rank over the field $\mathcal{K}$. If $W$ contains zero rows, then $W$ is called row-reduced if its submatrix consisting of the non-zero rows is row-reduced.
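For a matrix with constant (real) coefficients, on which the twist acts trivially, the row degrees, the leading row coefficient matrix, and the row-reducedness test of Definition 4 can be sketched as follows (an illustrative model, not the paper's implementation; an entry is a list of real coefficients of powers of $Z$):

```python
# Illustrative sketch (not from the paper): row degrees, L_row(W), and the
# row-reducedness test for a skew polynomial matrix with REAL coefficients.
# On real constants the twist acts trivially, so premultiplication by Z^{m_i}
# only shifts degrees and L_row picks out the leading coefficients.
NEG_INF = float('-inf')

def deg(poly):
    d = [i for i, c in enumerate(poly) if c != 0.0]
    return d[-1] if d else NEG_INF

def row_degree(row):
    return max(deg(e) for e in row)

def leading_row_matrix(W):
    """L_row(W)[i][j] = coefficient of Z^{sigma_i} in w_ij (0 if deg is lower)."""
    L = []
    for row in W:
        s = row_degree(row)
        L.append([e[int(s)] if s != NEG_INF and len(e) > s else 0.0 for e in row])
    return L

def rank(M):
    """Rank over the reals by Gaussian elimination."""
    M = [r[:] for r in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > 1e-12), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > 1e-12:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_row_reduced(W):
    nz = [row for row in W if row_degree(row) != NEG_INF]
    return rank(leading_row_matrix(nz)) == len(nz)

# [[Z^2, 1], [Z, Z]]: L_row = [[1, 0], [1, 1]], full rank -> row-reduced
W = [[[0.0, 0.0, 1.0], [1.0]], [[0.0, 1.0], [0.0, 1.0]]]
assert is_row_reduced(W)
# [[Z, Z], [2Z, 2Z]]: L_row = [[1, 1], [2, 2]], rank 1 -> not row-reduced
W2 = [[[0.0, 1.0], [0.0, 1.0]], [[0.0, 2.0], [0.0, 2.0]]]
assert not is_row_reduced(W2)
```

In the paper's setting the rank in `is_row_reduced` would have to be computed over the field $\mathcal{K}$ rather than over the reals; the structure of the test is the same.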

Definition 5. [3] A polynomial matrix $W \in \mathcal{K}[Z;\delta_\Phi]^{p\times q}$ is said to be in the weak Popov form if the leading row coefficient matrix of the submatrix formed from the non-zero rows of $W$ is in upper triangular form up to row permutations.

Definition 6. [8,10] A matrix $W \in \mathcal{K}[Z;\delta_\Phi]^{p\times q}$ is in the Popov form if $W$ is row-reduced with the rows sorted with respect to their degrees ($\sigma_1 \le \cdots \le \sigma_p$) and for every non-zero row $w_{i*}$ there is a column index $j_i$ (called the pivot index) such that

(I) $w_{ij_i}$ is monic;

(II) $\deg w_{ik} < \deg w_{ij_i}$, if $k < j_i$;

(III) $\deg w_{ik} \le \deg w_{ij_i}$, if $k \ge j_i$;

(IV) $\deg w_{kj_i} < \deg w_{ij_i}$, for all $k \ne i$;

(V) if $\deg w_{i*} = \deg w_{k*}$ and $i < k$, then $j_i < j_k$ (if the degrees of the rows are equal, then the pivot indices are increasing).

Proposition 2. [10] For any matrix $W \in \mathcal{K}[Z;\delta_\Phi]^{p\times q}$ there exists a unimodular matrix $U \in \mathcal{K}[Z;\delta_\Phi]^{p\times p}$ such that $U \cdot W$ is in the Popov form.

Remark 1. Every matrix in the Popov form is also in the weak Popov form, while the converse is not necessarily true. If we relax Definition 6 so that the condition $\sigma_1 \le \cdots \le \sigma_p$ is dropped and (I), (IV), and (V) are replaced by the requirement that the pivot indices are pairwise distinct ($j_i \ne j_k$ whenever $i \ne k$), then we obtain the definition of the weak Popov form.
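Under the same simplified real-coefficient model used above, the pivot index of a row (the leftmost column attaining the row degree, by conditions (II)-(III)) and the weak Popov test of Remark 1 can be sketched as follows (illustrative code, not from the paper):

```python
# Illustrative sketch (not from the paper): pivot indices and the weak Popov
# test for a skew polynomial matrix with real coefficients. By conditions
# (II)-(III), the pivot index of a non-zero row is the LEFTMOST column whose
# entry attains the row degree; the weak Popov form only requires that the
# pivot indices of the non-zero rows are pairwise distinct.
NEG_INF = float('-inf')

def deg(poly):
    d = [i for i, c in enumerate(poly) if c != 0.0]
    return d[-1] if d else NEG_INF

def pivot_index(row):
    s = max(deg(e) for e in row)
    if s == NEG_INF:
        return None                       # zero row: no pivot
    return next(j for j, e in enumerate(row) if deg(e) == s)

def is_weak_popov(W):
    pivots = [pivot_index(r) for r in W]
    pivots = [p for p in pivots if p is not None]
    return len(pivots) == len(set(pivots))

# [[Z, 1], [Z, Z]]: both pivots in column 0 -> not in the weak Popov form
assert not is_weak_popov([[[0.0, 1.0], [1.0]], [[0.0, 1.0], [0.0, 1.0]]])
# after row2 := row2 - row1: [[Z, 1], [0, Z-1]] has pivots 0 and 1 -> weak Popov
assert is_weak_popov([[[0.0, 1.0], [1.0]], [[0.0], [-1.0, 1.0]]])
```

The second assertion already previews the reduction idea of Algorithm 1: subtracting a suitable multiple of one row from another moves a clashing pivot to a new column.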

Definition 7. The set of i/o difference equations (1) is said to be in the Popov form if there is a multiplicative subset $\tilde S$ of $A$, $\tilde S \supseteq S$, such that the matrix $\bar P$ is in the Popov form over the corresponding field $\mathcal{K}$.

3.2. Algorithm: transforming the matrix into the Popov form

The following algorithm transforms the matrix $W$ into the Popov form under the assumption that the $p \times q$ matrix $W$ is row-reduced and its row degrees are non-decreasing: $0 \le \sigma_1 \le \sigma_2 \le \cdots \le \sigma_p$. The algorithm is a straightforward extension of the respective procedure for the linear time-varying case [10], except for the steps related to the construction of the set $S_0$.

Algorithm 1

Input: Matrix W

Output: Matrix $W$ in the Popov form and a unimodular matrix $U$ such that the returned $W$ equals $U$ applied to the input $W$

Step 1. $U := I_p$, $S_0 := \{1\}$

Step 2. i:=1

Step 3. Fix the column index $j_i$ such that $\deg w_{ij_i} = \sigma_i$ and $j_i$ is minimal among all such columns

Step 4. k:=1

Step 5. If $k \ne i$ And $\deg w_{kj_i} \ge \deg w_{ij_i}$ Then

(a) Find $\gamma$ and $r$ such that $w_{kj_i} = \gamma\, w_{ij_i} + r$ and $\deg r < \sigma_i$

(b) $E := I_p$

(c) $E_{k,i} := -\gamma$

(d) $W := E \cdot W$

(e) $U := E \cdot U$

(f) Add the denominators of the coefficients of $\gamma$ to $S_0$ (if they are not in $S_0$ already).

End If

Step 6. k:=k+1

Step 7. If $k \le p$ Then Goto Step 5

Step 8. i:=i+1

Step 9. If $i \le p$ Then Goto Step 3

Step 10. If condition (IV) of Definition 6 is not fulfilled, Then Goto Step 2

Step 11. Make the pivot elements of $W$ monic:

(a) Let $\alpha_1,\ldots,\alpha_p$ be the leading coefficients of the pivot elements $w_{1j_1},\ldots,w_{pj_p}$

(b) Let $D$ be the diagonal matrix with $1/\alpha_1,\ldots,1/\alpha_p$ on the main diagonal

(c) $W := D \cdot W$

(d) $U := D \cdot U$

(e) Add the functions $\alpha_1,\ldots,\alpha_p$ to $S_0$ (if they are not in $S_0$ already).

Step 12. Return W,U
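The core reduction of Algorithm 1 (Steps 5(a)-(e)) can be sketched in the simplified setting of real coefficients, where the twist acts trivially and right division reduces to ordinary monomial division. This is an illustrative model, not the paper's implementation; in particular it only iterates until the pivot indices are distinct, i.e. it computes the weak Popov form of Remark 3:

```python
# Illustrative sketch (not from the paper): the reduction of Algorithm 1,
# Steps 5(a)-(e), for REAL coefficients. An entry is a list of coefficients of
# powers of Z. Whenever two rows share a pivot column, the higher-degree row is
# reduced by a monomial multiple of the other (one division step at a time),
# until the pivot indices are distinct (the weak Popov form).
NEG_INF = float('-inf')

def deg(p):
    d = [i for i, c in enumerate(p) if c != 0.0]
    return d[-1] if d else NEG_INF

def pivot(row):
    s = max(deg(e) for e in row)
    return None if s == NEG_INF else next(j for j, e in enumerate(row) if deg(e) == s)

def axpy(row_k, row_i, coef, shift):
    """row_k := row_k - coef * Z^shift * row_i (real coefficients commute with Z)."""
    out = []
    for a, b in zip(row_k, row_i):
        n = max(len(a), len(b) + shift)
        c = a + [0.0] * (n - len(a))
        for d2, v in enumerate(b):
            c[d2 + shift] -= coef * v
        out.append(c)
    return out

def weak_popov(W):
    """Reduce W until the pivot indices of the non-zero rows are distinct."""
    W = [[e[:] for e in row] for row in W]
    while True:
        piv = {}
        for k, row in enumerate(W):
            j = pivot(row)
            if j is None:
                continue
            if j not in piv:
                piv[j] = k
                continue
            i = piv[j]
            if deg(W[i][j]) > deg(W[k][j]):   # let row i be the lower-degree one
                i, k = k, i
            gamma = W[k][j][deg(W[k][j])] / W[i][j][deg(W[i][j])]
            W[k] = axpy(W[k], W[i], gamma, deg(W[k][j]) - deg(W[i][j]))
            break                              # restart the scan after a reduction
        else:
            return W

W = [[[0.0, 1.0], [1.0]],          # [Z, 1]
     [[0.0, 1.0], [0.0, 1.0]]]     # [Z, Z]
R = weak_popov(W)
assert all(c == 0.0 for c in R[1][0])   # second pivot moved out of column 0
assert deg(R[1][1]) == 1                # it now sits in column 1 with degree 1
```

In the twisted case the quotient $\gamma$ of Step 5(a) would be a skew polynomial computed by right division, and its leading coefficient would involve shifts $\delta_\Phi^k$ of the divisor's leading coefficient; the bookkeeping of the set $S_0$ in Steps 5(f) and 11(e) records exactly the denominators produced there.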

When we start, some functions $\varphi_i$ in (1) may have denominators that, together with their forward and backward shifts, should be included in the set $S$. If all the functions are analytic, one may set $S := \{1\}$, meaning that $S^{-1}A = A$. Of course, the additional denominators that show up in the reduction algorithm should also be included in $S$, together with their shifts and powers; this concerns denominators both in the equivalence transformations and in the transformed matrix $W$. That is, we extend our initial $S$ by adding an infinite number of elements.

In this extension one has to be careful not to include in $S$ denominators that may cancel the functions $\varphi_i$. This is guaranteed by the use of the field $\mathcal{K}$ (in the algorithm), whose elements are fractions with non-zero denominators. Since we work with cosets with respect to $\langle\langle\Phi\rangle\rangle_S$, the denominators are represented by functions that do not belong to $\langle\langle\Phi\rangle\rangle_S$. Since we multiply the functions $\varphi_i$ by functions that are representatives of the cosets, the functions $\varphi_i$ cannot be cancelled (because they cannot appear in denominators). This explains why the property $S \cap \langle\langle\Phi\rangle\rangle_S = \emptyset$ must always be satisfied.

Remark 2. The infinite set $S$ can be compactly described by its generator set $S_0$. The set $S_0$ generates $S$ if each element of $S$ can be obtained from a finite number of elements of $S_0$ by applying a finite number of multiplications and backward and forward shifts to these elements. For the sake of simplicity, in the examples below we present the generator set $S_0$ rather than $S$ itself.

If the assumptions of Algorithm 1 are not fulfilled, the pivot indices [j.sub.i] can change during the execution of the algorithm. To avoid possible misunderstanding, we show that if the assumptions are satisfied, the pivot indices keep their values.

Lemma 1. Assume that the matrix $W$ is row-reduced and its row degrees satisfy $0 \le \sigma_1 \le \cdots \le \sigma_p$. Consider the transformation in Step 5 of Algorithm 1, which can be written as

$\tilde w_{k*} := w_{k*} - \gamma\, w_{i*}, \qquad (3)$

where $w_{k*}$ is the old $k$th row and $\tilde w_{k*}$ is the transformed row. Transformation (3) does not change the pivot indices that have already been fixed in Step 3.

Proof. One has to show that transformation (3) does not change the pivot indices $j_1,\ldots,j_{i-1},j_{i+1},\ldots,j_p$. Without loss of generality we assume in this proof that all fixed pivot indices are on the main diagonal, i.e. $j_l = l$ for $l = 1,\ldots,p$.

Case 1, $k < i$. We show that after applying transformation (3) to the matrix $W$ the pivot index of the row $w_{k*}$ is still $k$. First, we show that the transformed elements $\tilde w_{kv} := w_{kv} - \gamma w_{iv}$, $v = 1,\ldots,k-1$, satisfy the condition $\deg \tilde w_{kv} < \sigma_k$. Since $\sigma_1 \le \cdots \le \sigma_p$, transformation (3) occurs only if $\sigma_k = \sigma_i$, and thus $\deg w_{ki} = \deg w_{ii}$ and $\deg\gamma = 0$. Thus $\deg \tilde w_{kv} \le \max\{\deg w_{kv}, \deg w_{iv}\} < \sigma_k$. Second, we consider the element $\tilde w_{kk} := w_{kk} - \gamma w_{ik}$ and show that $\deg \tilde w_{kk} = \sigma_k$. The degree of $\tilde w_{kk}$ could be less than $\sigma_k$ only if the leading terms of $w_{kk}$ and $\gamma w_{ik}$ were equal, but this cannot happen, since $\deg w_{ik} < \sigma_i = \sigma_k = \deg w_{kk}$ and $\deg\gamma = 0$.

Case 2, $i < k$. Note that when passing the main loop (Steps 2-9) for the first time, the pivot indices are not yet fixed for the rows $w_{i+1,*},\ldots,w_{p*}$; thus this case concerns the second and further executions of the main loop. We show that after applying transformation (3) to the matrix $W$ the pivot index of the row $w_{k*}$ is still $k$. First, we show that the transformed elements $\tilde w_{kl} := w_{kl} - \gamma w_{il}$, $l = 1,\ldots,k-1$, satisfy the condition $\deg \tilde w_{kl} < \sigma_k$. The sum of two polynomials cannot have a degree higher than those of the addends. Thus,

$\deg \tilde w_{kl} \le \max\{\deg w_{kl},\, \deg(\gamma w_{il})\}. \qquad (4)$

Since the pivot index of $w_{k*}$ is $k$,

$\deg w_{kl} < \sigma_k, \qquad l = 1,\ldots,k-1. \qquad (5)$

The polynomial $\gamma$ is determined in Step 5(a) as the right quotient of $w_{ki}$ by $w_{ii}$, i.e. $w_{ki} = \gamma w_{ii} + r$ with $\deg r < \deg w_{ii}$. Then $\deg\gamma = \deg w_{ki} - \deg w_{ii}$ and thus

$\deg(\gamma w_{il}) \le \deg w_{ki} - \deg w_{ii} + \deg w_{il}. \qquad (6)$

Knowing that $\deg w_{ki} < \sigma_k$, $\deg w_{ii} = \sigma_i$, and $\deg w_{il} \le \sigma_i$ for $l = 1,\ldots,p$ allows us to deduce $\deg(\gamma w_{il}) < \sigma_k$. Due to (5) and (6), inequality (4) yields $\deg \tilde w_{kl} < \sigma_k$ for $l = 1,\ldots,k-1$. Second, we show that $\deg \tilde w_{kk} = \sigma_k$. By (3), $\tilde w_{kk} = w_{kk} - \gamma w_{ik}$, and since the pivot index of $w_{k*}$ is $k$, one knows that $\deg w_{kk} = \sigma_k$. One could get $\deg \tilde w_{kk} < \sigma_k$ only if the leading terms of $w_{kk}$ and $\gamma w_{ik}$ were equal, but this would mean that the rows of $L_{\mathrm{row}}(W)$ are dependent and thus $W$ is not row-reduced, which contradicts the assumption. □

Theorem 1. Assume that the matrix $W$ is row-reduced and its row degrees satisfy the inequalities $0 \le \sigma_1 \le \sigma_2 \le \cdots \le \sigma_p$. Then Algorithm 1 transforms $W$ into the Popov form.

Proof. Passing Steps 1-9 of the algorithm for the first time transforms the matrix $W$ into a form where the pivot indices $j_i$ are fixed for each row and conditions (II) and (III) of Definition 6 are fulfilled. Due to Lemma 1, once a pivot index is fixed, it cannot change later. Executing Steps 1-9 transforms the leading row coefficient matrix $L_{\mathrm{row}}(W)$ into the upper triangular form (up to permutation of the rows), so that the elements $L_{\mathrm{row}}(W)_{i,j_i}$, corresponding to the pivot indices, are non-zero, and those located to the left of them are zero. This means that $W$ is in the weak Popov form.

Step 10 checks whether condition (IV) is fulfilled. In the case of a negative answer, Steps 2-9 are repeated until condition (IV) is satisfied. Condition (V) of Definition 6 is guaranteed, since in Step 3 we choose $j_i$ as the smallest possible. Finally, Step 11 makes the pivot elements of $W$ monic. □

Remark 3. The weak Popov form can be obtained as an intermediate step of computing the Popov form. For the weak Popov form, we remove Step 10 from Algorithm 1. For the Popov form the outer loop (Step 2-Step 9) of the algorithm is passed at most p - 1 times, but for the weak Popov form these steps are passed just once.

3.3. Strong Popov form

The definition of the Popov form as given in Definition 7 is actually a property of the linearized i/o equations (2) (where the matrix $P$ is replaced by $\bar P$) and not a property of the original system equations (1). In particular, the set of i/o equations is said to be in the Popov form if the polynomial matrix $\bar P$ is in the Popov form. To transform the linearized i/o equations (2) into the Popov form, a linear i/o equivalence transformation, as constructed in Algorithm 1, is used. Such a transformation is defined in terms of a unimodular polynomial matrix over the difference field of meromorphic functions in the independent system variables, and the polynomial indeterminate may be interpreted as a forward shift operator. Finally, the unimodular matrix defines an operator that is applied to the system equations (1). The application of the unimodular matrix to the system equations corresponds to linear transformations of the equations; however, sometimes nonlinear transformations are necessary to bring the system into the explicit form, see Example 3. This approach works well in many situations. However, sometimes this method leads to equations in the Popov form that cannot be transformed into the explicit form. For this reason we introduce below the strong Popov form.

Definition 8. The set of i/o equations (1) is said to be in the strong Popov form if

(a) $0 \le n_1 \le n_2 \le \cdots \le n_p$;

(b) for each $\varphi_i$, $n_i \ge 0$, there exists a variable $y_{j_i}$ such that the following conditions hold: (i) $\partial\varphi_i/\partial y_{j_i}^{[n_i]} \ne 0$; (ii) $n_{ik} < n_i$, if $k < j_i$; (iii) $n_{ik} \le n_i$, if $k \ge j_i$; (iv) $n_{kj_i} < n_{ij_i}$, for all $k \ne i$; (v) if $n_i = n_k$ and $i < k$, then $j_i < j_k$.
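Conditions (a) and (ii)-(v) of Definition 8 depend only on the structural indices $n_{ij}$ and can be checked mechanically. The following sketch (our illustration, not from the paper) does so for a square table of indices; condition (i), which involves the partial derivatives of $\varphi_i$, cannot be decided from the indices alone and is assumed to hold:

```python
# Illustrative sketch (not from the paper): checking conditions (a) and
# (ii)-(v) of the strong Popov form from the structural indices n_ij of
# system (1). N is a p x p table with N[i][j] = n_ij (NEG_INF when phi_i does
# not depend on y_j at all); pivots[i] = j_i, 0-based. Condition (i) about the
# partial derivative of phi_i is assumed to hold.
NEG_INF = float('-inf')

def is_strong_popov(N, pivots):
    p = len(N)
    n = [max(row) for row in N]                        # n_i = max_j n_ij
    if any(ni < 0 for ni in n):                        # (b) needs n_i >= 0
        return False
    if any(n[i] > n[i + 1] for i in range(p - 1)):     # (a) sorted indices
        return False
    for i, j in enumerate(pivots):
        if any(N[i][k] >= n[i] for k in range(j)):     # (ii) strict left of pivot
            return False
        if any(N[i][k] > n[i] for k in range(j, p)):   # (iii) at most n_i elsewhere
            return False
        if any(N[k][j] >= N[i][j] for k in range(p) if k != i):  # (iv) pivot column
            return False
    for i in range(p - 1):                             # (v) equal n_i -> increasing j_i
        if n[i] == n[i + 1] and pivots[i] >= pivots[i + 1]:
            return False
    return True

# Shape of Example 1 below (p = 2, j_1 = 1, j_2 = 2), with the hypothetical
# indices n_11 = 1, n_12 = 0, n_21 = 0, n_22 = 2:
assert is_strong_popov([[1, 0], [0, 2]], [0, 1])
# Violating (iv): n_21 = 1 is not strictly less than n_11 = 1
assert not is_strong_popov([[1, 0], [1, 2]], [0, 1])
```

The concrete index values in the assertions are our own illustrative choices; they merely instantiate the inequality pattern of Example 1.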

Comparing Definitions 6 (Popov form) and 8 (strong Popov form) reveals that conditions (I)-(V) in Definition 6, regarding the degrees of polynomials, match conditions (i)-(v) in Definition 8, regarding the structural indices $n_{ij}$ of system (1). Though the definitions are analogous, it is important to note that the Popov form of system (1) is, by Definition 7, determined by the matrix $\bar P$, whose entries do not necessarily satisfy the condition $\deg \bar p_{ij} = n_{ij}$, but rather $\deg \bar p_{ij} \le n_{ij}$. Moreover, note that in the Popov form of the matrix $\bar P$ zero rows are allowed, while in the strong Popov form we require that all the indices satisfy $n_i > -\infty$. Thus the strong Popov form of system (1) implies the Popov form, but the converse is not true in general. An example of a system that is in the Popov but not in the strong Popov form can be found in Example 3 below.

If system (1) is in the strong Popov form, it can be represented in the explicit form

[mathematical expression not reproducible] (7)

where j = 1,... ,p,k=1,... ,m, and

[mathematical expression not reproducible]

Example 1. In the special case $p = m = 2$, assume, for the sake of simplicity, that $j_1 = 1$ and $j_2 = 2$. By Definition 8 this system is in the strong Popov form if the following conditions hold:

(a) since $j_i = i$, one has $n_1 := n_{11}$ and $n_2 := n_{22}$, which have to satisfy $n_1 \le n_2$;

(b) (i) [mathematical expression not reproducible] and [mathematical expression not reproducible]; (ii) $n_{21} < n_2$; (iii) $n_{12} \le n_1$; (iv) $n_{12} < n_2$ and $n_{21} < n_1$; (v) in the case $j_i = i$, the condition is always satisfied.

The following algorithm transforms the implicit i/o equations (1) into the strong Popov form using linear transformations, whenever possible.

Algorithm 2

Input: Implicit equations (1) in the strong row-reduced form

Output: System $\varphi_i = 0$ in the strong Popov form

Step 1. Find the matrix [mathematical expression not reproducible]

Step 2. Transform the row-reduced matrix P into the Popov form by Algorithm 1, i.e. multiply P from the left by the unimodular matrix [mathematical expression not reproducible]. Denote P := UP

Step 3. Apply U as an operator to system (1): [mathematical expression not reproducible]

Step 4. If the system $\varphi_i = 0$ is in the strong Popov form, i.e. Definition 8 is satisfied,

Then Return $\varphi_i$

Else Print: "The system cannot be transformed into the strong Popov form via linear transformations."

Proposition 3. An arbitrary nonlinear system (1) can be transformed (at least locally) into the strong Popov form (with possible zero rows), using either linear or nonlinear transformations.

3.4. Examples

Example 2. Consider the system

[mathematical expression not reproducible] (8)

Since no denominators appear in equations (8), we may set $S = S_0 := \{1\}$. The polynomial matrix

[mathematical expression not reproducible] (9)

The leading row coefficient matrix

[mathematical expression not reproducible] (10)

is of full rank; thus (8) is row-reduced. However, the conditions of Definition 6 are not fulfilled and therefore the system is not in the Popov form. The row degree vector of the matrix P is $\sigma = [1, 2, 3]$, so the row degrees are in non-decreasing order. Following Algorithm 1, we take the matrix P := P, $U = I_3$, and $i = 1$. Since $\sigma_1 = 1$, the pivot element of the 1st row is $p_{12}$, i.e. $j_1 = 2$. Next, the aim of Steps 4-6 is to make the degrees of all the elements below $p_{12}$ strictly less than $\sigma_1 = 1$. For the 2nd row ($k = 2$) we obtain $p_{22} = \gamma p_{12} + r$; consequently $\gamma = u_1 Z$, $r = 1$, and

[mathematical expression not reproducible] (11)

Since $\gamma$ has no denominators, the generator set $S_0 := \{1\}$ remains unchanged. Multiplying P = P by E from the left corresponds to multiplying the 1st row by $-\gamma$ and adding it to the 2nd row, resulting in

[mathematical expression not reproducible] (12)

We also let U := E · U = E. For $k = 3$ the degree condition is fulfilled, since $\deg p_{32} = -\infty < \sigma_1 = 1$; therefore the Then-part in Step 5 is skipped.
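The right division $p_{22} = \gamma p_{12} + r$ used above is division in a skew polynomial ring: the indeterminate does not commute with the coefficients but acts on them by the forward shift. The following sketch is our own minimal model of this operation (all names are illustrative and not from the paper): coefficients are functions of $t$, the commutation rule is $Z \cdot c(t) = c(t+1) \cdot Z$, and the paper's field of meromorphic functions is mimicked by pointwise arithmetic on such functions.

```python
# Minimal model of right division in the skew polynomial ring K[Z; sigma],
# sigma the forward shift.  A polynomial is a list [c0, c1, ..., cd] of
# coefficient functions of t, standing for c0 + c1*Z + ... + cd*Z^d.

def shift(c, k=1):
    """sigma^k applied to a coefficient: (sigma^k c)(t) = c(t + k)."""
    return lambda t, c=c, k=k: c(t + k)

def skew_mul(a, b):
    """Product using the commutation rule Z * c = sigma(c) * Z."""
    zero = lambda t: 0.0
    res = [zero] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            # (ai Z^i)(bj Z^j) = ai * sigma^i(bj) * Z^(i+j)
            res[i + j] = (lambda t, f=res[i + j], g=ai, h=shift(bj, i):
                          f(t) + g(t) * h(t))
    return res

def skew_rdivmod(a, b):
    """Right division a = q*b + r, deg r < deg b (lc(b) nonvanishing)."""
    zero = lambda t: 0.0
    q = [zero] * max(len(a) - len(b) + 1, 0)
    A = list(a)
    while len(A) >= len(b):
        k = len(A) - len(b)
        # leading coefficient of (gamma Z^k) * b must match lc(A):
        # gamma = lc(A) / sigma^k(lc(b))
        gamma = (lambda t, la=A[-1], lb=shift(b[-1], k): la(t) / lb(t))
        q[k] = gamma
        for j in range(len(b) - 1):          # subtract (gamma Z^k) * b
            A[k + j] = (lambda t, f=A[k + j], g=gamma, h=shift(b[j], k):
                        f(t) - g(t) * h(t))
        A.pop()                              # the top term cancels exactly
    return q, A                              # quotient, remainder
```

For instance, dividing $Z^2$ on the right by $f(t)Z + 1$ with $f(t) = t + 2$ gives the top quotient coefficient $\gamma_1(t) = 1/f(t+1)$, and the identity $a = qb + r$ can be verified pointwise for any sample values of $t$.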

Taking $i = 2$ yields $j_2 = 3$: the pivot element of the second row is $p_{23}$. For $k = 1$ the degree condition is fulfilled, but for $k = 3$ we obtain [mathematical expression not reproducible] and [mathematical expression not reproducible], yielding

[mathematical expression not reproducible].

Due to the division by [mathematical expression not reproducible], we set $S_0 = \{1, u_1\}$. The matrix

[mathematical expression not reproducible]

For $i = 3$ the pivot element is $p_{31}$. Since $\deg p_{11} = \deg p_{21} = -\infty < \deg p_{31} = 3$, the degree conditions are satisfied. We have reached Step 10, where we ascertain that condition (IV) of Definition 6 is not fulfilled and thus return to Step 2. For $i = 1$ and $j_1 = 2$, Steps 5 and 6 give the following results: if $k = 2$, the degree condition is fulfilled; if $k = 3$, then $\gamma = uZ$, $r = 0$, and thus

[mathematical expression not reproducible]

The transformed matrix P fulfils conditions (I)-(V) of Definition 6. Note that the row and column degrees of P now coincide when matched according to the positions of the pivot elements: $\sigma_1 = \tau_2 = 1$, $\sigma_2 = \tau_3 = 2$, $\sigma_3 = \tau_1 = 3$. The unimodular matrix is

[mathematical expression not reproducible]. (13)

The application of U to the vector function $\varphi = [\varphi_1, \varphi_2, \varphi_3]^T$ yields the system in the strong Popov form (1):

[mathematical expression not reproducible] (14)

To transform (8) into the strong Popov form (14), it is necessary that all the functions from the set S, generated by $S_0 = \{1, u_1\}$, are non-zero, see Remark 2. From (14) one can easily express the highest time-shifts of the output variables:

[mathematical expression not reproducible].

Example 3. Consider the model of a 27-tray binary distillation column operating in the high-purity regime [11], where $u_1$ is the molar reflux, $u_2$ is the steam flow rate (moles/minute), and $y_1$ and $y_2$ are temperatures (2):

[mathematical expression not reproducible]. (15)

The generator set is $S_0 := \{1\}$. The matrix

[mathematical expression not reproducible] (16)

and its leading row coefficient matrix

[mathematical expression not reproducible]

Thus, (16) is row-reduced and in the weak Popov form, but it is not in the Popov form, since condition (IV) of Definition 6 is not fulfilled. Indeed, for $i = 1$, $j_1 = 2$, $k = 2$ we obtain $\deg p_{22} = 2 \not< \deg p_{12} = 1$, whenever [mathematical expression not reproducible]. However, since P is in the weak Popov form and the pivot indices $j_1 = 2$, $j_2 = 1$ are fixed, system (15) can be rewritten in the form below:

[mathematical expression not reproducible] (17a)

[mathematical expression not reproducible] (17b)

The first equation is in the proper form while the second is not, since it involves [mathematical expression not reproducible]. Substituting [mathematical expression not reproducible] in (17b) by the right-hand side of the forward-shifted (17a) gives [mathematical expression not reproducible]. The latter equation still depends on [mathematical expression not reproducible], which again has to be substituted by the right-hand side of (17a). In this way one obtains the explicit equations

[mathematical expression not reproducible] (18a)

[mathematical expression not reproducible] (18b)

where [mathematical expression not reproducible].
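The repeated substitution used to pass from (17) to (18) — replace the highest remaining shift of the output via the forward-shifted explicit equation until none remains — is mechanical. Below is a toy sketch of that procedure using sympy; since the actual equations of (17) are not reproducible here, the sample system is our own invention, with the symbol `y1_k` standing for $y_1(t+k)$.

```python
import sympy as sp

def sym(name, k):
    """Symbol name_k stands for the variable `name` at time t + k."""
    return sp.Symbol(f"{name}_{k}")

def fshift(expr, step=1):
    """Forward shift: every v(t+k) in expr becomes v(t+k+step)."""
    rep = {}
    for s in expr.free_symbols:
        base, k = s.name.rsplit("_", 1)
        rep[s] = sym(base, int(k) + step)
    return expr.subs(rep, simultaneous=True)

def eliminate(expr, name, order, rhs):
    """Given the explicit equation name(t+order) = rhs, substitute away all
    shifts name(t+k) with k >= order in expr, highest shift first.
    Terminates when rhs involves `name` only at shifts below `order`."""
    while True:
        hits = [s for s in expr.free_symbols
                if s.name.startswith(name + "_")
                and int(s.name.rsplit("_", 1)[1]) >= order]
        if not hits:
            return sp.expand(expr)
        top = max(hits, key=lambda s: int(s.name.rsplit("_", 1)[1]))
        k = int(top.name.rsplit("_", 1)[1])
        expr = expr.subs(top, fshift(rhs, k - order))
```

For the toy explicit equation $y_1(t+1) = y_1(t)u(t)$ and a second equation containing $y_1(t+2)$, two substitution rounds are performed, exactly mirroring the two-stage substitution described for (17a)-(17b).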

The application of Algorithm 1 leads to the following transformation: for $i = 1$, $j_1 = 2$, and $k = 2$ we obtain from $p_{22} = \gamma p_{12} + r$ the right quotient [mathematical expression not reproducible], the right remainder [mathematical expression not reproducible], and the unimodular matrix

[mathematical expression not reproducible].

Assume that [mathematical expression not reproducible]. Multiplying U and P yields

[mathematical expression not reproducible].

The matrix P is in the Popov form. However, if one tries to find the transformed equations using the linear transformation as

[mathematical expression not reproducible] (19)

then one encounters a failure. Namely,

[mathematical expression not reproducible]

still depends on [mathematical expression not reproducible]; moreover, it also contains [mathematical expression not reproducible], which cannot be removed by any linear transformation (over the polynomial ring). Thus, (19) is in the Popov form (since P is in the Popov form), but not in the strong Popov form, since $n_{22} = 1 \not< n_1 = 1$ and condition (iv) of Definition 8 is not satisfied. This example illustrates the difficulties of adapting the theory of linear (time-varying) systems to nonlinear systems.

To transform equations (15) into the strong Popov form, one has to use nonlinear transformations in analogy with [1]. However, the simple structure of equations (15) enables one to find the nonlinear transformation almost by direct inspection. For system (15) the indices are $n_{11} = 0$, $n_{12} = 1$, $n_{21} = 3$, $n_{22} = 2$. For the strong Popov form it is necessary to make $n_{22}$ equal to zero. Our aim is to construct the nonlinear function [mathematical expression not reproducible] such that [mathematical expression not reproducible] does not depend on [mathematical expression not reproducible], i.e. for [mathematical expression not reproducible] one gets the index $n_{22} = 0$. In the first step, to remove the dependency on [mathematical expression not reproducible], note that the coefficient of [mathematical expression not reproducible]. Compute

[mathematical expression not reproducible],

which obviously does not depend on [mathematical expression not reproducible], i.e. for [mathematical expression not reproducible] the index $n_{22} = 1$. In the second step we remove the term [mathematical expression not reproducible]. Note that the coefficient of [mathematical expression not reproducible]. Compute [mathematical expression not reproducible]. The explicit expression of [mathematical expression not reproducible] is omitted due to its complexity, but we note that [mathematical expression not reproducible] does not involve [mathematical expression not reproducible] any more. We also note that [mathematical expression not reproducible] depends on [mathematical expression not reproducible] linearly, while the coefficient of [mathematical expression not reproducible] is [mathematical expression not reproducible]. In the third step, [mathematical expression not reproducible] is removed by the linear transformation [mathematical expression not reproducible], and thus for [mathematical expression not reproducible] the index $n_{22} = 0$. To conclude, the nonlinear transformation bringing equations (15) into the strong Popov form is

[mathematical expression not reproducible]

The transformed $\varphi_2$ is

[mathematical expression not reproducible]

The result coincides with (18b), obtained from the weak Popov form, when we express [mathematical expression not reproducible] as a function of the other variables.

4. CONCLUSIONS

In this paper two algorithms were developed for transforming a set of nonlinear higher-order i/o difference equations into the Popov and the strong Popov form, respectively, using equivalence transformations that, by definition, do not change the solutions of the system equations. The transformed system description includes, in general, besides the system equations, also a number of inequations that guarantee certain expressions to be non-zero. That is, unlike the typical approaches assuming a generic setting (see for instance [5]), in our case the open and dense subset where the results are valid is specified by a certain set of functions S, which is a byproduct of the developed algorithms. To summarize, our linear transformations are valid globally in the entire space with the zeros of the functions from the set S removed. The approach avoids using the implicit function theorem, which would yield only local results, as in [12] for continuous-time nonlinear systems. However, when one needs to apply nonlinear transformations, the strong Popov form is not necessarily defined globally.

As for future research directions, the strong Popov form allows, in principle, finding the explicit equations of the inverse system once the set of original equations is transformed into the Popov form with respect to the control variables.

ACKNOWLEDGEMENTS

The work of Z. Bartosiewicz and M. Wyrwas was supported by grant No. S/WI/1/2016, the work of E. Pawluszewicz was supported by grant No. S/WM/1/2016. Both grants were financed by the Ministry of Science and Higher Education of Poland. The work of U. Kotta and M. Tonso was supported by the Estonian Research Council, personal research funding grant PUT481. The publication costs of this article were covered by the Estonian Academy of Sciences.

REFERENCES

[1.] Bartosiewicz, Z., Belikov, J., Kotta, U., Tonso, M., and Wyrwas, M. On the transformation of a nonlinear discrete-time input--output system into the strong row-reduced form. Proc. Estonian Acad. Sci., 2016, 65, 220-236.

[2.] Bartosiewicz, Z., Kotta, U., Pawluszewicz, E., Tonso, M., and Wyrwas, M. Transforming a set of nonlinear input-output equations into Popov form. In 2015 IEEE 54th Annual Conference on Decision and Control (CDC), December 15-18, 2015, Osaka, Japan, Proceedings. Piscataway, NJ, 2015, 7131-7136.

[3.] Beckermann, B., Cheng, H., and Labahn, G. Fraction-free row reduction of matrices of Ore polynomials. J. Symb. Comput., 2006, 41, 513-543.

[4.] Cheng, H. and Labahn, G. Output-sensitive modular algorithms for polynomial matrix normal forms. J. Symb. Comput., 2007, 42, 733-750.

[5.] Conte, G., Moog, C. H., and Perdon, A. M. Algebraic Methods for Nonlinear Control Systems. Springer, London, 2007.

[6.] Davis, P., Cheng, H., and Labahn, G. Computing Popov form of general Ore polynomial matrices. In Proceedings of Milestones in Computer Algebra, Tobago, 2008, 149-156.

[7.] Farb, B. and Dennis, R. K. Noncommutative Algebra. Springer, New York, 1993.

[8.] Kojima, C., Rapisarda, P., and Takaba, K. Canonical forms for polynomial and quadratic differential operators. Systems & Control Letters, 2007, 56, 678-684.

[9.] Kotta, U., Bartosiewicz, Z., Nomm, S., and Pawluszewicz, E. Linear input-output equivalence and row reducedness of discrete-time nonlinear systems. IEEE Trans. Autom. Control, 2011, 56, 1421-1426.

[10.] Middeke, J. A Computational View on Normal Forms of Matrices of Ore Polynomials. PhD thesis, Johannes Kepler University, Linz, 2011.

[11.] Sriniwas, G. R., Arkun, Y., Chien, I.-L., and Ogunnaike, B. A. Nonlinear identification and control of a high-purity distillation column: a case study. J. Process Control, 1995, 5, 149-162.

[12.] Van der Schaft, A. J. On realization of nonlinear systems described by higher-order differential equations. Math. Syst. Theory, 1987, 19, 239-275.

Zbigniew Bartosiewicz (a*), Ewa Pawluszewicz (b), Malgorzata Wyrwas (a), Ulle Kotta (c), and Maris Tonso (c)

(a) Faculty of Computer Science, Bialystok University of Technology, Wiejska 45A, 15-351 Bialystok, Poland; m.wyrwas@pb.edu.pl

(b) Department of Automatic Control and Robotics, Bialystok University of Technology, Wiejska 45A, 15-351 Bialystok, Poland; e.pawluszewicz@pb.edu.pl

(c) Department of Software Science, School of Information Technologies, Tallinn University of Technology, Akadeemia tee 21, 12618 Tallinn, Estonia; kotta@cc.ioc.ee, maris@cc.ioc.ee

Received 12 September 2017, accepted 20 December 2017, available online 12 June 2018

(*) Corresponding author, z.bartosiewicz@pb.edu.pl

(1) For [??] see Subsection 2.2.

(2) To guarantee the condition $\deg p_{1\cdot} \le \deg p_{2\cdot}$, we have reversed the order of the equations compared with the original equations in [11]. Note that though the condition $\deg p_{1\cdot} \le \deg p_{2\cdot}$ is not necessary for the weak Popov form, it is required later for the nonlinear transformation.

The strong Popov form of nonlinear input--output equations

Zbigniew Bartosiewicz, Ewa Pawluszewicz, Malgorzata Wyrwas, Ulle Kotta, and Maris Tonso

Two algorithms have been developed for transforming the input--output equations of a nonlinear discrete-time system (by linear or nonlinear equivalence transformations) into the Popov form or the strong Popov form, respectively. By definition, equivalence transformations do not change the solutions of the system. The transformations assume that the original system is represented in the strong row-reduced form, which, by earlier results, always exists. The Popov (and the strong Popov) form contains, besides the equations, also expressions that must not equal zero. If a linear transformation suffices, the algorithm yields the corresponding linear transformation (over the field of meromorphic functions). The algorithm for finding a nonlinear equivalence transformation is also constructive, except for the step that requires solving a system of partial differential equations. The latter aspect is characteristic of the solutions of many nonlinear control problems. In addition, the strong Popov form need not exist globally. The theoretical results are illustrated by several examples.

https://doi.org/10.3176/proc.2018.3.01

Publication: Proceedings of the Estonian Academy of Sciences, Sep 1, 2018