
Find the Solution to a System of Nonlinear Equations

1. Introduction

Solving nonlinear equations efficiently and reliably is of paramount importance in physics, engineering, operational research and many other disciplines in which the need arises to build detailed mathematical models of real-world systems, all of them being nonlinear in nature to a certain extent [1–4]. Moreover, large-scale systems are always sparse, which means that the total number of additive terms in a coupled set of n nonlinear functions in n variables is roughly O(n).

According to John Rice, who coined the term mathematical software in 1969, 'solving systems of nonlinear equations is perhaps the most difficult problem in all of numerical computations' [5, p. 355]. This surely explains why so many people have devoted so much effort to this problem for so long.

Since the seventeenth century, the reference method for the solution of nonlinear systems has been Newton–Raphson's (NR) iterative procedure, which locally approximates the nonlinear functions by their first-order Taylor-series expansions. Its terrific success stems from its proven quadratic convergence (when a sufficiently good initial guess is available), its moderate computational expense (provided the Jacobian sparsity is fully exploited when dealing with large systems) and its broad applicability to most cases in which the nonlinear functions can be analytically expressed [6].

For large-scale systems, by far the most time-consuming step lies in the computation of the Jacobian and its LU factors [7]. This has motivated the development of quasi-Newton methods, which make use of approximate Jacobians usually at the expense of more iterations [8]. Well-known examples in this category are the chord method, where the Jacobian is computed only once, and the secant method [9], which approximates the Jacobian through finite differences (no explicit derivatives are required). Broyden's method is a generalization of the secant method which carries out rank-one updates to the initial Jacobian [10].

Another category of related methods, denoted as inexact Newton methods, performs approximate computations of Newton steps, frequently in combination with pre-conditioners [11].

More sophisticated higher order iterative methods also exist, achieving super-quadratic convergence near the solution at the expense of more computational effort. These include Halley's cubic method.

When the initial guess is not close enough to the solution, or the functions to be solved exhibit acute non-convexities in the region of interest, the NR method converges slowly or may quickly diverge. This has motivated the development of the so-called globally convergent algorithms which, for any initial guess, either converge to a root or fail to do so in a small number of ways [6,11]. Examples of globally convergent algorithms are line search methods, continuation/homotopy methods [12,13], such as Levenberg–Marquardt methods, which circumvent Newton's method failure caused by a singular or near-singular Jacobian matrix, and trust region methods.

Other miscellaneous methods, more or less related to Newton's method, are based on polynomial approximations (e.g. Chebyshev's method), solution of ordinary differential equations (e.g. Davidenko's method, a differential form of Newton's method) or decomposition schemes (e.g. Adomian's method [14]).

While each individual in such a plethora of algorithms may be most appropriate for a particular niche application, the standard NR method still remains the best general-purpose candidate trading off simplicity and reliability, particularly when reasonable initial guesses can be made [15,16].

A feature shared by most existing methods for solving nonlinear equations is that the structure of the original system is kept intact throughout the solution process. In other words, the algorithms are applied without any kind of preliminary transformation, even though it has long been known that by previously rearranging nonlinear systems better convergence can be achieved. According to [3, pp. 174–176], 'the general idea is that a global nonlinear transformation may create an algebraically equivalent system on which Newton's method does better because the new system is more linear. Unfortunately, there is no general way to apply this idea; its application will be problem-specific'.

This paper explores such an idea through a new promising perspective, based on unfolding the nonlinear system to be solved by identifying distinct nonlinear terms, each deemed a new variable. This leads to an augmented system composed of two sets of linear equations, which are coupled through a one-to-one nonlinear mapping with diagonal Jacobian. The resulting procedure involves two steps at each iteration: (i) solution of a linear system with symmetric coefficient matrix and (ii) computation of a Newton-like step.

The basic idea of factoring the solution of complex nonlinear equations into two simpler problems, linear or nonlinear, was originally introduced in the context of hierarchical state estimation [17], where the main goal was to geographically decompose large-scale least-squares problems. Later on, it has been successfully applied to nonlinear equations with a very particular network structure, such as those arising in power systems analysis and operation [18,19].

In this paper, the factored solution method, initially derived for overdetermined equation systems, is conceptually re-elaborated from scratch, and generalized, so that it can be applied to efficiently solving broader classes of nonlinear systems of equations.

This paper is structured as follows: in the next section the proposed two-stage solution approach is presented. Then, §§3 and 4 introduce canonical and extended forms of nonlinear functions to which the proposed method can be applied. Next, §5 discusses how, with minor modifications, the proposed method can reach different solution points from the same initial guess. Section 6 is devoted to the analysis of cases which are infeasible in the real domain, where convergence to complex values takes place. Section 7 briefly considers the possibility of safely reaching singular points, whereas §8 closes the paper by showing the behaviour of the proposed method when applied to large-scale test cases.

2. Factored solution of nonlinear systems

Consider a general nonlinear system, written in a compact form as follows:

$$h(x) = p, \tag{2.1}$$

where $p \in \mathbb{R}^n$ is a specified vector and $x \in \mathbb{R}^n$ is the unknown vector.¹

By applying the NR iterative scheme, a solution can be obtained from an initial guess, $x_0$, by successively solving

$$H_k \,\Delta x_k = \Delta p_k, \tag{2.2}$$

where the subindex k denotes the iteration counter, $\Delta x_k = x_{k+1} - x_k$, $H_k$ is the Jacobian of h(⋅), computed at the current point $x_k$, and $\Delta p_k = p - h(x_k)$ is the mismatch or residual vector.

In essence, the new method assumes that suitable auxiliary vectors, $y, u \in \mathbb{R}^m$, with $m \ge n$, can be introduced so that the original system (2.1) can be unfolded into the following simpler problems:

$$E y = p, \tag{2.3}$$

$$u = f(y) \tag{2.4}$$

and
$$C x = u, \tag{2.5}$$

where E and C are rectangular matrices of sizes n×m and m×n, respectively, and vector f(⋅) comprises a set of one-to-one nonlinear functions, also known as a diagonal nonlinear mapping [20]

$$u_i = f_i(y_i), \quad i = 1, \ldots, m, \tag{2.6}$$

each with closed-form inverse

$$y_i = f_i^{-1}(u_i) \quad\Rightarrow\quad y = f^{-1}(u). \tag{2.7}$$

By eliminating vector u the above augmented system can also be written in more compact form:

$$E y = p \tag{2.8}$$

and

$$C x = f(y). \tag{2.9}$$

Note that (2.8) is a linear underdetermined system, whereas (2.9) is an overdetermined one. Among the infinitely many solutions to (2.8), only those exactly satisfying (2.9) constitute solutions to the original nonlinear system (2.1). As explained in §3, many systems can be found in practice where such a factorization is possible, the aim being to reduce m as much as possible. In the limit, if a vector y of size m=n can be found, then solutions to the original nonlinear system will be obtained directly (i.e. without iterating) by sequentially solving the unfolded system of equations (2.3)–(2.5). But this case will arise only when the set of n equations comprises just n distinct nonlinear terms. Therefore, the need to iterate in the factored method stems from an excess of nonlinear terms (m>n) rather than from the nonlinear nature of the original problem.

Obviously, when the remaining auxiliary vector y is eliminated the original 'folded' system is obtained in factored form:

$$h(x) = E f^{-1}(C x) = p. \tag{2.10}$$

This leads to an equivalent expression for the Newton step (2.2)

$$\underbrace{E F_k^{-1} C}_{H_k}\,\Delta x_k = p - E f^{-1}(C x_k) = \Delta p_k, \tag{2.11}$$

where $F = \partial f/\partial y$ is the trivial (diagonal) Jacobian of f(⋅). Whether (2.11) offers any computational advantage over (2.2) will mainly depend on the complexity involved in the computation of h(⋅) and its Jacobian H, compared to their much simpler counterparts f(⋅) and F (plus the required matrix products), but the convergence pattern will be exactly the same in both cases.

Yet the augmented system (2.8) and (2.9), arising when the factored form is considered, opens the door to alternative solution schemes which may outperform the 'vanilla' NR approach. The scheme developed in the sequel begins by considering the solution of the factored model as a linearly constrained least-squares (LS) problem. Then, an auxiliary least-distance problem is formulated aimed at providing improved linearization points at each iteration.

(a) Solution of the factored model as an optimization problem

Finding a solution to the unfolded problem (2.8) and (2.9) can be shown to be equivalent to solving the following equality-constrained LS problem:

$$\min_{x,\,y}\ [y - f^{-1}(Cx)]^T [y - f^{-1}(Cx)] \quad \text{s.t.} \quad p - Ey = 0, \tag{2.12}$$

which reduces to minimizing the associated Lagrangian function

$$\mathcal{L} = \tfrac{1}{2}[y - f^{-1}(Cx)]^T [y - f^{-1}(Cx)] - \mu^T (p - Ey). \tag{2.13}$$

The first-order optimality conditions (FOOCs) give rise to the following system:

$$\left.\begin{aligned} y - f^{-1}(Cx) + E^T \mu &= 0, \\ C^T F^{-T} [y - f^{-1}(Cx)] &= 0 \\ \text{and}\quad p - Ey &= 0. \end{aligned}\right\} \tag{2.14}$$

Given an estimate of the solution point, $x_k$, we can choose $y_k$ in such a way that (2.9) is satisfied, i.e.

$$y_k = f^{-1}(C x_k). \tag{2.15}$$

Then, linearizing $f^{-1}(\cdot)$ around $x_k$,

$$y - f^{-1}(Cx) \approx \Delta y_k - F_k^{-1} C\, \Delta x_k$$

allows (2.14) to be written in an incremental form

$$\begin{bmatrix} I & -F_k^{-1}C & E^T \\ -C^T F_k^{-T} & C^T D_k C & 0 \\ E & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta y_k \\ \Delta x_k \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \Delta p_k \end{bmatrix}, \tag{2.16}$$

where $D_k = F_k^{-T} F_k^{-1}$ is a diagonal matrix with positive elements.

From the above symmetric system, $\Delta y_k$ can be easily eliminated, yielding

$$\begin{bmatrix} 0 & C^T F_k^{-T} E^T \\ E F_k^{-1} C & -E E^T \end{bmatrix} \begin{bmatrix} \Delta x_k \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ \Delta p_k \end{bmatrix} \tag{2.17}$$

or, in more compact form

$$\begin{bmatrix} 0 & H_k^T \\ H_k & -E E^T \end{bmatrix} \begin{bmatrix} \Delta x_k \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ \Delta p_k \end{bmatrix}. \tag{2.18}$$

If the Jacobian $H_k$ remains non-singular at the solution point, then μ=0 and the above system reduces to (2.11), as expected. Therefore, the Lagrangian-based augmented formulation has the same convergence pattern and converges to the same point as the conventional NR approach.

(b) Auxiliary least-distance problem

The driving idea behind the proposed procedure is to linearize (2.14) around a point which, being closer to the solution than $y_k$, can be obtained in a straightforward manner. The resulting scheme will be advantageous over the NR approach if the extra cost of computing the improved linearization point is offset by the convergence enhancement obtained, if any.

For this purpose, given the latest solution estimate, $x_k$, we consider a simpler auxiliary optimization problem, as follows:

$$\min_{y}\ (y - y_k)^T (y - y_k) \quad \text{s.t.} \quad p - Ey = 0, \tag{2.19}$$

with $y_k = f^{-1}(C x_k)$. The associated Lagrangian function is

$$\mathcal{L}_a = \tfrac{1}{2}(y - y_k)^T (y - y_k) - \lambda^T (Ey - p), \tag{2.20}$$

which leads to the following linear FOOCs:

$$\begin{bmatrix} I & -E^T \\ E & 0 \end{bmatrix} \begin{bmatrix} \Delta\tilde{y}_k \\ \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ \Delta p_k \end{bmatrix}, \tag{2.21}$$

where $\Delta\tilde{y}_k = \tilde{y}_k - y_k$. Besides being linear, this simpler problem does not involve the unknown x, and it can be solved in two phases. First, λ is computed by solving

$$E E^T \lambda = \Delta p_k. \tag{2.22}$$

Then, $\tilde{y}_k$ is simply obtained from

$$\tilde{y}_k = y_k + E^T \lambda. \tag{2.23}$$

By definition, $\tilde{y}_k$ is as close as possible to $y_k$ while satisfying (2.8).

Next, the FOOCs of the original problem (2.14) are linearized around $\tilde{y}_k$, hopefully closer to the solution than $y_k$,

$$y - f^{-1}(Cx) \approx \Delta\tilde{y}_k - \tilde{F}^{-1}\,[Cx - f(\tilde{y}_k)],$$

leading to

$$\begin{bmatrix} I & -\tilde{F}^{-1}C & E^T \\ -C^T \tilde{F}^{-T} & C^T \tilde{D} C & 0 \\ E & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta\tilde{y}_k \\ x_{k+1} \\ \mu \end{bmatrix} = \begin{bmatrix} -\tilde{F}^{-1} f(\tilde{y}_k) \\ C^T \tilde{D} f(\tilde{y}_k) \\ 0 \end{bmatrix}. \tag{2.24}$$

Once again, eliminating Δ y ~ k yields

$$\begin{bmatrix} 0 & C^T \tilde{F}^{-T} E^T \\ E \tilde{F}^{-1} C & -E E^T \end{bmatrix} \begin{bmatrix} x_{k+1} \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ E \tilde{F}^{-1} f(\tilde{y}_k) \end{bmatrix} \tag{2.25}$$

or, in a more compact form

$$\begin{bmatrix} 0 & \tilde{H}^T \\ \tilde{H} & -E E^T \end{bmatrix} \begin{bmatrix} x_{k+1} \\ \mu \end{bmatrix} = \begin{bmatrix} 0 \\ E \tilde{F}^{-1} f(\tilde{y}_k) \end{bmatrix}. \tag{2.26}$$

The above linear system, to be compared with (2.18), provides the next value $x_{k+1}$, which replaces $x_k$ in the next iteration. Moreover, as happened with the solution of (2.18), so long as the Jacobian $\tilde{H}$ remains non-singular, μ=0 and $x_{k+1}$ can be obtained with less computational effort from

$$\tilde{H} x_{k+1} = E \tilde{F}^{-1} f(\tilde{y}_k). \tag{2.27}$$

(c) Two-step solution procedure

The two-step algorithm, arising from the proposed factored representation of nonlinear systems, can then be formally stated as follows:

Step 0: Initialization. Initialize the iteration counter (k=0). Let $x_k$ be an initial guess and $y_k = f^{-1}(C x_k)$.

Step 1: Auxiliary least-distance problem. First, obtain vector λ by solving the system

$$(E E^T)\lambda = p - E y_k \tag{2.28}$$

and then compute $\tilde{y}_k$ from

$$\tilde{y}_k = y_k + E^T \lambda. \tag{2.29}$$

Step 2: Non-incremental computation of x. Solve for $x_{k+1}$ the system

$$\tilde{H} x_{k+1} = E \tilde{F}^{-1} f(\tilde{y}_k), \tag{2.30}$$

where $\tilde{H} = E \tilde{F}^{-1} C$ is the factored Jacobian computed at $\tilde{u}_k = f(\tilde{y}_k)$. Then update $y_{k+1} = f^{-1}(C x_{k+1})$. If $\|x_{k+1} - x_k\|$ (or, alternatively, $\|p - E y_{k+1}\|$) is small enough, then stop. Otherwise set k=k+1 and go back to step 1.

As explained above, step 1 constitutes a linearly constrained least-distance problem, yielding a vector $\tilde{y}_k$ which, being as close as possible to $y_k$, satisfies (2.8). As a feasible solution is approached, λ → 0 and $\tilde{y}_k \to y_k$. Note that the sparse symmetric matrix $EE^T$ needs to be factorized only once, by means of the efficient Cholesky algorithm (in fact, only its upper triangle is needed). Therefore, the computational effort involved in step 1 is very low if the Cholesky triangular factors are saved during the first iteration. Moreover, the vector $p - E y_k$ is available from the previous iteration if it is computed to check for convergence.

It is worth noting that step 2 is expressed in a non-incremental form. It directly provides improved values for x with less computational effort than that of a conventional Newton step (2.11), which may well offset the moderate extra burden of step 1.
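For concreteness, the following sketch implements steps 0–2 in Python/NumPy for small dense problems. It is a minimal illustration of the two-step procedure above, not the author's implementation: the name factored_solve, the argument layout and the dense linear solves are our own choices, and a production version would Cholesky-factorize the sparse matrix EE^T once, as noted above.

```python
import numpy as np

def factored_solve(E, C, f, f_inv, F_inv, x0, p, tol=1e-5, max_iter=50):
    """Two-step factored scheme for solving E f^{-1}(C x) = p.

    E (n x m), C (m x n): linear factors of the unfolded system.
    f, f_inv : elementwise maps u = f(y) and y = f^{-1}(u).
    F_inv    : diagonal of F^{-1} = dy/du, returned as a vector evaluated at u.
    A complex x0 may be supplied when infeasibility is suspected (see section 6).
    """
    x = np.asarray(x0) + 0.0        # float (or complex) working copy
    y = f_inv(C @ x)                # step 0: y_k = f^{-1}(C x_k)
    EEt = E @ E.T                   # formed once; a sparse code would factorize it here
    for k in range(1, max_iter + 1):
        lam = np.linalg.solve(EEt, p - E @ y)      # step 1: (E E^T) lambda = p - E y_k
        y_t = y + E.T @ lam                        #         y~_k = y_k + E^T lambda
        u_t = f(y_t)                               #         u~_k = f(y~_k)
        d = F_inv(u_t)                             # diagonal of F~^{-1}
        H = E @ (d[:, None] * C)                   # step 2: H~ = E F~^{-1} C
        x_new = np.linalg.solve(H, E @ (d * u_t))  #         H~ x_{k+1} = E F~^{-1} f(y~_k)
        if np.linalg.norm(x_new - x, 1) < tol:     # stop on ||x_{k+1} - x_k||_1
            return x_new, k
        x = x_new
        y = f_inv(C @ x)                           # y_{k+1} = f^{-1}(C x_{k+1})
    raise RuntimeError('factored scheme did not converge')
```

The examples of §§3–5 below can all be expressed as calls to this routine by supplying the corresponding E, C, f, f⁻¹ and diag(F⁻¹).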

In [18,19], the two-step procedure based on the factored model was originally developed from a different perspective, related to the solution of overdetermined systems in state estimation (i.e. filtering out noise from a redundant set of measurements).

(d) Factored versus conventional Newton's method

The factored scheme can be more easily compared with the NR method if step 2 is written in incremental form. For this purpose, the term $\tilde{H} x_k$ is subtracted from both sides of (2.30). Considering that $f(y_k) = C x_k$, this leads to

$$\tilde{H}(x_{k+1} - x_k) = E \tilde{F}^{-1} [f(\tilde{y}_k) - f(y_k)]. \tag{2.31}$$

Note that the above expression, being equivalent to (2.30), is less convenient from the computational point of view owing to the extra operations involved.

Next, the Taylor expansion around $\tilde{y}_k$ of the set of nonlinear functions f(y) is rearranged as follows, for $y = y_k$:

$$f(\tilde{y}_k) - f(y_k) = \tilde{F}(\tilde{y}_k - y_k) - R(y_k, \tilde{y}_k), \tag{2.32}$$

where the remainder $R(y_k, \tilde{y}_k)$ comprises the second- and higher-order terms of the polynomial expansion

$$R(y_k, \tilde{y}_k) = \sum_{j=2}^{\infty} \frac{\tilde{F}_j}{j!}\, [\operatorname{diag}(y_k - \tilde{y}_k)]^j \, e, \tag{2.33}$$

where $\tilde{F}_j$ is the diagonal matrix containing the jth derivatives of f(y), computed at $y = \tilde{y}_k$, and e is a vector of 1's.

Replacing (2.32) into (2.31), and taking into account that $E \tilde{y}_k = p$, yields

$$\tilde{H}\, \Delta x_k = \Delta p_k - E \tilde{F}^{-1} R(y_k, \tilde{y}_k). \tag{2.34}$$

Comparing the resulting incremental model (2.34) with that of the conventional Newton's method (2.11), the following remarks can be made:

(i) If the current point, $x_k$, is so close to the solution that $\|\tilde{y}_k - y_k\| \ll 1$, then R will be negligible compared to $\Delta p_k$. In these cases, the only difference of the factored method with respect to NR lies in the use of the updated Jacobian, $\tilde{H}$, rather than $H_k$, and the local convergence rate of the factored algorithm will be of the same order as that of Newton's method.

(ii) If the current point, $x_k$, is still so far away from the solution that $\|\tilde{y}_k - y_k\| > 1$, then R will dominate the right-hand side in (2.34). In these cases, the behaviour of the factored scheme can be totally different from that of Newton's method, which fully ignores R when computing $x_{k+1}$.

(iii) As expected, if step 1 is omitted (i.e. $\tilde{y}_k = y_k$), the factored scheme reduces to the standard Newton's method.

Let $x^*$ denote the solution point and $y^*$ the corresponding auxiliary vector, satisfying $f(y^*) = C x^*$. Further insights into remark (i), regarding the local convergence rate, can be gained by subtracting $\tilde{H} x^*$ from both sides of (2.30) and rearranging, which leads for the factored method to

$$\tilde{H}(x^* - x_{k+1}) = E \tilde{F}^{-1} R(y^*, \tilde{y}_k). \tag{2.35}$$

Manipulating (2.11) in a similar manner yields, for the NR method

$$H_k(x^* - x_{k+1}) = E F_k^{-1} R(y^*, y_k). \tag{2.36}$$

The last two expressions confirm that both the NR and the factored schemes converge quadratically near the solution (third- and higher-order terms are negligible in R). Moreover, if $\tilde{y}_k$ is closer than $y_k$ to the solution $y^*$, then the factored method converges faster than Newton's.

In summary, from the above discussion it can be concluded that the local behaviour of the proposed factored algorithm is comparable to Newton's method (quadratic convergence rate), while the global behaviour (shape of basins of attraction) will be in general quite different. The fact that step 1 tries to minimize the distance between $\tilde{y}_k$ and $y_k$ surely explains the much wider basins of attraction found in practice for the factored method, but there is no way to predict beforehand the global behaviour of any iterative algorithm in the general case (the reader is referred to §3, 'Aspects of Convergence Analysis', in [21], for a succinct but clarifying discussion on this issue).

3. Application to canonical forms

The factored model and, consequently, the two-stage procedure presented above can be applied in a straightforward manner to a wide range of nonlinear equation systems, namely those which are in, or can be directly recast into, the following canonical forms. Small tutorial examples will be used to illustrate the ideas. None of those examples is intended to assess the potential computational benefits of the factored solution process, but rather to show the improved convergence pattern over the NR method (the reader is referred to §8, where very large equation systems arising in the steady-state solution of electrical power systems are solved by both methods and solution times are compared).

(a) Sums of single-variable nonlinear elementary functions

The simplest canonical form of nonlinear systems arises when they are composed of sums of nonlinear terms, each being an elementary invertible function of just a single variable. Mathematically, each component of h(x) should have the form:

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij}\, h_{ij}(x_k),$$

where $c_{ij}$ is any real number and $h_{ij}^{-1}(\cdot)$ can be analytically obtained. In this case, for each different nonlinear term, $h_{ij}(x_k)$, a new variable

$$y_{ij} = h_{ij}(x_k)$$

is added to the auxiliary vector y, keeping in mind that repeated definitions should be avoided. This leads to the desired underdetermined linear system to be solved at the first step:

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij}\, y_{ij},$$

while matrix C of the overdetermined linear system u=Cx is trivially obtained from the definition of u:

$$u_{ij} = h_{ij}^{-1}(y_{ij}) = x_k.$$

Example 3.1

Consider the simple nonlinear function

$$p = x^4 - x^3,$$

represented in figure 1. Introducing two auxiliary variables

$$y_1 = x^4 \quad\text{and}\quad y_2 = x^3$$

leads to

$$p = y_1 - y_2.$$

Therefore, with the above notation, the involved matrices are

$$E = \begin{pmatrix} 1 & -1 \end{pmatrix}, \quad C = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad u = f(y) = \begin{pmatrix} \sqrt[4]{y_1} \\ \sqrt[3]{y_2} \end{pmatrix}, \quad F^{-1} = \begin{pmatrix} 4u_1^3 & 0 \\ 0 & 3u_2^2 \end{pmatrix}.$$

For p=1, there are two solutions (points A and B in figure 1). Table 1 shows, for different starting points, the number of iterations required to converge (convergence tolerance in all examples: ‖Δx‖₁ < 10⁻⁵) by both the NR and the proposed factored procedure. Note that the factored procedure converges much faster than the NR method. The farther x₀ lies from the solution point, the better the behaviour of the proposed procedure compared to Newton's method.
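Using the factored_solve sketch of §2c, this example might be encoded as follows (the encoding is ours; the expected results are those reported in table 1):

```python
# Example 3.1: x^4 - x^3 = 1; from x0 = 30 the iteration settles on point B.
E = np.array([[1.0, -1.0]])                                # p = y1 - y2
C = np.array([[1.0], [1.0]])                               # u1 = u2 = x
f     = lambda y: np.array([y[0]**0.25, y[1]**(1.0/3.0)])  # u = f(y)
f_inv = lambda u: np.array([u[0]**4, u[1]**3])             # y = f^{-1}(u)
F_inv = lambda u: np.array([4*u[0]**3, 3*u[1]**2])         # diagonal of F^{-1}
x, iters = factored_solve(E, C, f, f_inv, F_inv, x0=[30.0], p=np.array([1.0]))
# expected: x close to 1.3803 (point B); a complex x0 (e.g. [30.0 + 0j]) keeps
# the fractional powers well defined if intermediate y values turn negative.
```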

Figure 1. Nonlinear function x⁴ − x³ used in example 3.1. It crosses p=1 at points A and B (x=−0.8192 and x=1.3803, respectively).

Table 1. Number of iterations required by both NR and factored procedures to converge from arbitrary starting points for the nonlinear function of example 3.1, with p=1. In parenthesis, A or B indicates which point is reached in each case.

x₀      NR        factored
30      16 (B)    6 (B)
10      12 (B)    6 (B)
5       9 (B)     5 (B)
1       7 (B)     4 (B)
0.9     9 (B)     5 (B)
0.8     13 (B)    5 (B)
0.5     10 (A)    6 (B)
0       fails     6 (B)
−0.5    6 (A)     7 (B)

For values of x₀ < 0.75 (the minimum of the function, where the slope changes its sign), the NR method converges to point A. On the other hand, the factored scheme always converges to point B no matter which initial guess is chosen. This issue is discussed in more detail in §5, where it is also explained how to reach other solution points. As expected, the NR method fails for x₀=0 (null initial slope), whereas the factored solution does not suffer from this limitation (the updated Jacobian H̃ corresponding to the first value ỹ_k is not singular).

Finally, the components of vector λ (and obviously μ) are always null at the solution point, indicating that a feasible solution has been found in all cases (see §6).

Example 3.2

In this example, the following periodic function will be considered:

$$p = \sin x + \cos x,$$

for which two auxiliary variables are needed

$$y_1 = \sin u_1 \quad\text{and}\quad y_2 = \cos u_2,$$

with $u_1 = u_2 = x$. The relevant matrices arising in this case are

$$E = \begin{pmatrix} 1 & 1 \end{pmatrix}, \quad C = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad u = f(y) = \begin{pmatrix} \arcsin y_1 \\ \arccos y_2 \end{pmatrix}, \quad F^{-1} = \begin{pmatrix} \cos u_1 & 0 \\ 0 & -\sin u_2 \end{pmatrix}.$$

For p=1.4, the two solutions closest to the origin are x=0.6435 and x=0.9273 (points A and B in figure 2).

Figure 2. Nonlinear function for example 3.2.

Table 2 shows, for different starting points, the number of iterations required to converge by both methods and the final solution reached. In this particular example (p=1.4), the NR method sometimes outperforms the factored scheme in terms of the number of iterations. However, note that, for starting points far away from the origin, the NR method may converge to 'remote' periodic solutions (marked with an asterisk in table 2), whereas the factored scheme always converges to point A or B (this issue is further discussed in §5). Other advantages of the factored method will become apparent for infeasible values of p (see the same example, with p=1.5, in §6).

Table 2. Example 3.2: number of iterations and solution points for p=1.4.

x₀      NR                 factored
        iter.  x           iter.  x
10      5      0.6435      4      0.9273
5       6      6.9267*     8      0.6435
1       4      0.9273      4      0.9273
0       6      0.6435      7      0.6435
−1      7      0.6435      8      0.6435
−5      6      55.6214*    7      0.9273
−10     7      11.6391*    8      0.9273

(* remote periodic solution)

In all cases, the final solution provided by the factored scheme is real (or, more precisely, its imaginary component is well below the convergence tolerance). However, complex intermediate solutions are obtained in some cases, owing to the fact that arcsin y and arccos y return complex values for |y|>1 (this is discussed in §6).
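A possible encoding of this example, reusing the factored_solve sketch of §2c (NumPy's np.emath inverse trigonometric functions return the complex values mentioned above whenever |y|>1):

```python
# Example 3.2: sin x + cos x = 1.4
E = np.array([[1.0, 1.0]])
C = np.array([[1.0], [1.0]])
f     = lambda y: np.array([np.emath.arcsin(y[0]), np.emath.arccos(y[1])])
f_inv = lambda u: np.array([np.sin(u[0]), np.cos(u[1])])
F_inv = lambda u: np.array([np.cos(u[0]), -np.sin(u[1])])   # dy/du per component
x, iters = factored_solve(E, C, f, f_inv, F_inv, x0=[1.0], p=np.array([1.4]))
# per table 2: x close to 0.9273 (point B) in 4 iterations; rerunning with
# p = 1.5 should drift to the complex point 0.7854 +/- 0.3466i (section 6).
```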

(b) Sums of products of single-variable power functions

Another canonical form to which the factored method can be easily applied arises when the nonlinear equations to be solved are the sum of products of nonlinear terms, each being an arbitrary power of a single variable. Mathematically, each component of h(x) has the form:

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij} \prod_{k=1}^{n_j} x_k^{q_k},$$

where $c_{ij}$ and $q_k$ are arbitrary real numbers.

This case can be trivially handled if the original set of variables, x, is replaced by its log counterpart

$$\alpha_k = \ln x_k, \quad k = 1, \ldots, n.$$

Then the auxiliary vector y is defined as in the previous case, avoiding duplicated entries

$$y_{ij} = \prod_{k=1}^{n_j} x_k^{q_k} = \prod_{k=1}^{n_j} \exp(q_k \alpha_k),$$

which leads to the desired underdetermined linear system

$$h_i(x) = \sum_{j=1}^{m_i} c_{ij}\, y_{ij}.$$

The second key point is to embed the ln function in the nonlinear relationship u=f(y), as follows:

$$u_{ij} = \ln y_{ij} \quad\Leftrightarrow\quad y_{ij} = \exp u_{ij},$$

which leads to the overdetermined linear system to be solved at the second step:

$$u_{ij} = \sum_{k=1}^{n_j} q_k \alpha_k.$$

Once vector α is obtained by the factored procedure, the original unknown vector x can be recovered from

$$x_k = \exp \alpha_k, \quad k = 1, \ldots, n.$$

Example 3.3

Consider the following nonlinear system in two unknowns:

$$p_1 = x_1 x_2 + x_1 x_2^2 \quad\text{and}\quad p_2 = 2 x_1^2 x_2 - x_1^2,$$

which, upon introduction of four y variables

$$y_1 = x_1 x_2, \quad y_2 = x_1 x_2^2, \quad y_3 = x_1^2 x_2, \quad y_4 = x_1^2$$

is converted into an underdetermined linear system

$$\begin{pmatrix} p_1 \\ p_2 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 2 & -1 \end{pmatrix}}_{E} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}.$$

The nonlinear relationships u=f(y) are

$$u_i = \ln y_i, \quad i = 1, 2, 3, 4,$$

which lead to the following inverse Jacobian:

$$F^{-1} = \begin{pmatrix} \exp u_1 & 0 & 0 & 0 \\ 0 & \exp u_2 & 0 & 0 \\ 0 & 0 & \exp u_3 & 0 \\ 0 & 0 & 0 & \exp u_4 \end{pmatrix}$$

and the final overdetermined system in the log variables

$$\underbrace{\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 2 & 1 \\ 2 & 0 \end{pmatrix}}_{C} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{pmatrix}.$$

Once the problem is solved in the log variables, the original variables are recovered from

$$x_i = \exp \alpha_i, \quad i = 1, 2.$$

For p₁=24 and p₂=20, there is a known solution at x₁=2 and x₂=3. Table 3 provides the number of iterations for different starting points. Apart from taking a larger number of iterations, the NR method sometimes converges to the alternative point x₁=31.1392, x₂=0.5103 (marked with an 'A' in the table), and for the farthest starting point tested (last row) it does not converge in 50 iterations. The factored scheme, on the other hand, is much less sensitive to the initial guess, since it converges systematically to the same point.

Table 3. Number of iterations required by the NR and the factored procedures to converge from arbitrary starting points for the nonlinear system of example 3.3, with p 1=24 and p 2=20.

x₀           NR          factored
(1,1)        7           6
(1,−1)       14          6
(−1,1)       29 (A)      6
(10,10)      8           7
(−10,−10)    10 (A)      8
(−10,10)     22 (A)      7
(−100,100)   fails       7
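The mapping of this example onto the factored_solve sketch of §2c might look as follows (a complex starting point keeps the intermediate logarithms well defined when some components of ỹ_k turn negative):

```python
# Example 3.3 in log variables alpha = ln x.
E = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, -1.0]])
C = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [2.0, 1.0],
              [2.0, 0.0]])
f, f_inv, F_inv = np.log, np.exp, np.exp   # u = ln y, y = e^u, diag(F^-1) = e^u
alpha, iters = factored_solve(E, C, f, f_inv, F_inv,
                              x0=np.log([1.0 + 0j, 1.0 + 0j]),
                              p=np.array([24.0, 20.0]))
x = np.exp(alpha)   # per table 3: x1 = 2, x2 = 3 in 6 iterations
```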

4. Transformation to canonical forms

There are many systems of equations which can be easily transformed into one of the canonical forms discussed above, by strategically introducing extra variables aimed at simplifying the complicating factors. Then, the nonlinear expressions defining the artificial variables are added to the original equation system, leading to an augmented one.

Example 4.1

Consider the following nonlinear system:

$$p_1 = x_1 \sin(x_1^2 + x_2) - x_1^2$$

and

$$p_2 = x_1^2 x_2 - x_2.$$

By defining a new variable

$$x_3 = \sin(x_1^2 + x_2),$$

it can be rewritten in augmented canonical form, as follows:

$$p_1 = x_1 x_3 - x_1^2, \quad p_2 = x_1^2 x_2 - x_2 \quad\text{and}\quad 0 = x_1^2 + x_2 - \arcsin x_3.$$

Note that in this case only the original unknowns, x 1 and x 2, need to be initialized, since x 3 can be set according to its definition.

5. Extending the range of reachable solutions

When there are multiple solutions, the NR method converges to a point determined by the basin of attraction in which the initial guess is located. Some heuristics have been developed enabling the NR method to reach alternative solutions in a more or less systematic manner, such as the differential equation approach proposed in [22], which modifies the equation structure according to the sign of the Jacobian determinant.

In this regard, the behaviour of the proposed factored algorithm is mainly affected by the computable range of the f(y) components. When this range does not span the whole real axis, the two-stage procedure may not be able to reach some solution points, irrespective of the starting point chosen. For instance, if $y = x^q$, with q even, then during the solution process the expression

$$u = f(y) = \sqrt[q]{y}$$

will always return a positive u, provided y is positive (or a complex value with positive real component if y is negative). This determines which x values can be computed. In those cases, an easy way of allowing the factored procedure to reach alternative solution points is to consider negative ranges for f(y), i.e.

$$u = -\sqrt[q]{y}.$$

Example 5.1

In example 3.1, it was noted that the factored procedure always converges to point B (the positive root in figure 1). However, if the original expression for u₁

$$u_1 = \sqrt[4]{y_1}$$

is replaced by²

$$u_1 = -\sqrt[4]{y_1},$$

then the factored procedure always converges to point A (negative root) in fewer iterations than in example 3.1.
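In code, only the first component of f changes sign relative to the example 3.1 setup, which is reused below (np.cbrt, the real cube root, plays the role of the Matlab function nthroot mentioned in footnote 2; this real-arithmetic variant is our own adaptation):

```python
# Example 5.1: take the negative branch of the quartic root to reach point A.
f_neg = lambda y: np.array([-y[0]**0.25, np.cbrt(y[1])])
# f_inv and F_inv are unchanged: y1 = u1^4 still holds, and dy/du = (4u1^3, 3u2^2).
x, iters = factored_solve(E, C, f_neg, f_inv, F_inv,
                          x0=[30.0], p=np.array([1.0]))
# expected: x close to -0.8192 (point A), in fewer iterations than example 3.1
```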

A similar situation arises when the nonlinear system to be solved contains periodic functions, since their inverses will have a limited range related to the period of the original function. For instance, if $y = \sin x$, then $u = \arcsin y$ will naturally lie in the range −π/2 < u < π/2. If we want to obtain u values in the extended range

$$\left(q - \tfrac{1}{2}\right)\pi < u < \left(q + \tfrac{1}{2}\right)\pi$$

for any integer q, then we should replace $u = \arcsin y$ by

$$u = q\pi + (-1)^q \arcsin y.$$

In a similar manner, $u = \arccos y$, which naturally lies in the range 0 < u < π, should be replaced by

$$u = \left(q + \tfrac{1}{2}\right)\pi + (-1)^q \left(\arccos y - \frac{\pi}{2}\right)$$

to obtain u values in the same extended range.
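These extended-range inverses are straightforward to express in code (helper names are ours; the np.emath variants also return complex values for |y|>1, which §6 exploits):

```python
def arcsin_ext(y, q):
    # values in ((q - 1/2)*pi, (q + 1/2)*pi) for y in [-1, 1]
    return q*np.pi + (-1)**q * np.emath.arcsin(y)

def arccos_ext(y, q):
    # the arccos branch shifted to the corresponding extended range
    return (q + 0.5)*np.pi + (-1)**q * (np.emath.arccos(y) - np.pi/2)
```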

Example 5.2

If, in example 3.2, the expression

$$u = f(y) = \begin{pmatrix} \arcsin y_1 \\ \arccos y_2 \end{pmatrix}$$

is replaced by

$$u = \begin{pmatrix} 2\pi \\ 2\pi \end{pmatrix} + \begin{pmatrix} \arcsin y_1 \\ \arccos y_2 \end{pmatrix},$$

then the factored method converges, in a similar number of iterations, to the points 6.9267 and 7.2105, at a distance 2π from points A and B in figure 2 (in this simple example, involving a purely periodic function, this is an entirely expected result, but the reader is referred to the not-so-trivial example 5.3).

The fact that the range of f(y) adopted during the iterative process determines the solution point which is finally reached, irrespective of the starting point, is a nice feature worth investigating in the future. By fully exploring all alternative values u=f(y), one might be able to systematically reach solution points in given regions of interest, without having to test a huge number of starting points. The following examples illustrate this idea.

Example 5.3

Consider the extremely nonlinear non-periodic function

$$p = x \sin x + x,$$

which is graphically represented in figure 3.

As explained in the previous examples, the following augmented system is to be factored:

$$p = x_1 x_2 + x_1$$

and

$$0 = x_2 - \sin x_1.$$

The relevant components and relationships of the factored solution approach are

$$\begin{aligned} y_1 &= x_1 x_2, & u_1 &= \ln y_1, \\ y_2 &= x_1, & u_2 &= \ln y_2, \\ y_3 &= x_2, & u_3 &= \ln y_3, \\ y_4 &= \sin x_1, & u_4 &= \ln(\arcsin y_4), \end{aligned}$$

$$\begin{pmatrix} p \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{pmatrix},$$

where log variables have been introduced

$$\alpha_i = \ln x_i, \quad i = 1, 2.$$

The inverse of the Jacobian is in this case

$$F^{-1} = \begin{pmatrix} \exp u_1 & 0 & 0 & 0 \\ 0 & \exp u_2 & 0 & 0 \\ 0 & 0 & \exp u_3 & 0 \\ 0 & 0 & 0 & \exp u_4 \cos(\exp u_4) \end{pmatrix}.$$

For p=5, no real solution exists when x < 3π/2. In order to obtain real solutions for x > 3π/2 (points A, B, C and D in figure 3), as explained before, we have to replace

$$u_4 = \ln(\arcsin y_4)$$

by the more general expression

$$u_4 = \ln[q\pi + (-1)^q \arcsin y_4]$$

for q>1.

Table 4 shows, for increasing values of q, the points to which the factored solution converges, as well as the number of iterations (starting with x₀ = …). The complex solutions provided for q=0 and 1 (whose real component is very close to the local maximum) indicate that no real solution exists for x < 3π/2.

Figure 3. Nonlinear function for example 5.3.

Table 4. Example 5.3: number of iterations and solution points for p=5.


q iter. x
0 10 2.1519+0.5820i
1 8 2.2158+1.0097i
2 5 6.6554 (A)
3 5 9.2097 (B)
4 5 12.6801 (C)
5 4 15.6411 (D)

One more example will be worked out to illustrate the capability of the factored procedure to reach different solutions, just by extending the computed range of certain f(y) components.

Example 5.4

Consider the 2×2 system proposed by Boggs [23]

$$-1 = x_1^2 - x_2$$

and

$$0 = x_1 - \cos\left(\frac{\pi x_2}{2}\right),$$

which is known to have only three solutions: (0, 1), (−1/√2, 3/2) and (−1, 2).

In this case, the following relationships are involved in the factored method:

$$\begin{aligned} y_1 &= x_1^2, & u_1 &= \sqrt{y_1}, \\ y_2 &= x_2, & u_2 &= y_2, \\ y_3 &= x_1, & u_3 &= y_3, \\ y_4 &= \cos\left(\frac{\pi x_2}{2}\right), & u_4 &= \frac{2 \arccos y_4}{\pi}, \end{aligned}$$

$$\begin{pmatrix} -1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{pmatrix} \quad\text{and}\quad F^{-1} = \begin{pmatrix} 2u_1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -\dfrac{\pi \sin(\pi u_4/2)}{2} \end{pmatrix}.$$

When u₁ and u₄ are defined as above, the factored method always converges to (0, 1), irrespective of the starting point x₀. Just as importantly, the number of iterations is only slightly affected by x₀, no matter how arbitrary it is.

If we extend the computed range of u 1 to negative values, by replacing the original definition with

$$u_1 = -\sqrt{y_1},$$

then the factored method always converges to (−1/√2, 3/2), irrespective of the starting point. Finally, when the ranges of both u₁ and u₄ are extended, as follows:

$$u_1 = -\sqrt{y_1}$$

and

$$u_4 = \frac{2}{\pi}\left(2\pi - \arccos y_4\right),$$

the third solution point, (−1, 2), is reached for arbitrary values of x₀.

This kind of controlled convergence cannot be easily implemented in the NR method, characterized by rather complex attraction basins in this particular example. In fact, Newton's method may behave irregularly even when started near the solutions [21].

Interestingly enough, if the fourth combination of ranges is selected

$$u_1 = \sqrt{y_1}$$

and

$$u_4 = \frac{2}{\pi}\left(2\pi - \arccos y_4\right),$$

then the factored procedure converges to the complex point (1.7174+0.2131i, 3.9041+0.7320i), showing that no real solution exists in that region. This issue is discussed in §6.

6. Infeasibility and complex solutions

A nonlinear system h(x)=p with real coefficients will have in general an undetermined number of real and complex solutions. It can be easily shown that complex solutions constitute conjugate pairs if and only if $f(y^*) = [f(y)]^*$, which happens for many functions of practical interest. When no real solutions exist we say that vector p is infeasible.

The basin of attraction associated with each solution (real or complex) is characteristic of the iterative method adopted. As discussed in §2d and illustrated by the previous examples, the basins of attraction associated with the factored method are quite different from those of Newton's algorithm. Depending on which basin of attraction the initial guess x 0 lies in, the factored method will converge to a real or complex solution.

A summary of the possible outcomes of the factored algorithm is as follows:

— Convergence to a real solution: for feasible cases, starting from a real x₀ close enough to a solution, the factored scheme eventually converges to the nearby real solution. This is favoured by the introduction of the first step, aimed at minimizing the distance between successive intermediate points. Note however that, even if both x₀ and the solution are real, depending on the domain of existence of the nonlinear elementary functions f(y), the two-stage procedure may give rise to complex intermediate values x_k during the iterative process. A real solution (or, more precisely, a solution with negligible imaginary component) can also be reached starting from a complex x₀.

— Convergence to a complex solution: for infeasible cases the factored method can only converge to a complex solution, provided the right x₀ is chosen. Note that complex solutions cannot be reached if x₀ is real and the domains of f(y) and F⁻¹ span the entire real axis. In those cases, selecting a complex initial guess is usually helpful to allow the algorithm to reach a complex solution. For feasible cases, if x₀ is real but far from a real solution, the factored iterative procedure might converge to a complex point.

— Inability to converge: as with any iterative algorithm, the factored scheme may numerically break down owing to unbounded values provided by the elemental functions f(⋅) or their inverse f⁻¹(⋅), which is the case, for instance, when an asymptote is approached. It can also remain oscillating, either periodically or chaotically, which typically happens when x₀ is real, no real solution exists and complex intermediate values cannot pop up.

The following examples illustrate the behaviour of the factored scheme as well as the nature of the complex solutions obtained when solving infeasible cases.

Example 6.1

Let us reconsider example 3.2 with p=1.5, which constitutes an infeasible case for which the NR method fails to converge to a meaningful point. The two-stage procedure always converges in a small number of iterations to the same complex value (or its conjugate), as shown in table 5 for the same real starting points as in example 3.2 (the same happens with other real starting points).

However, if we prevent f(y) from taking complex values by restricting y to its real domain, then the two-stage procedure remains oscillating around real values, much like the NR method.

We can use this example to see what happens if p is further increased. Table 6 presents the number of iterations and the solution points for increasing values of p (starting from x₀=0). The maximum value for which there is a feasible (real) solution is p=1.4142 (critical point). For larger p values, the factored approach converges to complex values with the same real component and increasing absolute values. Eventually, for p=4.204 the two-step procedure breaks down, which means that x₀=0 is not in the basin of attraction for this infeasible value of p.

Experimental results show that the complex values to which the factored method converges in infeasible cases are not arbitrary. In this example, evaluating the nonlinear function at the real component of the solution point (0.7854) yields p_r = 1.4142 (point C in figure 2). This lies almost exactly on the maximum of the function (p_M = √2).

In example 3.1, if we set p=−0.2, which is infeasible in the real domain, the factored method converges from nearly arbitrary real starting points to the complex value 0.8090+0.2629i. Taking the real part of the solution yields p_r = −0.1011 (point C in figure 1), which is indeed very close to the minimum of the function x⁴ − x³ (p_m = −0.1055 at x=0.75).

Table 5. Example 6.1: number of iterations and solution points for p=1.5.

x₀      iter.   x
10 8 0.7854−0.3466i
5 5 0.7854+0.3466i
1 8 0.7854−0.3466i
0 5 0.7854+0.3466i
−1 5 0.7854+0.3466i
−5 6 0.7854−0.3466i
−10 5 0.7854−0.3466i

Table 6. Example 6.1: number of iterations and solution points for feasible and infeasible values of p, starting from x₀=0.


p iter. x
1.4 7 0.6435
1.4142 12 0.7810
1.4143 10 0.7854+0.0111i
1.5 5 0.7854+0.3466i
2.5 5 0.7854+1.1711i
3 5 0.7854+1.3843i
4.203 10 0.7854+1.7528i
4.204 fails

Example 6.2

Let us consider the periodic function

$$p = \tan x - \tan\left(x - \frac{\pi}{2}\right),$$

which is infeasible for −2<p<2 and has an infinite number of solutions otherwise (figure 4).

This example is very similar to example 3.2, except for the way the auxiliary variables are defined

$$y_1 = \tan x \quad\text{and}\quad y_2 = \tan\left(x - \frac{\pi}{2}\right).$$

In this case, both the elementary nonlinear functions f(y)

$$u_1 = \arctan y_1 \quad\text{and}\quad u_2 = \frac{\pi}{2} + \arctan y_2$$

and the inverse Jacobian entries

$$F^{-1} = \begin{pmatrix} 1 + \tan^2 u_1 & 0 \\ 0 & 1 + \tan^2\left(u_2 - \dfrac{\pi}{2}\right) \end{pmatrix}$$

are well defined all over the real axis (except for the asymptotes of the tan functions).

Therefore, starting from a real value x₀, the factored procedure (and also the NR scheme) will always remain in the real domain, unlike in example 3.2, which contains the functions arcsin and arccos.

With p=3, starting from x₀=1, the two-stage algorithm converges in five iterations to 1.2059, whereas the solution point 0.3649 is reached, also in five iterations, when x₀=−1.

For the infeasible value p=1.9, the algorithm remains oscillating for any real starting point. However, with the complex starting value x₀=1+i, it converges to the complex solutions shown in table 7 for different infeasible values of p.

Note that the real component of the solution is always the same (0.7854). This value is exactly π/4, a local minimum of the nonlinear function.

Figure 4. Periodic function for example 6.2.

Table 7. Example 6.2: number of iterations and complex solution points starting from x₀=1+i.


p iter. x
1.9 6 0.7854+0.1615i
1.5 4 0.7854+0.3977i
1 4 0.7854+0.6585i

The above examples suggest that, when the two-stage method fails to find a real solution, it tries to reach a complex point whose real component is in the neighbourhood of an extreme point of the function, which is indeed a nice feature. This is surely a consequence of the strategy adopted in step 1, which attempts to minimize the distance from new solution points to the previous ones, but this conjecture deserves a closer examination, which is beyond the scope of this preliminary work.

In any case, it is advisable to adopt complex starting points, particularly when infeasibility is suspected, so that complex solutions can be smoothly reached.

7. Critical points

Feasible (real) and infeasible (complex) solution points can be obtained as long as the Jacobian matrix ($H_k = E F_k^{-1} C$) remains non-singular throughout the iterative process. This applies to both the NR and the factored methods. Even though critical points are theoretically associated with singular Jacobians, in practice they can also be safely approached so long as the condition number of the Jacobian is acceptable for the available computing precision and convergence threshold adopted. However, the convergence rate tends to significantly slow down near critical points.

It is worth noting that, in the surroundings of a critical point, it may be preferable to deal with the augmented equation (2.26) rather than solving the more compact one (2.27), which assumes that μ=0.

Example 7.1

Let us consider again the periodic nonlinear function of example 6.2:

$$p = \tan x - \tan\left(x - \frac{\pi}{2}\right),$$

which has an infinite number of critical points for p=2. Table 8 provides the number of iterations and the solution points reached by both the NR and factored methods for different starting points (as in previous examples, the convergence threshold is ‖Δx‖₁ < 10⁻⁵). As can be seen, the factored method is much more robust against the starting point choice, and always converges to the same critical point (π/4).

Slightly away from the critical point (p=2.1), the number of iterations drops significantly, but the factored procedure still performs much better than the NR method. For the sake of comparison, table 8 also shows the values corresponding to p=2.1.

Table 8. Example 7.1: number of iterations and solution points for p=2 and p=2.1.

x₀      factored (p=2)    NR (p=2)           factored (p=2.1)   NR (p=2.1)
        iter.  x          iter.  x           iter.  x           iter.  x
5       16     0.7854     23     −178.2854   5      0.6305      8      −37.0686
3       15     0.7854     25     101.3164    6      0.6305      13     4.0819
1.5     16     0.7854     19     0.7854      6      0.9403      9      0.9403
−1.5    16     0.7854     20     −2.3562     6      0.6305      12     −2.2013
−3      15     0.7854     18     −2.3562     6      0.9403      8      −2.5111
−5      16     0.7854     17     −5.4978     5      0.9403      6      −5.3429

In addition, numerical problems may arise during the factored solution procedure if, by a numerical coincidence, any element of the diagonal matrix F −1 becomes null or undefined. This risk can be easily circumvented by preventing diagonal elements of F −1 from being smaller than a certain threshold or abnormally large.
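A possible guard, in the spirit of the factored_solve sketch of §2c (the clip_diag helper and its thresholds are illustrative, not taken from the paper):

```python
def clip_diag(d, lo=1e-10, hi=1e10):
    """Bound the magnitude of the diagonal entries of F^{-1} away from
    zero and from overflow, preserving the phase of complex entries."""
    mag = np.abs(d)
    scale = np.clip(mag, lo, hi) / np.where(mag == 0.0, 1.0, mag)
    return np.where(mag == 0.0, lo, d * scale)
```

Inside the iteration, d = clip_diag(F_inv(u_t)) would then replace the raw diagonal.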

8. Large-scale test cases

This section provides test results corresponding to the nonlinear systems arising in the steady-state analysis of electrical power systems, known in the specialized literature as the 'power flow' problem [8]. The results presented herein are representative of many other engineering problems characterized by very sparse equations with network or graph-based structure, in which state variables associated with nodes are linked directly only to their neighbours in the graph [20].

In the power flow problem, the active and reactive power injections specified at each bus i, $P_i^{sp}$ and $Q_i^{sp}$, respectively, are linearly related to the unknown powers flowing through the branches linking i with its neighbours j ∈ i, as follows:

$$P_i^{sp} = \sum_{j \in i} P_{ij} \quad\text{and}\quad Q_i^{sp} = \sum_{j \in i} Q_{ij}. \tag{8.1}$$

In turn, branch power flows are nonlinearly related to the nodal state variables (voltage magnitudes, $V_i$, and phase angles $\theta_i$):

$$\left.\begin{aligned} P_{ij} &= g_{ij} V_i^2 - g_{ij} V_i V_j \cos(\theta_i - \theta_j) - b_{ij} V_i V_j \sin(\theta_i - \theta_j) \\ Q_{ij} &= -(b_{sh} + b_{ij}) V_i^2 + b_{ij} V_i V_j \cos(\theta_i - \theta_j) - g_{ij} V_i V_j \sin(\theta_i - \theta_j), \end{aligned}\right\} \tag{8.2}$$

where $g_{ij}$, $b_{ij}$ and $b_{sh}$ represent branch parameters [8]. For a network with N buses, the phase angle of a given node is arbitrarily taken as a reference (θ_s = 0), while the active power injection at this same node (termed the slack bus) is not specified, to account for unknown network losses. Substituting (8.2) into (8.1) therefore leads to a set of n = 2N − 1 nonlinear equations in the same number of unknowns, which are customarily solved by means of the NR method. In practice, the voltage magnitude rather than the reactive power is specified at generator buses, accordingly reducing the size of the equation system. Using the so-called flat-start profile ($V_i = 1$, $\theta_i = 0$), the nonlinear power flow equations are typically solved in three to five iterations by Newton's method, depending on the network loading level and ill-conditioning.

As explained in [19], the natural choice for the intermediate vector y in this particular application is

$$y = \{U_i, K_{ij}, L_{ij}\}, \tag{8.3}$$

where $U_i = V_i^2$ for every node i and, for each branch connecting nodes i and j, the following pair of variables is adopted:

$$K_{ij} = V_i V_j \cos(\theta_i - \theta_j) \tag{8.4}$$

and

$$L_{ij} = V_i V_j \sin(\theta_i - \theta_j). \tag{8.5}$$

Vector y then comprises m=2b+N variables (b being the number of branches). It can be easily checked that, when expressed in terms of y, the power flow equations become linear (the reader is referred to [19] for the details).
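For instance, in terms of y the branch flows (8.2) become P_ij = g_ij U_i − g_ij K_ij − b_ij L_ij and Q_ij = −(b_sh + b_ij) U_i + b_ij K_ij − g_ij L_ij. A minimal sketch of how the corresponding rows of E could be assembled follows (the index-mapping helpers idx_U, idx_K and idx_L are hypothetical, not taken from [19]):

```python
import numpy as np

def branch_flow_rows(i, j, g, b, b_sh, idx_U, idx_K, idx_L, m):
    """Rows of E expressing P_ij and Q_ij as linear combinations of
    y = {U_i, K_ij, L_ij}; coefficients follow directly from (8.2)."""
    p_row, q_row = np.zeros(m), np.zeros(m)
    p_row[idx_U[i]]    = g            #  g_ij * U_i
    p_row[idx_K[i, j]] = -g           # -g_ij * K_ij
    p_row[idx_L[i, j]] = -b           # -b_ij * L_ij
    q_row[idx_U[i]]    = -(b_sh + b)  # -(b_sh + b_ij) * U_i
    q_row[idx_K[i, j]] = b            #  b_ij * K_ij
    q_row[idx_L[i, j]] = -g           # -g_ij * L_ij
    return p_row, q_row
```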

Table 9 compares the number of iterations and solution times (in milliseconds) of the proposed factored procedure and of Newton's method, when applied to several benchmark systems ranging from 30 to over 3000 nodes (i.e. some 6000 nonlinear equations), available at http://www.pserc.cornell.edu/matpower/. The convergence threshold adopted is ‖Δp_k‖ < 0.001.

Table 9. Number of iterations and solution times (milliseconds) for the factored and NR methods.

N       factored          NR
        iter.  time       iter.  time
30      2      1.36       3      1.97
39      3      1.87       4      2.45
57      3      2.39       3      2.56
300     3      8.4        4      10.01
2383    3      64.21      4      73.54
2737    4      102.9      fails
3012    4      99.15      fails
3120    4      102.02     5      126.16

Solution times have been obtained on an Intel Core i7 processor (2.8 GHz, 8 GB of RAM) by running a prototype Matlab implementation. Every effort has been made to optimize the code in both cases, by exploiting available sparsity features and avoiding duplicated operations (e.g. the Jacobian $H_k$ in Newton's method is obtained almost free in this particular application, as a by-product of the computations involved in the mismatch vector $\Delta p_k$).

As can be seen, the factored method always converges and takes one iteration less than NR in most cases. In scenarios representing highly loaded conditions (i.e. more nonlinearity), where the customary flat start is not so close to the solution point, the factored algorithm tends to save two or more iterations with respect to the NR method [19].

Further insights into the local convergence rate of the factored method can be gained from figure 5, where the evolution of ‖Δp_k‖ when solving the 300-bus system from flat start is represented, both for Newton's and the factored method.

Figure 5. Convergence rates of NR (dotted line with circles) and factored (solid line with squares) methods for the 300-bus system.

9. Conclusion

In this paper, a new iterative procedure to reliably solve nonlinear systems of equations is presented and illustrated with several examples. The basic idea is to unfold the original system with the help of auxiliary variables so that elementary nonlinearities with explicit inverse can be individually handled. At each iteration, the algorithm first solves a trivial least-distance problem; then a Newton-like step is computed. For realistic problems, the overall computational effort required by the two steps should be of the same order or even lower than that of the standard NR scheme, provided suitable sparsity techniques are adopted.

Practical experience so far, including large sets of nonlinear equations arising in power systems problems, shows that the proposed method clearly outperforms the NR method near the solution point, while its behaviour far away from typical NR basins of attraction is less erratic and more controllable. Future efforts should be directed to more thoroughly investigating its behaviour in terms of global convergence and capability to deal with infeasible cases.

Applying the factored scheme to the development of globally convergent and/or higher order iterative methods, as well as to other related problems, such as nonlinear programming and the solution of nonlinear ordinary differential equations, also seems to be a promising avenue of research.

Acknowledgements

The author acknowledges Walter Vargas for providing the numerical results in table 9 and Catalina Gómez-Quiles for figure 5.

Footnotes

1 Mathematicians usually write nonlinear equations as h(x)=0, the problem being that of finding the roots of the nonlinear function. In engineering and other disciplines dealing with physical systems, however, it is usually preferable to explicitly use a vector of specified parameters, p, since the evolution of x (output) as a function of p (input) is of most interest. When no real solutions exist we say that p is infeasible.

2 When using Matlab, the function nthroot should be used to obtain real qth roots of negative numbers, with q odd. Alternatively, the use of the equivalent fractional power will provide complex roots, if needed.

© 2014 The Author(s) Published by the Royal Society. All rights reserved.

References

  • 1 Moré JJ. 1990 A collection of nonlinear model problems. In Computational solution of nonlinear systems of equations (eds Allgower EL, Georg K), Lectures in Applied Mathematics, vol. 26, pp. 723–762. Providence, RI: American Mathematical Society.
  • 2 Beers K. 2006 Numerical methods for chemical engineering: applications in MATLAB. New York, NY: Cambridge University Press.
  • 3 Judd KL. 1998 Numerical methods in economics. New York, NY: MIT Press.
  • 4 Parker TS, Chua LO. 1987 Chaos: a tutorial for engineers. Proc. IEEE 75, 981–1008. (doi:10.1109/PROC.1987.13845)
  • 5 Rice J. 1993 Numerical methods, software, and analysis. New York, NY: Academic Press.
  • 6 Ortega JM, Rheinboldt WC. 1970 Iterative solution of nonlinear equations in several variables. New York, NY: Academic Press (also published by SIAM, Philadelphia, PA, 2000).
  • 7 Tinney WF, Walker JW. 1967 Direct solutions of sparse network equations by optimally ordered triangular factorization. Proc. IEEE 55, 1801–1809. (doi:10.1109/PROC.1967.6011)
  • 8 Stott B. 1974 Review of load flow calculation methods. Proc. IEEE 62, 916–929. (doi:10.1109/PROC.1974.9544)
  • 9 Barnes JGP. 1965 An algorithm for solving non-linear equations based on the secant method. Comput. J. 8, 66–72. (doi:10.1093/comjnl/8.1.66)
  • 10 Broyden CG. 1965 A class of methods for solving nonlinear simultaneous equations. Math. Comput. 19, 577–593. (doi:10.1090/S0025-5718-1965-0198670-6)
  • 11 Kelley CT. 1995 Iterative methods for linear and nonlinear equations. Philadelphia, PA: SIAM.
  • 12 Bari Kolata G. 1979 Continuation methods: new ways to solve equations. Science 204, 488–489.
  • 13 Chao KS, Saeks R. 1977 Continuation methods in circuit analysis. Proc. IEEE 65, 1187–1194. (doi:10.1109/PROC.1977.10664)
  • 14 Adomian G. 1989 Nonlinear stochastic systems and applications to physics. Mathematics and its Applications. Dordrecht, The Netherlands: Kluwer Academic Publishers.
  • 15 Deuflhard P. 2004 Newton methods for nonlinear problems. Heidelberg, Germany: Springer.
  • 16 Kelley CT. 2003 Solving nonlinear equations with Newton's method. Fundamentals of Algorithms. Philadelphia, PA: SIAM.
  • 17 Gómez-Expósito A, Abur A, de la Villa Jaén A, Gómez-Quiles C. 2011 A multilevel state estimation paradigm for smart grids. Proc. IEEE 99, 952–976. (doi:10.1109/JPROC.2011.2107490)
  • 18 Gómez-Quiles C, de la Villa Jaén A, Gil H, Gómez-Expósito A. 2013 Equality-constrained bilinear state estimation. IEEE Trans. Power Syst. 28, 902–910. (doi:10.1109/TPWRS.2012.2215058)
  • 19 Gómez-Expósito A, Gómez-Quiles C. 2013 Factorized load flow. IEEE Trans. Power Syst. 28, 4607–4614. (doi:10.1109/TPWRS.2013.2265298)
  • 20 Sandberg IW, Willson AN. 1972 Existence and uniqueness of solutions for the equations of nonlinear DC networks. SIAM J. Appl. Math. 22, 173–186. (doi:10.1137/0122020)
  • 21
  • 22 Branin FH. 1972 Widely convergent method for finding multiple solutions of simultaneous nonlinear equations. IBM J. Res. Dev. 16, 504–522. (doi:10.1147/rd.165.0504)
  • 23 Boggs P. 1971 The solution of nonlinear systems of equations by A-stable integration techniques. SIAM J. Numer. Anal. 8, 767–785. (doi:10.1137/0708071)


Source: https://royalsocietypublishing.org/doi/10.1098/rspa.2014.0236