# Nonlinearity

In mathematics, a nonlinear system is one whose behavior cannot be expressed as a sum of the behaviors of its parts. In particular, the behavior of nonlinear systems is not subject to the principle of superposition, as that of linear systems is. Crudely, a nonlinear system is one whose behavior is not simply the sum of its parts or their multiples.

Linearity of a system allows investigators to make certain mathematical assumptions and approximations, allowing for easier computation of results. In nonlinear systems these assumptions cannot be made. Since nonlinear systems are not equal to the sum of their parts, they are often difficult (or impossible) to model, and their behavior with respect to a given variable (for example, time) is extremely difficult to predict. When modeling nonlinear systems, therefore, it is common to approximate them as linear, where possible.

Some nonlinear systems are exactly solvable or integrable, while others are known to be chaotic and so admit no simple or closed-form solution. One striking nonlinear effect is the occurrence of freak waves. While some nonlinear systems and equations of general interest have been studied extensively, the general theory remains poorly understood.

## Background

### Linear systems

In mathematics, a linear function $f(x)$ is one which satisfies both of the following properties:

1. Additivity: $f(x + y) = f(x) + f(y)$
2. Homogeneity: $f(\alpha\,x) = \alpha\,f(x)$

These two rules, taken together, are often referred to as the principle of superposition. (In fact, additivity implies homogeneity for every rational α, so for a continuous function homogeneity follows from additivity and need not be assumed separately.) Important examples of linear operators include the derivative considered as a differential operator, and operators constructed from it, such as del and the Laplacian. When an equation can be expressed in linear form, it becomes particularly easy to solve, because it can be broken down into smaller pieces that may be solved individually.

Examples of linear operators are matrices or linear combinations of powers of partial derivatives e.g.

$L=d_x^2 + d_y^2$, where x and y are real variables.

A map F(u) is a generalization of a linear operator. Equations involving maps include linear equations, nonlinear equations, and nonlinear systems (the last term is a holdover from 'systems' of matrix equations; a nonlinear equation may be scalar-valued or matrix-valued). Examples of maps are

• $F(x)=x^2$, where x is a real number;
• $F(u)=-d_x^2 u + g(u)$, where u is a function u(x) and x is a real number and g is a function;
• $F(u,v)=(u+v, u^2)$, where u, v are functions or numbers.
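The two superposition properties can be checked numerically. The sketch below (the helper name `is_linear`, the test points, and the tolerance are choices made here, not notation from the text) compares a linear map with the map $F(x)=x^2$ above:

```python
def is_linear(f, x, y, alpha, tol=1e-9):
    """Test additivity f(x+y) = f(x)+f(y) and homogeneity f(a*x) = a*f(x)."""
    additive = abs(f(x + y) - (f(x) + f(y))) < tol
    homogeneous = abs(f(alpha * x) - alpha * f(x)) < tol
    return additive and homogeneous

linear_map = lambda x: 3.0 * x   # L(x) = 3x, a linear operator on the reals
square_map = lambda x: x ** 2    # F(x) = x^2, the nonlinear map above

print(is_linear(linear_map, 2.0, 5.0, 4.0))   # True
print(is_linear(square_map, 2.0, 5.0, 4.0))   # False: (x+y)^2 != x^2 + y^2
```

Passing at a few sample points does not prove linearity, of course; a single failing point, as for $F(x)=x^2$, does prove nonlinearity.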

### Nonlinear systems

Nonlinear equations and functions are of interest to physicists and mathematicians because most physical systems are inherently nonlinear; truly linear physical systems are relatively rare. Nonlinear equations are difficult to solve and give rise to interesting phenomena such as chaos. A linear equation can be described using a linear operator, L. A linear equation in some unknown u has the form

$Lu=0$.

A nonlinear equation is an equation of the form $F(u)=0$ for some unknown u, where F is a nonlinear map.

In order to solve any equation, one needs to decide in what mathematical space the solution u is found. It might be that u is a real number, a vector or perhaps a function with some properties.

The solutions of linear equations can in general be described as a superposition of other solutions of the same equation. This makes linear equations particularly easy to solve.

Nonlinear equations are more complex, and much harder to understand because of their lack of simple superposed solutions. For nonlinear equations the solutions to the equations do not in general form a vector space and cannot (in general) be superposed (added together) to produce new solutions. This makes solving the equations much harder than in linear systems.
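The contrast can be checked numerically (a sketch; the step size and the particular solutions $-1/x$ and $-1/(x-1)$ are choices made here). For the linear equation $u'' + u = 0$, any combination of $\sin$ and $\cos$ is again a solution, while for the nonlinear equation $u' = u^2$ the sum of two solutions is not a solution:

```python
import math

h = 1e-4  # finite-difference step

def second_deriv(u, x):
    """Central-difference approximation of u''(x)."""
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2

def first_deriv(u, x):
    """Central-difference approximation of u'(x)."""
    return (u(x + h) - u(x - h)) / (2 * h)

# Linear case: sin and cos solve u'' + u = 0, and so does any combination.
combo = lambda x: 2.0 * math.sin(x) + 3.0 * math.cos(x)
residual_linear = second_deriv(combo, 1.0) + combo(1.0)
print(abs(residual_linear) < 1e-4)    # True: superposition holds

# Nonlinear case: u1 = -1/x and u2 = -1/(x-1) both solve u' = u^2,
# but their sum does not.
u1 = lambda x: -1.0 / x
u2 = lambda x: -1.0 / (x - 1.0)
s = lambda x: u1(x) + u2(x)
residual_nonlinear = first_deriv(s, 2.0) - s(2.0) ** 2
print(abs(residual_nonlinear) > 0.1)  # True: the sum is not a solution
```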

## Specific nonlinear equations

Some nonlinear equations are well understood, for example

$x^2 - 1 =0$

and other polynomial equations. Systems of nonlinear polynomial equations, however, are more complex. Similarly, first-order nonlinear ordinary differential equations such as

$d_x u = u^2$

are easily solved (in this case, by separation of variables). Higher order differential equations like

$d_x^2 u + g(u)=0$ , where g is any nonlinear function,

can be much more challenging. For partial differential equations even less is known in general, although a number of results on the existence, stability, and dynamics of solutions have been proven.
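For instance, separation of variables applied to the first-order equation above gives the family of solutions $u(x) = 1/(C - x)$, with $C$ fixed by the initial condition. A quick numerical check (a sketch; the constant $C = 3$ and the evaluation point are arbitrary choices):

```python
# u' = u^2  =>  du/u^2 = dx  =>  -1/u = x - C  =>  u(x) = 1/(C - x)
C = 3.0
u = lambda x: 1.0 / (C - x)

h = 1e-6
x0 = 1.0
lhs = (u(x0 + h) - u(x0 - h)) / (2 * h)  # du/dx by central difference
rhs = u(x0) ** 2
print(abs(lhs - rhs) < 1e-6)             # True: the residual is tiny
```

Note the solution blows up as $x \to C$, a typically nonlinear phenomenon: the solution exists only on a finite interval even though the equation itself looks harmless.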

The differential equation of motion of a simple pendulum is nonlinear:

${d^2\theta\over dt^2}+{g\over \ell} \sin\theta=0$

Typically this is linearized by assuming small values of $\theta$, so that $\sin\theta \approx \theta$ and

${d^2\theta\over dt^2}+{g\over \ell} \theta=0$

For large values of $\theta$, or when the nonlinear behavior of the pendulum is itself of interest, the nonlinear equation may be analyzed by phase plane methods.
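The effect of the linearization can also be seen numerically. The sketch below integrates the nonlinear pendulum equation with a standard fourth-order Runge-Kutta step (taking $g/\ell = 1$, an illustrative choice) and measures the time for the pendulum, released from rest at $\theta_0$, to first reach $\theta = 0$, i.e. a quarter period. The small-angle (linear) prediction is $\frac{\pi}{2}\sqrt{\ell/g}$ regardless of amplitude:

```python
import math

def deriv(theta, omega):
    """Right-hand side of theta' = omega, omega' = -sin(theta) (g/l = 1)."""
    return omega, -math.sin(theta)

def quarter_period(theta0, dt=1e-4):
    """Time for theta to first reach zero, starting from rest at theta0."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0.0:
        # Classical RK4 step for the two-variable system.
        k1t, k1w = deriv(theta, omega)
        k2t, k2w = deriv(theta + 0.5 * dt * k1t, omega + 0.5 * dt * k1w)
        k3t, k3w = deriv(theta + 0.5 * dt * k2t, omega + 0.5 * dt * k2w)
        k4t, k4w = deriv(theta + dt * k3t, omega + dt * k3w)
        theta += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6.0
        omega += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0
        t += dt
    return t

print(quarter_period(0.1))  # close to pi/2: linearization is accurate
print(quarter_period(2.0))  # noticeably longer: amplitude matters
```

The growing gap between the two quarter periods is exactly the amplitude dependence that the linearized equation throws away.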

## Tools for solving certain nonlinear systems

Today there are several tools for analyzing nonlinear equations, among them the implicit function theorem, the contraction mapping principle, and bifurcation theory.

Perturbation techniques can be used to find approximate solutions to nonlinear differential equations.
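As a toy illustration of the perturbative idea (an algebraic example chosen here for brevity, not one from the article), the root of $x^2 + \epsilon x - 1 = 0$ near $x = 1$ can be expanded in powers of the small parameter $\epsilon$. Substituting $x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \dots$ and matching powers of $\epsilon$ gives $x_0 = 1$, $x_1 = -1/2$, $x_2 = 1/8$ for the positive root:

```python
import math

def perturbative_root(eps):
    """Three-term perturbation expansion of the positive root."""
    return 1.0 - eps / 2.0 + eps**2 / 8.0

def exact_root(eps):
    """Exact positive root of x^2 + eps*x - 1 = 0 (quadratic formula)."""
    return (-eps + math.sqrt(eps**2 + 4.0)) / 2.0

eps = 0.1
print(abs(perturbative_root(eps) - exact_root(eps)))  # small, O(eps^4)
```

The same match-powers-of-$\epsilon$ strategy carries over to nonlinear differential equations, where the exact solution is usually unavailable and the expansion is often the only analytic handle.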
