EE 418/518 Nonlinear Dynamics & Chaos
Lecture 03: Map Definition, Properties, Fixed Points & Stability


Much of this section’s notes were made while reading through:

Alligood, K. T., Sauer, T. D., Yorke, J. A., & Chillingworth, D. (1998). Chaos: an introduction to dynamical systems. SIAM Review, 40(3), 732-732.


Primary goal of science: Predict how a physical system will evolve in time through the development of models that lead to theories that lead to our understanding of our universe.

Models suggest how real-world processes behave.

Extremely simple models are presented here, but all models of physical processes are idealizations with inherent inaccuracies.

“All models are wrong, but some are useful.” -George E. P. Box

Hopefully, a model can capture important features of the physical processes we study.

The feature we want to capture now is the patterns of points on an orbit.

Even for exceedingly simple models, patterns can be simple, complicated, or even chaotic (very complicated).

Always ask, “Does our model exhibit behavior because of our simplifications or despite our simplifications?”

Modeling reality too closely leads to an intractable model that is difficult to learn from.

Building and using models is an art that takes experience.

Pretend that I have planted kudzu in my yard. I notice that the length of the kudzu vine has doubled every week.

Consider the discrete mapping $x[k+1] = 2x[k]$.

This equation is written as a difference equation. It can be written in terms of either advances or delays with respect to the discrete time variable $k$.

Advance operators: $x[k+1] = 2x[k]$

Delay operators: $x[k] = 2x[k-1]$

The variable $x$ could represent many physical quantities at discrete time steps, e.g. electrical voltage samples at a predefined sample period, the number of bacteria in a petri dish per day, the population of rabbits per season, company stock reporting per quarter, etc.

A dynamical system relates a set of possible states with a rule that determines the present state in terms of past states.

The rule is restricted to be deterministic:

  • no randomness/stochasticity
  • present state is uniquely produced from the past states (i.e. a previous state doesn’t map to two present states)

Is a coin flip deterministic? Why or why not?

Keller, J. B. (1986). The probability of heads. The American Mathematical Monthly, 93(3), 191-197.

Two major types of dynamical systems are emphasized in this course:

  1. Discrete-time: To be formally defined as maps (also called recurrence relations, difference equations, transformations, iterative mappings, functions)
  2. Continuous-time: To be formally defined as flows at a later time.

I. MAP DEFINITION

Recall that I have planted kudzu in my yard. I notice that the length of the kudzu vine has doubled every week.

$x[k] = f(x[k-1]) = 2x[k-1]$ with $x[0] = 1$

This dynamical system evolves in time by the composition of the function $f$.

$x[2] = 2x[1] = 2(2x[0]) = f(f(x_0)) = f^2(x_0)= (f \circ f)(x_0) = 4$
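This composition is easy to check numerically. The sketch below (the helper `iterate` is our own construction, not from any library) applies the doubling map $f(x)=2x$ repeatedly:

```python
def f(x):
    """The doubling map f(x) = 2x."""
    return 2 * x

def iterate(f, x0, n):
    """Compute the n-th composition f^n(x0) by applying f n times."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

print(iterate(f, 1, 2))  # f^2(1) = f(f(1)) = 4, matching x[2] above
print(iterate(f, 1, 3))  # f^3(1) = 8
```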

This is like an iterated, calculator game where the same function $f$ is applied iteratively to each result again and again.


Strogatz Pedantic Point: When we say “map,” do we mean the function $f$ or the difference equation $x_{k+1}=f(x_k)$? Following common usage, we’ll call both of them maps. If you’re disturbed by this, you must be a pure mathematician … or should consider becoming one!

From Chapter 10 of Strogatz, S. H. (2018). Nonlinear dynamics and chaos: With applications to physics, biology, chemistry, and engineering. CRC press.


Note that $f^k(x)$ refers to the $k^\text{th}$ composition of the function $f$. It does not mean raised to the $k$.

A map is a function whose input space (domain) and output space (range) are the same ‘size/dimensionality.’

Example: $x[k] = 2x[k-1]$ maps the real line $\mathbb{R}$ onto itself by the map $f(x)=2x$.

If $x$ is a point and $f$ is a map, then an orbit (also called a trajectory) is the set of points $\{x,f(x),f^2(x),\dots,f^k(x),\dots\}$ where the starting point of the orbit is the initial condition $x[0]=x_0$. Note that the subscript is often used to communicate a time iterate.

A fundamental dynamical structure is a fixed point, $p$ (sometimes written as $x^*$), which occurs if $f(p)= p$. This is the simplest pattern of our dynamical system. Under this condition the time series will stabilize at point $p$ and never fluctuate, due to our prior requirements of determinism and uniqueness. In a sense, fixed points are the points where the dynamics of the system stop.

Notice that $x[k] = x[k-1]$ for $x[0] = x_0 = 0$ for our simple doubling example. That is $x[1] = 2x[0]=2(0)=0$ for the initial condition $x_0 = 0$. So, this system has a fixed point $p=0$. This means that if the map ever produces an output of $x[k] = 0$, all the following points on the map will be identically zero.

A more formal treatment of these ideas and definitions can be found in

Nayfeh, A. H., & Balachandran, B. (2008). Applied nonlinear dynamics: analytical, computational, and experimental methods. John Wiley & Sons.

Generally, $x$ may be a finite-dimensional vector that represents the state of the system $x_k$ at a discrete time $t_k$ where $k \in \mathbb{Z}$ (i.e. $k=0,\pm 1, \pm 2, \dots$)

$\underline{\mathbf{x}}_{k+1}= f(\underline{\mathbf{x}}_k)$

$\begin{bmatrix} x^1_{k+1}\newline x^2_{k+1}\newline \vdots \newline x^n_{k+1}\newline \end{bmatrix} = f\Bigg(\begin{bmatrix}x^1_{k}\newline x^2_{k}\newline \vdots \newline x^n_{k}\newline \end{bmatrix}\Bigg)$

The dimensionality of the system is $n$ where $n$ real numbers are needed to specify a state of the system. For example, the initial condition for a map in $\mathbb{R}^3$ (i.e. $n=3$ or a three dimensional system) would require $3$ values: $x^1_0$, $x^2_0$, $x^3_0$. Note that the superscripts indicate the index of the state dimension and not exponentiation. For example squaring the $1^\text{st}$ scalar component of $\underline{\mathbf{x}}$ would be written as $(x^1)^2$.
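As a sketch of this vector notation, the hypothetical two-dimensional map below (the coefficients are illustrative and not taken from the text) updates each scalar component of the state from the previous state:

```python
import numpy as np

def f(x):
    """A hypothetical map on R^2: each output component depends on the previous state."""
    x1, x2 = x
    return np.array([0.5 * x1 + x2, -0.25 * x1])

# The initial condition (x^1_0, x^2_0) requires n = 2 values
x = np.array([1.0, 0.0])
for k in range(3):
    x = f(x)  # x_{k+1} = f(x_k)
print(x)
```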

The state vector is in the $n^\text{th}$ dimensional Euclidean space

  • $\underline{\mathbf{x}} \in \mathbb{R}^n$
  • $\underline{\mathbf{x}}$ ‘belongs to’ the $n^\text{th}$ dimensional real numbers

The time vector is on the real number line

  • $t \in \mathbb{R}$

The time index $k$ belongs to the integers

  • $k \in \mathbb{Z}$

The distance measure (think of this like a ruler) known as the Euclidean norm is

$||\underline{\mathbf{x}}|| = \sqrt{(x^1)^2+(x^2)^2+\dots+(x^n)^2}$ where $x^i$ is the scalar component of $\underline{\mathbf{x}}$ and $(\cdot)^2$ indicates the squaring of any terms inside the brackets.
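A quick numerical check of this definition (the example vector is arbitrary; NumPy's built-in norm is used only for comparison):

```python
import numpy as np

x = np.array([3.0, 4.0, 12.0])  # an example vector in R^3
# Sum of squared scalar components, then the square root
manual = np.sqrt(sum(c**2 for c in x))
# NumPy's built-in Euclidean norm for comparison
library = np.linalg.norm(x)
print(manual, library)  # 13.0 13.0
```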

Note that some dynamical systems are more naturally studied in spaces that are not Euclidean. For example, sometimes angular state variables are of interest in spaces like spherical, cylindrical, toroidal, etc.

A map is a time evolution operator that transforms the current state of the system to a subsequent state. Maps are often referred to as evolution operators, mappings, transformations, and functions interchangeably; however, some contexts require careful rigor when referring to functions. Recall that here:

A map is a function whose input space (domain) and output space (range) are the same ‘size/dimensionality.’ Here set and space will be used interchangeably. Formally linking this definition to our kudzu example, the map $f$ transforms points in an input region $M$ to an output region $N$.

  • $f$ maps $M$ to $N$ where $M$ is a subset/included in $n^\text{th}$ dimensional Euclidean space and $N$ is a subset/included in $n^\text{th}$ dimensional Euclidean space
  • This can be written as $f:M\rightarrow N$ where $M \subseteq \mathbb{R}^n$ and $N \subseteq \mathbb{R}^n$
  • The map $f$ transforms input values to output values each time the function is applied
  • The colon $:$ is shorthand for a mapping and the arrow $\rightarrow$ is shorthand for to
  • The input values form a set $M$.
  • The output values form a set $N$.
  • Maps can also be written with control parameters indicated in their definitions like $x_{k+1}=f(x_k:a)$ where the parameter $a$ is a control parameter that is meant to be varied or considered over a range of values.

II. MAP PROPERTIES

The properties of maps dictate how we may use them. To name a few examples, we are interested in whether a map is onto, one-to-one, bijective/invertible, differentiable, continuous, homeomorphic, and diffeomorphic.

Consider $f:M\rightarrow N$

  • Onto: All the elements in the function’s output space are used. Does the output space have elements that are left over or restricted by the map? The function $f$ maps $M$ onto $N$ if for every point $p$ included in $N$ there exists at least one point $q$ included in $M$ that is mapped to $p$ by $f$. The equivalent statement is $\forall \text{ } p \in N \text{, } \exists \text{ } q \in M \text{ such that } f(q)=p$.
    • A map that is onto is also called surjective and involves the domain (possible inputs) mapping to the range (possible outputs) such that the range occupies the entire codomain. Recall that the codomain is the set of all possible output values (I call this our output space) and the range is the set of output values allowed by the function $f$.
    • In short, the entire output space should be accessible by the map for $f$ to exhibit the onto property.
    • An onto map is $x_k = 2 x_{k-1}$, i.e. $f(x)=2x$ with $f:M\rightarrow N$ where $M = \mathbb{R}$ and $N = \mathbb{R}$.
    • An example of a map that is not onto is $x_k = |2x_{k-1}|$, i.e. $f(x)=|2x|$ with $f:M\rightarrow N$ where $M = \mathbb{R}$ and $N = \mathbb{R}$. This example is not onto because the input space contains positive and negative reals, but the map produces only nonnegative reals. Note that if we restrict the input and output spaces to $M = \mathbb{R}^+$ and $N = \mathbb{R}^+$, then the resulting map would have the onto property.
  • One-to-one: The horizontal line test. Does every point in the input space map uniquely to the output space? The map $f$ is called one-to-one if no two (or more) points in the input space $M$ map to the same point in the output space $N$.
    • A map that is one-to-one is also called injective.
    • A one-to-one map is $x_k = 2x_{k-1}$, i.e. $f(x)=2x$ with $f:M\rightarrow N$ where $M = \mathbb{R}$ and $N = \mathbb{R}$.
    • An example of a map that is not one-to-one is $x_k = |2x_{k-1}|$, i.e. $f(x)=|2x|$ with $f:M\rightarrow N$ where $M = \mathbb{R}$ and $N = \mathbb{R}$. This example is not one-to-one because both $\pm 1 \in M$ map to the same value of $+2 \in N$. Note that if we restrict the input and output spaces to $M = \mathbb{R}^+$ and $N = \mathbb{R}^+$, then the resulting map would have the one-to-one property.
  • Invertible: Do map outputs pass the horizontal line test AND occupy the entire output space? Can values of the map be iterated backward in time? If a map $f$ is one-to-one and onto, it is invertible. For an invertible map, $x_{k-1}$ may be found via the knowledge of $x_k$. This means that the time evolution of the map may be traced backwards in time to gain knowledge of previous states as long as a current state is given. This is achieved by applying the inverse, $f^{-1}$, of the map $f$. Note that in this case, the inverse $f^{-1}$ is also one-to-one and onto. A map that is not invertible is called noninvertible.
    • A map that is invertible is also called bijective.
    • For map $f:M\rightarrow N$, the inverse satisfies $f^{-1}:N\rightarrow M$. In other words, the map $f$ takes a point from the input space and maps it to a unique point in the output space. The inverse, $f^{-1}$, takes a point from the output space and maps it back to a unique point in the input space.
    • For $f(x)=2x-1$, the inverse is $f^{-1}(x)=\frac{x+1}{2}$.
    • An invertible map is $x_k = 2x_{k-1}$, i.e. $f(x)=2x$ with $f:M\rightarrow N$ where $M = \mathbb{R}$ and $N = \mathbb{R}$, because $f^{-1}(x)=\frac{x}{2}$.
    • An example of a map that is not invertible is $x_k = |2x_{k-1}|$, i.e. $f(x)=|2x|$ with $f:M\rightarrow N$ where $M = \mathbb{R}$ and $N = \mathbb{R}$. This example is not invertible because it is not one-to-one: both $\pm 1 \in M$ map to the same value of $+2 \in N$. Note that if we restrict the input and output spaces to $M = \mathbb{R}^+$ and $N = \mathbb{R}^+$, then the resulting map would be invertible.
  • Continuous: Does the function have any discontinuities? There are several definitions of continuity for functions. Recall in terms of limits that a function is continuous at a point $p$ of its domain $M$ if the limit of $f(x)$, as $x$ approaches $p$ through the domain $M$, exists and is equal to $f(p)$. That is to say, $f(p)$ exists, $\lim_{x \to p} f(x)$ exists, and $\lim_{x \to p} f(x)=f(p)$. Other useful definitions of continuity may arise, such as the definition in terms of neighborhoods (topology), the definition in terms of sequences, the definition from Weierstrass and Jordan, and others.
  • Differentiable: As we zoom in, does the function look like a straight line? Can I differentiate the function? If so, how many times? If the scalar components of map $f$ can be differentiated $r$ times with respect to the scalar components of $\underline{\mathbf{x}}$, then $f$ is called a $\mathcal{C}^r$ function. Structures that are not differentiable include functions at points with breaks, corners, cusps, slopes that are vertical lines ($\sqrt[3]{x}$ at $x=0$), etc.
    • If each scalar component of $f$ is continuous with respect to the scalar components, then $f$ is a $\mathcal{C}^0$ function.
    • For $r \geq 1$, the map is called a differentiable map.
    • For example, $f(x) = |x|$ is continuous everywhere, so it is $\mathcal{C}^0$, but it is not differentiable at $x=0$, so it is not $\mathcal{C}^1$.
      • If you zoom in on the function, it does not make a straight line at zero
      • $\frac{d}{dx}|x|\Big|_{x=0}=\lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}{h}=\lim_{h\rightarrow 0}\frac{|0+h|-|0|}{h}= \lim_{h\rightarrow 0}\frac{|h|}{h}$, which does not exist because
      • $\lim_{h\rightarrow 0^+}\frac{|h|}{h}=+1\neq \lim_{h\rightarrow 0^-}\frac{|h|}{h}=-1$
    • Think about polynomials as other examples…
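These properties can be checked numerically for the running examples. The sketch below (helper names are our own) shows that $f(x)=|2x|$ fails the one-to-one test while $f(x)=2x-1$ is undone exactly by its inverse $f^{-1}(x)=\frac{x+1}{2}$:

```python
def f_abs(x):
    """Not one-to-one on the reals: |2x| sends x and -x to the same output."""
    return abs(2 * x)

def f_lin(x):
    """An invertible (one-to-one and onto) map on the reals."""
    return 2 * x - 1

def f_lin_inv(x):
    """Inverse of f_lin: solves y = 2x - 1 for x."""
    return (x + 1) / 2

print(f_abs(1.0), f_abs(-1.0))  # 2.0 2.0 -> two inputs, one output
print(f_lin_inv(f_lin(0.7)))    # recovers 0.7 (up to rounding)
```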

“Don’t use a five-dollar word when a fifty-cent word will do.”- Mark Twain


  • Isomorphic: Can we describe mathematical structures between two objects with some relation? An isomorphism occurs when a mapping preserves the mathematical structure between two ‘objects.’ For example, this means that two different functions may be topologically equivalent in some regard even though they seem numerically/computationally different. Two seemingly different functions may be shown to be isomorphically connected (topologically conjugate) through some mapping. Generally, two mathematical structures are isomorphic if an isomorphism exists between them. Another important example connects differential equations to difference equations, as shown in Lorenz’s famous paper observing chaotic systems: Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2), 130-141.

  • Homeomorphic: A map $f$ is homeomorphic (or forms a homeomorphism) if $f$ is invertible and both $f$ and $f^{-1}$ are continuous (i.e. $\mathcal{C}^0$). The term homeomorphism was coined by Henri Poincaré from Greek roots meaning ‘similar shape.’ Homeomorphisms are isomorphisms that preserve all the topological properties of a given space. Two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are equivalent. For example, if a homeomorphism is found between a flow and a map, then the map can be studied to infer properties of the flow. This is desirable because maps are often much easier to analyze. This property is also important when a time series is observed without knowledge of other state variables. For instance, a derivative or further state variables may be reconstructed through time delay embedding.

    • $f$ is invertible (aka a bijection, one-to-one, and onto)
    • $f$ is continuous ($\mathcal{C}^0$)
    • $f^{-1}$ is continuous, sometimes referred to as an open mapping
  • Diffeomorphic: A map $f$ is diffeomorphic (or forms a diffeomorphism) if both $f$ and $f^{-1}$ are $\mathcal{C}^r$ with $r \geq 1$. This is like a homeomorphism except the function $f$ and its inverse, $f^{-1}$, are differentiable. Every diffeomorphism is a homeomorphism, but not every homeomorphism is a diffeomorphism.

  • Poincaré Maps: Poincaré maps, also known as Poincaré sections or return maps, are discrete-time representations of differential equations that are diffeomorphisms. For example, a Poincaré map can describe the evolution of a system at discrete values of time. This technique allows maps (usually simpler to analyze) to be used to infer observations about flows (usually more difficult to analyze). Structures like fixed points and periodicities may be observed, as well as properties like stability and entropy.

III. TIME SERIES & ORBITS

Let’s study the time evolution of a map. The data will be referred to as a time series. An initial condition and the successive points for the state variables of a system as they evolve in time are referred to as an orbit (also referred to as a trajectory).

Consider the map $x_k = 2 x_{k-1}$, i.e. $f(x)=2x$ with $f:M\rightarrow N$ where $M = \mathbb{R}$ and $N = \mathbb{R}$.

For the initial condition $x_0 = 1$, time series data can be tabulated.

| $k$ | $x_k$ |
| --- | --- |
| 0 | 1 |
| 1 | 2 |
| 2 | 4 |
| 3 | 8 |
| $\vdots$ | $\vdots$ |

The orbit that results from this initial condition is the set $\{1, 2, 4, 8, \dots \}$. Note that an orbit of the map is usually defined by forward iterates with its initial condition as its starting point $\{ x_0, x_1, x_2, \dots \}$. Since this map is one-to-one and onto, it is invertible. This means that we can iterate its points backwards in time. In principle, the orbit of an invertible map can be extended to $\{\dots, x_{-2}, x_{-1}, x_0, x_1, x_2, \dots \}$.

In terms of map iterations (or compositions), an orbit can be written as $\{ x_0, f(x_0), f^2(x_0), f^3(x_0),\dots \}$ and the orbit of an invertible map can be extended to $\{\dots,f^{-3}(x_0),f^{-2}(x_0),f^{-1}(x_0), x_0, f(x_0), f^2(x_0), f^3(x_0),\dots \}$.
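A sketch of generating forward and backward orbit segments for the invertible doubling map (the `orbit` helper is our own construction):

```python
def f(x):
    """The doubling map f(x) = 2x."""
    return 2 * x

def f_inv(x):
    """Inverse of the doubling map, used for backward iterates."""
    return x / 2

def orbit(g, x0, n):
    """Return the points {x0, g(x0), g^2(x0), ..., g^n(x0)}."""
    pts = [x0]
    for _ in range(n):
        pts.append(g(pts[-1]))
    return pts

print(orbit(f, 1, 3))      # forward orbit [1, 2, 4, 8]
print(orbit(f_inv, 1, 3))  # backward iterates [1, 0.5, 0.25, 0.125]
```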

Common plots of these orbits include time series plots (state variables vs. time), phasors (real portion vs. imaginary portion), phase space (state variable vs. one of its delays), and the phase space with a cobweb plot overlain.

Consider this simple, linear difference equation, commonly taught in an undergraduate curriculum, to apply our newly defined nonlinear dynamics perspective on maps.

[Figure EE518_Lec03_firstOrderDelay: block diagram of a first-order system with input $x[k]$, output $y[k]$, and feedback gain $A$ applied after a unit delay]

This difference equation specifies $x[k]$ as an input and $y[k]$ as an output. There is a feedback loop with parameter $A$ as a gain term that is applied after a unit delay of $\Delta T$. We take the discrete time vector to be $k \in \mathbb{Z}$. By inspecting the diagram, a recurrence relation can be written as $y[k] = -A y[k-1] + x[k]$. We arrange all of the output terms of $y$ and delayed versions of the state variable $y$ together and consider the input $x$ as a forcing function.

  • Delay operator form: $y[k]+Ay[k-1] = x[k]$ with initial condition $y[-1]$
  • Advance operator form: $y[k+1]+Ay[k] = x[k+1]$ with initial condition $y[0]$
  • Eigenfunctions of the delay operator $D$: $\gamma^k$, $k\gamma^k$, etc. can be used for an ansatz approach
  • Solutions of the form: $y[k]=$ Homogeneous Solution + Particular Solution
  • Solutions of the form: $y[k]= y_h[k] + y_p[k]$
  • Homogeneous Solution: Known as the natural response and represents the characteristic modes of the system
  • Particular Solution: Known as the forced response and represents the system’s behavior due to a forcing function
  • Approach: First find the characteristic modes that yield $y_h[k]$, then find the forced response $y_p[k]$.

Finding characteristic modes using the homogeneous (zero-input) equation involves turning the forcing function $x[k]$ ‘off’ and analyzing the system’s unforced or natural behavior. This gives

$y_h[k] + A y_h[k-1] = 0$ with the ansatz that the solution will take the form of the eigenfunction $\gamma^k$ resulting in a homogeneous solution of the form $y_h[k] = C\gamma^ku[k]$ where $u[k]$ is the unit step function used to satisfy causality of the system and $C$ is a constant that adjusts the solution to match an initial condition given for $y[-1]$. Plugging in the eigenfunction $\gamma^k$ as a solution to $y_h[k] + A y_h[k-1] = 0$ gives

$\gamma^k + A\gamma^{k-1} = 0$

$\gamma^k (1+\frac{A}{\gamma}) = 0$

For nonzero $\gamma$, setting the term $(1+\frac{A}{\gamma})$ to zero forms the characteristic equation

$1+\frac{A}{\gamma} = 0$

$\gamma+A = 0$

Thus, $\gamma = -A$.
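As a sanity check, the root $\gamma=-A$ can be substituted back into the homogeneous equation numerically; the residual should vanish for every $k$ (the value $A=0.5$ here is just an illustrative choice):

```python
A = 0.5      # example gain (an assumption for illustration)
gamma = -A   # characteristic root from gamma + A = 0

# gamma^k + A * gamma^(k-1) should be zero for all k
for k in range(1, 6):
    residual = gamma**k + A * gamma**(k - 1)
    print(k, residual)
```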

To find the particular solution, the ansatz is that the solution will take the form of the forcing function $x[k]$.

For a concrete example, take $A=0.5$, $x[k]=0.3u[k]$, and $y[-1]=-0.25$. This gives the difference equation

$y[k]+0.5y[k-1] = 0.3u[k]$ with $y[-1]=-0.25$.

  • Step 1: Find the homogeneous solution $y_h[k]=C\gamma^ku[k]$ when the system is not forced, i.e. $y_h[k] + 0.5 y_h[k-1] = 0$.
    • Using the characteristic equation as shown earlier $\gamma+0.5 = 0$
    • This gives the eigenvalue $\gamma = -0.5$.
    • The resulting homogeneous solution is $y_h[k]=C(-0.5)^ku[k]$.
  • Step 2: Find the particular solution $y_p[k] = Du[k]$, assuming that the form of the particular solution is the same as the forcing function.
    • Plug in the constant $D$ into the problem definition
    • $y_p+0.5y_p = 0.3$ or $D+0.5D = 0.3$
    • $1.5D = 0.3$
    • $D=\frac{0.3}{1.5} = 0.2$
    • This gives the resulting particular solution to be $y_p[k] = 0.2u[k]$.
  • Step 3: Match any remaining constants to the given initial conditions.
    • $y[k] = y_h[k]+y_p[k]$
    • $y[k] = (C\gamma^k+D)u[k]$; note that the function $u[k]$ applies causality to our solution, but we evaluate these values with $k=-1$ to match our initial condition $y[-1]=-0.25$.
    • $y[-1] = -0.25 = C(-0.5)^{-1} + 0.2$
    • $C=(-0.25-0.2)(-0.5)=0.225$
  • Step 4: Write our result as a closed-form analytic expression.
    • $y[k] = y_h[k]+y_p[k]$
    • $y[k] = (C\gamma^k+D)u[k]$
    • $y[k] = (0.225(-0.5)^k+0.2)u[k]$

Let’s check to see if this analysis makes sense. $y[-1] = (0.225(-0.5)^{-1}+0.2)=-0.45+0.2=-0.25$. This agrees with the information given.

To check our analysis closer, the original equation can be iterated as a map.

$y[k]+0.5y[k-1] = 0.3u[k]$ with $y[-1]=-0.25$ can be rewritten as

$y[k]=-0.5y[k-1]+0.3u[k]$

This data can be tabulated through iteration to obtain each term and then apply the superposition of the terms. Note that superposition is a powerful idea that is largely restricted to linear systems and is generally not applicable to nonlinear systems.

| $k$ | Term 1: $-0.5y_{k-1}$ | Term 2: $0.3u[k]$ | Solution: $y_k=-0.5y_{k-1}+0.3$ |
| --- | --- | --- | --- |
| -1 | N/A | 0 | -0.25 |
| 0 | -0.5(-0.25)=0.125 | 0.3 | 0.425 |
| 1 | -0.5(0.425)=-0.2125 | 0.3 | 0.0875 |
| 2 | -0.5(0.0875)=-0.04375 | 0.3 | 0.25625 |
| 3 | -0.5(0.25625)=-0.128125 | 0.3 | 0.171875 |
| 4 | -0.5(0.171875)=-0.0859375 | 0.3 | 0.2140625 |
| 5 | -0.5(0.2140625)=-0.10703125 | 0.3 | 0.19296875 |
| 6 | -0.5(0.19296875)=-0.096484375 | 0.3 | 0.203515625 |
| 7 | -0.5(0.203515625)=-0.1017578125 | 0.3 | 0.1982421875 |
| 8 | -0.5(0.1982421875)=-0.09912109375 | 0.3 | 0.20087890625 |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
from matplotlib import pyplot as plt
import numpy as np

plt.rcParams["font.family"] = "serif"

# Define number of time steps, discrete time vector, and solution vectors
steps = 15
k = np.zeros(steps + 1)
y = np.zeros(steps + 1)   # iterated solution
yA = np.zeros(steps + 1)  # analytic solution
k[0], y[0], yA[0] = -1, -0.25, -0.25  # start at k = -1 with y[-1] = -0.25

# Define map and solution parameters
A = -0.5
D = 0.3

# Iterate in a loop to 1) calculate the difference equation 2) populate analytic solution
for i in range(steps):
    # Difference equation iteration: y[k] = -0.5*y[k-1] + 0.3
    y[i + 1] = A * y[i] + D
    # Analytic solution y[k] = (0.225*(-0.5)^k + 0.2)u[k]; here the time index is k = i
    yA[i + 1] = 0.225 * ((-0.5) ** i) + 0.2
    k[i + 1] = k[i] + 1
    print(y[i + 1])

# Plot the figure!
plt.figure(figsize=(7, 4))
plt.xlabel("k")
plt.ylabel("y[k]")
plt.plot([-1, steps - 1], [0.2, 0.2], color='k', linestyle='dotted', linewidth=1, label="Fixed Point y*")
plt.scatter(k, y, alpha=0.85, color='red', label='Iterated Solution for y[k]')
plt.scatter(k, yA, alpha=0.85, facecolors='none', edgecolors='blue', label="Analytic Solution for y[k]")
plt.legend()
plt.show()

Comparison of Iterated Solution and Analytic Solution

The iterated solution and analytic solution are in good agreement and both asymptotically approach the value 0.2.

The map function is $f(y)=-0.5y+0.3$.

Fixed Point Analysis: $f(x)=x$. We have required our systems to be deterministic and unique; therefore, if a point is repeated subsequently in time, then orbits will remain on that repeated value for all future iterations. Interestingly, this same logic applies to repeated sequences of points (periodic orbits). The fixed points of the map are found where $y[k]=y[k-1]$ or, identically, $f(p)=p=-0.5p+0.3$, which gives $p=\frac{0.3}{1.5}=0.2$. For this example, if this fixed point is stable, as both the analytic solution and iterations suggest, then we may take a limit of large $k$ and our solution should approach this fixed point.

$\lim_{k\rightarrow \infty}y[k] = \lim_{k\rightarrow \infty}(0.225(-0.5)^k+0.2)u[k] = 0.2$

This limit shows that our fixed point attracts the orbit: in the limit of large $k$, our solution $y[k]$ approaches the fixed point. This type of dynamical structure is called a sink.

Fixed Point Stability: A more sophisticated method to show the stability of a fixed point uses the derivative of the map. The derivative of the map at a fixed point $p$ will indicate if points nearby $p$ tend to be attracted to or repelled by $p$.

For our example, $f(y)=-0.5y+0.3$ with $f'(p)=-0.5$. The magnitude of this derivative at the fixed point $p=0.2$ is less than one, which indicates that points near this fixed point tend to sink towards it. More formally, $|f'(p=0.2)|< 1$; thus $p$ is a sink which attracts orbits.
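This stability test is easy to verify numerically; a central finite difference approximates $f'(p)$ (the step size $h$ is an arbitrary small choice):

```python
def f(y):
    """The example map f(y) = -0.5y + 0.3."""
    return -0.5 * y + 0.3

p = 0.2   # fixed point, since f(0.2) = 0.2
h = 1e-6  # small step for the derivative estimate

# Central difference approximation of f'(p)
fprime = (f(p + h) - f(p - h)) / (2 * h)
print(fprime)           # approximately -0.5
print(abs(fprime) < 1)  # True -> p is a sink
```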

We can formally treat the neighborhood of points surrounding a fixed point. Points near a fixed point are considered as an epsilon neighborhood $N_\epsilon(p)$ if the points are represented by real numbers within a distance of $\epsilon$ (think some small, positive number) of $p$. Formally, $N_\epsilon(p)$ is the interval (or collection/set) of numbers $\{x \in \mathbb{R}:|x-p|<\epsilon \}$.

As we have established, if an orbit visits a fixed point, $p$, all future iterations will remain at $p$ because $y_{k+1}=f(y_k)=f(p)=p$. Now, what about the neighborhood $N_\epsilon(p)$ that surrounds $p$? The behavior of $N_\epsilon(p)$ indicates the stability of fixed point $p$.

Consider an orbit near the fixed point, $y_k = p+\eta_k$, where $\eta_k$ is a small perturbation within $N_\epsilon(p)$. Is this trajectory attracted or repelled by $p$? For example, does $\eta_k$ grow or decay with forward iterates of $f$? The first value in this nearby orbit can be represented as $y_0=p+\eta_0$, and each subsequent point in the orbit is simply evolved in time by the map to generally give $y_{k+1}= f(y_k) = f(p+\eta_k)$. As long as $f$ is sufficiently smooth (meaning that derivatives of any order needed exist and are continuous), these subsequent points can be expanded in a Taylor series about $p$. This gives $f(y)|_p = \sum_{n=0}^\infty \frac{f^{(n)}(p)}{n!}(y-p)^n$.

We will linearize this result. Note that we aim to linearize about the fixed point $p$. Linearization implies that we ignore all the terms that are higher than order 1. There are times that this is not justified, however, we will generally be able to linearize about a fixed point. The series is

$f(y)|_p = \sum_{n=0}^\infty \frac{f^{(n)}(p)}{n!}(y - p)^n= \frac{f(p)(y-p)^0}{0!} + \frac{f'(p)(y-p)^1}{1!}+\frac{f''(p)(y-p)^2}{2!}+\dots$

Linearization of the map about a fixed point is predicated on the organization of all the higher order terms into a single collection indicated by $\mathcal{O}((y-p)^2)$.

$f(y)|_p = \frac{f(p)(y-p)^0}{1} + \frac{f'(p)(y-p)^1}{1}+\mathcal{O}((y-p)^2) = f(p) + f'(p)(y-p)+\mathcal{O}((y-p)^2)$

If these higher order terms are small, then $f(y)|_p \approx f(p) + f'(p)(y-p)$.

The map transforms our distance $\eta_k$ from our fixed point as $y_{k+1}=p+\eta_{k+1}$ for a single iteration. This means that the map has acted on our fixed point $p$ and the small perturbation $\eta_k$ to give

$y_{k+1}= p+\eta_{k+1} =f(p+\eta_k)$.

Using our expansion gives

$f(p+\eta_k)= f(p) + f'(p)(p+\eta_k-p)+\mathcal{O}(\eta_k^2) =f(p)+f'(p)\eta_k+\mathcal{O}(\eta_k^2)$.

Linearization (throwing away terms of order 2 and higher) gives $p+\eta_{k+1} =f(p+\eta_k)\approx f(p)+f'(p)\eta_k$, and the map acts on the fixed point to produce $f(p)=p$.

This gives

$p+\eta_{k+1} = f(p)+f'(p)\eta_k$

$p+\eta_{k+1} = p +f'(p)\eta_k$

$\eta_{k+1} = f'(p)\eta_k$

Now we have a new difference equation that allows us to study how points in the neighborhood $N_\epsilon$ behave. It takes the form $\eta_{k+1} =\lambda \eta_k$ with an eigenvalue $\lambda=f'(p)$ that multiplies each iterate, giving a clear relation for how our distance from the fixed point grows or shrinks in the neighborhood $N_\epsilon$.

$\eta_1 = \lambda \eta_0 $

$\eta_2 = \lambda \eta_1 = \lambda^2 \eta_0 $

$\eta_3 = \lambda \eta_2 = \lambda^2 \eta_1 = \lambda^3\eta_0 $

etc.

It is clear that if $|\lambda| > 1$, then the distance $\eta_k$ will grow and our orbit will be unstably repelled from $p$ with each map iteration. If $|\lambda| < 1$, then the distance $\eta_k$ will shrink and our orbit will be stably attracted to $p$ with each map iteration. Note that if $|\lambda| = 1$, the fixed point $p$ is marginally stable (this case is rare in practical systems).
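The growth or decay of the perturbation can be sketched directly from $\eta_{k+1}=\lambda\eta_k$; the values of $\lambda$ and $\eta_0$ below are illustrative choices, not from the text:

```python
# Stable case |lambda| < 1 vs. unstable case |lambda| > 1:
# eta_k = lambda^k * eta_0 shrinks or grows geometrically
lam_stable, lam_unstable = -0.5, 1.5
eta_s = eta_u = 0.01  # small initial displacements from the fixed point

for k in range(20):
    eta_s = lam_stable * eta_s
    eta_u = lam_unstable * eta_u

print(abs(eta_s))  # tiny: attracted toward the fixed point (sink)
print(abs(eta_u))  # large: repelled from the fixed point (source)
```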

This analysis implies that the derivative of our map with respect to our state variable (related by $\lambda = f'(p)$) can indicate the stability of our fixed points.

Fixed point $p$ is stable (and is referred to as a sink) if $|f'(p)|<1$.

Fixed point $p$ is unstable (and is referred to as a source) if $|f'(p)|>1$.

Fixed point $p$ is marginally stable (which is rarely observed in a physical system) if $|f'(p)|=1$. For this last case, additional analysis is needed to infer the stability of the fixed point.

References:

Alligood, K. T., Sauer, T. D., Yorke, J. A., & Chillingworth, D. (1998). Chaos: an introduction to dynamical systems. SIAM Review, 40(3), 732-732.

Keller, J. B. (1986). The probability of heads. The American Mathematical Monthly, 93(3), 191-197.

Strogatz, S. H. (2018). Nonlinear dynamics and chaos: With applications to physics, biology, chemistry, and engineering. CRC press.

Nayfeh, A. H., & Balachandran, B. (2008). Applied nonlinear dynamics: analytical, computational, and experimental methods. John Wiley & Sons.

Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2), 130-141.

Lathi, B. P., & Green, R. A. (1998). Signal processing and linear systems (Vol. 2). Oxford: Oxford University Press.

Collet, P., & Eckmann, J. P. (2009). Iterated maps on the interval as dynamical systems. Springer Science & Business Media.