EE 418/518 Nonlinear Dynamics & Chaos
Lecture 01: Introduction, methods, motivation, & historical perspective
I. INTRODUCTION
“… using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.” -Stanislaw Ulam
Dynamics: The study of how systems and their states change (evolve) over time.
Examples:
- A ball falls off a table and bounces on the floor
- Electric charge fills a capacitor
- The weather
- A lunar lander uses a control system for a soft landing on the moon’s surface
- The stock market
- The Moon orbiting Earth
- The Moon orbiting Earth orbiting the Sun
- Fireflies flashing in the backyard
- The internal circuits of a computer when the power is turned ‘on’
- Pressing a key on a piano causes a hammer to strike a string that then vibrates
- Traffic
- An ant finds food, signals other ants via pheromones, more ants gather to form a line (and sometimes other baffling structures: spheres, bridges, patterns, etc.)
From Merriam-Webster
dynamics (noun) dy·nam·ics /dī-ˈna-miks/
plural in form but singular or plural in construction
1 physics : a branch of mechanics that deals with forces and their relation primarily to the motion but sometimes also to the equilibrium of bodies
2 : a pattern or process of change, growth, or activity population dynamics
3 : variation and contrast in force or intensity (as in music)
From Oxford dynamics (noun) dy·nam·ics /dīˈnamiks/
1 : the branch of mechanics concerned with the motion of bodies under the action of forces
2 : the forces or properties which stimulate growth, development, or change within a system or process
From Cambridge dynamics (noun) /daɪˈnæm.ɪks/
1 : forces that produce movement
2 : forces or processes that produce change inside a group or system
3 : changes in loudness in a piece of music
4 physics: the scientific study of the forces that produce movement
Tools for this study are based on differential/difference equations and can be:
- Quantitative: Location of a fixed point, frequency of an oscillation, amount of entropy
- Qualitative: Phase-portrait trends, dense trajectories, spatial projections, dimensional reductions
- Continuous-time: Ordinary differential equations (ODEs), an RLC circuit response, a pendulum
- Discrete-time: The stock market, time-sampled systems, video (FPS)
- Linear: Small angle pendulum (approx.), RLC circuits (approx.), mass-damper-spring system (approx), beam with small deflection (approx.)
- Nonlinear: Large angle pendulum, double pendulum, RLC circuits, diode/transistor circuits, beam with large deflection
“Wait…I thought that RLC circuits were linear…”
Unfortunately, RLC circuits are not perfectly linear. The models for most energy storage elements (capacitors and inductors) often exclude amplitude- and stability-dependent terms. For a parallel plate capacitor, these terms describe how the capacitance between two electrodes changes as a function of voltage or dielectric stability. To model a simple parallel plate capacitor, two common contributions are considered: 1) the capacitance due to the electric field over the shared surface area between the two electrodes and 2) the capacitance due to the non-uniform, electric ‘fringing’ fields near the edges of plate-like electrodes. When a parallel plate capacitor is modeled as sufficiently ’thin,’ the fringing fields are often ignored (note that in some designs fringing fields may be significant). Even under this idealized condition, a nonlinearity is present and is often discarded for ease of analysis.
Consider a voltage $V$ applied between two parallel plates with charges $+q$ and $-q$ on each respective electrode. The result is the capacitance $C = \frac{q}{V}$. As a circuit element, the current-voltage relationship of a capacitor is
$$i_C(t) = C\frac{dV}{dt} + V\frac{dC}{dt}$$.
This model indicates that if the capacitive element changes over time, those changes will manifest as a change in current $i(t)$ proportional to the voltage across the plates. Simplification is often justified by assuming that the capacitance of the element is not voltage-dependent and is, in fact, stable over time. This handwaving can be justified for many analyses and leads to the familiar treatment of
$$i_C(t) = C\frac{dV}{dt}$$.
However, on a physical level, the dielectric material that serves as an insulator between the parallel plates is not perfectly stable. The material properties of the dielectric change with many parameters: voltage, temperature, stress, previous charge, etc. If we consider signals of interest in the form of voltage, nonlinear effects may be minimized by using materials with nearly voltage-independent dielectric constants, like silicon dioxide (pure glass), various metal oxides, magnesium aluminum titanate, etc. Interestingly, materials (various ceramics) that exhibit the ferroelectric effect show notable nonlinearity and are used to create varactors, oscillators, parametric amplifiers, circulators, and exotic memory elements.
Class I vs. Class II Capacitors - TBD
Large amplitudes often lead to the observation of nonlinear effects. For a capacitor model in which the capacitance changes due to the voltage applied at the component’s terminals, the current becomes
$$i_C(t) = C(V)\frac{dV}{dt} + V\frac{dC(V)}{dt}$$.
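As an illustration, here is a minimal numerical sketch comparing the linear and voltage-dependent capacitor models. The softening law $C(V) = C_0/(1 + V/V_0)$ and the component values are hypothetical choices for demonstration, not a specific device:

```python
import numpy as np

# Hypothetical softening capacitance model (illustrative, not a real device):
#   C(V) = C0 / (1 + V/V0)
C0 = 1e-9   # nominal capacitance [F] (assumed)
V0 = 10.0   # voltage scale of the nonlinearity [V] (assumed)

def C_of_V(V):
    return C0 / (1.0 + V / V0)

def dC_dV(V):
    # Analytic derivative of C(V); dC/dt = (dC/dV)(dV/dt) by the chain rule
    return -C0 / (V0 * (1.0 + V / V0) ** 2)

# Sinusoidal drive: V(t) = 5 sin(2*pi*1kHz*t)
t = np.linspace(0.0, 1e-3, 10001)
V = 5.0 * np.sin(2.0 * np.pi * 1e3 * t)
dVdt = np.gradient(V, t)

i_linear = C0 * dVdt                                  # i = C dV/dt
i_nonlinear = C_of_V(V) * dVdt + V * dC_dV(V) * dVdt  # i = C(V) dV/dt + V dC/dt

# The voltage-dependent model distorts the current waveform, injecting
# harmonic content that the fixed-C linear model cannot produce.
```

In practice, one would compare the spectra of the two currents (e.g., via an FFT) to see the harmonics generated by the nonlinearity.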
Methods for Flows
Prediction of time-series behaviors and parameter dependence motivates our analysis. This is achieved by identifying dynamical structures (fixed points, limit cycles) and analyzing their stability. There is a long history of developing these methods. Our approach takes a progression that blends exact, approximate, and computational methods, with overlap as each tool begins to break down or become intractable.
Quantitative Analytical Methods → Qualitative Analytical Methods → Computational Methods → Data-driven & Machine Learning Methods
Consider the second-order harmonic oscillator as a toy problem for analyzing a continuous-time system known as a flow.
- Method 1: Ansatz (Guess)
- Find/guess an f(t) such that taking its derivative twice gives the same f(t)
- This function can be the eigenfunction of the derivative operator (there is a little bit to unpack about that)
- Builds a characteristic equation
- Works well for linear equations
- Method 2: Energy Conservation
- Find expressions for kinetic energy (KE) and potential energy (PE)
- For conservative systems (energy is not ’lost’): KE + PE = Constant
- No friction (i.e. no dissipation)
- Works well as equations become more difficult to analyze
- Method 3: Series Expansion (most versatile)
- Often begin with Taylor’s Series (polynomial and derivatives) (implies smooth, differentiable functions)
- Nonlinear extensions exist (i.e. Volterra Series)
- Method 4: Integral Transform (Fourier transform, Laplace transform, etc)
- Comprises most of ECE undergraduate curricula
- Extensions: Wavelets
- Method 5: Hamilton’s Equations & Flows on Phase Space (Simplify and Abstract)
- Change second-order D.E. into a pair of first-order D.E.s
- Now plot/sketch/analyze variables vs. their derivatives
- No pressing need to solve the D.E., instead it is visualized
- Method 5.1: Use Matrix Exponential for the Hamiltonian
- Exponentiating a matrix is defined by Taylor’s series
- There are linear algebra approaches to solve this
- See Moler & Van Loan, “Nineteen Dubious Ways to Compute the Exponential of a Matrix” (SIAM Review)
- Method 6: Numerical Methods
- Turn the ‘flow’ into a ‘map’ by discrete steps
- Small steps will approximate the continuous time flow (slow/expensive)
- Large steps will introduce error (faster/more incorrect)
- Sometimes numerical solvers can’t be trusted :)
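Methods 5 and 6 can be sketched together: a minimal forward-Euler discretization of the harmonic oscillator (parameters chosen here for illustration) shows both the flow-to-map construction and why coarse numerical solvers can't always be trusted:

```python
import numpy as np

# Method 5: rewrite x'' = -w^2 x as a pair of first-order ODEs,
#   dx/dt = v,   dv/dt = -w^2 x
# Method 6: discretize the flow into a map with forward-Euler steps.
w = 1.0

def euler_orbit(x0, v0, dt, n_steps):
    """Iterate the forward-Euler map; return the x history and final v."""
    x, v = x0, v0
    xs = [x]
    for _ in range(n_steps):
        x, v = x + dt * v, v - dt * w**2 * x  # simultaneous update (old x, v)
        xs.append(x)
    return np.array(xs), v

# Integrate over one period T = 2*pi from x = 1, v = 0 (true solution:
# x(t) = cos t). Small steps approximate the flow; large steps inject error.
# Forward Euler actually pumps energy into this system every step, so the
# orbit spirals outward instead of closing -- the solver can't be trusted here.
T = 2.0 * np.pi
for dt in (1e-4, 1e-1):
    xs, v = euler_orbit(1.0, 0.0, dt, int(round(T / dt)))
    energy = 0.5 * v**2 + 0.5 * w**2 * xs[-1]**2  # exact flow keeps this at 0.5
    print(f"dt={dt:g}: x(T)={xs[-1]:+.4f}, energy={energy:.4f}")
```

Each Euler step multiplies the oscillator's energy by $(1 + \omega^2\,\Delta t^2)$, so the drift is small for tiny steps but severe for coarse ones.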
Methods for Maps
- Method 1: Iteration
- Method 2: Ansatz (Characteristic Eq.)
- Method 3: Graphically (Similar to Hamiltonian)
- Method 4: Analyze fixed points/higher period orbits/stability
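A minimal sketch of Methods 1 and 4 for maps, using the logistic map $x_{n+1} = r x_n(1-x_n)$ as a standard example (the map and parameter value are assumed for illustration):

```python
# Iterating a one-dimensional map (Method 1) and checking fixed-point
# stability (Method 4), using the logistic map x_{n+1} = r x_n (1 - x_n).
def logistic(x, r):
    return r * x * (1.0 - x)

def iterate(f, x0, n, **kw):
    """Apply the map n times starting from x0."""
    x = x0
    for _ in range(n):
        x = f(x, **kw)
    return x

r = 2.5
# Fixed points solve x* = r x*(1 - x*):  x* = 0 and x* = 1 - 1/r.
x_star = 1.0 - 1.0 / r  # 0.6 for r = 2.5
# Stability: |f'(x*)| = |r(1 - 2 x*)| = |2 - r| < 1, so x* attracts for 1 < r < 3.
x_final = iterate(logistic, 0.1, 200, r=r)
print(x_final)  # settles near x* = 0.6
```

Sweeping $r$ past 3 makes $|f'(x^*)| > 1$: the fixed point loses stability and a period-2 orbit appears, previewing the period-doubling route to chaos.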
Nomenclature (Largely from Nayfeh)
First steps toward quantitative analytical methods: For simple ordinary differential equations (ODEs), our goal is to find a closed-form analytic solution. The broad strokes of this approach include using an eigenfunction (of the differential operator $\mathfrak{D}$) to solve a differential equation such that the solution is in an expected form.
$$\frac{d}{dt}\mathrm{x} = \mathbf{A}\mathrm{x} \implies \mathrm{x}(t_0+t) = e^{\mathbf{A}t}\mathrm{x}(t_0)$$
A method that conveniently scales with increased dimensions is to use the eigenvectors of the system (collected as the columns of a matrix $\mathbf{T}$) as a coordinate transformation that decouples the state variables from one another (practically, this implies a diagonalized matrix). The resulting coordinate transformation allows the state of the system to evolve along the directions of the eigenvectors, giving
$$\mathrm{x}(t) = \mathbf{T}e^{\mathbf{\Lambda}t}\underbrace{\mathbf{T^{-1}}\mathrm{x}(t_0)}_{\mathrm{z}(0)}$$.
Note that $\mathbf{T^{-1}}\mathrm{x}(t_0)$ translates the initial condition $\mathrm{x}(t_0)$ into the coordinate system aligned with the eigenvectors in $\mathbf{T}$. In these coordinates, each component of $\mathrm{z}$ grows or decays independently according to the eigenvalues on the diagonal of $\mathbf{\Lambda}$. Finally, $\mathbf{T}\mathrm{z}(t)$ translates the time evolution of the system back into the original coordinate system of $\mathrm{x}$.
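A minimal numerical sketch of this eigenvector route (the matrix $\mathbf{A}$, initial condition, and evaluation time below are illustrative choices, not from the notes):

```python
import numpy as np

# Solve dx/dt = A x for the harmonic oscillator in first-order form:
#   x1' = x2, x2' = -x1, so A has purely imaginary eigenvalues +/- i.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])
t = 0.5

# Eigendecomposition route: x(t) = T exp(Lambda t) T^{-1} x(0)
lam, T = np.linalg.eig(A)           # complex eigenvalues and eigenvectors
z0 = np.linalg.solve(T, x0)         # z(0) = T^{-1} x(0), initial condition in eigen-coordinates
x_t = (T @ (np.exp(lam * t) * z0)).real  # evolve each mode, map back, discard round-off imaginary part

# Compare with the known closed form x(t) = (cos t, -sin t)
print(x_t, np.array([np.cos(t), -np.sin(t)]))
```

The same three steps (transform in, evolve along the diagonal, transform out) apply in any dimension, provided $\mathbf{A}$ has a full set of eigenvectors.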
II. LINEAR DIFFERENTIAL EQUATIONS
Calculus & Taylor Series Review:
Ordinary Differential Equations (ODEs):
Systems of ODEs:
Eigenvalues and Eigenvectors: