MATH 3795 Lecture 18. Numerical Solution of Ordinary Differential Equations. Goals
MATH 3795 Lecture 18. Numerical Solution of Ordinary Differential Equations.
Dmitriy Leykekhman, Fall 2008

Goals
- Introduce ordinary differential equations (ODEs) and initial value problems (IVPs).
- Examples of IVPs.
- Existence and uniqueness of solutions of IVPs.
- Dependence of the solution of an IVP on parameters.

D. Leykekhman - MATH 3795 Introduction to Computational Mathematics

Systems of Ordinary Differential Equations.

Given functions f_1, ..., f_n : R^{n+1} -> R and scalars y_{0,1}, ..., y_{0,n} in R, we want to find y_1, ..., y_n : R -> R such that

  y_1'(x) = f_1(x, y_1(x), ..., y_n(x)),
  ...
  y_n'(x) = f_n(x, y_1(x), ..., y_n(x))                                  (1)

and

  y_1(a) = y_{0,1}, ..., y_n(a) = y_{0,n}.                               (2)

(1) is called a system of first order ordinary differential equations (ODEs) and (2) are called initial conditions. Together they are called an initial value problem (IVP).

Systems of Ordinary Differential Equations.

If we define

  f : R^{n+1} -> R^n,  f(x, y_1, ..., y_n) = (f_1(x, y_1, ..., y_n), ..., f_n(x, y_1, ..., y_n))^T,
  y : R -> R^n,        y(x) = (y_1(x), ..., y_n(x))^T,

we can rewrite the system as

  y'(x) = f(x, y(x)),   y(a) = y_0,

where y_0 = (y_{0,1}, ..., y_{0,n})^T.

n-th Order Differential Equation.

Often one has to solve an n-th order differential equation of the form

  z^{(n)}(x) = g(x, z(x), z'(x), ..., z^{(n-1)}(x)),   x in [a, b],      (3)

with initial conditions

  z(a) = z_0,  z'(a) = z_1,  ...,  z^{(n-1)}(a) = z_{n-1}.

(3) can be reformulated as a system of first order ODEs.

n-th Order Differential Equation.

If we introduce the functions
  y_1(x) = z(x),
  y_2(x) = z'(x),
  ...
  y_n(x) = z^{(n-1)}(x),

then these functions satisfy the first order differential equations

  y_1'(x)     = y_2(x),
  y_2'(x)     = y_3(x),
  ...
  y_{n-1}'(x) = y_n(x),
  y_n'(x)     = g(x, y_1(x), y_2(x), ..., y_n(x)),

with initial conditions y_1(a) = z_0, y_2(a) = z_1, ..., y_n(a) = z_{n-1}.

n-th Order Differential Equation.

If we introduce

  f(x, y_1, ..., y_n) = (y_2, y_3, ..., y_n, g(x, y_1, ..., y_n))^T   and   y_0 = (z_0, z_1, ..., z_{n-1})^T,

then we arrive at the system of first order ODEs

  y'(x) = f(x, y(x)),   y(a) = y_0.

Thus, it is sufficient to consider systems of first order ODEs.

Example 1 (Predator-Prey Model).

Let's look at a special case of an interaction between two species, one of which, the predator, eats the other, the prey: the Canadian lynx and the snowshoe hare. Data from pelt-trading records. Pictures and data are from
http://www.math.duke.edu/education/ccp/materials/diffeq/predprey/pred1.htm

Example 1 (Predator-Prey Model). Mathematical model.

Assumptions.
- The predator species is totally dependent on the prey species as its only food supply; y(t) is the size of the predator population at time t.
- The prey species has an unlimited food supply; only the predator poses a threat to its growth; x(t) is the size of the prey population at time t.

Example 1 (Predator-Prey Model).

Prey population.
- If there were no predators, the second assumption would imply that the prey species grows exponentially, i.e., x'(t) = a x(t).
- Since there are predators, we must account for a negative component in the prey growth rate.
Assumptions:
- The rate at which predators encounter prey is jointly proportional to the sizes of the two populations.
- A fixed proportion of encounters leads to the death of the prey.

This gives x'(t) = a x(t) - b x(t) y(t).

Predator population. Similarly, y'(t) = -c y(t) + p x(t) y(t).

- Lotka-Volterra Predator-Prey Model (a, b, c, p > 0 are constants):

  x'(t) = a x(t) - b x(t) y(t),
  y'(t) = -c y(t) + p x(t) y(t).

Example 1 (Predator-Prey Model).

Solution of the Lotka-Volterra Predator-Prey Model

  x'(t) = a x(t) - b x(t) y(t),
  y'(t) = -c y(t) + p x(t) y(t)

with initial condition x(0) = 20, y(0) = 40 and parameters a = 1, b = 0.02, c = 0.3, p = 0.005.

Example 2 (Chemical Reactions).

- A reaction involving the compounds A, B, C, D is written as

  σ_A A + σ_B B -> σ_C C + σ_D D.

  Here σ_A, σ_B, σ_C, σ_D are the stoichiometric coefficients. The compounds A, B are the reactants; C, D are the products.
- The -> indicates that the reaction is irreversible. For a reversible reaction we use <=>. For example, the reversible reaction of carbon dioxide and hydrogen to form methane plus water is

  CO2 + 4 H2 <=> CH4 + 2 H2O.

  The stoichiometric coefficients are σ_CO2 = 1, σ_H2 = 4, σ_CH4 = 1, σ_H2O = 2.
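The Lotka-Volterra IVP from Example 1 above (x(0) = 20, y(0) = 40, a = 1, b = 0.02, c = 0.3, p = 0.005) can be integrated numerically. The sketch below is not part of the slides: it uses a classical fourth-order Runge-Kutta step (any reasonable time stepper would do), and the function H is the standard first integral of the Lotka-Volterra system, which is constant along exact trajectories and serves here as an accuracy check.

```python
import math

# Lotka-Volterra parameters and initial condition from Example 1.
a, b, c, p = 1.0, 0.02, 0.3, 0.005

def f(u):
    x, y = u
    return (a * x - b * x * y, -c * y + p * x * y)

def rk4_step(u, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(u)
    k2 = f((u[0] + 0.5 * h * k1[0], u[1] + 0.5 * h * k1[1]))
    k3 = f((u[0] + 0.5 * h * k2[0], u[1] + 0.5 * h * k2[1]))
    k4 = f((u[0] + h * k3[0], u[1] + h * k3[1]))
    return (u[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            u[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def H(u):
    """First integral of the Lotka-Volterra system (constant on exact solutions)."""
    x, y = u
    return p * x - c * math.log(x) + b * y - a * math.log(y)

u = (20.0, 40.0)           # x(0) = 20, y(0) = 40
h, steps = 0.001, 20000    # integrate up to t = 20
H0 = H(u)
for _ in range(steps):
    u = rk4_step(u, h)

print(u)                   # (x(20), y(20)); both populations stay positive
print(abs(H(u) - H0))      # drift in the invariant; small for this step size
```

A plot of x(t) and y(t) produced this way shows the familiar out-of-phase predator-prey oscillations.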
- For each reaction we have a rate r (the number of reactive events per second per unit volume, measured in [mol/(sec L)]) that, together with the stoichiometric coefficients, determines the change in concentrations (measured in [mol/L]) resulting from the reaction.

Example 2 (Chemical Reactions).

- For example, if the rate of the reaction is r and if we denote the concentration of compound A by C_A, and so on, we have the following changes in concentrations:

  d/dt C_A(t) = ... - σ_A r ...,
  d/dt C_B(t) = ... - σ_B r ...,
  d/dt C_C(t) = ... + σ_C r ...,
  d/dt C_D(t) = ... + σ_D r ...

- Usually the reaction rate r is of the form

  r = k C_A^α C_B^β,

  where k is the reaction rate constant and α, β are nonnegative parameters. The sum α + β is called the order of the reaction.

Example 2 (Chemical Reactions. Autocatalytic reaction.)

- Autocatalysis is a term commonly used to describe the experimentally observable phenomenon of a homogeneous chemical reaction which shows a marked increase in rate over time, reaches its peak at about 50 percent conversion, and then drops off. The temperature has to remain constant and all ingredients must be mixed at the start for proper observation.
- We consider the catalytic thermal decomposition of a single compound A into two products B and C, of which B is the autocatalytic agent. A can decompose via two routes, a slow uncatalyzed one (r_1) and another catalyzed by B (r_3). The three essential kinetic steps are

  A -> B + C        Start or background reaction,
  A + B -> AB       Complex formation,
  AB -> 2B + C      Autocatalytic step.

- Again, we denote the concentration of compound A by C_A, and so on. The reaction rates for the three reactions are

  r_1 = k_1 C_A,   r_2 = k_2 C_A C_B,   r_3 = k_3 C_AB.
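The stoichiometry of the three kinetic steps above determines balance equations for the four concentrations: each step consumes its reactants and produces its products at its rate. A sketch, not from the slides, that assembles the right-hand side from r_1, r_2, r_3 and integrates it with the forward Euler method (chosen only for brevity), using the rate constants and initial values given for this example. Since every A atom ends up in A, AB, or C, the sum C_A + C_AB + C_C is conserved, which the code checks.

```python
# Forward Euler sketch for the autocatalytic mechanism (illustrative only).
k1, k2, k3 = 0.0001, 1.0, 0.0008

def rhs(C):
    CA, CB, CAB, CC = C
    r1 = k1 * CA        # A -> B + C    (background reaction)
    r2 = k2 * CA * CB   # A + B -> AB   (complex formation)
    r3 = k3 * CAB       # AB -> 2B + C  (autocatalytic step)
    return (-r1 - r2,             # dCA/dt
            r1 - r2 + 2.0 * r3,   # dCB/dt
            r2 - r3,              # dCAB/dt
            r1 + r3)              # dCC/dt

C = (1.0, 0.0, 0.0, 0.0)    # CA(0), CB(0), CAB(0), CC(0)
h, steps = 0.1, 100000      # integrate to t = 10000 sec
for _ in range(steps):
    d = rhs(C)
    C = tuple(c + h * dc for c, dc in zip(C, d))

print(C)
# A-atom balance: CA + CAB + CC should stay equal to CA(0) = 1.
print(C[0] + C[2] + C[3])
```

Forward Euler preserves this linear invariant exactly (up to roundoff), because the corresponding components of the right-hand side sum to zero.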
Example 2 (Chemical Reactions.)

The autocatalytic reaction leads to the system of ODEs

  dC_A/dt  = -k_1 C_A - k_2 C_A C_B,
  dC_B/dt  =  k_1 C_A - k_2 C_A C_B + 2 k_3 C_AB,
  dC_AB/dt =  k_2 C_A C_B - k_3 C_AB,
  dC_C/dt  =  k_1 C_A + k_3 C_AB

with k_1 = 0.0001, k_2 = 1, k_3 = 0.0008 and initial values C_A(0) = 1, C_B(0) = 0, C_AB(0) = 0, C_C(0) = 0. Here time is measured in [sec] and concentrations are measured in [kmol/L].

Example 3.

- For given A in R^{n x n} consider the linear IVP

  y'(x) = A y(x),  x in [0, ∞),   y(0) = y_0.                            (4)

- Suppose there exist V = (v_1 | ... | v_n) in C^{n x n} and Λ = diag(λ_1, ..., λ_n), λ_j in C, such that (eigendecomposition)

  A = V Λ V^{-1}   or, equivalently,   A v_j = λ_j v_j,  j = 1, ..., n.

- Insert into (4):

  (V^{-1} y)'(x) = V^{-1} y'(x) = V^{-1} A V V^{-1} y(x) = Λ V^{-1} y(x),  x in [0, ∞),
  V^{-1} y(0) = V^{-1} y_0.

Example 3.

- Putting z := V^{-1} y we can see that z satisfies

  z'(x) = Λ z(x),  x in [0, ∞),   z(0) = V^{-1} y_0 =: z_0.

- Since Λ = diag(λ_1, ..., λ_n), this is equivalent to

  z_j'(x) = λ_j z_j(x),   z_j(0) = z_{0,j},   j = 1, ..., n.

- Unique solution:

  z_j(x) = e^{λ_j x} z_{0,j},   j = 1, ..., n.

- Hence, the unique solution of (4) is given by

  y(x) = V z(x) = sum_{j=1}^n v_j e^{λ_j x} z_{0,j}.

Example 3.

Consider

  A = [ -20.08  -39.96 ]
      [ -39.96  -80.02 ]

and y_0 = (1, 1)^T. We can show

  A = 1/√5 [ 1  -2 ]  [ -100    0  ]  1/√5 [  1  2 ]  = V Λ V^{-1}.
           [ 2   1 ]  [    0  -0.1 ]       [ -2  1 ]

Since

  V^{-1} y_0 = 1/√5 [  1  2 ] [ 1 ]  = 1/√5 [  3 ]  =: z_0,
                    [ -2  1 ] [ 1 ]         [ -1 ]

this leads to

  z_1'(x) = -100 z_1(x),   z_1(0) =  3/√5,
  z_2'(x) = -0.1 z_2(x),   z_2(0) = -1/√5.
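The eigendecomposition computation in Example 3 can be checked numerically. The sketch below is not from the slides: it rebuilds A from the V, Λ, V^{-1} given above (V is orthogonal here, so V^{-1} = V^T), computes z_0 = V^{-1} y_0, and evaluates the closed-form solution y(x) = V z(x).

```python
import math

s = 1.0 / math.sqrt(5.0)
V    = [[s, -2.0 * s], [2.0 * s, s]]    # columns are the eigenvectors v1, v2
Vinv = [[s, 2.0 * s], [-2.0 * s, s]]    # V is orthogonal, so V^{-1} = V^T
lam  = [-100.0, -0.1]                   # Lambda = diag(-100, -0.1)

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Reassemble A = V Lambda V^{-1}; it should match the entries on the slide.
L = [[lam[0], 0.0], [0.0, lam[1]]]
A = matmul(matmul(V, L), Vinv)

# z0 = V^{-1} y0 with y0 = (1, 1)^T; should equal (3/sqrt(5), -1/sqrt(5)).
y0 = (1.0, 1.0)
z0 = (Vinv[0][0] * y0[0] + Vinv[0][1] * y0[1],
      Vinv[1][0] * y0[0] + Vinv[1][1] * y0[1])

def y(x):
    """Closed-form solution y(x) = V z(x) with z_j(x) = exp(lambda_j x) z0_j."""
    z = (math.exp(lam[0] * x) * z0[0], math.exp(lam[1] * x) * z0[1])
    return (V[0][0] * z[0] + V[0][1] * z[1],
            V[1][0] * z[0] + V[1][1] * z[1])

print(A)       # matches [[-20.08, -39.96], [-39.96, -80.02]] up to rounding
print(y(0.0))  # recovers y0 = (1, 1) up to rounding
```

Note that y(x) contains a fast mode e^{-100x} and a slow mode e^{-0.1x}; this disparity of scales is what makes such problems stiff.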
Existence and Uniqueness of Solutions.

Theorem (Existence Theorem of Peano)
Let D ⊂ R^{n+1} be a domain, i.e. an open connected subset of R^{n+1}, with (a, y_0) ∈ D. If f is continuous on D, then there exists δ > 0 such that the IVP

  y'(x) = f(x, y(x)),   y(a) = y_0

has a solution on the interval [a - δ, a + δ].

Existence and Uniqueness of Solutions.

Example
Consider the IVP

  y'(x) = y(x)^{1/3},   y(0) = 0,   x ≥ 0.

For arbitrary x̄ > 0 the functions

  y(x) = 0                        for 0 ≤ x ≤ x̄,
  y(x) = ±[(2/3)(x - x̄)]^{3/2}   for x ≥ x̄

solve the IVP. Hence, the IVP has infinitely many solutions. Note that the partial derivative of f(x, y) = y^{1/3} with respect to y is singular at y = 0.

Existence and Uniqueness of Solutions.

Theorem (Existence and Uniqueness Theorem of Picard-Lindelöf)
If f is continuous on D, if there exists M > 0 such that ||f(x, y)|| ≤ M for all (x, y) ∈ D, and if f is Lipschitz continuous with respect to y on

  D̃ = {(x, y) ∈ D : |x - a| ≤ δ, ||y - y_0|| ≤ δM},

i.e., if there exists L > 0 such that

  ||f(x, y_1) - f(x, y_2)|| ≤ L ||y_1 - y_2||   for all (x, y_1), (x, y_2) ∈ D̃,

then

  y'(x) = f(x, y(x)),   y(a) = y_0

has a unique solution on the interval [a - δ, a + δ].

Existence and Uniqueness of Solutions.

The Picard-Lindelöf Theorem is based on the equivalence of the IVP

  y'(x) = f(x, y(x)),  x ∈ [a, b],   y(a) = y_0

and the integral equation

  y(x) = y_0 + ∫_a^x f(s, y(s)) ds.

The integral of a vector valued function is defined componentwise, i.e.,

  ∫_a^x f(s, y(s)) ds = ( ∫_a^x f_1(s, y(s)) ds, ..., ∫_a^x f_n(s, y(s)) ds )^T.
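The integral-equation form above suggests the Picard iteration y_{k+1}(x) = y_0 + ∫_a^x f(s, y_k(s)) ds, which underlies the proof of the Picard-Lindelöf Theorem. A sketch, not from the slides, applying it to the illustrative scalar IVP y' = y, y(0) = 1 (an example chosen here, not one from the deck): the iterates are polynomials, and each iteration reproduces one more Taylor term of the exact solution e^x.

```python
import math

def picard_step(coeffs):
    """One Picard iterate for y' = y, y(0) = 1, acting on polynomial
    coefficient lists: y_{k+1}(x) = 1 + integral_0^x y_k(s) ds."""
    integ = [0.0] * (len(coeffs) + 1)
    for n, c in enumerate(coeffs):
        integ[n + 1] = c / (n + 1)   # integrate the term c * s^n exactly
    integ[0] += 1.0                  # add the initial value y_0 = 1
    return integ

def polyval(coeffs, x):
    return sum(c * x ** n for n, c in enumerate(coeffs))

y = [1.0]                  # starting guess: the constant function y_0(x) = 1
for _ in range(10):
    y = picard_step(y)

# After k iterations the iterate is the degree-k Taylor polynomial of e^x,
# so at x = 1 it approximates e with error about 1/11!.
print(polyval(y, 1.0), math.e)
```

The geometric shrinking of the remainder from one iterate to the next is exactly the contraction argument in the theorem.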
Dependence of the Solution of the ODE on Perturbations of the Problem.

Consider the two ODEs

  y'(x) = f(x, y(x)),   y(a) = y_0                                       (5)

and

  z'(x) = g(x, z(x)),   z(a) = z_0,                                      (6)

where f, g : R^{n+1} -> R^n are given functions and y_0, z_0 ∈ R^n are given vectors. We view (6) as a perturbation of (5) and we want to know how large the error between the exact solution y and the solution z of the perturbed problem is. This question is also interesting in the context of numerical solutions of IVPs.

Dependence of the Solution of the ODE on Perturbations of the Problem.

- Suppose there exist constants ε_1, ε_2 > 0 such that

  ||y_0 - z_0|| ≤ ε_1   and   ||f(x, y) - g(x, y)|| ≤ ε_2   for all x ∈ R, y ∈ R^n.

  Furthermore, suppose there exists L > 0 such that

  ||f(x, y) - f(x, z)|| ≤ L ||y - z||   for all x ∈ R, y, z ∈ R^n.

- Suppose that the IVPs (5) and (6) have unique solutions y and z. These solutions satisfy the integral equations

  y(x) = y_0 + ∫_a^x f(s, y(s)) ds                                       (7)

  and

  z(x) = z_0 + ∫_a^x g(s, z(s)) ds.                                      (8)

Dependence of the Solution of the ODE on Perturbations of the Problem.

Subtract (8) from (7) to get

  y(x) - z(x) = y_0 - z_0 + ∫_a^x [f(s, y(s)) - g(s, z(s))] ds
              = y_0 - z_0 + ∫_a^x [f(s, y(s)) - f(s, z(s))] ds + ∫_a^x [f(s, z(s)) - g(s, z(s))] ds.

Dependence of the Solution of the ODE on Perturbations of the Problem.

Taking norms we get

  ||y(x) - z(x)|| ≤ ||y_0 - z_0|| + ∫_a^x ||f(s, y(s)) - f(s, z(s))|| ds + ∫_a^x ||f(s, z(s)) - g(s, z(s))|| ds
                 ≤ ε_1 + ∫_a^x L ||y(s) - z(s)|| ds + ε_2 (x - a).
Dependence of the Solution of the ODE on Perturbations of the Problem.

Hence, if we define the error e(x) = ||y(x) - z(x)||, then

  e(x) ≤ ε_1 + L ∫_a^x e(s) ds + ε_2 (x - a).

Dependence of the Solution of the ODE on Perturbations of the Problem.

Lemma (Gronwall Lemma)
If h, w and k are nonnegative and continuous on the interval [a, b] and satisfy the inequality

  h(x) ≤ w(x) + ∫_a^x k(s) h(s) ds   for all x ∈ [a, b],

then h obeys the estimate

  h(x) ≤ w(x) + ∫_a^x e^{∫_t^x k(s) ds} k(t) w(t) dt.

Dependence of the Solution of the ODE on Perturbations of the Problem.

Apply the Gronwall Lemma to the inequality

  e(x) ≤ ε_1 + ε_2 (x - a) + L ∫_a^x e(s) ds.

Here h(x) = e(x) = ||y(x) - z(x)||, w(x) = ε_1 + ε_2 (x - a), and k(x) = L. Thus, the error between the solution y of the original IVP (5) and the solution z of the perturbed IVP (6) obeys

  ||y(x) - z(x)|| ≤ ε_1 + ε_2 (x - a) + ∫_a^x e^{L(x-t)} L [ε_1 + ε_2 (t - a)] dt
                 = ε_1 e^{L(x-a)} + (ε_2 / L)(e^{L(x-a)} - 1),   x ≥ a.   (9)

Dependence of the Solution of the ODE on Perturbations of the Problem.

Example
- The solution of the differential equation y'(x) = 3 y(x), y(0) = 1, is given by y(x) = e^{3x}. The function f(x, y) = 3y is Lipschitz continuous with respect to y with Lipschitz constant L = 3.
- The perturbed differential equation z'(x) = 3 z(x) + ε_2, z(0) = 1 + ε_1, has the solution

  z(x) = (1 + ε_1 + ε_2/3) e^{3x} - ε_2/3.

- Thus,

  |y(x) - z(x)| = ε_1 e^{3x} + (ε_2/3)(e^{3x} - 1),   x ∈ [0, ∞).

  This shows that the error estimate (9) is sharp.
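The sharpness claim can also be checked numerically. A sketch, not from the slides, that evaluates the two closed-form solutions above for some assumed perturbation sizes ε_1, ε_2 (chosen here for illustration) and compares |y(x) - z(x)| with the right-hand side of (9) with a = 0 and L = 3: the two agree identically.

```python
import math

L = 3.0                     # Lipschitz constant of f(x, y) = 3y
eps1, eps2 = 1e-3, 1e-2     # assumed perturbation sizes for this experiment

def y(x):
    """Solution of y' = 3y, y(0) = 1."""
    return math.exp(3.0 * x)

def z(x):
    """Solution of the perturbed problem z' = 3z + eps2, z(0) = 1 + eps1."""
    return (1.0 + eps1 + eps2 / 3.0) * math.exp(3.0 * x) - eps2 / 3.0

def bound(x):
    """Right-hand side of the error estimate (9) with a = 0."""
    return eps1 * math.exp(L * x) + (eps2 / L) * (math.exp(L * x) - 1.0)

for x in (0.0, 0.5, 1.0, 2.0):
    print(x, abs(y(x) - z(x)), bound(x))   # the two error columns coincide
```

For general f and g the estimate (9) is only an upper bound, and it can be very pessimistic: the actual error need not grow like e^{L(x-a)} at all.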