Math 5520 Homework 3 Solutions March, 2016
1. Consider the boundary value problem

\[
-\nabla \cdot (A \nabla u) = f(x) \quad \text{in } \Omega \subset \mathbb{R}^d, \tag{1}
\]
\[
u = g_D \quad \text{on } \Gamma_D, \tag{2}
\]
\[
\gamma u + A \nabla u \cdot \vec{n} = g_R \quad \text{on } \Gamma_R, \tag{3}
\]

where $\partial\Omega = \Gamma_D \cup \Gamma_R$, $|\Gamma_D| > 0$, $0 \le \gamma(x) \le \gamma_1$, and $A(x)$ is a $d \times d$ symmetric matrix satisfying

\[
\alpha_0 \chi^T \chi \le \chi^T A(x) \chi \le \alpha_1 \chi^T \chi, \quad \forall \chi \in \mathbb{R}^d, \ \forall x \in \Omega.
\]

Here, $0 < \alpha_0 \le \alpha_1$. You may assume that the domain boundary and all problem data are smooth and bounded as needed.

a) Derive the weak formulation of (1)-(3).
b) Prove that there exists a unique weak solution.
c) Prove that if $u \in H^2(\Omega)$, then it is a "strong" solution (in the $L^2$-sense) to (1)-(3).
d) Give the energy minimization formulation of the problem and show equivalence with the weak formulation.

Solutions:

(a) Multiply through (1) by $v \in C^\infty(\Omega)$, then integrate over the domain and apply the Divergence Theorem:

\[
\int_\Omega A \nabla u \cdot \nabla v \, dx - \int_{\partial\Omega} (A \nabla u \cdot \hat{n}) \, v \, d\sigma = \int_\Omega f v \, dx.
\]

Due to the Dirichlet boundary condition (2), we now restrict $v = 0$ on $\Gamma_D$. Furthermore, we apply (3) on $\Gamma_R$ and thereby eliminate the normal derivative of the solution on the boundary. The result is:

\[
\int_\Omega A \nabla u \cdot \nabla v \, dx + \int_{\Gamma_R} (\gamma u - g_R) \, v \, d\sigma = \int_\Omega f v \, dx.
\]

Now move the $g_R$-term to the right-hand side. For notational convenience, we now present the weak problem as follows: Find $u \in V_D(\Omega)$ such that

\[
a(u, v) = L(v), \quad \text{for all } v \in V_0, \tag{4}
\]

with

\[
V_D = \{ v \in H^1(\Omega) \mid v|_{\Gamma_D} = g_D \}, \tag{5}
\]
\[
V_0 = \{ v \in H^1(\Omega) \mid v|_{\Gamma_D} = 0 \}, \tag{6}
\]
\[
a(u, v) \equiv \int_\Omega A \nabla u \cdot \nabla v \, dx + \int_{\Gamma_R} \gamma u v \, d\sigma, \tag{7}
\]
\[
L(v) \equiv \int_\Omega f v \, dx + \int_{\Gamma_R} g_R v \, d\sigma. \tag{8}
\]

(b) One may apply the Riesz Representation Theorem, but I will show the result using Lax-Milgram, since it requires less work to show that the assumptions hold. First, note that $V_0$ is a Hilbert space. We reformulate the weak problem by choosing any known $G \in V_D$, which we may assume to exist given a smooth boundary and $|\Gamma_D| > 0$, then decompose $u = w + G$ so that $w \in V_0$ is sought to satisfy

\[
a(w, v) = L(v) - a(G, v) \equiv \tilde{L}(v), \quad \text{for all } v \in V_0. \tag{9}
\]

Claim 1: $a(\cdot,\cdot)$ is coercive. Indeed, since we may use the Poincaré inequality for $|\Gamma_D| > 0$, given any $v \in V_0$ we have

\[
\begin{aligned}
a(v, v) &= \int_\Omega |A^{1/2} \nabla v|^2 \, dx + \int_{\Gamma_R} \gamma v^2 \, d\sigma \ \ge\ \alpha_0 \int_\Omega |\nabla v|^2 \, dx \\
&= \frac{\alpha_0}{2} \int_\Omega |\nabla v|^2 \, dx + \frac{\alpha_0}{2} \int_\Omega |\nabla v|^2 \, dx \\
&\ge \frac{\alpha_0}{2} \int_\Omega |\nabla v|^2 \, dx + \frac{\alpha_0}{2 C_P^2} \int_\Omega v^2 \, dx \ \ge\ C \|v\|_{H^1}^2,
\end{aligned}
\]

where $C_P$ is the appropriate Poincaré constant and

\[
C \equiv \min\left( \frac{\alpha_0}{2}, \frac{\alpha_0}{2 C_P^2} \right) > 0.
\]

Claim 2: $a(\cdot,\cdot)$ is continuous. We apply the standard series of inequalities to get the desired bounds:

\[
\begin{aligned}
a(w, v) &\le \alpha_1 |w|_1 |v|_1 + \gamma_1 \|w\|_{L^2(\Gamma_R)} \|v\|_{L^2(\Gamma_R)} \\
&\le \alpha_1 |w|_1 |v|_1 + \gamma_1 \|w\|_{L^2(\partial\Omega)} \|v\|_{L^2(\partial\Omega)} \\
&\le \alpha_1 |w|_1 |v|_1 + C_1 \|w\|_{H^1(\Omega)} \|v\|_{H^1(\Omega)} \\
&\le C_2 \|w\|_{H^1(\Omega)} \|v\|_{H^1(\Omega)},
\end{aligned}
\]

using the Trace Inequality and bounding the $H^1$-semi-norm above by the full $H^1$-norm. Similarly, we show

Claim 3: $\tilde{L}(\cdot)$ is continuous. Indeed,

\[
\begin{aligned}
|\tilde{L}(v)| &\le \|f\|_{L^2(\Omega)} \|v\|_{L^2(\Omega)} + \alpha_1 |G|_1 |v|_1 + \left( \gamma_1 \|G\|_{L^2(\Gamma_R)} + \|g_R\|_{L^2(\Gamma_R)} \right) \|v\|_{L^2(\Gamma_R)} \\
&\le C_3 \left( \|f\|_{L^2(\Omega)} + \alpha_1 |G|_1 + \gamma_1 \|G\|_{L^2(\Gamma_R)} + \|g_R\|_{L^2(\Gamma_R)} \right) \|v\|_{H^1(\Omega)},
\end{aligned}
\]

so that there is some $C_4 > 0$ such that

\[
|\tilde{L}(v)| \le C_4 \|v\|_{H^1} \ \Rightarrow\ \|\tilde{L}\|_* \equiv \sup_{v \ne 0,\ v \in V_0} \frac{|\tilde{L}(v)|}{\|v\|_{H^1}} \le C_4 < \infty.
\]

The above claims justify the use of Lax-Milgram to conclude that there exists a unique function $w = w(G)$ for our choice of $G$ such that (9) is satisfied. It follows that for $u = w + G$,

\[
a(u, v) = L(v), \quad \text{for all } v \in V_0,
\]

with $u \in V_D$ by construction. This $u$ must be unique, meaning that it is in fact independent of $G$. Indeed, given any two functions $u_1 \in V_D$ and $u_2 \in V_D$ satisfying the weak problem, it follows that for $e \equiv u_1 - u_2 \in V_0$, we have

\[
a(u_1, v) - a(u_2, v) = a(e, v) = L(v) - L(v) = 0, \quad \text{for all } v \in V_0.
\]

We may choose $v = e$, so that by coercivity of the bilinear form we have

\[
0 \le \|e\|_{H^1}^2 \le C a(e, e) = 0 \ \Rightarrow\ \|e\|_{H^1} = 0
\]

(for some fixed $C > 0$), thus $u_1 = u_2$ and the solution is unique. We have thus proved the well-posedness of (4)-(8).
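As a small aside (not part of the required solution), the matrix assumption $\alpha_0 \chi^T \chi \le \chi^T A(x) \chi \le \alpha_1 \chi^T \chi$ used in Claims 1 and 2 simply says that the eigenvalues of the symmetric matrix $A(x)$ lie in $[\alpha_0, \alpha_1]$. The following minimal numerical sketch illustrates this; the matrix `A` below is an arbitrary made-up example, not data from this problem.

```python
import numpy as np

# Arbitrary symmetric example matrix; alpha_0, alpha_1 are its extreme eigenvalues.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
eigs = np.linalg.eigvalsh(A)          # eigenvalues of the symmetric matrix, ascending
alpha0, alpha1 = eigs[0], eigs[-1]

rng = np.random.default_rng(0)
for _ in range(1000):
    chi = rng.standard_normal(2)
    q = chi @ A @ chi                 # quadratic form chi^T A chi
    assert alpha0 * (chi @ chi) - 1e-12 <= q <= alpha1 * (chi @ chi) + 1e-12
print(f"bounds hold with alpha0 = {alpha0:.3f}, alpha1 = {alpha1:.3f}")
```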
(c) Now let $u \in H^2$ solve the weak problem; we show it is then an $L^2$-strong solution. We are allowed to assume enough smoothness of the problem data and the domain boundary, etc., so from the weak problem we apply the Divergence Theorem again to see that if $a(u, v) = L(v)$, then

\[
\int_\Omega (-\nabla \cdot A \nabla u) \, v \, dx + \int_{\Gamma_R} (\gamma u + A \nabla u \cdot \hat{n}) \, v \, d\sigma = \int_\Omega f v \, dx + \int_{\Gamma_R} g_R v \, d\sigma,
\]

for all $v \in V_0$. Now define $\mathcal{F} \equiv -\nabla \cdot A \nabla u - f$ and $\mathcal{G} \equiv \gamma u + A \nabla u \cdot \hat{n} - g_R$, with $\mathcal{G}$ defined on $\Gamma_R$. It follows that

\[
\int_\Omega \mathcal{F} v \, dx + \int_{\Gamma_R} \mathcal{G} v \, d\sigma = 0, \quad \text{for all } v \in V_0.
\]

Note that $\mathcal{F} \in L^2(\Omega)$. Given any $v \in C_0^\infty(\Omega) \subset V_0$, we see that

\[
\int_\Omega \mathcal{F} v \, dx = 0,
\]

implying that $\mathcal{F} = 0$ by density of $C_0^\infty$ in $L^2$. This shows that (1) holds in an $L^2$-sense. We need not prove anything for (2). Now, since $\mathcal{F} = 0$, it follows that

\[
\int_{\Gamma_R} \mathcal{G} v \, d\sigma = 0, \quad \text{for all } v \in V_0. \tag{10}
\]

Again, we are allowed to assume what we want in this problem for the domain smoothness and data smoothness, so since $u \in H^2(\Omega)$ we may assume $\gamma u + A \nabla u \cdot \hat{n} \in H^{1/2}(\partial\Omega)$. Then with appropriate assumptions on $g_R$, (10) implies that $\mathcal{G} = 0$ and thus (3) holds in a weak sense.

(d) Define the functional

\[
J(v) \equiv \frac{1}{2} a(v, v) - \tilde{L}(v), \quad \text{for all } v \in V_0.
\]

The energy minimization (EM) formulation is to find $w \in V_0$ such that $J(w) \le J(v)$ holds for all $v \in V_0$.

(Weak implies EM). Given $w \in V_0$ solving (9), we note that

\[
\begin{aligned}
J(w + s v) &= \frac{1}{2} a(w, w) + s\, a(w, v) + \frac{s^2}{2} a(v, v) - \tilde{L}(w) - s \tilde{L}(v) \\
&= \frac{1}{2} a(w, w) + \frac{s^2}{2} a(v, v) - \tilde{L}(w) \\
&= J(w) + \frac{s^2}{2} a(v, v) \ \ge\ J(w),
\end{aligned}
\]

for any $s$ and any $v \in V_0$. Since every element of $V_0$ can be written as $w + v$ for some $v \in V_0$, this proves that $w$ solves the EM problem.

(EM implies weak). Since $s \mapsto J(w + s v)$ is minimized at $s = 0$, we note that

\[
\frac{d}{ds} J(w + s v)\Big|_{s=0} = 0,
\]

for any $v \in V_0$ with $v \ne 0$. Then from the above expansion, we see that $a(w, v) - \tilde{L}(v) = 0$, proving that the weak problem holds.
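The weak/energy-minimization equivalence of part (d) is easy to see in finite dimensions, where the bilinear form becomes a symmetric positive definite matrix $K$ and the functional a vector $b$: the minimizer of $J(v) = \tfrac{1}{2} v^T K v - b^T v$ is exactly the solution of $Kv = b$. A minimal sketch of this discrete analogue follows; the matrix `K` and vector `b` below are arbitrary examples, not data from this problem.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
K = M @ M.T + 5 * np.eye(5)                    # symmetric positive definite "stiffness" matrix
b = rng.standard_normal(5)

def J(v):
    # discrete energy functional J(v) = 1/2 v^T K v - b^T v
    return 0.5 * (v @ K @ v) - b @ v

u = np.linalg.solve(K, b)                      # "weak" (Galerkin) solution of K u = b

# J(u) is no larger than J at any perturbation u + w, mirroring the EM formulation
for _ in range(1000):
    w = rng.standard_normal(5)
    assert J(u) <= J(u + w) + 1e-12
print("discrete energy minimizer coincides with the linear-system solution")
```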
2. Recall the operator $A : V \to V$ in the proof of the Lax-Milgram theorem (see Lecture 11, slides 3-4): for $\phi \in V$,

\[
\langle A\phi, v \rangle = a(\phi, v), \quad \forall v \in V.
\]

Finish the proof of the Lax-Milgram theorem. Follow the approach outlined in class:

a) Prove that $A$ is linear,
b) prove that $\mathrm{Range}(A)$ is a closed subspace of $V$,
c) prove that $A$ is onto $V$ (Projection Theorem),
d) conclude that $\mathrm{Range}(A) = V$, and then that the final result holds by way of the Riesz Representation Theorem.

Solutions:

(a) Given scalars $c_i$ and functions $\phi_i \in V$, $i = 1, 2$, we see that

\[
\langle A(c_1 \phi_1 + c_2 \phi_2), v \rangle = a(c_1 \phi_1 + c_2 \phi_2, v) = c_1 a(\phi_1, v) + c_2 a(\phi_2, v) = c_1 \langle A\phi_1, v \rangle + c_2 \langle A\phi_2, v \rangle = \langle c_1 A\phi_1 + c_2 A\phi_2, v \rangle, \quad \text{for all } v \in V.
\]

It follows that $A(c_1 \phi_1 + c_2 \phi_2) = c_1 A\phi_1 + c_2 A\phi_2$ and thus $A$ is linear.

(b) Given scalars $c_i$ and functions $\psi_i = A\phi_i \in V$, $i = 1, 2$, we see from (a) that

\[
c_1 \psi_1 + c_2 \psi_2 = c_1 A\phi_1 + c_2 A\phi_2 = A(c_1 \phi_1 + c_2 \phi_2),
\]

so that $c_1 \psi_1 + c_2 \psi_2 \in \mathrm{Range}(A)$ and thus $\mathrm{Range}(A)$ is a linear subspace. Let $\psi_n = A\phi_n$ be a sequence with

\[
\lim_{n \to \infty} \|\psi_n - \psi\|_V = 0,
\]

for some $\psi \in V$. Note then that by coercivity of $a(\cdot,\cdot)$, which is assumed in the statement of Lax-Milgram,

\[
\|\phi_n - \phi_m\|_V^2 \le C a(\phi_n - \phi_m, \phi_n - \phi_m) = C \langle A\phi_n - A\phi_m, \phi_n - \phi_m \rangle = C \langle \psi_n - \psi_m, \phi_n - \phi_m \rangle \le C \|\psi_n - \psi_m\|_V \|\phi_n - \phi_m\|_V
\]
\[
\Rightarrow\ \|\phi_n - \phi_m\|_V \le C \|\psi_n - \psi_m\|_V.
\]

Since $\psi_n$ is strongly convergent in $V$, it is also Cauchy, and thus so is $\phi_n$ by the above inequality. Since $V$ is Hilbert (hence complete), we see that $\phi_n \to \phi$ strongly for some $\phi$ in $V$. Next, note that for all $v \in V$,

\[
\|Av\|_V^2 = \langle Av, Av \rangle = a(v, Av) \le C \|v\|_V \|Av\|_V \ \Rightarrow\ \|Av\|_V \le C \|v\|_V,
\]

by continuity of the bilinear form. It follows that

\[
\|A\phi - \psi\|_V \le \|A\phi - A\phi_n\|_V + \|A\phi_n - \psi\|_V = \|A(\phi - \phi_n)\|_V + \|\psi_n - \psi\|_V \le C \|\phi - \phi_n\|_V + \|\psi_n - \psi\|_V \to 0 \quad \text{as } n \to \infty,
\]

so that $\|A\phi - \psi\|_V = 0$ and thus $A\phi = \psi$. Hence the range of $A$ is closed.

(c) The Projection Theorem says that we may write $V$ as the direct sum $V = \mathrm{Range}(A) \oplus \mathrm{Range}(A)^\perp$. In other words, each $v \in V$ may be written uniquely as $v = Pv + (I - P)v$, with $P$ the orthogonal projection operator onto $\mathrm{Range}(A)$. We apply coercivity of the bilinear form to see that for any $v \in V$,

\[
\|(I - P)v\|_V^2 \le C a((I - P)v, (I - P)v) = C \langle A(I - P)v, (I - P)v \rangle = 0,
\]

since $A(I - P)v \in \mathrm{Range}(A)$ and $(I - P)v \in \mathrm{Range}(A)^\perp$. Then $v = Pv$ for all $v \in V$ and hence $V = \mathrm{Range}(A)$.

(d) Since $L$ is assumed to be a continuous linear functional on $V$, there exists $\psi \in V$ such that $L(v) = \langle \psi, v \rangle$ for all $v \in V$ by the Riesz Representation Theorem. Then since $\psi = Au$ for some $u \in V$, it follows that $a(u, v) = L(v)$ holds for all $v \in V$. The uniqueness of $u$ is shown via coercivity; if $a(u_1, v) = a(u_2, v) = L(v)$ for all $v$, then

\[
a(u_1 - u_2, v) = 0 \ \Rightarrow\ 0 = a(u_1 - u_2, u_1 - u_2) \ge C \|u_1 - u_2\|_V^2,
\]

hence $u_1 = u_2$.

3. a) Derive the weak formulation of the BVP

\[
-(a(x) u'(x))' + b(x) u'(x) + c(x) u(x) = f(x), \quad 0 < x < 1, \tag{11}
\]
\[
u(0) = u(1) = 0, \tag{12}
\]

where $0 < a_0 \le a(x) \le a_1 < \infty$, $0 < a_0 \le c(x) \le c_1 < \infty$, and $|b(x)| \le a_0$ for all $x \in [0, 1]$.

b) Prove that the weak formulation has a unique solution, assuming all problem data to be smooth and bounded as needed.

Solutions:

(a) It should be clear that the weak problem is: Find $u \in H_0^1(0, 1)$ such that

\[
\int_0^1 a u' v' + b u' v + c u v \, dx = \int_0^1 f v \, dx, \quad \forall v \in H_0^1(0, 1).
\]

(b) We associate the bilinear form and linear functional

\[
a(u, v) \equiv \int_0^1 a u' v' + b u' v + c u v \, dx, \qquad L(v) \equiv \int_0^1 f v \, dx.
\]

The continuity of both $a(\cdot,\cdot)$ and $L(\cdot)$ should be clear, so I will just show coercivity. Then the well-posedness follows by Lax-Milgram. Indeed,

\[
a(v, v) = \int_0^1 a |v'|^2 + b v' v + c v^2 \, dx \ \ge\ a_0 \int_0^1 |v'|^2 + v^2 \, dx - a_0 \int_0^1 |v'| |v| \, dx \ \ge\ a_0 \|v\|_{H^1}^2 - a_0 \|v'\|_{L^2} \|v\|_{L^2}.
\]

Now apply Young's inequality in the form

\[
\|v'\|_{L^2} \|v\|_{L^2} \le \frac{1}{2} \|v'\|_{L^2}^2 + \frac{1}{2} \|v\|_{L^2}^2 = \frac{1}{2} \|v\|_{H^1}^2.
\]

Coercivity follows from inserting this in the previous result above.

4. Modify your code to approximate

\[
-(a(x) u'(x))' + u'(x) = 0, \quad 0 < x < 1, \tag{13}
\]
\[
u(0) = 1, \quad u(1) = 2. \tag{14}
\]

Choose $a(x)$ as in Homework 1, problem 2(c). Run the code with $n = 10, 20, 40, 80, 160, 320$ elements and compute the convergence rates for the error in the $L^2$ and $H^1$ norms. Use the Richardson extrapolation method to do this; do not compute errors by deriving and using the true solution (see Lecture 18, slide 10).

In terms of convergence rates, we observe in Table 1 precisely the optimal-order results that one expects for our smooth solution. Results could vary somewhat depending on how you coded your methods and calculated the errors, but the convergence rates should be optimal.

    n     L2 difference   Rate    H1 difference   Rate
    10    --              --      --              --
    20    8.03E-4         --      2.87E-2         --
    40    2.01E-4         1.996   1.44E-2         0.997
    80    5.04E-5         1.999   7.21E-3         0.999
    160   1.26E-5         2.000   3.60E-3         1.000
    320   3.15E-6         2.000   1.80E-3         1.000

Table 1: The convergence rates are optimal in both norms.
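One possible way to produce such a table is sketched below (this is an illustration only, not the course's reference code): piecewise-linear finite elements with midpoint quadrature for $a(x)$, Dirichlet values imposed by elimination, and rates computed from differences between solutions on successive meshes in the spirit of Richardson extrapolation rather than from an exact solution. The function `a_coef` is a made-up placeholder and must be replaced by the $a(x)$ from Homework 1, problem 2(c); the $H^1$ difference is measured here by the $H^1$ seminorm of the piecewise-linear difference.

```python
# Sketch: P1 finite elements for -(a u')' + u' = 0 on (0,1), u(0)=1, u(1)=2,
# with rates estimated from differences of successive solutions (Richardson-style).
import numpy as np

def a_coef(x):
    # Placeholder coefficient; replace with a(x) from Homework 1, problem 2(c).
    return 1.0 + x

def solve_fem(n):
    """Return nodes and nodal values of the piecewise-linear FEM solution on n elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n + 1, n + 1))
    for e in range(n):
        am = a_coef(0.5 * (x[e] + x[e + 1]))                  # midpoint quadrature for a(x)
        K = (am / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # diffusion: int a phi_j' phi_i'
        C = np.array([[-0.5, 0.5], [-0.5, 0.5]])              # convection: int phi_j' phi_i (exact)
        A[e:e + 2, e:e + 2] += K + C
    # Impose u(0)=1, u(1)=2 by elimination; f = 0, so the rhs comes only from the lifting.
    b = -(1.0 * A[:, 0] + 2.0 * A[:, n])
    u = np.empty(n + 1)
    u[0], u[n] = 1.0, 2.0
    u[1:n] = np.linalg.solve(A[1:n, 1:n], b[1:n])
    return x, u

def diff_norms(x, u_fine, u_coarse_on_fine):
    """L2 norm and H1 seminorm of the piecewise-linear difference on the fine mesh."""
    d = u_coarse_on_fine - u_fine
    h = np.diff(x)
    l2 = np.sqrt(np.sum(h / 3.0 * (d[:-1]**2 + d[:-1] * d[1:] + d[1:]**2)))  # exact for P1
    h1 = np.sqrt(np.sum((np.diff(d) / h)**2 * h))                            # exact for P1
    return l2, h1

prev_sol, prev_l2, prev_h1 = None, None, None
for n in [10, 20, 40, 80, 160, 320]:
    x, u = solve_fem(n)
    if prev_sol is not None:
        xp, up = prev_sol
        up_fine = np.interp(x, xp, up)            # coarse P1 solution evaluated on fine nodes
        l2, h1 = diff_norms(x, u, up_fine)
        rl2 = np.log2(prev_l2 / l2) if prev_l2 is not None else float('nan')
        rh1 = np.log2(prev_h1 / h1) if prev_h1 is not None else float('nan')
        print(f"n={n:4d}  L2 diff={l2:.3e}  rate={rl2:5.3f}  H1 diff={h1:.3e}  rate={rh1:5.3f}")
        prev_l2, prev_h1 = l2, h1
    prev_sol = (x, u)
```

The rate between successive refinements is $\log_2$ of the ratio of consecutive differences; for a smooth solution it should approach 2 in the $L^2$ norm and 1 in the $H^1$ norm, consistent with Table 1.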