II. FINITE DIFFERENCE METHOD

Computational Methods and Advanced Statistics Tools
1. Difference formulae
Finite difference formulae are the basis of methods for the numerical
solution of differential equations. Assuming that the function f(x) is
known at the points x − h and x + h, we can estimate f′(x), that is, the
slope of the curve y = f(x) at the point x, by means of the slopes of
various secant lines.
Def. II.1A (Forward, backward and central differences) Let f be
a C¹ function given at least in some neighbourhood of the point x.
The forward difference (obtained by joining the points (x, f(x)) and
(x + h, f(x + h)))

\[ D_+ f := \frac{f(x+h) - f(x)}{h}, \]

the backward difference (obtained by joining the points (x, f(x)) and
(x − h, f(x − h)))

\[ D_- f := \frac{f(x) - f(x-h)}{h}, \]

and the central difference (obtained by joining (x − h, f(x − h)) and
(x + h, f(x + h)))

\[ D_c f := \frac{f(x+h) - f(x-h)}{2h} \]

define the difference formulae

\[ f'(x) \sim D_+ f, \qquad f'(x) \sim D_- f, \qquad f'(x) \sim D_c f. \]
By proceeding in an analogous way, higher order derivatives can be
approximated. For example, approximating the function f by a second
degree polynomial through the points (x − h, f(x − h)), (x, f(x)) and
(x + h, f(x + h)), it turns out that D₊(D₋f) is a central difference
approximation of f″(x):

\[ D_+(D_- f) = \frac{D_- f(x+h) - D_- f(x)}{h}
= \frac{1}{h}\Big[\frac{1}{h}\big(f(x+h) - f(x)\big) - \frac{1}{h}\big(f(x) - f(x-h)\big)\Big]
= \frac{1}{h^2}\big[f(x+h) - 2f(x) + f(x-h)\big] \sim f''(x). \]
Thm. II.1B (On truncation error) Let f be sufficiently regular and
let E_T be its truncation error. Then E_T = O(h) in case of a forward
difference and E_T = O(h²) in case of a central difference, as h → 0.
Besides, for the central difference D₊(D₋f) approximating f″(x), the
truncation error is O(h²) as h → 0.
PROOF.
Let us apply Taylor's formula.¹ If the function is twice continuously
differentiable,

\[ E_T := \frac{f(x+h) - f(x)}{h} - f'(x)
= \frac{1}{h}\Big(f(x) + hf'(x) + \frac{h^2}{2}f''(\xi) - f(x)\Big) - f'(x)
= \frac{h}{2}f''(\xi) \]

where ξ is a suitable point of (x, x + h). If the function is C³ then
the central difference approximation gives the truncation error

\[ E_T = \frac{1}{2h}\big[f(x+h) - f(x-h)\big] - f'(x) \]
\[ = \frac{1}{2h}\Big[f(x) + hf'(x) + \frac{h^2}{2}f''(x) + \frac{h^3}{6}f'''(\xi_1)
- f(x) + hf'(x) - \frac{h^2}{2}f''(x) + \frac{h^3}{6}f'''(\xi_2)\Big] - f'(x) \]
\[ = \frac{1}{2h}\Big[\frac{h^3}{6}f'''(\xi_1) + \frac{h^3}{6}f'''(\xi_2)\Big] = O(h^2). \quad \triangle \]

Finally, if f ∈ C⁴, we have

\[ D_+(D_- f) = \frac{1}{h^2}\big[f(x+h) - 2f(x) + f(x-h)\big] \]
\[ = \frac{1}{h^2}\Big[f(x) + hf'(x) + \frac{h^2}{2}f''(x) + \frac{h^3}{6}f^{(3)}(x) + O(h^4)
- 2f(x) + f(x) - hf'(x) + \frac{h^2}{2}f''(x) - \frac{h^3}{6}f^{(3)}(x) + O(h^4)\Big] \]
\[ = \frac{1}{h^2}\Big[\Big(\frac{h^2}{2} + \frac{h^2}{2}\Big)f''(x) + O(h^4)\Big]
= f''(x) + O(h^2). \quad \triangle \]
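The orders stated in Thm. II.1B are easy to observe numerically. The following sketch (not part of the original notes; the test function f(x) = sin x at x = 1 is an assumption) measures the error of the three difference formulae and the factor by which it shrinks when h is divided by 10:

```r
# A numerical check of Thm. II.1B, assuming the test function f(x) = sin(x)
# at x = 1 (this experiment is not part of the original notes).
f  <- function(x) sin(x)
df <- function(x) cos(x)    # exact first derivative, to measure the error

x <- 1
h <- c(1e-1, 1e-2)

err.forward <- abs((f(x + h) - f(x)) / h         - df(x))             # D+ f
err.central <- abs((f(x + h) - f(x - h)) / (2*h) - df(x))             # Dc f
err.second  <- abs((f(x + h) - 2*f(x) + f(x - h)) / h^2 - (-sin(x)))  # D+(D- f)

# Dividing h by 10 should divide the O(h) error by about 10
# and the two O(h^2) errors by about 100.
err.forward[1] / err.forward[2]
err.central[1] / err.central[2]
err.second[1]  / err.second[2]
```

The first ratio is close to 10 and the other two are close to 100, in agreement with the theorem.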
Remark II.1C The influence of rounding errors in difference formulae is taken into account by the following considerations.
¹ Recall Taylor's formula: if f is C^{r+1} in some neighbourhood of x then the r-th
remainder satisfies

\[ f(x+h) - \Big[f(x) + hf'(x) + \frac{h^2}{2}f''(x) + \cdots + \frac{h^r}{r!}f^{(r)}(x)\Big] = O(h^{r+1}), \quad \text{as } h \to 0. \]

Here Landau's symbol O(h^m) as h → 0 denotes any infinitesimal function S(h)
such that S(h)/h^m stays bounded as h → 0. There are several explicit expressions
of the remainder: for example, if f ∈ C^{r+1} there is ξ ∈ [x, x + h] (if, say, h > 0)
such that the r-th remainder is equal to \frac{h^{r+1}}{(r+1)!} f^{(r+1)}(\xi).
Figure 1. Rounding error and truncation error in the approximation of the derivative by central differences: the total error, plotted against h, is the sum of a truncation component decreasing with h and a rounding component that grows as h → 0.
Let us denote by f̃(x ± h) the approximate values of f(x ± h) and let
us suppose that

\[ |\tilde f(x \pm h) - f(x \pm h)| \le \epsilon \]

where ε only depends on the precision of the machine used. Then,
denoting by D̃_c the numerically computed central difference,

\[ |\tilde D_c f(x) - f'(x)| \le |\tilde D_c f(x) - D_c f(x)| + |D_c f(x) - f'(x)|
\le \frac{\epsilon}{h} + \frac{h^2}{6} f^{(3)}(\xi). \]

The bound describes the real situation: there is a divergent component
of the error as h → 0. In other words, for decreasing h rounding errors
can prevail over the truncation error, with unreliable results. The situation
is described in Figure 1. An optimal choice of h corresponds to the
minimum of the function \epsilon/h + \frac{h^2}{6}\max|f^{(3)}|.
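This phenomenon is easy to observe in R. A sketch (not part of the original notes; the test function f(x) = eˣ at x = 1 and double precision ε ≈ 2.2·10⁻¹⁶ are assumptions):

```r
# Measured error of the central difference for f = exp at x = 1 as h shrinks.
# Truncation (~h^2/6) dominates for large h, rounding (~eps/h) for small h,
# so the error has a minimum at an intermediate h, as in Figure 1.
f <- exp
x <- 1
h <- 10^-(1:12)
err <- abs((f(x + h) - f(x - h)) / (2*h) - exp(1))   # exact f'(1) = e

h.best <- h[which.min(err)]   # attained at an intermediate h, not the smallest
```

In double precision the minimum falls near h ≈ (3ε)^{1/3} ~ 10⁻⁵, and for much smaller h the measured error grows again.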
2. Finite difference method
It is precisely because we are essentially incomplete, and always in
need of the relationship with the other, that we commit ourselves to work. It is
the primal communion with the other, with present history and with the history
still in the making, that gives meaning to my work, which is truly mine only
insofar as it is also work on my own "I", work in which the relationship
with the other takes an essential part.
The difference method is based on the choice of subintervals [x_i, x_{i+1}]
of [a, b] and the approximation of derivatives by suitable difference
operators. We take equal intervals with length h = (b − a)/n, so the
subdivision points are given by

\[ x_i = a + ih, \qquad i = 0, 1, \ldots, n. \]

We denote by {ȳ_i, i = 1, ..., n} the discrete approximation of the values
y(x_i) of the continuous-variable solution. Let us analyze the method in
the case of the linear problem

\[ (1) \qquad -y''(x) + q(x)y(x) = f(x), \quad a < x < b; \qquad y(a) = \alpha, \quad y(b) = \beta \]

where q(x), f(x) are assigned functions, continuous in the interval [a, b].
Under the hypothesis

\[ (2) \qquad q(x) \ge 0, \]

existence and uniqueness of the solution can be proved for the problem
(1). Moreover, the solution is C² and any further regularity assumption
on the data q(x), f(x) implies further regularity of the solution: for
example, if f(x), q(x) admit second derivatives, the solution has a fourth
derivative. At each node x_i, the second derivative is discretized by means of
the difference operator

\[ y''(x_i) \sim \frac{y(x_{i+1}) - 2y(x_i) + y(x_{i-1})}{h^2}. \]
Inserting such an approximation in the equation (1) for i = 1, 2, ..., n − 1
and replacing y(x_i) by ȳ_i, we obtain the following linear equations

\[ -\frac{\bar y_{i+1} - 2\bar y_i + \bar y_{i-1}}{h^2} + q(x_i)\bar y_i = f(x_i), \qquad i = 1, 2, \ldots, n-1, \]

or

\[ (3) \qquad -\bar y_{i+1} + (2 + h^2 q_i)\bar y_i - \bar y_{i-1} = h^2 f_i, \qquad i = 1, 2, \ldots, n-1, \]

where q_i = q(x_i), f_i = f(x_i). In particular, for i = 1 and i = n − 1,
taking into account the boundary conditions, we have the equations

\[ (4) \qquad (2 + h^2 q_1)\bar y_1 - \bar y_2 = \alpha + h^2 f_1 \qquad \text{for } i = 1, \]
\[ (5) \qquad (2 + h^2 q_{n-1})\bar y_{n-1} - \bar y_{n-2} = \beta + h^2 f_{n-1} \qquad \text{for } i = n-1. \]
Thm. II.2A (Existence and uniqueness of the numerical solution) The
difference method applied to the problem (1) leads to a discrete problem
which is the linear system of n − 1 equations

\[ (6) \qquad A\bar y = b \]

where ȳ = (ȳ₁, ..., ȳ_{n−2}, ȳ_{n−1})^T,

\[ A = \begin{pmatrix}
2 + h^2 q_1 & -1 & & & 0 \\
-1 & 2 + h^2 q_2 & -1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & -1 & 2 + h^2 q_{n-2} & -1 \\
0 & & & -1 & 2 + h^2 q_{n-1}
\end{pmatrix} \]

and

\[ b = (h^2 f_1 + \alpha,\; h^2 f_2,\; \ldots,\; h^2 f_{n-2},\; h^2 f_{n-1} + \beta)^T. \]

Under the hypothesis q(x) ≥ 0 such a system admits a unique solution.
PROOF
Equations (3), (4), (5) can be written in vector form as stated. Now,
from linear algebra, A is non-singular if

\[ z^T A z \ne 0, \quad \forall z \ne (0, \ldots, 0). \]

By hypothesis (2), A satisfies

\[ z^T A z \ge (z_1, \ldots, z_{n-1}) \begin{pmatrix}
2 & -1 & & & 0 \\
-1 & 2 & -1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & -1 & 2 & -1 \\
0 & & & -1 & 2
\end{pmatrix} \begin{pmatrix} z_1 \\ \vdots \\ z_{n-1} \end{pmatrix} \]

\[ = (z_1, \ldots, z_{n-1}) \begin{pmatrix}
2z_1 - z_2 \\
-z_1 + 2z_2 - z_3 \\
\vdots \\
-z_{j-1} + 2z_j - z_{j+1} \\
\vdots \\
-z_{n-2} + 2z_{n-1}
\end{pmatrix} \]

\[ = 2z_1^2 - z_1 z_2 - z_1 z_2 + 2z_2^2 - z_2 z_3 + \ldots - z_{n-2} z_{n-1} + 2z_{n-1}^2 \]
\[ = z_1^2 + (z_1 - z_2)^2 + \ldots + (z_{n-2} - z_{n-1})^2 + z_{n-1}^2 > 0 \]

for any (z₁, ..., z_{n−1}) ≠ (0, ..., 0). Thus A is positive definite, hence
non-singular: the discrete model is then solvable under the same hypothesis
as the continuous-variable one, and the statement is proved. △
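The positive definiteness just proved can also be checked numerically. A small sketch (not from the notes; the choices q(x) = x² on [0, 1] and n = 10 are assumptions) builds A and inspects its smallest eigenvalue:

```r
# Build the matrix A of Thm. II.2A for q(x) = x^2 on [0, 1] with n = 10
# and verify positive definiteness through its smallest eigenvalue.
q <- function(x) x^2
a <- 0; b <- 1; n <- 10
h  <- (b - a) / n
xi <- a + (1:(n - 1)) * h                 # interior nodes x_1, ..., x_{n-1}

A <- diag(2 + h^2 * q(xi))                # main diagonal: 2 + h^2 q_i
A[cbind(1:(n - 2), 2:(n - 1))] <- -1      # upper diagonal
A[cbind(2:(n - 1), 1:(n - 2))] <- -1      # lower diagonal

lambda.min <- min(eigen(A, symmetric = TRUE, only.values = TRUE)$values)
lambda.min > 0                            # TRUE: A is positive definite
```

Since A is symmetric, a positive smallest eigenvalue is equivalent to z^T A z > 0 for all nonzero z, which is exactly the property used in the proof.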
The discrete solution ȳ_i, i = 0, 1, ..., n, depends on the discretization
step h. An important question is then the convergence problem as h → 0:
is it possible to approximate the continuous solution, within a fixed
precision, for sufficiently small h? The approximation should be pointwise,
since the solution of the boundary value problem is continuous. The study
of convergence is introduced by the following example.
Example II.2B - Let us consider the Dirichlet problem

\[ y''(x) + x^2 y(x) = \sin(x), \qquad y(-4\pi) = 0, \quad y(4\pi) = 0. \]

The numerical computation can be performed by a program of the
following type (the language and environment R can be obtained from
CRAN, www.R-project.org):
programma<- function(a,b,ya,yb,n){
q<-function(x){x^2}
h<-(b-a)/n
A<-matrix(0, n-1, n-1)
for (i in 1:(n-1))            # note: 1:n-1 would mean (1:n)-1 in R
for (j in 1:(n-1))
{ if (i==j)
A[i,j] <- 2+ h^2 * q(a+i*h)
else
{if (j==i+1 | j==i-1 )
A[i,j] <- -1 }
}
f<-function(x){sin(x)}
bb<- numeric(n-1)
bb[1]<- h^2*f(a+h) + ya           # boundary term: alpha + h^2 f_1, eq. (4)
for (k in 2:(n-2))
bb[k] <- h^2*f(a+ k*h)
bb[n-1]<- h^2*f(a+(n-1)*h) + yb   # boundary term: beta + h^2 f_{n-1}, eq. (5)
y<- c(ya, solve(A, bb), yb)
x<- a+ h*(0:n)
plot(x, y,ty="l", main="y''(x) + x^2y(x)=sin(x)" )
y
}
-----------------------------------------------
SAVE IN YOUR R WORKING DIRECTORY; THEN, IN THE WORKSPACE:
> source("programma.txt")
> programma(-4*pi,4*pi,0,0,100)
-----------------------------------------------
The following theorem shows that, if the solution y(x) is sufficiently
regular, the errors |y(x_i) − ȳ_i| are O(h²) as h → 0.
Figure 2. The difference method applied to the forced harmonic oscillator y″(x) + x²y(x) = sin(x) with Dirichlet conditions y(−4π) = 0, y(4π) = 0.
Thm. II.2D (Convergence) Let y(x) be the solution of the problem
with Dirichlet conditions

\[ -y''(x) + q(x)y(x) = f(x), \qquad y(a) = \alpha, \quad y(b) = \beta, \]

with q(x) and f(x) continuous functions on [a, b] and q(x) ≥ 0. Assume,
moreover, that the function y(x) has bounded fourth derivative,
|y⁽⁴⁾(x)| ≤ M for any x ∈ [a, b]. If the approximation of y(x_i) obtained
by the difference method is denoted by ȳ_i, then for some C independent
of h,

\[ \max\{|y(x_i) - \bar y_i| : 1 \le i \le n-1\} \le C h^2, \]

i.e. the method has convergence of order 2.
Concepts of the PROOF
First recall the norm of a matrix and the associated inequalities. Then
the convergence is reduced to the two concepts of consistence and
stability.

0) Norm of a matrix.
If the vector space R^n is endowed with the usual scalar product
u · v ≡ u^T v, the Euclidean norm and the corresponding metric space
structure are induced:

\[ \|u\| = \sqrt{u^T u}, \qquad d(u, v) = \|u - v\|, \quad \forall u, v \in \mathbb{R}^n. \]

Consider a square n × n matrix B = (b_{i,j}) as a linear operator
u → v = Bu in R^n, such that v_i = (Bu)_i = \sum_j b_{i,j} u_j, 1 ≤ i ≤ n.
Now, for each definition of a norm for vectors, an operator norm of
matrices is induced:

\[ \|B\| := \sup_{u \ne 0} \frac{\|Bu\|}{\|u\|} \equiv \sup_{\|u\| = 1} \|Bu\|. \]

By such a definition, the inequality

\[ \|Bu\| \le \|B\| \, \|u\| \]

is satisfied. Not only the Euclidean norm of vectors is useful: sometimes
a "maximum" norm is convenient:

\[ \|u\|_m := \max\{|u_i| : 1 \le i \le n\}. \]

Although it is not associated to any scalar product, it turns out to be
equivalent to the Euclidean norm from the metric point of view:

\[ \exists c_1, c_2 : \quad \|u\| \le c_1 \|u\|_m, \qquad \|u\|_m \le c_2 \|u\|. \]

By this choice, the corresponding norm² of matrices is defined by

\[ \|B\|_m = \sup_{u \ne 0} \frac{\|Bu\|_m}{\|u\|_m}. \]

We are interested in the following consequence:

\[ \|Bu\|_m \le \|B\|_m \cdot \|u\|_m, \quad \forall u \in \mathbb{R}^n. \]
1) Consistence.
The quantity

\[ (7) \qquad \tau_i = -\frac{1}{h^2}\big[u(x_{i+1}) - 2u(x_i) + u(x_{i-1})\big] + q(x_i)u(x_i) - f(x_i) \]

is introduced. It represents the local discretization error: the error
committed at the node x_i when the true solution u(x) is inserted in the
discrete scheme. Now the true solution satisfies −u″(x_i) + q(x_i)u(x_i) −
f(x_i) = 0, and D₊(D₋u) equals u″ plus the truncation error (II.1B). Thus,
for some ξ_i, we have

\[ (8) \qquad |\tau_i| = \Big|{-u''(x_i)} - \frac{h^2}{12}u^{(4)}(\xi_i) + q(x_i)u(x_i) - f(x_i)\Big|
= \frac{h^2}{12}\big|u^{(4)}(\xi_i)\big| \le \frac{M h^2}{12}, \]

where M = max{|u⁽⁴⁾(x)| : a ≤ x ≤ b} is finite since u ∈ C⁴. Such an
inequality is true for max_i |τ_i| as well, so the vector τ = (τ₁, ..., τ_{n−1})
satisfies ‖τ‖_m ≤ M h²/12.

² Explicitly, it can be shown that \|B\|_m = \max_i \sum_{j=1}^n |b_{ij}|.

2) Stability
Denoting by u_h the vector of the approximations (ȳ_i), and by u the
vector of the true values of the solution, the convergence concerns the
global error of the solution E := u − u_h. Now, by the method, u_h
obeys the linear system

\[ A u_h = b. \]

Hence

\[ Au - Au_h = Au - b. \]

Now, the right hand side is just the discretization error τ, i.e. the
error committed when the analytic solution is inserted in the discrete
scheme. So

\[ A(u - u_h) = \tau. \]

Since A is invertible, the error of the solution can be evaluated.
In the maximum norm:

\[ (9) \qquad \|E\|_m = \|A^{-1}\tau\|_m \le \|A^{-1}\|_m \cdot \|\tau\|_m. \]

Now, by consistence, ‖τ‖_m → 0 as h → 0. So we only need to control
the norm of A⁻¹ as h → 0 (stability). For example, A is non-singular
uniformly with respect to h if the lowest eigenvalue, say λ₁(h), is larger
than a positive constant independent of h: ∀h, λ₁ ≥ c > 0. Then
‖A⁻¹‖_m ≤ 1/λ₁ ≤ c⁻¹. As stability is achieved, the convergence of the
method follows. △
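The second-order convergence can be observed on a test problem with known solution. A sketch (not part of the notes; the choices q ≡ 1 and f such that y(x) = sin(πx) solves −y″ + y = f on (0, 1) with zero boundary data are assumptions):

```r
# Convergence check for Thm. II.2D: -y'' + y = (pi^2 + 1) sin(pi x) on (0,1),
# y(0) = y(1) = 0, whose exact solution is y(x) = sin(pi x).
solve.bvp <- function(n) {
  h  <- 1 / n
  xi <- (1:(n - 1)) * h
  A  <- diag(rep(2 + h^2, n - 1))           # q(x) = 1: diagonal 2 + h^2 q_i
  A[cbind(1:(n - 2), 2:(n - 1))] <- -1      # upper diagonal
  A[cbind(2:(n - 1), 1:(n - 2))] <- -1      # lower diagonal
  bb <- h^2 * (pi^2 + 1) * sin(pi * xi)     # both boundary values are 0
  y  <- solve(A, bb)
  max(abs(y - sin(pi * xi)))                # global error in the maximum norm
}

err10 <- solve.bvp(10)
err20 <- solve.bvp(20)
err10 / err20    # close to 4: halving h divides the error by about 2^2
```

The ratio of the errors for n = 10 and n = 20 is close to 4, the signature of O(h²) convergence.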
3. Further applications of the difference method
In the book of Genesis, God assigns man two tasks: to keep the garden
and to cultivate it. So not only to keep, but also to cultivate. And to
cultivate is to bring forth something new, to bring to light potentialities
hidden in the seeds and in the strength of the earth, to let life continue
its march, clearing the ground of whatever stands in the way. It is to have
a gaze open to the future. But all this, given the etymological closeness of
"cultivate" and "cult", is also to honour the singular mystery that each of us
represents, inasmuch as we are unrepeatable words that God has addressed to
the world. It is to invest one's own "I".
Let us analyze the finite difference method applied to convection-diffusion
problems.
Ex. II.3A (Convection-diffusion problems with Dirichlet conditions)

\[ -y''(x) + p(x)y'(x) + q(x)y(x) = f(x), \quad a < x < b; \qquad y(a) = \alpha, \quad y(b) = \beta, \]

where p(x), q(x), f(x) are continuous functions on [a, b] with q(x) ≥ 0.
The analytical problem turns out to admit a unique solution in the
class C².
Setting h = (b − a)/n for a given n ∈ ℕ, let us consider the following
difference scheme obtained by use of central difference operators:

\[ -\frac{\bar y_{i+1} - 2\bar y_i + \bar y_{i-1}}{h^2} + p(x_i)\frac{\bar y_{i+1} - \bar y_{i-1}}{2h} + q(x_i)\bar y_i = f(x_i) \]

for i = 1, ..., n − 1. The boundary conditions are: ȳ₀ = α, ȳ_n = β.
The discrete problem is equivalent to a linear system Aȳ = f with a
tridiagonal coefficient matrix:

\[ A = \begin{pmatrix}
2 + h^2 q_1 & -1 + \frac{h}{2}p_1 & & & \\
\ddots & \ddots & \ddots & & \\
 & -1 - \frac{h}{2}p_i & 2 + h^2 q_i & -1 + \frac{h}{2}p_i & \\
 & & \ddots & \ddots & \ddots \\
 & & & -1 - \frac{h}{2}p_{n-1} & 2 + h^2 q_{n-1}
\end{pmatrix}. \]
Under the following hypothesis

\[ (10) \qquad \frac{h}{2}|p(x_i)| < 1 - \epsilon, \quad i = 1, 2, \ldots, n-1, \text{ for some } \epsilon > 0, \]

the lower and upper diagonals are strictly smaller, in absolute value,
than the principal diagonal. Proceeding as in the proof of Thm. II.2A,
setting for convenience 2 and −2 + ε as the diagonal and off-diagonal
elements of a suitable comparison matrix,

\[ z^T A z \ge z^T \begin{pmatrix}
2 & -2+\epsilon & & & \\
-2+\epsilon & 2 & -2+\epsilon & & \\
 & \ddots & \ddots & \ddots & \\
 & & -2+\epsilon & 2 & -2+\epsilon \\
 & & & -2+\epsilon & 2
\end{pmatrix} z \]

\[ = 2z_1^2 + 2(-2+\epsilon)z_1 z_2 + 2z_2^2 + \ldots + 2z_{n-2}^2 + 2(-2+\epsilon)z_{n-2}z_{n-1} + 2z_{n-1}^2 \]

\[ = \epsilon z_1^2 + \big(\sqrt{2-\epsilon}\,z_1 - \sqrt{2-\epsilon}\,z_2\big)^2 + \ldots + \epsilon z_{n-1}^2, \]

which is > 0 for any nonzero z ∈ R^{n−1}. Hence we have the following
result.
Thm. II.3B (Existence and uniqueness of solution of the discretized
problem) The discretized version of the convection-diffusion problem
above described admits a unique solution if h = (b − a)/n is chosen
sufficiently small with respect to the convection term p(x), as in (10).
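A sketch of how the program of Example II.2B could be adapted to this scheme (my own illustration, not the authors' code; the helper name `convdiff` and the test problem −y″ + y′ = 0, y(0) = 0, y(1) = 1, whose exact solution is (eˣ − 1)/(e − 1), are assumptions):

```r
# Central-difference scheme of Ex. II.3A: row i of A has entries
# -1 - (h/2) p_i, 2 + h^2 q_i, -1 + (h/2) p_i, and the boundary values
# enter the right hand side as in equations (4)-(5).
convdiff <- function(a, b, ya, yb, n, p, q, f) {
  h  <- (b - a) / n
  xi <- a + (1:(n - 1)) * h
  A  <- diag(2 + h^2 * q(xi))                                       # diagonal
  A[cbind(1:(n - 2), 2:(n - 1))] <- -1 + (h/2) * p(xi[1:(n - 2)])   # upper
  A[cbind(2:(n - 1), 1:(n - 2))] <- -1 - (h/2) * p(xi[2:(n - 1)])   # lower
  bb <- h^2 * f(xi)
  bb[1]     <- bb[1]     + (1 + (h/2) * p(xi[1])) * ya
  bb[n - 1] <- bb[n - 1] + (1 - (h/2) * p(xi[n - 1])) * yb
  c(ya, solve(A, bb), yb)          # approximate solution at x_0, ..., x_n
}

# Assumed test problem: -y'' + y' = 0, y(0) = 0, y(1) = 1 (p = 1, q = 0).
y <- convdiff(0, 1, 0, 1, 100,
              p = function(x) rep(1, length(x)),
              q = function(x) rep(0, length(x)),
              f = function(x) rep(0, length(x)))
```

With n = 100 we have (h/2)|p| = 0.005, so condition (10) is satisfied with room to spare, and the computed values stay close to the exact solution.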
II.3C NEWTON'S METHOD
Figure 3. The difference method applied to the convection-diffusion problem −y″(x) + (3x − 2)y′(x) + (x² + 1)y(x) = cos(x/2) with Dirichlet conditions y(−6π) = 0, y(6π) = 0.
In the scalar case with f ∈ C¹, f′(x) ≠ 0, the solution of the equation
f(x) = 0 is approximated by a sequence x_k starting from a (sufficiently
near) point x₀. The idea, at the k-th step, is to replace the curve
f by the tangent line at x_k, i.e. to solve the linear equation 0 =
f(x_k) + f′(x_k)(x − x_k):

\[ x_{k+1} = x_k - [f'(x_k)]^{-1} f(x_k). \]

In several dimensions, suppose that f is regular with a non-singular
Jacobian matrix:

\[ f : \mathbb{R}^m \to \mathbb{R}^m, \quad f \in C^1, \quad J \text{ non-singular}. \]

Let us take an initial point x₀ sufficiently near the true solution of the
equation f(x) = 0. At the k-th step, the idea is to replace the function
y = f(x) by its first order Taylor formula, i.e. to solve the linear system
0 = f(x_k) + J(x_k)(x − x_k), where J is the Jacobian:

\[ x_{k+1} = x_k - [J(x_k)]^{-1} f(x_k). \]
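In R the multidimensional step can be written directly with solve(J, f). A small sketch (the 2×2 system below is an assumed example, not from the notes):

```r
# Newton's method in R^2 for the system x1^2 + x2^2 = 1, x1 - x2 = 0,
# whose root in the first quadrant is (1/sqrt(2), 1/sqrt(2)).
f <- function(x) c(x[1]^2 + x[2]^2 - 1,
                   x[1] - x[2])
J <- function(x) matrix(c(2*x[1], 2*x[2],      # Jacobian, row by row
                          1,      -1),
                        2, 2, byrow = TRUE)

x <- c(1, 0.5)                     # initial point near the root
for (k in 1:10)
  x <- x - solve(J(x), f(x))       # x_{k+1} = x_k - J(x_k)^{-1} f(x_k)
x
```

Note that solve(J(x), f(x)) solves the linear system J(x_k)·d = f(x_k) instead of forming the inverse matrix explicitly, which is both cheaper and more accurate.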
EX. II.3D - Let us solve the equation x + eˣ = 0 by Newton's method.
Since the derivative f′(x) = 1 + eˣ is everywhere nonzero, Newton's
iteration converges to the unique solution. The R code can be as
follows:
newton1<- function(x0,n){
f<-function(x){x+exp(x)}
Df<-function(x){1+exp(x)}
x<-numeric(n+1)
x[1]<-x0
for (i in 2:(n+1))                      # note: 2:n+1 would mean (2:n)+1 in R
{x[i]<-x[i-1] - f(x[i-1])/Df(x[i-1])}   # Newton step
z<-seq(x0-5,x0+5,length=100)
plot(z,f(z),ty="l", col="red")
abline(0,0,col="green")
c(x, f(x[n+1]))
}
-------------------------------------------------------
SAVE IN YOUR R WORKING DIRECTORY; THEN, IN THE WORKSPACE:
> source("newton1.txt")
> newton1(-2,10)
[1] -2.0000000 -0.3576088 -0.5587083 -0.5671304 -0.5671433 -0.5671433
[7] -0.5671433 -0.5671433 -0.5671433 -0.5671433 -0.5671433  0.0000000
Figure 4. Graph of f(x) = x + eˣ, vanishing at about −0.5671433.

Ex. II.3E (The difference method applied to a nonlinear problem)

\[ y''(x) = \tfrac{1}{2}(1 + x + y)^3, \quad x \in (0, 1); \qquad y(0) = y(1) = 0. \]

Setting f(x, y) := ½(1 + x + y)³, let us notice that

\[ \frac{\partial f}{\partial y} = \frac{3}{2}(1 + x + y)^2 \ge 0. \]

Then it can be shown that such a boundary value problem admits a
unique solution. Let us discretize the interval (0, 1) by the points
x_i = ih, with h = 1/(n + 1) and i = 0, 1, ..., n + 1. Let us denote
by ȳ_i the values of the approximated solution at the points x_i. By
discretizing y″ by means of central differences, the following nonlinear
system is obtained:

\[ \bar y_0 = 0, \quad \bar y_{n+1} = 0; \qquad
-\frac{\bar y_{i+1} - 2\bar y_i + \bar y_{i-1}}{h^2} + \frac{1}{2}(1 + x_i + \bar y_i)^3 = 0, \quad 1 \le i \le n. \]
Eliminating the variables ȳ₀, ȳ_{n+1}, the system can be written in the
following matrix form

\[ (11) \qquad A\bar y + h^2 B(\bar y) = 0 \]

with ȳ = (ȳ₁, ..., ȳ_n)^T and

\[ A = \begin{pmatrix}
2 & -1 & & & \\
-1 & 2 & -1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & -1 & 2 & -1 \\
 & & & -1 & 2
\end{pmatrix}, \qquad
B(\bar y) = \begin{pmatrix} f(x_1, \bar y_1) \\ \vdots \\ f(x_n, \bar y_n) \end{pmatrix}. \]

The nonlinear system (11) can be solved, for example, by Newton's
method. Indeed, the Jacobian matrix of the system is given by

\[ (12) \qquad J = A + h^2 B_y = A + h^2 \cdot \mathrm{diag}\Big(\frac{\partial f}{\partial y}(x_1, \bar y_1), \ldots, \frac{\partial f}{\partial y}(x_n, \bar y_n)\Big). \]

By virtue of the property f_y ≥ 0, such a Jacobian turns out to be
a diagonally dominant, positive definite matrix. Therefore
Newton's method is convergent if the initial values are chosen
sufficiently near the solution. The implementation of Newton's method
is the following. A plausible initial point ȳ⁽⁰⁾ is chosen; the (r + 1)-th
iterated vector is obtained from ȳ⁽ʳ⁾:

\[ \bar y^{(r+1)} = \bar y^{(r)} - \big(A + h^2 B_y\big)^{-1}\big(A\bar y^{(r)} + h^2 B(\bar y^{(r)})\big). \]
As it is easily verified, the exact solution of the continuous problem is
y(x) = 2/(2 − x) − x − 1.
By a program of the following type we obtain y⁽¹⁾ from an initial vector,
y⁽⁰⁾ = 0. By the same program we obtain y⁽²⁾ from y⁽¹⁾, and so on.
It turns out that y⁽¹⁾ is already a good approximation of the analytic
solution y(x) = 2/(2 − x) − x − 1. Actually convergence can be proved,
distinguishing as above consistence and stability, and using the fact
that the nonlinear term is monotonic.
iterabile<-function(n,y)
{
h<-1/n
y<-rep(y, length.out=n-1)       # allows a scalar initial guess, e.g. y = 0
A<-matrix(0, n-1, n-1)
for(i in 1:(n-1))               # note: 1:n-1 would mean (1:n)-1 in R
for (j in 1:(n-1))
{if (i==j) A[i,j]<- 2
if (j==i+1 | j==i-1) A[i,j]<- -1}
x<-1:(n-1)*h
f<-function(x,y) {0.5*(1+x+y)^3 }
Df<-function(x,y){ 1.5*(1+x+y)^2}
J1<- A + h^2*diag(Df(x,y))      # Jacobian (12) at the current iterate y
y1<- y + solve(J1, -A%*%y - h^2*f(x,y))   # one Newton step
plot(x,y1,ty="l",lwd=2,main="y''=(1/2)(1+x+y)^3; y(0)=y(1)=0")
y1
}
------------------------------------------
SAVE IN YOUR R WORKING DIRECTORY; THEN, IN THE WORKSPACE:
> source("iterabile.txt")
> y<- rep(0,99)
> iterabile(100,y)
> y1<-iterabile(100,y)
> y2<- iterabile(100,y1)
> y3<- iterabile(100,y2)
...
Figure 5. The red curve is the first iteration solution of the nonlinear equation y″(x) = ½(1 + x + y)³. The green curve is the exact solution.

> y1<-as.vector(iterabile(100,0))
> x<-seq(0,1,length=99)
> plot(x,y1,ty="l",col="red", lwd=2, main="soluzione numerica")
> curve(2/(2-x)-x-1, add=TRUE,col="green",lwd=2)
4. The elements of R
Apart from the Cross there is no other ladder by which we may get to heaven.
(St. Rose of Lima, 1586-1617)
> x<- 4                          assignment
> x
[1] 4
>
> y<- c(2,7,4,1)                 assignment of a vector
> y                              (it is printed this way, but it is a column vector)
[1] 2 7 4 1
> ls()                           lists all the objects in the workspace
[1] "x" "y"
> x*y                            element-by-element product
[1]  8 28 16  4
> a<- matrix(-5:14,4, 5)         assignment of a matrix
> a                              (note: it fills by columns)
     [,1] [,2] [,3] [,4] [,5]
[1,]   -5   -1    3    7   11
[2,]   -4    0    4    8   12
[3,]   -3    1    5    9   13
[4,]   -2    2    6   10   14
> u <- c(2,5,3)                  assignment of data
> u
[1] 2 5 3
> sum(u)/length(u)               sample mean of u
[1] 3.333333
> mean(u)                        sample mean of u
[1] 3.333333
> sum((u-mean(u))^2)/(length(u)-1)            sample variance of u
[1] 2.333333
> var(u)                         sample variance of u
[1] 2.333333
> v<- c(10,20,30)
> u<-c(2,5,3)
> sum((u-mean(u))*(v-mean(v))/(length(u)-1))  covariance of u and v
[1] 5
> cov(u,v)                       covariance of u and v
[1] 5
> zz<-seq(1,4,0.33)              sequence with a given step
> zz
[1] 1.00 1.33 1.66 1.99 2.32 2.65 2.98 3.31 3.64 3.97
> zz<-seq(1,4,length=10)         sequence with a given number of components
> zz
[1] 1.000000 1.333333 1.666667 2.000000 2.333333 2.666667 3.000000 3.333333
[9] 3.666667 4.000000
> n<-5                           loops:
> for (i in 1:n)                 for(var in a set) do
+ for (j in 1:n)                 if(condition) do else do
+ if (i==j)
+ A[i,j]<-1
> A
     [,1] [,2] [,3] [,4] [,5]
[1,]    1    0    0    0    0
[2,]    0    1    0    0    0
[3,]    0    0    1    0    0
[4,]    0    0    0    1    0
[5,]    0    0    0    0    1
> getwd()                        the default working directory
[1] "C:/Programmi/R/R-2.3.1"
> setwd("C:/Users/maioli/Rcodici")    changes the working directory

In the working directory, using a plain editor such as Notepad,
we write the code file, calling it "sinusoide.txt":

sinusoide<- function(x,omega){
sin(x*omega)}

then in the workspace we load and run it:

> source("sinusoide.txt")
> x<-seq(-5,5,length=100)
> plot(x,sinusoide(x,4),ty="l")
In Notepad we write "tabella.txt" (columns: peso = weight, altezza =
height, cibo = food):

peso altezza cibo
1 20 70 10
2 15 60 5
3 17 50 8
4 23 67 12
5 25 58 18

In the workspace we read it with "read.table":
> read.table("tabella.txt")
  peso altezza cibo
1   20      70   10
2   15      60    5
3   17      50    8
4   23      67   12
5   25      58   18
> T<-read.table("tabella.txt")   assign the table to the variable T
> x<- T$peso                     assign the three columns of the table
> y<-T$altezza
> z<- T$cibo
> M<- matrix(c(x,y,z),5,3)       assign the data matrix
> M                              (3 columns, i.e. 3 variables;
     [,1] [,2] [,3]               5 rows, i.e. 5 observed units)
[1,]   20   70   10
[2,]   15   60    5
[3,]   17   50    8
[4,]   23   67   12
[5,]   25   58   18
> media<-c(mean(x),mean(y),mean(z))   the vector of the three means
> media
[1] 20.0 61.0 10.6
> var(M)                         covariance matrix of the sample data in M,
     [,1]  [,2]  [,3]            i.e. cov(column i of M, column j of M)
[1,] 17.00 10.25 19.25           for i,j = 1,2,3
[2,] 10.25 62.00  3.75
[3,] 19.25  3.75 23.80
> pnorm(2.3, mean=0, sd=1)       distribution function of N(0;1) at 2.3
[1] 0.9892759
> pnorm(5.8, mean=2, sd=3)       distribution function of N(2;9) at 5.8
[1] 0.8973627
> qnorm(0.897,mean=2, sd=3)      the 0.897 quantile of N(2;9)
[1] 5.793923
> dnorm(2.7,mean=0,sd=5)         density function of N(0;25) at 2.7
[1] 0.0689636