Exercise 5
Calculation of the energy and geometry optimization for naphthalene
A good measure of the quality of a force field is to perform a geometry optimization and compare the result to
an accurate optimization with an ab-initio method or to an experimental structure. In principle, all modeling
efforts start with an optimization, to ensure that one starts from the (local) minimum-energy structure of that
particular force field. Calculations using structures that are perhaps experimentally more accurate, but that lie
higher up in the potential well, lead to unreliable results.
a) Now that we have determined the point charges for ethanol and naphthalene, we can use these for energy
calculations. Load the xyz-file for naphthalene. Click on the button “Distance” and measure the different
bond lengths of naphthalene. Click “monitor” after each selection.
b) Click on the button with “FF” which stands for force fields. Now select the “Amber/GAFF” force field,
which is a combination of the protein force field Amber and the small molecule force field GAFF (general
Amber force field).
c) Now click on a few atoms. You can see that gmolden chooses an atom type for each atom, as defined
in the force field. If you were to choose one of the other force fields, the atom typing would go
wrong, since these are all protein force fields. Try Charmm, for instance.
d) Switch back to Amber/GAFF and press “OPT”. This opens a window to start the optimization run.
Deselect “Calc. Charge”, since we would like to use our own charges, and specify a job name: 1naphthalene.
How much did the bond length change after minimization?
Exercise 6
Implementing a Minimizer in a Computer Program
Minimization typically involves repeated evaluation of the values and derivatives of possibly rather complex multivariable functions. Such tasks are well suited for computers. Writing an efficient minimizer for problems commonly encountered in chemical calculations can be a challenging programming task even for expert
programmers. In this tutorial we will illustrate how to program minimizers for a simple quadratic function,
without worrying much about the memory efficiency or the execution speed of the code.
General Principles
It is relatively straightforward to write a computer program that minimizes one-dimensional functions using the
steepest descent or Newton-Raphson methods. In general, such a program must have the following parts:
• The definition of the function to be minimized. For example, in the case of a diatomic molecule that
obeys Hooke’s law, the energy function for bond stretching is a quadratic equation with two parameters:
the harmonic force constant and the equilibrium distance.
• The prescription for calculating analytical or numerical derivatives of the function with respect to the coordinates. In the steepest descent method, only first derivatives are needed; in the case of the Newton-Raphson
method, both the first and the second derivatives are required.
• The prescription for updating the geometry. The formula for updating the geometry depends on the minimization method.
• A loop that carries out iterations either until the minimum is found or until a maximum number of search
steps has been taken. A criterion for what constitutes a minimum (e.g. force smaller than 0.01) must be
provided.
• Statements that output the information about the progress of the calculation.
• Any statements that are specific for a particular language, such as end in Matlab.
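As a concrete illustration of these parts, a minimal Newton-Raphson minimizer for the one-dimensional harmonic bond potential could look as follows. This is a Python sketch (the course material uses Matlab); the force constant, equilibrium distance, starting point, and force threshold of 0.01 are taken from the listing in Exercise 6.1.

```python
# Newton-Raphson minimizer for a 1-D harmonic bond potential (sketch).
# Parameter values match the Matlab listing in Exercise 6.1.
k = 2743.0      # harmonic force constant
r_eq = 1.1283   # equilibrium distance

def first_deriv(r):
    return k * (r - r_eq)   # dE/dr for E = 0.5 k (r - r_eq)^2

def second_deriv(r):
    return k                # constant for a harmonic potential

r = 1.55                          # initial search point
for i in range(50):               # loop with a maximum number of steps
    g = first_deriv(r)
    if abs(g) < 0.01:             # convergence criterion on the force
        break
    r = r - g / second_deriv(r)   # Newton-Raphson geometry update
print(f"converged to r = {r:.4f} after {i} update(s)")
```

Because the potential is exactly harmonic, the second derivative is constant and Newton-Raphson reaches the minimum in a single update.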
Exercise 6.1
Steepest descent minimizer
The code below illustrates a Matlab implementation of the steepest descent minimizer for the one-dimensional
harmonic potential. This would represent a diatomic molecule.
%
% Steepest Descent minimizer: Harmonic Potential
% Dr. Herma Cuppen, based on a Python minimizer by
% Dr. Kalju Kahn, Chem 126 (UCSB)
%
% Level one: short, ugly, and rigid

% Potential Function Parameters
k     = 2743.0;               % Harmonic force constant
r_eq  = 1.1283;               % Equilibrium distance

% Initial Search Point
r_ini = 1.55;

% Steepest Descent search
r = r_ini;
step = 0.0001;
E = 0.5 * k * (r - r_eq)^2;   % Energy
F = - k * (r - r_eq);         % Force (- first derivative)
fprintf('Step\tDistance\tEnergy\t\tForce\t\tHessian\n')
fprintf('0\t%g\t\t%8.6g\t%g\n', r, E, F)            % Initial values
for i = 1:50                  % Iterate up to 50 times
    r = r + step*F;           % Steepest Descent update
    E = 0.5 * k * (r - r_eq)^2;
    F = - k * (r - r_eq);
    fprintf('%d\t%g\t\t%8.6g\t%g\n', i, r, E, F)    % Print results
    if (abs(F) < 0.01)        % Are we at the stationary point already?
        break
    end
end
You can find this code as HarmOscSD1.m in the course directory. Read it carefully to see if you understand
the code. Examine the output and verify that the minimum has been found. Change the equilibrium distance
and the force constant and check how this affects the number of iterations needed to find the minimum.
Exercise 6.2
Minimization of Functions with Two Independent Variables
Implementing a two-dimensional minimization program is more complicated because the positions and first
derivatives are now vectors. The prescription for updating the positions can be nicely expressed in matrix notation.
For the steepest descent:
\[
\begin{pmatrix} r_1 \\ r_2 \end{pmatrix}_{\mathrm{new}}
= \begin{pmatrix} r_1 \\ r_2 \end{pmatrix}_{\mathrm{old}}
- \mathrm{stepsize} \cdot
\begin{pmatrix} \partial E/\partial r_1 \\ \partial E/\partial r_2 \end{pmatrix}_{\mathrm{old}}
\]
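As a sketch of this vector update, assuming two uncoupled harmonic bonds with the parameters of Exercise 6.1 (an illustrative choice, not part of the exercise statement), a Python version could read:

```python
# Steepest descent in two variables: r = (r1, r2) are the two bond
# lengths of a linear triatomic.  Sketch assuming two uncoupled
# harmonic bonds with the same force constant (an illustrative
# assumption; the course code is in Matlab).
k = 2743.0
r_eq = 1.1283
stepsize = 0.0001

def gradient(r):
    # dE/dr_i = k * (r_i - r_eq) for each (uncoupled) harmonic bond
    return [k * (ri - r_eq) for ri in r]

r = [1.55, 1.30]                          # initial bond lengths r1, r2
for i in range(500):                      # cap on the number of steps
    g = gradient(r)
    if max(abs(gi) for gi in g) < 0.01:   # force threshold, as in 6.1
        break
    # vector update: r_new = r_old - stepsize * gradient
    r = [ri - stepsize * gi for ri, gi in zip(r, g)]
print(r, i)
```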
a) Create a new script which optimizes a linear molecule consisting of three atoms using steepest descent.
Instead of a single second derivative, a second partial derivative matrix (Hessian) with four components
must be calculated when using the Newton-Raphson method:
\[
H = \begin{pmatrix}
\dfrac{\partial^2 E}{\partial r_1^2} & \dfrac{\partial^2 E}{\partial r_1\,\partial r_2} \\
\dfrac{\partial^2 E}{\partial r_1\,\partial r_2} & \dfrac{\partial^2 E}{\partial r_2^2}
\end{pmatrix}
\]
Optimization then occurs through:
\[
r_{\mathrm{new}} = r_{\mathrm{old}} + H^{-1}_{\mathrm{old}} \cdot F_{\mathrm{old}}
\]
Be careful: the force is minus the gradient.
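A single Newton-Raphson step can be sketched in Python as follows; the two uncoupled harmonic bonds are again an illustrative assumption, which makes the Hessian diagonal:

```python
# One Newton-Raphson step: r_new = r_old + H^{-1} . F
# Sketch for two uncoupled harmonic bonds (illustrative assumption).
k = 2743.0
r_eq = 1.1283

def force(r):
    # F = -grad E: mind the minus sign
    return [-k * (ri - r_eq) for ri in r]

# Hessian of two uncoupled harmonic bonds: H = diag(k, k),
# so its inverse is diag(1/k, 1/k)
H_inv = [[1.0 / k, 0.0], [0.0, 1.0 / k]]

r = [1.55, 1.30]
F = force(r)
# matrix-vector product H^{-1} . F
step = [H_inv[0][0] * F[0] + H_inv[0][1] * F[1],
        H_inv[1][0] * F[0] + H_inv[1][1] * F[1]]
# Newton-Raphson update: r_new = r_old + H^{-1} . F
r = [r[0] + step[0], r[1] + step[1]]
print(r)
```

For this purely harmonic model the step lands exactly on the minimum, whatever the starting geometry.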
b) Create another script which optimizes a linear molecule consisting of three atoms using Newton-Raphson.
c) How many iterations are needed for the algorithm to converge and why?
You have probably used an analytical expression for the Hessian. In reality, the Hessian is often calculated
numerically.
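A common way to do this is by central finite differences of the energy; the sketch below uses an illustrative displacement h and the same two-bond harmonic model (both assumptions, not prescribed by the exercise):

```python
# Central-difference Hessian of a two-variable energy function.
# The energy model (two uncoupled harmonic bonds) and the
# displacement h are illustrative assumptions.
k = 2743.0
r_eq = 1.1283
h = 1.0e-4

def energy(r):
    return 0.5 * k * ((r[0] - r_eq) ** 2 + (r[1] - r_eq) ** 2)

def num_hessian(f, r):
    n = len(r)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # four displaced geometries for d2E / (dr_i dr_j)
            rpp = list(r); rpp[i] += h; rpp[j] += h
            rpm = list(r); rpm[i] += h; rpm[j] -= h
            rmp = list(r); rmp[i] -= h; rmp[j] += h
            rmm = list(r); rmm[i] -= h; rmm[j] -= h
            H[i][j] = (f(rpp) - f(rpm) - f(rmp) + f(rmm)) / (4 * h * h)
    return H

H = num_hessian(energy, [1.55, 1.30])
print(H)
```

For a quadratic energy the central-difference formula is exact up to rounding, so the result is very close to diag(k, k).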
d) Make a new function in which you update the inverse Hessian (B) each iteration starting from the identity
matrix following the Davidon-Fletcher-Powell formula:
\[
y^{(k)} = g^{(k+1)} - g^{(k)}
\]
\[
B_{k+1} = B_k - \frac{B_k\, y_k\, y_k^T\, B_k}{y_k^T\, B_k\, y_k} + \frac{\Delta r_k\, (\Delta r_k)^T}{y_k^T\, \Delta r_k}
\]
with g(k) the gradient of the function at the position in iteration k, and ∆rk the change in the position between
iterations k + 1 and k. How many iterations do you need now?
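A possible Python sketch of such a quasi-Newton loop is shown below; the energy model (two uncoupled harmonic bonds), the simple backtracking line search, and the helper names (matvec, dot) are illustrative assumptions, not part of the course material:

```python
# Quasi-Newton minimization with the Davidon-Fletcher-Powell (DFP)
# update of the inverse Hessian B, starting from the identity matrix.
# Energy model and line search are illustrative assumptions.
k = 2743.0
r_eq = 1.1283

def energy(r):
    return 0.5 * k * sum((ri - r_eq) ** 2 for ri in r)

def grad(r):
    return [k * (ri - r_eq) for ri in r]

def matvec(M, v):
    return [sum(Mij * vj for Mij, vj in zip(Mi, v)) for Mi in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

r = [1.55, 1.30]
B = [[1.0, 0.0], [0.0, 1.0]]           # initial guess: identity matrix
g = grad(r)
for it in range(100):
    if max(abs(gi) for gi in g) < 0.01:
        break
    p = [-s for s in matvec(B, g)]     # quasi-Newton direction -B.g
    alpha = 1.0                        # simple backtracking line search
    while energy([ri + alpha * pi for ri, pi in zip(r, p)]) > energy(r):
        alpha *= 0.5
    dr = [alpha * pi for pi in p]      # accepted step Delta r
    r = [ri + di for ri, di in zip(r, dr)]
    g_new = grad(r)
    y = [gn - go for gn, go in zip(g_new, g)]
    By = matvec(B, y)
    yBy = dot(y, By)
    ydr = dot(y, dr)
    # DFP: B <- B - (B y y^T B)/(y^T B y) + (dr dr^T)/(y^T dr)
    for i in range(2):
        for j in range(2):
            B[i][j] += dr[i] * dr[j] / ydr - By[i] * By[j] / yBy
    g = g_new
print(r, it)
```

Starting from B = I, the first step is just a damped steepest-descent step; the DFP update then builds curvature information into B, so subsequent steps approach Newton-Raphson steps.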