Advances in Natural and Applied Sciences, 3(3): 350-356, 2009
ISSN 1995-0772
© 2009, American Eurasian Network for Scientific Information
This is a refereed journal and all articles are professionally screened and reviewed
ORIGINAL ARTICLE
Feed-forward Neural Networks for Precipitation and River Level Prediction
1Ogwueleka, Toochukwu Chibueze and 2Ogwueleka, Francisca Nonyelum
1Department of Civil Engineering, University of Abuja
2Department of Computer Science, University of Abuja
Corresponding Author: Ogwueleka, Toochukwu Chibueze, Department of Civil Engineering, University of Abuja
Ogwueleka, Toochukwu Chibueze and Ogwueleka, Francisca Nonyelum; Feed-forward Neural
Networks for Precipitation and River Level Prediction; Adv. in Nat. Appl. Sci., 3(3): 350-356, 2009.
ABSTRACT
This study presents a new approach that uses the artificial neural network (ANN) technique to predict precipitation and river level. The
ANN model predicted river level better when ANN-predicted precipitation data were used in place of missing precipitation data.
The results of the study showed that ANN predictions of precipitation and river water level are
reasonable, suitable and of acceptable accuracy. ANNs were found to provide a robust and practical method
for error detection and correction. The efficiency and accuracy of the ANN models were measured based on
the mean square error and R2. Prediction of precipitation and river level by ANN will be useful for flood
management.
Key words: Artificial neural networks, water level prediction, precipitation, feedforward neural network
Introduction
Buguma Creek is located in Asari Toru Local Government Area of Rivers State, Nigeria. Buguma town
is about 23 km south-west of Port Harcourt, the capital of Rivers State. The area is strongly influenced by tidal
effects. The study area experiences copious rainfall, with annual total rainfall in excess of 2,593 mm, which
makes the area prone to flooding. Accurate forecasting of water level is important to warn residents of
impending floods. A successful flood management strategy requires accurate forecasting of river flows (White, 2001).
Commonly used methods for estimating missing precipitation are the Thiessen polygon, arithmetic mean, inverse distance,
isohyetal, and normal-ratio methods. With the exception of the normal-ratio and arithmetic mean methods, these require
parameters such as distance and the topographical conditions of the area. Water level prediction requires accurate
estimation of runoff from a rainfall event and an accurate hydraulic model for a given discharge. Runoff
depends on precipitation, meteorological, storage, drainage basin, geographical and geological characteristics.
These parameters are not always available, which makes estimation of water level very difficult. Neural networks were selected as modelling
tools because of their capability to capture non-linear relationships present in the data as well as their ability
to self-train.
Neural networks (NNs) belong to the "black-box" class of models. These models do not require detailed
knowledge of the internal functions relating inputs and outputs (El-Din and Smith, 2002). Artificial neural
networks have an inherent ability to learn and recognise highly nonlinear relationships (Swingler, 1996), and
then organise dispersed data into a nonlinear model (Hecht, 1989). Many researchers have discussed the
history, capability, kinds, structure and learning algorithms of neural networks (Gob et al., 2001; Loke et al.,
1997; Lek et al., 1999). Several applications of ANNs to water resources and environmental issues have been
gathered in Coskun et al. (2008). The literature related to the use of ANNs with missing or incomplete
hydrological data is growing (Khalil et al., 1998; Elshorbagy et al., 2000). ANNs as data-driven empirical
models have been successfully applied in many fields of water technology: prediction of wastewater inflow rate
(Ogwueleka and Ogwueleka, 2009), membrane technology modelling (Strugholtz et al., 2008; Cabassud et al.,
2007), rainfall forecasting (Hung et al., 2009), river flow forecasting (Kumar et al., 2004), modelling of coagulant dosing
(Maier et al., 2004), and flux during ultra-filtration and after backwashing (Teodosiu et al., 2000).
The first objective of this paper is to present ANN models for predicting missing precipitation; the second
objective is to predict river level from the available data and to compare the results.
Study area
The input data for the present study were collected from Buguma Creek, which is situated in Buguma
town. Geographically, the area lies between 480800mE to 48740mE and 80200mN to 82800mN. Ecologically, Buguma is
within the mangrove swamp ecosystem of the Eastern Niger Delta, inundated by creeks and creeklets that
include Degema Creek, Buguma Creek and Amayanabo Creek. Mudflats were observed during low tide.
The study area is characterized by two dominant seasons, wet and dry. The dry season begins in
November and ends in March and is defined by the north-east trade winds, which bring the harmattan. The wet
season starts in April and ends in October and is characterized by south-west winds that are moisture laden
from the Atlantic Ocean.
The mean daily temperature ranged from 32.1 to 35.1 °C during the dry season sampling period and from 28.1
to 33.5 °C during the wet season sampling period. The mean values were 34.3 °C and 31.77 °C for the dry and wet season
periods respectively. The relative humidity ranged from 67.1 to 93.2%. The surface wind speeds ranged from
0.3 m/s to 4.0 m/s.
Materials and methods
The neural network approach is a branch of artificial intelligence. An ANN is a mathematical model with
a highly connected structure similar to brain cells. ANNs consist of a number of neurons arranged in
different layers: an input layer, an output layer and one or more hidden layers (Figure 1). Various ANN
topologies have been proposed to date, such as Hopfield nets, Hamming nets, Carpenter/Grossberg classifiers,
perceptrons, multilayer perceptrons, and Kohonen self-organizing maps (Haykin, 1994). Multilayer feed-forward
networks (also referred to as multilayer perceptrons) are simple but powerful and very flexible tools for
function approximation. Feed-forward neural networks are most commonly trained using a back-propagation
algorithm.
Fig. 1: Architecture of neural network
The ANN approach involves designing the architecture, scaling the data, training the network, reviewing
the results, and then validating and applying the neural network. Selection of the architecture, scaling of the
data and training the network occur simultaneously through an iterative process until a satisfactory solution
is obtained.
The model used for all classification attempts was a standard three-layer back-propagation neural network
with N input nodes, L hidden nodes, and K output nodes. A layer of neurons is determined by its weight
matrix, a bias vector and a transfer function. The optimum number of hidden layers and the optimum number
of nodes in each layer was found by trial and error. It has been proven that a network with one
hidden layer can approximate any continuous function, given sufficient degrees of freedom (Hornik et al.,
1989); therefore, the use of a single hidden layer was considered. The number of neurons in the output layer
equals the number of desired outputs. The input neurons receive and process the input signals and send an
output signal to other neurons in the network. Each neuron can be connected to the other neurons and has an
activation function and a threshold function, which can be continuous, linear or non-linear. The signal
passing through a neuron is transformed by weights, which modify the functions and thus the output signal that
reaches the following neuron. Modifying the weights of all neurons in the network changes the output.
Information propagates from the input layer to the output layer through the hidden layer. For the pth training
pattern, assume that the input elements of R dimensions are x1p, x2p, ..., xRp. The input of the jth node
in the hidden layer is then

$net_{jp} = \sum_{i=1}^{R} w_{ji} x_{ip} + b_j$   (1)

The output of the jth node can be obtained as

$y_{jp} = f(net_{jp})$   (2)

and so the input of the kth node in the output layer is

$net_{kp} = \sum_{j=1}^{L} w_{kj} y_{jp} + b_k$   (3)

The kth node then has the output

$y_{kp} = f(net_{kp})$   (4)

in which w and b are the weights and biases of the network, respectively, and f is the activation function.
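As a concrete illustration of equations (1)-(4), the following Python/NumPy sketch builds a small three-layer network and evaluates the forward pass. The layer sizes, random initial weights and the forward_pass helper are illustrative assumptions, not values taken from this study.

```python
import numpy as np

def sigmoid(z):
    # logistic activation, as in equation (12)
    return 1.0 / (1.0 + np.exp(-z))

R, L, K = 6, 8, 1                            # e.g. six station inputs, one output (assumed sizes)
rng = np.random.default_rng(0)
W_hid = rng.normal(scale=0.1, size=(L, R))   # hidden-layer weight matrix
b_hid = np.zeros(L)                          # hidden-layer bias vector
W_out = rng.normal(scale=0.1, size=(K, L))   # output-layer weight matrix
b_out = np.zeros(K)

def forward_pass(x):
    net_hid = W_hid @ x + b_hid              # eq. (1): input to hidden node j
    y_hid = sigmoid(net_hid)                 # eq. (2): hidden-node output
    net_out = W_out @ y_hid + b_out          # eq. (3): input to output node k
    y_out = sigmoid(net_out)                 # eq. (4): output-node output
    return y_hid, y_out

x_p = rng.random(R)                          # one illustrative input pattern
_, y_p = forward_pass(x_p)
```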
The errors and learning propagate backwards from the output nodes to the inner nodes. Back-propagation
was used to calculate the gradient of the error of the network with respect to the network's modifiable weights.
For classification, the trained feed-forward network is presented with a pattern and chooses the
category. In contrast to the learning phase, classification is very fast.
The mean square error over the training samples is the typical objective function to be minimized. The
mean squared error is as follows:

$E = \frac{1}{M} \sum_{p=1}^{M} \sum_{k=1}^{K} (t_{kp} - y_{kp})^{2}$   (5)

where M is the number of samples and tkp is the expected output of the kth output node for the pth sample.
The weights and biases of the network are modified using the Delta algorithm; the increments are given by

$\Delta w_{kj}(n) = m\,\Delta w_{kj}(n-1) + \eta\,\delta_{kp}\,y_{jp}$   (6)

$\Delta b_{k}(n) = m\,\Delta b_{k}(n-1) + \eta\,\delta_{kp}$   (7)

where m is the momentum coefficient and η is the learning rate; analogous expressions hold for the hidden-layer weights and biases with δjp and xip.
The δ error can be expressed as follows. For the output layer,

$\delta_{kp} = (t_{kp} - y_{kp})\, f'(net_{kp})$   (8)

and for the hidden layer,

$\delta_{jp} = f'(net_{jp}) \sum_{k=1}^{K} \delta_{kp}\, w_{kj}$   (9)
The weights and biases are then modified as

$w(n+1) = w(n) + \Delta w(n)$   (10)

$b(n+1) = b(n) + \Delta b(n)$   (11)
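Continuing the forward-pass sketch above, the fragment below carries out one delta-rule update with momentum, following equations (5)-(11). The target value, learning rate and momentum coefficient are illustrative assumptions, not the study's settings.

```python
# one back-propagation update with momentum for a single pattern
eta, mom = 0.2, 0.9                              # learning rate, momentum (assumed)
dW_out, db_out = np.zeros_like(W_out), np.zeros_like(b_out)
dW_hid, db_hid = np.zeros_like(W_hid), np.zeros_like(b_hid)

t_p = np.array([0.7])                            # illustrative target value
y_hid, y_out = forward_pass(x_p)

# eq. (8): output-layer delta (logistic derivative is y * (1 - y))
delta_out = (t_p - y_out) * y_out * (1.0 - y_out)
# eq. (9): hidden-layer delta, back-propagated through W_out
delta_hid = y_hid * (1.0 - y_hid) * (W_out.T @ delta_out)

# eqs. (6)-(7): momentum-smoothed increments for weights and biases
dW_out = mom * dW_out + eta * np.outer(delta_out, y_hid)
db_out = mom * db_out + eta * delta_out
dW_hid = mom * dW_hid + eta * np.outer(delta_hid, x_p)
db_hid = mom * db_hid + eta * delta_hid

# eqs. (10)-(11): apply the increments
W_out += dW_out; b_out += db_out
W_hid += dW_hid; b_hid += db_hid
```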
The most commonly used activation function within the nodes is the logistic sigmoid function, which
produces output in the range 0-1 and introduces non-linearity into the network, giving it the power to
capture non-linear relationships between input and output values. The logistic function was used in this work
in the form given below:

$f(x) = \frac{1}{1 + e^{-x}}$   (12)
A scaling process is often used to ensure the uniform treatment of variables prior to training. Large input
values are scaled down to prevent over-saturating the sigmoidal hidden nodes, and small input values are scaled
up to make an appropriate contribution to the derivative of the objective function. Data are scaled with linear,
logarithmic, or normal transformations. In this study the data were normalized to the range 0-1 using the linear transformation

$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$   (13)

where x_min and x_max are the minimum and maximum values of the variable.
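A minimal NumPy sketch of this 0-1 scaling, under the assumption that equation (13) is the standard min-max transformation, is shown below; the sample rainfall values are invented for illustration.

```python
import numpy as np

raw = np.array([0.0, 12.4, 3.1, 55.0, 20.2])              # illustrative daily rainfall, mm
scaled = (raw - raw.min()) / (raw.max() - raw.min())       # eq. (13): values mapped to [0, 1]
restored = scaled * (raw.max() - raw.min()) + raw.min()    # inverse transform back to mm
```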
The process of determining ANN weights is called learning or training and is similar to calibration of a
mathematical model. ANNs are trained by applying an optimization algorithm, which attempts to reduce the
error in network output by adjusting the matrix of network weights w and (optionally) the neuron biases. The
algorithm tested in this research is back-propagation (BP), specifically resilient back-propagation.
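The resilient back-propagation idea can be sketched as follows: each weight keeps its own step size, which grows while the gradient sign stays the same and shrinks when it flips, and the weight moves by that step in the direction opposite to the gradient sign. The sketch below is a simplified, generic version of that rule, not the authors' implementation; the increase/decrease factors 1.2 and 0.5 and the step bounds are the commonly quoted defaults, assumed here for illustration.

```python
import numpy as np

def rprop_update(W, g, g_prev, step, step_min=1e-6, step_max=50.0):
    """One simplified Rprop step for a weight array W.

    g, g_prev: current and previous error gradients w.r.t. W;
    step: per-weight step sizes, updated and returned alongside W.
    """
    sign_change = np.sign(g) * np.sign(g_prev)
    # gradient sign unchanged -> grow the step; sign flipped -> shrink it
    step = np.where(sign_change > 0, np.minimum(step * 1.2, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * 0.5, step_min), step)
    # move each weight by its own step size, against the gradient sign
    W = W - np.sign(g) * step
    return W, step
```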
Missing Precipitation prediction
The technique used in estimating the missing recorded precipitation data is the ANN. The algorithm
tested in this research is resilient back-propagation. The learning rates applied during the simulation
had values ranging from 0.2 to 1.0. The actual number of hidden neurons was estimated by trial and error.
Daily precipitation data were chosen for training and testing the network. The data from April 2004 to October
2008 were used for training, testing and validation of the ANN. The inputs were the daily precipitation at six
neighbouring stations and the output was the missing precipitation at station A for the same day. The formulation of the
ANN model can be written as follows:
Pa(t) = f [P1(t), P2(t), P3(t), P4(t), P5(t), P6(t)]   (14)

where Pa is the missing precipitation at station A and P1, P2, P3, P4, P5 and P6 are the precipitation at the
other six stations.
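The sketch below shows one way the infilling task of equation (14) can be organised as a training set: days with an observation at station A train the network, and days where the value is missing are the ones the trained network later fills in. All values and array sizes here are synthetic placeholders, not the study's data.

```python
import numpy as np

P_neighbours = np.random.rand(1500, 6)          # P1(t) ... P6(t), one column per station
P_a = np.random.rand(1500)                      # Pa(t), station A record
P_a[np.random.rand(1500) < 0.05] = np.nan       # mark ~5% of days as missing at A

observed = ~np.isnan(P_a)
X_train, y_train = P_neighbours[observed], P_a[observed]   # fit the network on these rows
X_fill = P_neighbours[~observed]                           # predict Pa(t) for these rows
```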
Water Level prediction
In the designed ANN used to predict the water level, the input consists of the antecedent water level,
antecedent precipitation and the precipitation for the current day. The output for the network is the water level
for the current day. The use of five days of antecedent data is illustrated by equation (15):

W(t) = f [P(t-5), P(t-4), P(t-3), P(t-2), P(t-1), P(t), W(t-5), W(t-4), W(t-3), W(t-2), W(t-1)]   (15)

where t = time (days), P = precipitation, and W = water level.
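A short sketch of how the lagged input vector of equation (15) can be assembled from the daily series is given below; the series lengths and values are synthetic placeholders.

```python
import numpy as np

P = np.random.rand(400)          # daily precipitation series (illustrative)
W = np.random.rand(400)          # daily water level series (illustrative)

X, y = [], []
for t in range(5, len(P)):
    X.append(np.concatenate([P[t-5:t+1],      # P(t-5) ... P(t): six precipitation values
                             W[t-5:t]]))      # W(t-5) ... W(t-1): five antecedent levels
    y.append(W[t])                            # target: current-day water level W(t)
X, y = np.array(X), np.array(y)               # shapes (395, 11) and (395,)
```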
In estimating the river level of the Buguma River using ANN, two sets of data were used: in Set A, missing
precipitation data were filled with ANN predictions, while in Set B no predictions were made for the missing data.
Results and discussion
The back-propagation neural network described in the preceding section was implemented in Visual Studio
and applied to the Buguma Creek data. The simulation performance of the ANNs was evaluated on the basis of the mean square
error (MSE) and the coefficient of determination (R2). R2 is a statistical indicator that compares the accuracy
of the model to the accuracy of a trivial benchmark model in which the prediction is simply the mean of all the
samples. The MSE and the coefficient of determination (R2) are mathematically described as follows:
$MSE = \frac{1}{m} \sum_{p=1}^{m} (t_{kp} - y_{kp})^{2}$   (16)

$t_{m} = \frac{1}{m} \sum_{p=1}^{m} t_{kp}$   (17)

$R^{2} = 1 - \frac{\sum_{p=1}^{m} (t_{kp} - y_{kp})^{2}}{\sum_{p=1}^{m} (t_{kp} - t_{m})^{2}}$   (18)

where tkp is the actual output value, ykp is the output value predicted by the network, tm is the mean of the tkp
values, and m is the total number of data records.
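For reference, the two performance measures can be computed directly as in the following sketch; the observed and predicted values are invented for illustration and are not the study's data.

```python
import numpy as np

t_obs = np.array([1.2, 1.5, 1.9, 2.4, 2.1])     # illustrative observed values
y_pred = np.array([1.1, 1.6, 1.8, 2.5, 2.0])    # illustrative network predictions

mse = np.mean((t_obs - y_pred) ** 2)                                        # eq. (16)
r2 = 1.0 - np.sum((t_obs - y_pred) ** 2) / np.sum((t_obs - t_obs.mean()) ** 2)  # eqs. (17)-(18)
```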
Figure 2 shows the predicted precipitation in relation to the measured values. An R2 of 0.91 suggests a very good
performance. In general, an R2 value greater than 0.9 indicates a very satisfactory model performance, an R2
value in the range 0.8-0.9 signifies a good performance, and values less than 0.8 indicate an unsatisfactory
model performance (Coulibaly and Baldwin, 2005).
Fig. 2: Comparison of observed and predicted precipitation
Figure 3 shows the comparison between the predicted and measured water levels. There is a high
correlation between the simulated and the measured data, with R2 = 0.92. Table 1 shows that ANN Set A
has a higher R2 and a lower MSE than ANN Set B. The optimal model was obtained when the missing
precipitation inputs were replaced with ANN predictions. The difference between the Set A and Set B models lies only in the input
variables. The R2 and MSE for Set A were 0.92 and 1.63 respectively. The performance indices revealed that
ANN Set A is superior to ANN Set B.
Fig. 3: Comparison of observed and predicted water level
Table 1: Performance of the ANN models in terms of MSE and R2.
Input Type    MSE     R2
Set A         1.63    0.92
Set B         1.91    0.81
Conclusion
This study indicates that ANN is an appropriate means of predicting precipitation and the water level of a river. The
ANN model predicts water level better when ANN-predicted precipitation data are used in place of missing precipitation data.
The ANN models produced encouraging results, and the comparison shows that Set A is superior to Set B, with
good agreement between the simulated and measured results. The
results of the study showed that predictions of precipitation and river water level using ANNs are reasonable,
suitable and of acceptable accuracy. Implementation of these techniques would improve confidence in the data,
advancing water resources management.
References
Cabassud, M., N. Delgrange-Vincent, C. Cabassud, L. Durand-Bourlier and J.M. Laine, 2007. Neural network:
a tool to improve UF plant productivity. Desalination, 145(1-3): 223-231.
Coskun, H.G., H.K. Cigizoglu and M.D. Maktav (Eds.), 2008. Integration of Information for Environmental
Security. Springer, pp: 275.
Coulibaly, P. and C.K. Baldwin, 2005. Non-stationary hydrological time series forecasting using non-linear
dynamic methods. J. Hydrol., 307: 164-174.
El-Din, A.G. and D.W. Smith, 2002. A neural network model to predict the wastewater inflow incorporating
rainfall events. Water Research, 36: 1115-1126.
Elshorbagy, A., S.P. Simonovic, U.S. Panu, 2000. Performance evaluation of artificial neural networks for
runoff prediction. ASCE Journal of Hydrologic Engineering, 5(4): 424-427.
Gob, S., E. Oliveros, S.H. Bossmann, A.M. Braun, C.A.O. Nascimento and R. Guardani, 2001. Optimal
experimental design and artificial neural networks applied to the photochemically enhanced Fenton
reaction. Water Science Technology, 44(5): 339-45.
Haykin, S., 1999. Neural Networks: A Comprehensive Foundation, Prentice Hall. New York.
Hecht, N.R., 1989. In: Proceedings of the International Joint Conference on neural networks. IEEE Press,
Washington DC, USA., pp: 593-605.
Hornik, K., M. Stinchcombe and H. White, 1989. Multilayer feed-forward networks are universal approximators.
Neural Networks, 2: 359-366.
Hung, N.Q., M.S. Babel, S. Weesakul, and N.K. Tripathi, 2009. An artificial neural network model for rainfall
forecasting in Bangkok, Thailand. Hydrol. Earth Syst. Sci., 13: 1413-1425.
Khalil, M., U. Panu and W. Lennox, 1998. Infilling of missing stream flow values based on concepts of groups
and neural networks. Civil Engineering Technical Report No. CE-98-2, Lakehead University, Thunder Bay,
Ont., Canada.
Kumar, D.N., K.S. Raju and T. Sathish, 2004. River flow forecasting using recurrent neural networks. Water
Resources Management, 18: 143-161.
Lek, S., M. Guiresse and J.L. Giraudel, 1999. Predicting stream nitrogen concentration from watershed features
using neural networks. Water Research, 33(16): 3469-78.
Loke, E., E.A. Warnaars, P. Jacobsen, F. Nelen and M. Ceu Almeida, 1997. Artificial neural networks as a
tool in urban storm drainage. Water Science Technology, 36(8-9): 101-9.
Maier, H.R., N. Morgan, and C.W.K. Chow, 2004. Use of artificial neural networks for predicting optimal
alum doses and treated water quality parameters. Environ. Model. Software, 19(5): 485-494.
Ogwueleka, T.C. and F.N. Ogwueleka, 2009. Application of artificial neural networks in estimating wastewater
flows. IUP Journal of Science and Technology, 5(3).
Strugholtz, S., S. Panglisch, J. Gebhardt and R. Gimbel, 2008. Neural networks and genetic algorithms in
membrane technology modeling. J. Water Supply Res. Technol.-AQUA, 57(1): 23-34.
Swingler, K., 1996. Applying neural networks: a practical guide. Academic Press, London, UK., pp: 21-39.
Teodosiu, C., O. Pastravanu and M. Macoveanu, 2000. Neural network models for ultra-filtration and
backwashing. Water Research, 43(12): 125-32.
White, W.R., 2001. Water in rivers: flooding. Proceeding of the Institution of Civil Engineers- Water and
Maritime Engineering, 2(2): 107-118.