Neural computing has emerged as a practical technology in recent years, with successful applications in fields as diverse as medicine, finance, geology, physics, engineering and biology. Much of the excitement stems from the fact that these networks are attempts to model the capabilities of the human brain. From a statistical point of view, neural networks are interesting because of their potential use in classification and prediction problems.
Artificial neural networks (ANNs) are non-linear, data-driven, self-adaptive approaches, as opposed to traditional model-based methods. Neural networks are powerful modeling tools, especially when the underlying data relationship is unknown. NNs can identify and learn correlated patterns between input data sets and corresponding target values. After training and testing, NNs can be used to predict the outcome of new, independent input data. NNs imitate the learning process of the human brain and can process problems involving complex, non-linear data even if the data are imprecise and noisy. They are ideally suited to the modeling of agricultural data, which are known to be complex and often non-linear.
Another important feature of these networks is their adaptive nature, whereby "learning by example" replaces "programming" in problem solving. This feature makes such computational models very appealing in application domains where one has little or incomplete understanding of the problem to be solved, but where training data are readily available. These networks are "neural" in the sense that they may have been inspired by neuroscience, but not necessarily because they are faithful models of biological neural or cognitive phenomena. In fact, the majority of these networks are more closely related to traditional mathematical and/or statistical models, such as clustering algorithms, nonlinear filters, non-parametric pattern classifiers, and statistical regression models, than they are to neurobiological models. Artificial neural networks (ANNs) have been used for a wide variety of applications where statistical methods are traditionally employed. They have been used in classification problems such as predicting the secondary structure of globular proteins, identifying underwater sonar targets, and recognizing speech and other patterns. In time series applications, NNs have been used to predict stock market performance. As statisticians or users of statistics, we normally solve these problems through classical statistical methods such as Bayesian analysis, discriminant analysis, logistic regression, multiple regression, and ARIMA time series models. It is therefore time to recognize neural networks as a powerful tool for data analysis.
2.2 Definition of artificial neural networks
An artificial neural network (ANN), also commonly called a neural network (NN), is a computational or mathematical model inspired by the structure and/or functional aspects of biological neural networks. An artificial neural network consists of an interconnected group of artificial neurons. In almost all cases, an ANN is an adaptive system that changes its structure based on internal or external information that flows through the network during the learning phase. Modern neural networks are usually used to model complex relationships between inputs and outputs or to find patterns in data.
Fig 1.4: An artificial neural network
A neural network (NN) is an interconnected group of nodes, similar to the vast network of neurons in the human brain.
Artificial neural networks promise to be a breakthrough in areas where statistical methods and traditional computer systems have difficulty supporting decision making in today's complex business environment. Neural networks have been used to build artificial intelligence information systems that mimic the way humans think.
2.3 The need for artificial neural networks
a) Conventional computers use an algorithmic approach, whereas artificial neural networks work like the human brain and learn by example.
b) The ability to detect trends or extract data that are too complex to be noticed by either conventional computing techniques or humans.
c) For real-time operation.
d) For adaptive learning.
e) The ability to derive meaning from imprecise or complicated data.
2.4 LITERATURE SURVEY
2.5 RESEARCH GAP
• Apart from being time consuming, some methods need more iterations.
• Many parameters (region, status of education, wealth index, current age and contraception) were used.
• It takes a long time to train, cross-validate, test and predict the IVF success rate.
• It is very complex, as it needs 6 hidden layers.
• It is more appropriate for theoretical (scientific) purposes.
2.6 Characteristics of Neural Networks
i) NNs exhibit mapping capabilities; that is, they can map input patterns to their associated output patterns.
ii) Artificial neural networks learn by example. Thus, artificial neural network architectures can be "trained" with known examples of a problem before they are tested for their "inference" capability on unknown instances of the problem. They can therefore identify objects they have not previously been trained on.
iii) Neural networks possess the ability to generalize; thus, they can predict new outcomes from past trends.
iv) Neural networks are fault-tolerant and robust systems; they can therefore recall full patterns from incomplete, partial or noisy patterns.
v) Neural networks can process information in parallel, in a distributed manner, at high speed.
Fig 2.1: A neuron
• A neuron: a many-input, one-output unit.
• The output can be either excited or not excited.
• Incoming signals from other neurons determine whether the neuron shall fire.
• The output is subject to attenuation in the synapses, which are the junction parts of the neuron.
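The bullet points above can be sketched as a simple threshold unit in Python. This is an illustrative toy, not any particular library's neuron model: the weights stand in for synaptic attenuation, and the (assumed) threshold of 0.5 decides whether the neuron fires.

```python
# A minimal sketch of the neuron described above: many inputs, one output.
# The neuron "fires" (outputs 1) when the weighted sum of its incoming
# signals exceeds a threshold; otherwise it stays silent (outputs 0).
# The weights play the role of the synaptic attenuation.

def neuron(inputs, weights, threshold=0.5):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Example: two incoming signals with different synaptic strengths.
print(neuron([1, 1], [0.4, 0.3]))  # 0.7 > 0.5, so the neuron fires -> 1
print(neuron([1, 0], [0.4, 0.3]))  # 0.4 <= 0.5, no firing -> 0
```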
2.7 Concept of Artificial Intelligence
In this chapter we will begin to find out how neural networks can learn, why learning is so useful, and what the different types of learning are. We will specifically be looking at training single-layer perceptrons with the perceptron learning rule. Before we begin, we should first define what we mean by the word learning in the context of this chapter. It is still unclear whether machines will ever be able to learn in the sense of having some kind of metacognition about what they are learning, as humans do. However, they can learn how to perform tasks better from past experience. Here, we define learning simply as being able to perform better at a given task, or a range of tasks, with experience.
2.8 Learning in Artificial Neural Networks
One of the most impressive features of artificial neural networks is their ability to learn. You may recall from the previous chapters that neural networks are inspired by the biological nervous system, in particular the human brain. One of the most interesting characteristics of the human brain is its ability to learn. We should note that our understanding of how exactly the brain does this is still very primitive, although we do have a basic understanding of the process. It is believed that during the learning process the brain's neural structure is altered, increasing or decreasing the strength of its synaptic connections depending on their activity. This is why more relevant information is easier to recall than information that has not been recalled for a long time: more relevant information will have stronger synaptic connections, while less relevant information will gradually have its synaptic connections weakened, making it harder to recall.
Although simplified, artificial neural networks can model this learning process by adjusting the weighted connections between the neurons in the network. This effectively imitates the strengthening and weakening of the synaptic connections found in our brains, and it is this strengthening and weakening of connections that enables the network to learn.
Learning algorithms are extremely useful for problems that either can be solved more efficiently by a learning algorithm or cannot practically be coded by a programmer. Facial recognition is a good example of a problem that is extremely hard for a human to accurately convert into code. A problem that could be solved better by a learning algorithm is loan granting, where past loan data can be used to classify future loan applications. Although human beings could write rules to do this, a learning algorithm can better pick up on subtleties in the data that may be hard to code for.
2.8.1 Learning Types
Learning can be defined as the process by which a neural network adapts itself to a stimulus and eventually produces a desired output. Learning is a continuous process of classifying input stimuli: when a stimulus appears at the network, the network either recognizes an existing classification or develops a new one. During the learning process, the network adjusts its synaptic weights and parameters in response to an input stimulus; once its actual output response is the same as the desired output, the network has completed the learning phase, or in other words, it has "acquired knowledge". Mathematical expressions and learning equations describe the learning process for each paradigm, which is essentially the process of self-adjusting the synaptic weights.
A. Supervised Learning
During the training session of an artificial neural network, an input stimulus is applied and produces an output response. This response is compared with a previously specified desired output signal, the target response. If the actual response differs from the target response, the neural network generates an error signal, which is then used to calculate the adjustments that should be made to the network's synaptic weights so that the actual output matches the target output. In other words, the error is minimized, ideally to zero. The error minimization process requires a special circuit known as a supervisor, hence the name "supervised learning". With artificial neural networks, the amount of calculation required to minimize the error depends on the algorithm used. Some parameters to watch are the number of iterations per input pattern, the time required per iteration for the error to reach its minimum during the training session, whether the minimum reached is local or global, and, if it is local, whether the network remains trapped in it or can escape from it.
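The supervised loop described above can be sketched in a few lines of Python. This is a minimal illustration of error-driven weight adjustment (a delta-rule-style update); the function names, the learning rate of 0.1 and the training pattern are illustrative assumptions, not taken from any specific algorithm in the text.

```python
# A minimal sketch of supervised learning: the "supervisor" compares the
# actual output with the target and uses the error signal to adjust the
# synaptic weights until the actual output matches the desired output.

def predict(inputs, weights):
    return sum(x * w for x, w in zip(inputs, weights))

def train_step(inputs, target, weights, learning_rate=0.1):
    error = target - predict(inputs, weights)          # error signal
    return [w + learning_rate * error * x              # weight adjustment
            for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):                                   # iterations per pattern
    weights = train_step([1.0, 2.0], 1.0, weights)
print(round(predict([1.0, 2.0], weights), 3))          # -> 1.0 (error driven to ~0)
```

Each iteration halves the remaining error for this pattern, so after 100 iterations the output has converged to the target.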
B. Unsupervised Learning
Unsupervised learning does not require a tutor; that is, there is no target output. During the training session, the neural network receives many different excitations and/or input patterns at its input, and it organizes these patterns into categories. When a stimulus is later applied, the neural network provides an output response indicating the class to which the stimulus belongs. If no class can be found for the input stimulus, a new class is generated. As an example, show a person a set of different objects, then ask them to separate the objects into groups or classifications such that the objects in a group have one or more common features that distinguish them from those in another group.
When this is done, show the same person another object and ask him/her to place it in one of the groups. Grouping may be based on color, material consistency, shape, or some other property of the object. If no guidelines are given as to what type of features should be used for grouping the objects, the grouping may or may not be successful.
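The grouping behaviour described above can be sketched as follows in Python. This is a toy illustration rather than a standard algorithm: each stimulus joins the nearest existing class, and a new class is generated when no existing class lies within an (illustrative) distance threshold.

```python
# A minimal sketch of unsupervised grouping: a stimulus is assigned to the
# nearest existing class, and a new class is created when no class is
# close enough. The distance threshold is an illustrative parameter.

def classify(stimulus, classes, threshold=1.0):
    for i, prototype in enumerate(classes):
        if abs(stimulus - prototype) <= threshold:
            return i                      # belongs to an existing class
    classes.append(stimulus)              # no match: generate a new class
    return len(classes) - 1

classes = []
labels = [classify(x, classes) for x in [0.1, 0.2, 5.0, 5.3, 0.15]]
print(labels)   # -> [0, 0, 1, 1, 0]: similar stimuli share a class
```

Note that no target labels were provided; the categories emerge from the data alone, as in the object-grouping example above.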
C. Reinforced Learning
This learning process requires one or more neurons at the output layer and a tutor that, unlike in supervised learning, does not indicate how close the actual output is to the desired output, but only whether the actual output matches the target output. The tutor does not present the target output to the neural network; it presents only a one/zero indication. Thus the error signal generated during the training session is binary: 1 or 0. If the tutor's indication is "bad", the neural network readjusts its parameters and tries again and again until it gets the output response right. The process of correcting the synaptic weights follows a different strategy than in supervised learning. Some parameters to watch include the following: the number of iterations per pattern and the time per iteration needed to reach the desired output during the training session, and whether the neural network reaches a local or a global minimum. Certain boundaries should also be established so that the trainee does not keep trying to get the correct response for an infinite run time.
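The one/zero tutor signal can be illustrated with a deliberately crude Python sketch. The random readjustment strategy, the tolerance and the iteration bound below are all illustrative assumptions made for this sketch; real reinforcement learning algorithms adjust weights far more systematically.

```python
import random

# A minimal sketch of reinforced learning: the tutor returns only a 1/0
# ("good"/"bad") indication, and on "bad" the network readjusts its
# weight and tries again, within a fixed iteration bound so that training
# cannot run for an infinite time.

def tutor(output, target, tolerance=0.05):
    return 1 if abs(output - target) < tolerance else 0   # one/zero indication

def reinforced_train(x, target, max_iters=10000, seed=0):
    rng = random.Random(seed)
    weight = rng.uniform(-1, 1)
    for _ in range(max_iters):                 # bounded run time
        if tutor(weight * x, target):
            return weight                      # "good": response is right
        weight += rng.uniform(-0.1, 0.1)       # "bad": readjust and retry
    return weight

w = reinforced_train(2.0, 1.0)
print(tutor(w * 2.0, 1.0))
```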
D. COMPETITIVE LEARNING
Competitive learning is another form of supervised learning that is distinctive because of its characteristic architecture and operation, with several neurons at the output layer. Whenever an input stimulus is applied, each output neuron competes with the others to produce the output signal closest to the target. This output then becomes the dominant one, and the other outputs cease producing a signal for that stimulus. For another stimulus, another output neuron becomes the dominant one, and so on. When an artificial neural network with competitive learning is part of a greater neural network system, then, because of connectivity issues, these specializations may not always be suitable. Competitive learning is commonly encountered in groups of people, where each member of the group is selected and trained to perform specific tasks based on the principle of the right person at the right time in the right place.
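The winner-take-all behaviour described above can be sketched in Python. The weight vectors below are illustrative; for each stimulus, the neuron whose weight vector lies closest becomes the dominant one and is the only one to respond.

```python
# A minimal sketch of competitive ("winner-take-all") operation: for each
# stimulus, the output neuron whose weight vector is closest to the
# stimulus wins the competition and suppresses the other outputs.

def winner(stimulus, weight_vectors):
    # Euclidean distance between the stimulus and each neuron's weights
    def dist(w):
        return sum((s - wi) ** 2 for s, wi in zip(stimulus, w)) ** 0.5
    return min(range(len(weight_vectors)), key=lambda i: dist(weight_vectors[i]))

neurons = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]   # three output neurons
print(winner([0.9, 0.8], neurons))   # -> 1: neuron 1 dominates this stimulus
print(winner([0.1, 0.1], neurons))   # -> 0: neuron 0 dominates this one
```

Different stimuli thus activate different dominant neurons, which is how the output layer comes to specialize.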
2.9 TYPES OF NEURAL NETWORK ARCHITECTURES
The Neural Network Toolbox supports a wide variety of supervised and unsupervised network architectures. With the toolbox's modular approach to building neural networks, you can also develop custom architectures for your specific problem. You can then view the network architecture, including all layers, inputs and outputs, with their interconnections.
A. Supervised Networks
– Supervised neural networks are trained to produce desired outputs in response to sample inputs, making them particularly well-suited for classifying noisy data, modeling and controlling dynamic systems, and predicting future events. The toolbox supports four types of supervised networks.
– Feedforward networks have one-way connections from input to output layers. They are most commonly used for nonlinear function fitting, pattern recognition and prediction. Supported feedforward networks include cascade-forward backpropagation, feedforward input-delay backpropagation, linear feedforward backpropagation, and perceptron networks.
– Radial basis networks provide an alternative, fast method for designing nonlinear feedforward networks. Supported variations include probabilistic and generalized regression networks.
– Dynamic networks use memory and recurrent feedback connections to recognize temporal and spatial patterns in data. They are mostly used for nonlinear dynamic system modeling, time-series prediction, and control systems applications. The prebuilt dynamic networks in the toolbox include focused and distributed time-delay, nonlinear autoregressive, Elman, layer-recurrent, and Hopfield neural networks. The toolbox also supports dynamic training of custom networks with arbitrary connections.
– Learning vector quantization (LVQ) is another powerful method for classifying patterns that are not linearly separable. LVQ lets you specify class boundaries and the granularity of classification.
B. Unsupervised Networks
– Unsupervised neural networks are trained by letting the network continually adjust itself to new inputs. They find relationships within data and can automatically define classification schemes. The toolbox supports two types of self-organizing, unsupervised networks.
– Competitive layers recognize and group similar input vectors, enabling them to automatically sort inputs into categories. They are commonly used for pattern recognition and classification.
– Self-organizing maps learn to classify input vectors according to their similarity. Like competitive layers, they are used for pattern recognition and classification tasks; they differ from competitive layers, however, in that they preserve the topology of the input vectors, assigning nearby inputs to nearby categories.
2.9.1 IMPLEMENTING SUPERVISED LEARNING
As mentioned in previous chapters, supervised learning is a technique that uses a set of input-output pairs to train the network. The idea is to provide the network with examples of inputs and outputs, and then let it find a function that correctly maps the inputs we provide to the correct outputs. If the network is trained with a good range of training data, then when it has finished learning we should even be able to give it a new, unseen input, and the network should map it correctly to an output. There are many different types of supervised learning algorithms we could use, but the most popular, and the one we will look at in more detail, is backpropagation. Before we look at why backpropagation is needed to train multi-layered networks, let's first have a look at how we can train single-layer networks, otherwise known as perceptrons.
2.9.2 The Perceptron Learning Rule
The perceptron learning rule works by finding out what went wrong in the network and making slight corrections to hopefully prevent the same error from occurring again. Here is how it works.
a) First we take the network's actual output and compare it to the target output in our training data set. If the network's actual output and the desired output do not match, we know something went wrong, and we can update the weights based on the amount of error. Let us now run through the algorithm step by step to understand how it works.
First, we need to calculate the perceptron's output for each output node of the neural network:
output = f(input1 × weight1 + input2 × weight2 + input3 × weight3 + …)   or   o = f(∑ wi xi)
Now that we have the actual output, we can compare it to the target output to find the error:
error = target output − output   or   E = t − o
Then we want to use the perceptron’s error to adjust the weights.
weight change = learning rate × error × input   or   Δwi = r E xi
We want to ensure that only small changes are made to the weights on each iteration, so we apply a small learning rate.
I) If the learning rate is too high, the perceptron can jump too far and miss the solution.
II) If it is too low, it can take an unreasonably long time to train. This gives us the final weight update equation:
weight change = learning rate × (target output − actual output) × input   or   Δwi = r(t − o)xi
Here's an example of how this would work with the AND function.
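Here is a minimal Python sketch of the perceptron learning rule applied to the AND function. The bias weight (a third weight with constant input 1), the learning rate of 0.1 and the number of epochs are illustrative choices for this sketch.

```python
# The perceptron learning rule on the AND function:
#   weight change = learning rate x (target - output) x input.
# A bias weight with constant input 1 is included so the firing
# threshold can be learned along with the other weights.

def perceptron_output(inputs, weights):
    # step activation: fire (1) if the weighted sum is positive
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > 0 else 0

training_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND
weights = [0.0, 0.0, 0.0]       # two input weights plus a bias weight
rate = 0.1

for _ in range(20):             # epochs over the four training pairs
    for inputs, target in training_data:
        x = inputs + [1]        # append the constant bias input
        error = target - perceptron_output(x, weights)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]

print([perceptron_output(i + [1], weights) for i, _ in training_data])  # -> [0, 0, 0, 1]
```

After a handful of epochs the weights stop changing and the perceptron reproduces the AND truth table exactly, which is possible because AND is linearly separable.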