Artificial neural networks and their applications (Yapay sinir ağları ve uygulamaları)
Abstract
SUMMARY

ARTIFICIAL NEURAL NETWORKS AND THEIR APPLICATIONS

The human brain contains some 10 billion highly specialized cells, neurons, organized in a parallel network structure so as to carry out the special functions of this organ. Conventional computers, though very efficient at executing the instruction sets written for them, have great difficulty with speech recognition, knowledge recall, pattern recognition, and with reasoning and prediction in situations that are difficult or impossible to formulate: tasks that the human brain performs with ease, even under unfavorable, "noisy" conditions. Most brain processes are parallel in structure, in contrast to the sequential nature of a digital computer. Hence the idea of constructing machines that resemble the human brain, first in structure and, where possible, in function. The need to overcome these difficulties and impossibilities led to the emergence of artificial neural systems.

Neural networks are structures consisting of simple computational units (neurons) connected in parallel into large assemblies according to some desired pattern, standing on the border between man-made devices and biological nervous systems. In this discipline, "neural computation" is an engineering term for the way in which neural networks regulate their own behaviour. Instead of a programmed, step-by-step mode of behaviour, neural computation offers unprogrammed, adaptive data processing: the networks are trained on sets of trial-and-error pairs to develop their internal dynamics and to learn their own behaviour. In the development of intelligent machines, the central issue is the learning process. Learning is an adaptive process by which a system, while working, becomes more efficient and effective at that particular task.
In neural systems, learning is said to occur when the strengths of certain connections within the network change; the analogy with brain neurons is expressed through functional relations. The inputs a unit receives from the other units, each multiplied by its connection weight, create a total effect on the unit, which the unit processes through a function to produce a single response, its output. A learning mechanism is an algorithm that iteratively increases and decreases the values of the connection weights. Learning is usually realized by feeding the system with pairs of samples during a training phase.

As a field of investigation, neural networks are historically very recent. McCulloch and Pitts defined the first cell network model in 1943, Donald Hebb developed a theory explaining learning in 1949, and Frank Rosenblatt et al. introduced the neural system called the perceptron in the 1960s, causing great excitement at the time. The first experiments were for a time carried out on networks consisting of single elements or units, but after the inadequacy of these models was demonstrated on several points, designing multi-layer networks became common practice worldwide. In 1974, Werbos developed the back-propagation algorithm, used to train multi-layer, feed-forward networks and still very popular and common today. The model gained currency in 1985 through the works of Rumelhart and McClelland, Parker, and LeCun. A back-propagation network tries to find the output vector corresponding to an input vector through supervised training, by allowing the output error to propagate back to the hidden layers. A cost function measuring the difference between the desired and actual outputs is driven toward a minimum: the weight adaptation process continues until the error falls to a tolerated level.
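The unit computation and iterative weight adaptation described above can be sketched as follows. This is an illustration, not code from the thesis itself; the function names, the sigmoid activation, and the learning rate are assumptions:

```python
import math

def unit_output(inputs, weights):
    # Total effect: each input multiplied by its connection weight, then summed.
    total = sum(x * w for x, w in zip(inputs, weights))
    # The unit processes the total through a function and gives one response.
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation (an assumption)

def adapt_weights(inputs, weights, target, rate=0.5):
    # One iterative step: increase or decrease each weight to reduce the
    # difference between the desired and actual output.
    out = unit_output(inputs, weights)
    error = target - out
    slope = out * (1.0 - out)  # derivative of the sigmoid at the output
    return [w + rate * error * slope * x for w, x in zip(weights, inputs)]
```

Repeated over pairs of training samples, such a step gradually moves the unit's response toward the desired one.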
In practice, the back-propagation algorithm suffers from four problems: 1. the network takes a considerably long time to learn; 2. it may become trapped in a local minimum while seeking the global minimum; 3. after training on one whole set of vectors, training on another set erases the first; 4. it is based on an algorithm remote from that of biological brain systems. These four problems have not prevented the back-propagation network, with its simple theory and ease of use, from becoming one of the most commonly used models. Today, various network models for different purposes are available, in both software and hardware. In the present work, trials on a back-propagation system, varying network parameters (node numbers, learning rates, etc.), have been carried out in an attempt to shorten the learning period and minimize the final errors.
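A minimal sketch of the back-propagation training procedure summarized above, with the learning rate and the hidden-node count as the adjustable parameters the work mentions. All names, the sigmoid activation, and the training data are illustrative assumptions, not the thesis's actual implementation:

```python
import math
import random

random.seed(0)  # reproducible illustration

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(samples, n_hidden=4, rate=0.5, tolerance=0.05, max_epochs=20000):
    """Supervised training of a small feed-forward net until the mean
    squared output error falls below the tolerated level."""
    n_in = len(samples[0][0])
    # Connection weights: input -> hidden layer, hidden layer -> single output.
    w_h = [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_hidden)]
    w_o = [random.uniform(-1.0, 1.0) for _ in range(n_hidden)]
    mean_err = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for x, target in samples:
            # Forward pass: weighted sums processed through the activation.
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
            y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)))
            err = target - y
            total += err * err
            # Backward pass: the output error propagates to the hidden layer.
            d_y = err * y * (1.0 - y)
            d_h = [d_y * w_o[j] * h[j] * (1.0 - h[j]) for j in range(n_hidden)]
            # Iterative weight adaptation toward the cost-function minimum.
            for j in range(n_hidden):
                w_o[j] += rate * d_y * h[j]
                for i in range(n_in):
                    w_h[j][i] += rate * d_h[j] * x[i]
        mean_err = total / len(samples)
        if mean_err < tolerance:
            break
    return w_h, w_o, mean_err

# Training pairs for logical OR; a constant 1 input stands in for a bias term.
data = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
```

The `max_epochs` cap also illustrates the first of the four problems: convergence can be slow, and a run may stop before the tolerance is reached.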