Show simple item record

dc.contributor.advisor	Keskin, İbrahim
dc.contributor.author	Uykan, Zekeriya
dc.date.accessioned	2021-05-08T08:59:21Z
dc.date.available	2021-05-08T08:59:21Z
dc.date.submitted	1996
dc.date.issued	2018-08-06
dc.identifier.uri	https://acikbilim.yok.gov.tr/handle/20.500.12812/660566
dc.description.abstract	ABSTRACT: In this study, Chapter 2 introduces the Radial Basis Function Network (RBFN), a type of Artificial Neural Network, and proposes, as an alternative to the learning algorithms developed for this network in the literature to date, a new learning method (Clustering in the Input-Output space, CIO) due to Zekeriya Uykan and Cüneyt Güzeliş, which can be applied in four different ways. Chapter 3 introduces two approaches to nonlinear system identification, (a) the Polynomial Method (Chen and Billings, 1989) and (b) the RBFN Method, and explains how the most suitable terms are selected for both using the Orthogonal Least Squares (OLS) method. As an alternative to the methods used in the literature for system identification with the RBFN, the CIO method is applied. It is also explained how, using only sample input-output pairs and with the RBFN centers given, OLS alone can be used to determine the number of hidden-layer units required to achieve a desired performance (a specified cost criterion) without running any learning algorithm. This offers an alternative to the methods (pruning, growing) used to determine the number of hidden-layer units in the Multi-Layer Perceptron. Chapter 4 investigates the relationships between the Fuzzy Controller (FC) and the Neural Controller (NC). It is shown that, under certain restrictions, the FC and the NC are mathematically equivalent. The relationship between the FC and the classical (non-fuzzy) controller is also examined, and the classical controller is shown to be a special case of the FC. Studies of this kind seek answers to questions such as 'Can classical control methods be exploited in FC design and in the stability analysis of FC systems?'. The scarcity of publications on this subject in the literature increases the importance of such studies. Chapter 5 explains how an FC is designed using ANN methods. The aim of such studies is to establish a common framework in which ANN and Fuzzy Logic, fields that appear quite distinct, can be treated together, so that a method developed for one can also be applied to the other.
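One point above, that once the RBFN centers are given the number of hidden units needed to reach a prescribed cost can be determined by Orthogonal Least Squares alone, without running a learning algorithm, can be pictured with a small sketch. The Python fragment below is not from the thesis: it assumes Gaussian basis functions, and a plain greedy residual-based selection stands in for the orthogonal decomposition of the OLS algorithm; the names `select_hidden_units`, `sigma`, and `rho` are illustrative.

```python
# A minimal sketch (not the thesis code): candidate Gaussian regressors are
# built from the given centers, and centers are added greedily until a target
# normalized error is met. The greedy residual-based selection is a simplified
# stand-in for the orthogonal decomposition used in the OLS algorithm;
# `sigma` and the tolerance `rho` are illustrative choices.
import numpy as np

def gaussian_regressors(X, centers, sigma=1.0):
    """Columns phi_i(x_j) = exp(-||x_j - c_i||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def select_hidden_units(X, d, centers, rho=1e-2, sigma=1.0):
    """Greedy forward selection of centers until the normalized residual
    energy drops below rho; returns chosen center indices and weights."""
    P = gaussian_regressors(X, centers, sigma)          # N x M candidate regressors
    chosen, total = [], float(d @ d)
    for _ in range(P.shape[1]):
        best_i, best_res, best_w = None, np.inf, None
        for i in range(P.shape[1]):
            if i in chosen:
                continue
            cols = P[:, chosen + [i]]
            w, *_ = np.linalg.lstsq(cols, d, rcond=None)
            res = float(((d - cols @ w) ** 2).sum())
            if res < best_res:
                best_i, best_res, best_w = i, res, w
        chosen.append(best_i)
        if best_res / total < rho:                      # target cost criterion reached
            break
    return chosen, best_w
```

The length of the returned index list plays the role of the hidden-layer size; no gradient-based training is involved, which is the contrast drawn above with the Multi-Layer Perceptron.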
dc.description.abstract	SUMMARY: SYSTEM IDENTIFICATION USING RBFN AND NEURO-FUZZY CONTROL

After McCulloch and Pitts established the basic principles of neural computation in 1943, Hebb's contribution was considered the basis for the development of many Artificial Neural Networks (ANN). The development of the Back Propagation Algorithm for training feedforward Multi-Layer Neural Networks (MLNN) enabled the wide use of ANN in many different areas: signal processing, robotics, fault diagnosis, control, etc. In parallel with the development of ANN, another important theory, Fuzzy Logic (FL), emerged. The basic principles of FL were introduced by Zadeh in 1965, with the first application to control systems appearing in 1975. However, the Polish mathematician Lukasiewicz had developed multi-valued logic in the 1920s, and key ideas of the theory were anticipated by Black in 1937.

An alternative to the MLNN is the Radial Basis Function Network (RBFN). The RBF method has traditionally been used for strict interpolation in multidimensional space (Powell, 1985; Micchelli, 1986; Broomhead and Lowe, 1988). The construction of an RBFN in its most basic form involves three layers. The input layer is made up of source nodes (sensory units); the second layer is a hidden layer of sufficiently high dimension; the output layer supplies the response of the network. The transformation from the input space to the hidden-unit space is nonlinear, whereas the transformation from the hidden-unit space to the output space is linear.

[Figure 1: Radial Basis Function Network.]

An RBF expansion with p inputs and a scalar output implements a nonlinear mapping according to the relationship shown in (1):

F(x_j) = \sum_{i=1}^{M} w_i \, \phi(\| x_j - c_i \|)    (1)

where \{\phi(\| x_j - c_i \|),\ i = 1, \ldots, M\} is a set of radial basis functions, \| \cdot \| denotes a norm, and c_i and w_i are the centers and weights of the RBF respectively (x_j, c_i \in R^p, i = 1, \ldots, M; j = 1, \ldots, N). The center vectors c_i are fixed points in the p-dimensional input space and must sample the input domain. Theoretical investigations show that the choice of the radial basis function is not crucial for performance, but the performance of an RBFN critically depends on the chosen centers. Depending on how the centers of the RBFN are specified, four main learning strategies have been developed in the literature to date:

1) Fixed Centers Selected at Random (Lowe, 1989): The centers are chosen randomly from the training inputs. This is a constrained minimization, since the centers are a subset of the training inputs. The weights of the RBFN are the only parameters that need to be learned, and a straightforward procedure for doing this is the pseudo-inverse method.

2) Orthogonal Least Squares Algorithm (OLS) (Chen et al., 1991): OLS is employed to select a suitable set of centers (regressors) from the training inputs (constrained minimization). At each step, the center that provides the maximum increment (among the remaining centers) to the variance of the desired output is selected.

3) Self-Organized Selection of Centers (Moody and Darken, 1989): The RBF centers are permitted to move in a self-organized fashion (using the Kohonen or k-nearest-neighbor algorithm), whereas the linear weights of the output layer are computed using a supervised learning rule (for example, the LMS algorithm).

4) Supervised Selection of Centers: In this approach, the centers and all other parameters of the RBFN are updated using a gradient-descent procedure that represents a generalization of the LMS algorithm.

As an alternative to the learning methods above, Zekeriya Uykan and Cüneyt Güzeliş have developed a new method called 'Clustering in Input-Output space (CIO)'. When specifying the centers, all the algorithms in the literature except OLS consider only the input space, whereas the CIO method considers the input-output space, so that changes in the output space as well as in the input space influence the locations of the centers. Let an RBFN have p inputs and q outputs. From the (x_j, d_j) input-output training set (x_j \in R^p, d_j \in R^q, j = 1, \ldots, N), the joint vectors \hat{x}_j = [x_j^T\ d_j^T]^T are formed, and for M-clustering, M vectors \hat{c}_i in the (p+q)-dimensional input-output space are initially assigned at random. Using the Kohonen or k-nearest-neighbor algorithm, the M vectors that best characterize the input-output space (i.e., minimize the quantization error) are determined, and their first p entries are assigned to the centers. For the weights, any gradient algorithm can be used. The CIO method can therefore be applied in four different ways:

1) First determine the centers using the Kohonen or k-nearest-neighbor algorithm in the input-output space, then use a batch-learning gradient-descent algorithm for the weights.

2) First determine the centers using the Kohonen or k-nearest-neighbor algorithm in the input-output space, then use a pattern-learning gradient-descent algorithm for the weights.

3) One step of the Kohonen (or k-nearest-neighbor) algorithm in the input-output space, one step of pattern-learning gradient descent for the weights.

4) One step of the Kohonen (or k-nearest-neighbor) algorithm in the input-output space, one step of batch-learning gradient descent for the weights.

It should be noted that, for a given input-output training set, once the centers of the RBFN have been determined, the OLS algorithm makes it possible to determine how many neurons must be used in the hidden layer to satisfy a predetermined cost function, without running any learning algorithm. This is not possible for the MLNN, where different strategies, each involving supervised learning, must be employed to determine how many neurons to use in the hidden layers.

On the other hand, the theory underlying fuzzy logic, developed by L. A. Zadeh in 1965 as an alternative to classical two-valued logic, is one of the research topics that has found many applications in the control area, especially in the years following 1980. A fuzzy set A defined on a universe of discourse X is expressed by its membership function

A: X \to [0, 1]    (2)

where A(x) expresses the extent to which x fulfils the category specified by A. The analysis of some kinds of ANN and the basic principles of FL shows that there are points where these two areas can be brought together, especially in control applications. As a result of recent research and applications, a new approach that combines the learning capability of ANN with the simplicity of FL has been identified as 'Neuro-Fuzzy methods'.

In Chapter 3, two methods for the identification of NARMAX models (Non-linear AutoRegressive Moving Average with eXogenous inputs; Leontaritis and Billings, 1985) are presented. 1) Polynomial expansion (S. Chen et al., 1989): When a polynomial expansion of the NARMAX model is selected, the model becomes linear in the parameters. Provided that the model structure, that is, which terms to include in the model, has been determined, only the values of the parameters are unknown, and the identification can thus be formulated as a standard least-squares problem that can be solved using various well-developed numerical techniques. 2) RBFN Method: This method considers an alternative approach for fitting NARMAX models based on the RBFN. The major remaining problem is how to select an appropriate set of RBFN centers. In order to utilize the advantages of the linear-in-parameters formulation, the centers are often chosen to be a subset of the data; at this stage the CIO method is proposed. For both methods it is demonstrated how OLS is used to determine the structure of the final model.

In Chapter 4, fuzzy and neural controllers are investigated. For example, an approach for comparing fuzzy and non-fuzzy controller designs is discussed. Relationships are established between the gain parameters of the two classes of controller design, and it is shown that the classical (non-fuzzy) controller is a special case of the fuzzy controller. It is also shown that, under some restrictions, the functional behavior of an RBFN used as a Neural Controller and that of a fuzzy controller are actually equivalent. This functional equivalence enables us to apply what has been discovered (learning rules, representational power, etc.) for one of the methods to the other, and vice versa. It is of interest to observe that two models stemming from different origins turn out to be functionally equivalent.

In Chapter 5, building on the functional equivalence between the RBFN (Neural Controller) and the Fuzzy Controller shown in Chapter 4, neuro-fuzzy control methods are explained. Neuro-Fuzzy Networks attempt to combine the advantages of FL and ANN. In spite of the great effort to connect these two seemingly different fields, there are not many papers on the subject in the literature. A 'Fuzzy Membership Function based ANN' is proposed to approximate the nonlinear mapping; its structure is similar to that of the RBFN, which is known to be very effective in function approximation (a universal approximator). Using the optimization techniques developed for ANN, a nearly optimal Fuzzy Controller is obtained. In other words, one can expect to obtain a 'nearly optimal Fuzzy Controller' by optimizing its Neural Controller counterpart, since the optimization of the Neural Controller is much more practicable than that of the Fuzzy Controller. At the same time, this approach is inherently an alternative way of optimizing the fuzzy control rules (IF-THEN rules) of the Fuzzy Controller. There are many applications and research fields for ANN and Fuzzy Inference Systems: robotics, motor control, industrial automation, neural and fuzzy computers, pattern recognition, fault diagnosis, system identification, etc.

In Chapter 6, the simulation studies are presented.	en_US
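As a rough illustration of how CIO variant 1 above could be realized, the sketch below clusters the joint input-output vectors, keeps the first p coordinates of each prototype as centers, and fits the linear output weights by batch gradient descent. It is an assumption-laden sketch rather than the authors' implementation: ordinary k-means replaces the Kohonen / k-nearest-neighbor step, Gaussian basis functions are assumed, and `sigma`, `lr`, and the iteration counts are arbitrary illustrative values.

```python
# Minimal sketch (assumptions, not the authors' code) of CIO variant 1:
# cluster the joint [x; d] vectors, keep the first p coordinates of each
# prototype as RBF centers, then fit the output weights by batch gradient
# descent. Plain k-means stands in for the Kohonen / k-nearest-neighbor
# clustering named in the summary.
import numpy as np

def cio_rbfn(X, D, M, sigma=1.0, lr=0.05, kmeans_iters=50, gd_epochs=200):
    N, p = X.shape
    Z = np.hstack([X, D])                          # joint (p+q)-dimensional vectors
    rng = np.random.default_rng(0)
    proto = Z[rng.choice(N, M, replace=False)].copy()  # random initial prototypes

    for _ in range(kmeans_iters):                  # k-means in input-output space
        labels = ((Z[:, None, :] - proto[None]) ** 2).sum(-1).argmin(1)
        for m in range(M):
            if np.any(labels == m):
                proto[m] = Z[labels == m].mean(axis=0)

    centers = proto[:, :p]                         # first p entries become the centers
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))           # hidden-layer outputs, N x M

    W = np.zeros((M, D.shape[1]))                  # linear output weights
    for _ in range(gd_epochs):                     # batch-learning gradient descent
        W -= lr * Phi.T @ (Phi @ W - D) / N
    return centers, W
```

Variants 2 to 4 listed in the summary differ only in replacing the batch weight update with pattern-by-pattern updates or in interleaving single clustering steps with single weight-update steps.

The functional equivalence discussed for Chapters 4 and 5 can also be sketched. The fragment below shows one standard instance under common restrictions (one Gaussian membership function per rule, matching widths, weighted-sum aggregation), not necessarily the exact restrictions used in the thesis: such a fuzzy controller computes the same mapping as the RBF expansion in (1) when the RBFN weights equal the rule consequents. Function names are illustrative.

```python
# Sketch of the functional-equivalence observation: a fuzzy controller with
# one Gaussian membership function per rule and weighted-sum aggregation
# returns exactly the RBFN output of (1) when weights == consequents.
import numpy as np

def fuzzy_controller(x, rule_centers, sigma, consequents):
    mu = np.exp(-((x - rule_centers) ** 2).sum(-1) / (2 * sigma ** 2))  # rule firing strengths
    return mu @ consequents                                             # weighted-sum aggregation

def rbfn(x, centers, sigma, weights):
    phi = np.exp(-((x - centers) ** 2).sum(-1) / (2 * sigma ** 2))      # Gaussian basis, as in (1)
    return phi @ weights                                                # F(x) = sum_i w_i phi_i(x)
```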
dc.language	Turkish
dc.language.iso	tr
dc.rights	info:eu-repo/semantics/embargoedAccess
dc.rights	Attribution 4.0 United States	tr_TR
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/
dc.subject	Bilgisayar Mühendisliği Bilimleri-Bilgisayar ve Kontrol	tr_TR
dc.subject	Computer Engineering and Computer Science and Control	en_US
dc.title	RTFA ile sistem tanıma ve nöral-bulanık kontrol-
dc.type	masterThesis
dc.date.updated	2018-08-06
dc.contributor.department	Diğer
dc.subject.ytm	System identification
dc.subject.ytm	Fuzzy control systems
dc.subject.ytm	Artificial neural networks
dc.identifier.yokid	55615
dc.publisher.institute	Fen Bilimleri Enstitüsü
dc.publisher.university	İSTANBUL TEKNİK ÜNİVERSİTESİ
dc.identifier.thesisid	55615
dc.description.pages	86
dc.publisher.discipline	Diğer

