Results 1-9 of 9 for the query: authorDesc:"RYSZARD WOJTYNA"

Current-mode analog memory with extended storage time for hardware-implemented neural networks


Current-mode signal processing may be more power-efficient and faster than voltage-mode processing. Its low-power operation results from the fact that no supply current is consumed when the input signal of the processing circuit is zero. As an example of fast current-mode operations, we can mention arithmetic calculations, since addition and multiplication can be extremely fast when performed on current signals. Furthermore, the low power consumption mentioned above makes large-scale parallel data processing possible, and the larger the scale, the higher the total speed of the processing system. For this reason, one observes not only a general tendency to develop hardware and mixed hardware-software signal processing methods [1-5] but also a growing interest[...]
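The speed of current-mode addition and multiplication follows from two textbook identities (not taken from the paper): currents wired to a common node add instantly by Kirchhoff's current law, and a translinear loop of matched transistors multiplies and divides currents:

```latex
% Addition: KCL at a single node sums m currents with no extra circuitry
I_{\text{out}} = \sum_{k=1}^{m} I_k
% Multiplication/division: a four-transistor translinear loop enforces
I_1 I_2 = I_3 I_4 \quad\Longrightarrow\quad I_{\text{out}} = \frac{I_1 I_2}{I_3}
```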

Analog signal processing suited for neural-network hardware implementation


  Recently, one observes a growing interest in applying analog techniques to some unique signal-processing tasks [2-17]. Advances in CMOS processes allow us to think realistically, among other things, about building large neural networks in chip form and training them on silicon. Apart from classical voltage-mode processing based on both digital and analog electronics, the current-mode analog technique has lately been gaining popularity. This is because it is well suited to implementation in modern submicron- and nano-technology integrated circuits, which operate with ever lower supply voltages and consume ever less power. Hardware-implemented neural networks based on the analog technique offer the possibility of fast on-chip learning. Applying this technique one c[...]

Special neural networks for finding symbolic relationships between empirical data


  Information about objects or phenomena encountered in the physical world is sometimes available only in the form of discrete numerical data. In such situations, there is often a need to establish a symbolic relation between the given empirical data. Among other methods, this can be done using special-type neural networks. The first papers dealing with the neural-network approach to this subject were dedicated to astronomy problems [1, 2]. A simple neural implementation of relationships described by so-called product units was proposed in [1] as a way to discover the unknown rules governing the empirical data. This idea was developed in [2], where the product-unit description function was expanded to a polynomial expression of the form:

(1)  y = c_0 + \sum_{i=1}^{h} c_i \prod_{j=1}^{n} x_j^{w_{ij}} = c_0 + \sum_{i=1}^{h} c_i \exp\!\left( \sum_{j=1}^{n} w_{ij} \ln x_j \right)

Equation (1) makes it possible to extend the class of problems that can be described in a symbolic way. When implementing (1) in the special-neural-network form shown in Fig. 1, the symbols in (1) take the following meaning: y is the network output signal, x_j are the components of its input vector, w_ij represent the powers to which the x_j are raised, n is the number of input components and h is the number of neurons in the hidden layer. The solution shown in Fig. 1 is a special-type neural network because the exp(.) and ln(.) functions are introduced as activation operators in the appropriate places, i.e. in the input nodes and the hidden layer. Comparing the scheme of Fig. 1 with the middle part of expression (1), we see that it is indeed a neural realization of (1). In this paper, we go further and propose to enlarge the group of product-unit and polynomial expressions by adding functions of a fractional rational type. As we know, such functions can be presented as a ratio of two polynomials, which in the case of only one independent variable, x, [...]
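As a quick cross-check of (1), here is a minimal sketch in Python (the names and NumPy-based structure are ours, not from the paper) evaluating the forward pass of such a product-unit network:

```python
import numpy as np

def product_unit_network(x, c, w):
    """Forward pass of the network of Eq. (1):
    y = c0 + sum_i c_i * prod_j x_j**w_ij
      = c0 + sum_i c_i * exp(sum_j w_ij * ln(x_j)).

    x : (n,)   input vector (positive, since ln(x_j) is taken)
    c : (h+1,) output weights, c[0] being the bias c0
    w : (h, n) exponents w_ij learned by the network
    """
    hidden = np.exp(w @ np.log(x))   # ln(.) in input nodes, exp(.) in hidden layer
    return c[0] + c[1:] @ hidden

# Example: y = 2 + 3 * x1**2 * x2**-1 evaluated at x = (2, 4)
y = product_unit_network(np.array([2.0, 4.0]),
                         np.array([2.0, 3.0]),
                         np.array([[2.0, -1.0]]))
print(y)  # 2 + 3 * 4 / 4 = 5.0
```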

Perceptron with reciprocal activation functions to implement polynomial relationships


  In many situations, there is a need to create symbolic descriptions of rules governing physical phenomena or measurement data given only in numerical form. The latter is often connected with the need to calibrate measurement instruments. Rule discovery is difficult when we measure complex quantities, like humidity or luminous intensity, and the description takes a multidimensional polynomial form. The rule-discovery task can be solved, among other ways, using atypical perceptron-type neural networks [1-11] implementing a given symbolic relationship. Such an approach is especially attractive when the number of dimensions of the applied polynomial is large. Realizing complex polynomial relations in the form of a perceptron neural network, we exploit the fact that perceptrons can be trained in a rather simple manner using the popular back-propagation (BP) learning technique or methods derived from BP. Using special perceptron networks, like those presented in the literature as well as the one proposed in this paper, some intricate relations can be realized in a simple way. Parameters of the symbolic relations realized by means of such perceptrons can be determined directly during the network learning process. A simple neural-network realization of complex relations is possible only for some expressions describing a given data set; polynomial forms are among them. In this paper, we deal with polynomial expressions and show that the atypical perceptron neural network realizing the rule-discovery task can be based exclusively on reciprocal functions, without using activation functions of other types. The perceptron presented in this paper is a novel, simple solution proposed by the authors. Determination of the polynomial parameters by means of the perceptron is performed during the network learning process. The proposed reciprocal-function-based perceptron is useful for solving plenty of physical and metrolo[...]
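The paper's exact network topology is truncated above, so purely as an illustration of the underlying idea, that reciprocal units alone can reproduce polynomial samples on a bounded interval, here is a minimal sketch with an assumed sum-of-reciprocals model fitted by SciPy least squares:

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    """Sum of reciprocal units c_i / (a_i*x + b_i): a stand-in for the
    reciprocal-activation perceptron; NOT the paper's exact topology."""
    a, b, c = p[:4], p[4:8], p[8:]
    return (c[None, :] / (np.outer(x, a) + b)).sum(axis=1)

x = np.linspace(0.1, 2.0, 50)
y = 1 + 2 * x + 0.5 * x**2          # polynomial "measurement rule" to recover

p0 = np.concatenate([np.ones(4), np.linspace(1, 4, 4), np.ones(4)])
fit = least_squares(lambda p: model(p, x) - y, p0)
print("residual RMS:", np.sqrt(np.mean(fit.fun**2)))
```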

Global-extreme-training of specific neural networks to find laws ruling empirical data DOI:10.15199/ELE-2014-008


  A need to create symbolic descriptions of unknown laws governing a given set of empirical data, and to model the data set's features, is encountered in many situations and concerns plenty of physical systems. The first papers on applying particular neural networks with exp(.) and ln(.) activation functions to the problem under consideration were reported in [1-8]. Recently, specific networks with reciprocal-function activation operators have been proposed in [9], [10]. Such networks allow us to describe the data-set behavior by means of rational or polynomial functions, where the latter can be regarded as a particular case of the former. Coefficients of the rational or polynomial functions are determined by training the specific neural network. Compared to the previously presented networks, the reciprocal-function-based ones lead to simpler network structures, include fewer network elements and enable more effective network learning. Despite the introduced improvements, learning the networks is still a problem [11], [12], and we devote plenty of time to making progress in this field. Improving the training technique is the goal of our research and is the issue presented in this paper.

Learning problems connected with gradient calculations

Neural-network learning can be realized in a variety of ways [11, 12]. There are methods that are good at finding the global optimum and can effectively avoid local extremes. Another group comprises techniques which are fast and well suited to local-area operations. The ability to generalize the data-set behavior outside the range of the training variables is another required feature of the learning methods. For example, so-called gradient-descent (GD) techniques, where the gradient of an objective function is calculated, often fail in the case of very large networks [12]. Our experience shows that learning the networks under consideration by means of the GD[...]
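The trade-off sketched above, global methods that avoid local extremes versus fast local gradient techniques, is commonly handled by chaining the two. A minimal sketch of such a hybrid on an assumed toy objective (not the paper's network or training method):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def objective(p):
    """Toy multimodal objective standing in for a network's training error."""
    return np.sin(5 * p[0]) + 0.1 * p[0] ** 2

# Global stage: coarse random sampling helps escape local extremes
candidates = rng.uniform(-10, 10, size=(200, 1))
start = min(candidates, key=objective)

# Local stage: fast gradient-based refinement from the best candidate
result = minimize(objective, start, method="BFGS")
print("minimum near p =", result.x, "value:", result.fun)
```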

Evolution method aided by knowledge base to improve learning effects of some neural networks DOI:10.15199/13.2015.4.5


  Creating mathematical models of unknown rules governing empirical data is an important but not easy task. One way to carry it out is to apply an approximation technique. Polynomials belong to the popular approximation functions. In order to obtain a polynomial-based approximation, one has to determine the values of the polynomial coefficients. This can be done in a variety of ways; both deterministic and probabilistic methods are applied. One of them is to use neural networks, where the coefficients are obtained by learning the network. In this paper, instead of a single polynomial, rational functions (ratios of two polynomials) are proposed as the main approximation description. This means that the coefficients of the rational functions are the subject of the network learning. The superiority of the rational-function-based special networks over the networks with a single polynomial is that the former are more flexible and better suited to the training task. If needed, one can also obtain a polynomial description by making use of the rational function. To do this, a proper conversion of the rational function must be performed, which allows us to retain the simplicity of the network structure. Even though networks with rational functions may seem to be more complex than the ones using single polynomials, in reality the opposite is true when using the proposed networks. An example illustrating this issue is presented in the paper. The proposed networks have the following characteristic features: 1) the use of reciprocal activation functions of the 1/(.) type; 2) an atypical location of the activation functions, i.e. predominantly in the hidden layer and input nodes. As already mentioned, learning the special network is a way to determine the coefficients of the used approximation function [1-13]. The main problems concern the learning and appear if the considered experimental data rules are p[...]
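As a rough sketch of the rational-function approximation idea, with our own toy data and SciPy's differential evolution standing in for the paper's knowledge-base-aided evolutionary method, the coefficients of R(x) = P(x)/Q(x) can be found as follows:

```python
import numpy as np
from scipy.optimize import differential_evolution

x = np.linspace(0.0, 3.0, 60)
y = (1 + 2 * x) / (1 + 0.5 * x**2)       # toy "empirical data" rule

def rational(p, x):
    """R(x) = P(x)/Q(x) with quadratic numerator and denominator;
    Q's constant term is fixed to 1 to remove scale ambiguity."""
    p0, p1, p2, q1, q2 = p
    return (p0 + p1 * x + p2 * x**2) / (1 + q1 * x + q2 * x**2)

def error(p):
    return np.mean((rational(p, x) - y) ** 2)

result = differential_evolution(error, bounds=[(-5, 5)] * 5, seed=0)
print("coefficients:", result.x, "MSE:", result.fun)
```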

Frequency-domain adaptive approach to problems with limited number of signal samples DOI:10.15199/13.2016.4.3


  The subject of this work is improving the methodology of creating a frequency-domain picture of discrete-time signals with a small number of samples. The proposed method is a modification of the one published in [9]. As in [9], our method is based on using an evolutionary algorithm to analyze the signal in the frequency domain. This is a different solution from the classical methods, where discrete-time signals are described using the Discrete Fourier Transform (DFT). Our approach to the problem does not use the DFT and is based on adaptive synthesis. In this way, we can cope more effectively with the spectral-leakage problems caused by a strong limitation of the number of samples. A new element of the proposed idea is describing the problem using complex numbers instead of real numbers. As a result, the analysis can be more effective than in [9]. The advantages of our approach are demonstrated with examples.

Keywords: discrete signals within a narrow time window, frequency-domain analysis, evolutionary methods, spectral leakage

Frequency-domain analysis is widely performed in the field of digital signal processing. A popular tool for this purpose is the Discrete Fourier Transform (DFT) or the Fast Fourier Transform (FFT). With such an approach, a problem occurs due to spectral leakage. The leakage appears because only a truncated part of the signal is available for the analysis. We then get a spectrum that is the convolution of the signal spectrum with the spectral picture of the time window. As a result, each real spectral line is converted into a more complex form, a set of spectral lines presenting a spectrum with main and side lobes. Solving this problem is not an easy task and many publications are devoted to the matter. The aim of the proposed analysis is to get a frequency-domain picture from a limited number of signal-[...]
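A minimal sketch of the DFT-free, adaptive-synthesis idea (our own simplification, not the paper's algorithm): fit a complex exponential to a short record by evolutionary search over frequency, with the complex amplitude obtained in closed form:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Short record: 16 samples of a tone whose frequency falls between
# DFT bins, so a plain DFT would show strong spectral leakage.
N, f_true = 16, 0.173
n = np.arange(N)
signal = np.exp(2j * np.pi * f_true * n)

def fit_error(params):
    """Fit one complex exponential to the record; the complex amplitude
    is solved by least squares, the frequency by evolutionary search."""
    f = params[0]
    basis = np.exp(2j * np.pi * f * n)
    amp = (basis.conj() @ signal) / N          # closed-form complex amplitude
    return np.sum(np.abs(signal - amp * basis) ** 2)

result = differential_evolution(fit_error, bounds=[(0.0, 0.5)], seed=0)
print("estimated frequency:", result.x[0])    # ~0.173, no bin quantization
```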

Some aspects of designing perceptron network to predict bladder-cancer patient-survival


Recently, one observes a growing interest in applying artificial neural networks (ANNs) to analyze biomedical signals such as EKG, EMG and EEG [1-7]. ANNs are nonlinear information-processing systems that try to imitate the way in which biological neural networks operate. They can learn by themselves, based on properly arranged training patterns, and generalize as well as util[...]

An influence of current-leakage in analog memory on training Kohonen neural network implemented on silicon


The paper presents how the current leakage encountered in capacitive analog memories affects the learning process of hardware-implemented Kohonen neural networks (KNN). MOS-transistor leakage currents, which strongly depend on temperature, increase the network's quantization error. This effect can be minimized in several ways discussed in the paper; one of them relies on increasing the holding time of the memory. The presented results include simulations in the Matlab and HSpice environments, as well as measurements of a prototype KNN realized in a 0.18 µm CMOS process.

Keywords: analog neural networks, Kohonen networks, memory leakage effect, hardware signal processing

Introduction

One of the most critical problems in hardware-implemented neural networks is how to precisely store information about the neuron weights. The way of storing the information depends on the type of signals representing the neuron weights as well as on how the network is realized. In this paper, we focus on networks proposed by Kohonen in [1] that are trained in an unsupervised manner. Apart from the Kohonen networks, the obtained results can[...]
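A rough software analogue of the leakage effect (a toy model of ours: leakage approximated as a small multiplicative weight droop between updates; all parameters hypothetical) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_kohonen(leak_per_step=0.0, steps=4000, n_neurons=8):
    """1-D Kohonen layer on scalar data; 'leak_per_step' mimics the
    fractional weight droop caused by capacitor leakage between updates."""
    w = rng.uniform(0, 1, n_neurons)               # analog-stored weights
    for t in range(steps):
        x = rng.uniform(0, 1)                      # training sample
        lr = 0.5 * (1 - t / steps)                 # decaying learning rate
        win = np.argmin(np.abs(w - x))             # winner-takes-all
        w[win] += lr * (x - w[win])
        w *= (1 - leak_per_step)                   # leakage droop
    # quantization error: mean distance of the data to the nearest weight
    data = rng.uniform(0, 1, 2000)
    return np.mean(np.min(np.abs(data[:, None] - w[None, :]), axis=1))

print("no leakage   :", train_kohonen(0.0))
print("with leakage :", train_kohonen(2e-4))
```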
