Matches in SemOpenAlex for { <https://semopenalex.org/work/W2912131220> ?p ?o ?g. }
Showing items 1 to 70 of 70, with 100 items per page.
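The listing below can be reproduced programmatically. The following is a minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and standard SPARQL-protocol JSON results; the named-graph variable ?g from the pattern above is dropped for brevity.

```python
# Minimal sketch: fetch all predicate/object pairs for work W2912131220.
# Assumption: https://semopenalex.org/sparql is the live endpoint and
# accepts standard SPARQL-protocol GET requests.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL

QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2912131220> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Print each predicate/object pair, mirroring the property list below.
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```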
- W2912131220 abstract "Data mining techniques have become extremely important with the proliferation of data. One technique that has attracted much attention is the use of feedforward neural networks, because feedforward neural networks are excellent at finding relationships between the input and the output in data sets that are not understood. As a result, they are commonly used for function approximation and classification for their ability to generalize. However, traditional training methods for feedforward neural networks make it difficult to determine what the network has learnt, and can lead to exponential training times, if the data can be learnt at all. Long training times are a result of the network being of fixed size, which can mean the network is either too small to learn the data or too large to learn it well. Also, the dominant approach to training artificial neurons in networks is to iteratively search for single numeric values for the weights that approximately satisfy the training conditions. The search is guided by attempting to reduce the amount of error in the network. However, these iterative approximations are unlikely to produce accurate single weight values that satisfy the learning conditions, and the rules the network learns remain encoded in the weights. In this thesis, a novel method of training neurons is presented, which leads to a dynamic training algorithm for feedforward neural networks in an attempt to overcome the problems of fixed-size networks. This method of training neurons allows neurons to be interrogated to determine whether they can learn a particular input vector. This forms a natural criterion for dynamically allocating neurons into the network, meaning that each input vector can be learnt as it is presented to the network. The algorithm is therefore a single-pass training algorithm and eliminates the local minima problem. The novel approach to training neurons is based on learning the relationships between the input vector into the neuron and the associated output. These relationships are a transform of relationships between the neuron's weights and threshold, and define regions in the neuron's weight-space instead of a single numeric weight vector for each neuron. This means that rules can easily be extracted from the network which indicate what the network has learnt. We call this method Dynamic Relational learning. In the past, a statistical sensitivity analysis was often performed on the trained neural network to find something about the range of values that the weights could take that would cause the neuron to activate. We call the region in the weight-space that causes a neuron to activate the Activation Volume. The Dynamic Relational algorithm works by examining the surfaces of the volume in the weight-space that causes the neuron to activate. The surfaces of the volume express relationships between the weights in each neuron, and we can analyze these surfaces to determine precisely what the neuron has learnt. Using the principles of Dynamic Relational learning we can formulate the maximum number of neurons required to implement any data set. The algorithm is tested on a number of popular data sets to evaluate the effectiveness of this technique, and the results are presented in this thesis. Although the algorithm operates on binary data, methods of converting floating-point and other non-binary data sets to binary are given and used. We find that the networks do learn the data sets in a single pass, produce small networks, and that the Activation Volume is found. We see that the maximum number of neurons required to learn the data sets confirms the formula. We see that it is not necessarily the case that we need more input vectors in the training set than there are weights in the network; this is because each input vector is used to train each weight. We can find which input vectors require neurons to be allocated into the network. Also, we can interrogate whether a neuron knows how to classify an input vector or whether the input is unknown. Finally, we can determine precisely what each neuron has learnt and what logical relationships are used to connect the neurons into the layer. A number of new theorems for analyzing constraints have been developed to facilitate Dynamic Relational learning. The current implementation relies on constraint satisfaction programs, and hence the performance of the implementation is bound by these programs." @default. (A minimal sketch of this allocation scheme appears after the property list below.)
- W2912131220 created "2019-02-21" @default.
- W2912131220 creator A5057419749 @default.
- W2912131220 date "2017-02-08" @default.
- W2912131220 modified "2023-09-25" @default.
- W2912131220 title "A novel approach to training neurons with dynamic relational learning" @default.
- W2912131220 doi "https://doi.org/10.4225/03/589aa9a94b2cd" @default.
- W2912131220 hasPublicationYear "2017" @default.
- W2912131220 type Work @default.
- W2912131220 sameAs 2912131220 @default.
- W2912131220 citedByCount "0" @default.
- W2912131220 crossrefType "dissertation" @default.
- W2912131220 hasAuthorship W2912131220A5057419749 @default.
- W2912131220 hasConcept C11413529 @default.
- W2912131220 hasConcept C119857082 @default.
- W2912131220 hasConcept C121332964 @default.
- W2912131220 hasConcept C127413603 @default.
- W2912131220 hasConcept C133731056 @default.
- W2912131220 hasConcept C14036430 @default.
- W2912131220 hasConcept C153294291 @default.
- W2912131220 hasConcept C154945302 @default.
- W2912131220 hasConcept C2777211547 @default.
- W2912131220 hasConcept C38858127 @default.
- W2912131220 hasConcept C41008148 @default.
- W2912131220 hasConcept C47702885 @default.
- W2912131220 hasConcept C50644808 @default.
- W2912131220 hasConcept C78458016 @default.
- W2912131220 hasConcept C86803240 @default.
- W2912131220 hasConceptScore W2912131220C11413529 @default.
- W2912131220 hasConceptScore W2912131220C119857082 @default.
- W2912131220 hasConceptScore W2912131220C121332964 @default.
- W2912131220 hasConceptScore W2912131220C127413603 @default.
- W2912131220 hasConceptScore W2912131220C133731056 @default.
- W2912131220 hasConceptScore W2912131220C14036430 @default.
- W2912131220 hasConceptScore W2912131220C153294291 @default.
- W2912131220 hasConceptScore W2912131220C154945302 @default.
- W2912131220 hasConceptScore W2912131220C2777211547 @default.
- W2912131220 hasConceptScore W2912131220C38858127 @default.
- W2912131220 hasConceptScore W2912131220C41008148 @default.
- W2912131220 hasConceptScore W2912131220C47702885 @default.
- W2912131220 hasConceptScore W2912131220C50644808 @default.
- W2912131220 hasConceptScore W2912131220C78458016 @default.
- W2912131220 hasConceptScore W2912131220C86803240 @default.
- W2912131220 hasLocation W29121312201 @default.
- W2912131220 hasOpenAccess W2912131220 @default.
- W2912131220 hasPrimaryLocation W29121312201 @default.
- W2912131220 hasRelatedWork W133240859 @default.
- W2912131220 hasRelatedWork W1485940563 @default.
- W2912131220 hasRelatedWork W1510333322 @default.
- W2912131220 hasRelatedWork W1512218335 @default.
- W2912131220 hasRelatedWork W1544216806 @default.
- W2912131220 hasRelatedWork W1546031801 @default.
- W2912131220 hasRelatedWork W186963548 @default.
- W2912131220 hasRelatedWork W2295912923 @default.
- W2912131220 hasRelatedWork W2365830570 @default.
- W2912131220 hasRelatedWork W2402803852 @default.
- W2912131220 hasRelatedWork W2461336877 @default.
- W2912131220 hasRelatedWork W2892355103 @default.
- W2912131220 hasRelatedWork W2951877063 @default.
- W2912131220 hasRelatedWork W3005559199 @default.
- W2912131220 hasRelatedWork W3022446200 @default.
- W2912131220 hasRelatedWork W3200692463 @default.
- W2912131220 hasRelatedWork W425946088 @default.
- W2912131220 hasRelatedWork W1905011107 @default.
- W2912131220 hasRelatedWork W285359597 @default.
- W2912131220 hasRelatedWork W2857157743 @default.
- W2912131220 isParatext "false" @default.
- W2912131220 isRetracted "false" @default.
- W2912131220 magId "2912131220" @default.
- W2912131220 workType "dissertation" @default.
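The abstract above describes interrogating a neuron to ask whether its Activation Volume stays non-empty when the constraint for a new input/target pair is added, and allocating a fresh neuron when it does not. The sketch below illustrates that allocation criterion only, under stated simplifications: binary inputs, a threshold fixed at 0.5, bounded weights, and an LP feasibility check (scipy.optimize.linprog) standing in for the thesis's constraint satisfaction programs. All names (Neuron, can_learn, train_single_pass) are illustrative assumptions, not the thesis's actual formulation.

```python
# Hedged sketch of single-pass dynamic neuron allocation in the spirit of
# Dynamic Relational learning. Each neuron stores linear constraints on its
# weight vector (a stand-in for the "Activation Volume") instead of a single
# numeric weight vector. Feasibility is checked with an LP solver; the thesis
# itself uses constraint satisfaction programs.
import numpy as np
from scipy.optimize import linprog

class Neuron:
    def __init__(self, n_inputs):
        self.n = n_inputs
        # Constraint region { w : A @ w <= b }. Threshold is fixed at 0.5 here
        # for simplicity; the thesis treats weights and threshold jointly.
        self.A = []
        self.b = []

    def _feasible(self, A, b):
        # The region is non-empty iff an LP with a dummy objective is feasible.
        res = linprog(c=np.zeros(self.n), A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(-10.0, 10.0)] * self.n, method="highs")
        return res.status == 0

    def constraint_for(self, x, y):
        # Target 1: w @ x >= 0.5, rewritten as -x @ w <= -0.5.
        # Target 0: w @ x < 0.5, approximated with a small margin.
        if y == 1:
            return -x.astype(float), -0.5
        return x.astype(float), 0.5 - 1e-6

    def can_learn(self, x, y):
        # "Interrogate" the neuron: does adding this constraint keep the
        # activation region non-empty?
        a, bb = self.constraint_for(x, y)
        return self._feasible(self.A + [a], self.b + [bb])

    def learn(self, x, y):
        a, bb = self.constraint_for(x, y)
        self.A.append(a)
        self.b.append(bb)

def train_single_pass(samples, n_inputs):
    # Single pass: each (x, y) goes to the first neuron whose region stays
    # feasible; otherwise a new neuron is dynamically allocated.
    neurons = []
    for x, y in samples:
        for nrn in neurons:
            if nrn.can_learn(x, y):
                nrn.learn(x, y)
                break
        else:
            nrn = Neuron(n_inputs)
            nrn.learn(x, y)
            neurons.append(nrn)
    return neurons

# Toy usage: XOR-style binary data forces a second neuron to be allocated,
# since no single linear region can satisfy all four constraints.
data = [(np.array([0, 1]), 1), (np.array([1, 0]), 1),
        (np.array([1, 1]), 0), (np.array([0, 0]), 0)]
net = train_single_pass(data, n_inputs=2)
print(len(net), "neuron(s) allocated")
```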