
Optimization for Multi Layer Perceptron: Without the Gradient


  The publishing house Nova Publishers has recently published the book Advances in Machine Learning Research. It contains a chapter entitled OPTIMIZATION FOR MULTI LAYER PERCEPTRON: WITHOUT THE GRADIENT, in which I describe two new algorithms for neural network learning: Bipropagation and the Border Pairs Method. Both are much more powerful than their predecessor, the Backpropagation algorithm. The second algorithm is, among other things, constructive.

Abstract of the book chapter


Over the last twenty years, gradient-based methods have dominated the field of Feed Forward Artificial Neural Network learning. They are derivatives of the Backpropagation method and share various deficiencies. These include an inability to cluster, to reduce noise, to quantify the quality of the learning data, and to eliminate redundant learning data. Other potential areas for improvement have also been identified, including the random initialization of free parameters, dynamic learning from new data as it becomes available, and the explanation of states and settings in the hidden layers of a learned ANN, among others.
This chapter deals with a contemporary, non-gradient approach to ANN learning, one that is no longer based on the gradual reduction of the remaining learning error and that tries to eliminate most of the mentioned deficiencies. The introduction chronologically describes several methods that address these problems: Initializing Neural Networks using Decision Trees (Arunava Banerjee, 1994), DistAl: An inter-pattern distance-based constructive learning algorithm (Jihoon Yang, 1998), Geometrical synthesis of multilayer feedforward neural networks, or Multi-Layer Perceptron (Rita Delogu, 2006), and Bipropagation, a new way of MLP learning (Bojan Ploj, 2009). We continue with the description of a new learning method, the Border Pairs Method (BPM), which carries numerous advantages over the gradient methods and eliminates most of their deficiencies. The BPM identifies and uses border pairs: pairs of learning patterns in the input space that are located close to the class border.
The number of border pairs gives us some information about the complexity of the learning problem. Border pairs are also an excellent basis for noise reduction: we find that reducing the noise of the border pairs alone is sufficient.
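The border-pair idea can be sketched in code. This is a minimal illustration under an assumption of mine: that a border pair is a pair of opposite-class points that are each other's nearest opposite-class neighbours. The chapter's exact criterion may differ.

```python
import numpy as np

def border_pairs(X, y):
    """Find candidate border pairs: pairs of opposite-class points that
    are each other's nearest opposite-class neighbour (one plausible
    formalization; the published criterion may differ)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n = len(X)
    # pairwise Euclidean distances between all learning patterns
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nearest = {}
    for i in range(n):
        opp = np.where(y != y[i])[0]          # indices of the other class
        nearest[i] = opp[np.argmin(d[i, opp])]
    # keep only mutually nearest opposite-class pairs
    return {(min(i, j), max(i, j))
            for i, j in nearest.items() if nearest[j] == i}

# toy data: classes separated around x = 0.5
X = [[0.1], [0.3], [0.4], [0.6], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
print(border_pairs(X, y))   # → {(2, 3)}, the pair closest to the border
```

Points 0.4 and 0.6 straddle the class border, so they form the only border pair here; the count of such pairs hints at how complex the border (and thus the learning problem) is.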
By dividing the input space, homogeneous areas (clusters) are established. One neuron in the first layer is assigned to every linear segment of the border. MLP learning begins in the first layer by adapting individual neurons. The neurons of the first layer are saturated, so the output of the first layer is a binary code that is the same for all members of the same cluster. The following layers then perform logical operations on the outputs of the first layer. Testing showed that such learning is reliable, is not subject to overfitting, is appropriate for on-line learning, and supports concept drift during learning (forgetting and additional learning).
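The binary-coding role of the first layer can be illustrated as follows. The weights here are hand-picked for the illustration, not produced by the BPM itself: each neuron realizes one linear border segment, and the saturated (hard-threshold) outputs give every input point a binary code that is constant inside a homogeneous region.

```python
import numpy as np

def layer_code(X, W, b):
    """Saturated first-layer neurons: each output is a hard 0/1,
    so every point receives a binary code naming its region."""
    return (X @ W + b > 0).astype(int)

# two neurons = two border lines, splitting the plane into four regions
W = np.array([[1.0, 0.0],    # columns are neuron weight vectors:
              [0.0, 1.0]])   # neuron 1: line x = 0.5, neuron 2: line y = 0.5
b = np.array([-0.5, -0.5])

X = np.array([[0.2, 0.2], [0.8, 0.2], [0.2, 0.8], [0.8, 0.8]])
for x, code in zip(X, layer_code(X, W, b)):
    print(x, "->", code)     # four regions, four distinct binary codes
```

All members of one cluster map to the same code, so the later layers only need to combine these bits with logical operations.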


Popular posts from this blog

Bipropagation demo in TensorFlow

Bipropagation is a new deep learning algorithm. It is much faster and much more reliable than Backpropagation. A demo is available on ResearchGate and GitHub. The inner layers of the neural network are not hidden anymore: learning proceeds layer by layer, with far fewer iterations. Please cite me in your work.
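The layer-by-layer idea can be sketched as follows. This is an assumption on my part about the mechanism: each inner layer is given its own desired output, interpolated between the network input and the final target, and is then trained on its own with the delta rule. The published algorithm's exact target schedule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(A):
    return np.hstack([A, np.ones((len(A), 1))])

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
T = np.array([[0., 0.], [0., 1.], [0., 1.], [1., 1.]])   # targets: AND, OR

n_layers = 3
acts = X
for k in range(1, n_layers + 1):
    # inner layers are not hidden: each gets an explicit desired output,
    # interpolated between the input and the final target
    target_k = X + (T - X) * k / n_layers
    A = add_bias(acts)
    W = rng.normal(scale=0.5, size=(A.shape[1], T.shape[1]))
    for _ in range(3000):                   # train this layer alone (delta rule)
        out = sigmoid(A @ W)
        W += 1.0 * A.T @ ((target_k - out) * out * (1 - out))
    acts = sigmoid(A @ W)                   # its output feeds the next layer

print(np.round(acts, 2))                    # close to T after the last layer
```

Because every layer has an explicit target, each one is trained as a simple single-layer network, which is why far fewer iterations are needed than with end-to-end Backpropagation.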



On the Trail of an Algorithm

When, during research for my postgraduate studies, I got the idea for a new machine learning algorithm, an inner restlessness took hold of me. I sensed that I was on the trail of an important discovery, and in an instant I felt adrenaline coursing through my veins. They say that research passion can be even stronger than a gambler's, which has allegedly been behind many a story in the crime pages. Fortunately, though, research passion is not tied to motives as base as the gambler's. The idea of the algorithm was followed by its development, which lasted more than a year and was filled with many ups and downs. Seemingly minor difficulties often grew into real problems, but luckily a solution was always found. Doubt and joy intertwined within me until experiments confirmed all my expectations. I was then overcome by pleasant feelings of elation that could be compared to a kind of infatuation. When you are elated, you paint reality more beautiful than it really is, and so I naively expected that…

A new Deep Learning Algorithm: One-Step Method

We are living in the AI era, where progress accelerates every single day. Here is another discovery in this field: the One-Step Method, a new machine learning algorithm that can do many things; among others, it can replace digital circuits with neurons and can find an even better construction of the neural network than the Border Pairs Method. More can be found in the 3rd chapter of our book Machine Learning: Advances in Research and Applications from Nova Science Publishers.
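The sense in which neurons can replace digital circuits can be shown with a minimal sketch: a single threshold neuron realizes a basic logic gate. The weights below are hand-picked for illustration; the One-Step Method derives its own construction, which this sketch does not reproduce.

```python
def neuron(inputs, weights, bias):
    """A single threshold neuron: fires (1) when the weighted sum
    plus bias is positive, otherwise stays silent (0)."""
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

AND = lambda a, b: neuron((a, b), (1, 1), -1.5)
OR  = lambda a, b: neuron((a, b), (1, 1), -0.5)
NOT = lambda a:    neuron((a,),   (-1,),   0.5)
# XOR is not linearly separable, so it needs a small two-layer network:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

Chaining such neurons reproduces any combinational circuit, which is why a constructive method can translate digital logic directly into a network.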




This new algorithm is also suitable for deep learning in combination with other methods, such as convolutional learning, bipropagation, the Border Pairs Method, autoencoders, and others.