Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions / Yaman, Anil; Iacca, Giovanni; Mocanu, Decebal Constantin; Coler, Matt; Fletcher, George; Pechenizkiy, Mykola. - In: EVOLUTIONARY COMPUTATION. - ISSN 1063-6560. - 29:3 (2021), pp. 391-414. [DOI: 10.1162/evco_a_00286]
Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions
Giovanni Iacca
2021-01-01
Abstract
A fundamental aspect of learning in biological neural networks is plasticity, the property that allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling plasticity in artificial neural networks (ANNs), based on the local interactions of neurons. However, how a coherent global learning behavior emerges from local Hebbian plasticity rules is not well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation that encodes the learning rules in a finite search space. These rules are then used to perform synaptic changes based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules for learning on two separate tasks (a foraging and a prey–predator scenario) in online lifetime learning settings. The evolved rules converged to a set of well-defined, interpretable types, which are discussed in detail. Notably, while adapting the ANNs during the learning tasks, these rules achieve performance comparable to that of offline learning methods such as hill climbing.
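As a rough illustration of the setup described in the abstract, the sketch below shows one plausible way a discrete, locally-applied Hebbian rule could drive online synaptic updates: the rule is a lookup table over binarized pre- and postsynaptic activations, and a genetic algorithm would evolve the table entries. The encoding, the binarization threshold, the function name `apply_discrete_hebbian_rule`, and the learning rate `eta` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a discrete Hebbian rule
# encoded as a lookup table mapping binarized (pre, post) activation pairs to
# a synaptic change in {-1, 0, +1}. A genetic algorithm would evolve the table
# entries; here the table and the learning rate `eta` are illustrative choices.
import numpy as np

def apply_discrete_hebbian_rule(weights, pre, post, rule_table, eta=0.1):
    """Update `weights` (post x pre) in place using a discrete Hebbian rule.

    pre, post  -- activation vectors of the presynaptic/postsynaptic layers
    rule_table -- dict mapping (pre_bit, post_bit) -> delta in {-1, 0, +1}
    eta        -- assumed learning-rate scaling (not taken from the paper)
    """
    pre_bits = (pre > 0).astype(int)    # binarize local activations
    post_bits = (post > 0).astype(int)
    for i in range(weights.shape[0]):        # postsynaptic neuron
        for j in range(weights.shape[1]):    # presynaptic neuron
            delta = rule_table[(pre_bits[j], post_bits[i])]
            weights[i, j] += eta * delta
    return weights

# Example rule resembling classical Hebb: strengthen co-active pairs,
# weaken connections when only the postsynaptic neuron is active.
rule = {(0, 0): 0, (1, 0): 0, (0, 1): -1, (1, 1): +1}
w = np.zeros((2, 3))
pre = np.array([0.8, 0.0, 0.3])
post = np.array([0.5, 0.0])
apply_discrete_hebbian_rule(w, pre, post, rule)
print(w)
```

In this reading, each candidate individual in the genetic algorithm encodes one such rule table, and its fitness is the reward accumulated while the rule adapts the ANN online during the task.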
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| 1904.01709.pdf | Open access | Refereed author's manuscript (post-print) | All rights reserved | 627.76 kB | Adobe PDF |
| evco_a_00286.pdf | Archive administrators only | Publisher's layout (editorial version) | All rights reserved | 1.17 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.