
Aspects of learning within networks of spiking neurons


Reference:

Carnell, A. R., 2008. Aspects of learning within networks of spiking neurons. Thesis (Doctor of Philosophy (PhD)). University of Bath.

Related documents:

Carnell_PHD.pdf (PDF, 13MB)

    Abstract

    Spiking neural networks have, in recent years, become a popular tool for investigating the properties and computational performance of large, massively connected networks of neurons. Equally interesting is the investigation of the potential computational power of individual spiking neurons. An overview is provided of current and relevant research into the Liquid State Machine, biologically inspired artificial STDP learning mechanisms, and aspects of the computational power of artificial recurrent networks of spiking neurons. First, it is shown that, using simple structures of spiking Leaky Integrate and Fire (LIF) neurons, a network n(P) can be built to perform any program P that can be performed by a general parallel programming language. Next, a form of STDP learning with normalisation is developed, referred to as STDP + N learning. The effects of applying this STDP + N learning within recurrently connected networks of neurons are then investigated. It is shown experimentally that, in very specific circumstances, Anti-Hebbian and Hebbian STDP learning may be considered approximately equivalent processes. A metric is then developed that can be used to measure the distance between any two spike trains. The metric is then used, along with STDP + N learning, in an experiment to examine the capacity of a single spiking neuron, receiving multiple input spike trains, to simultaneously learn many temporally precise input/output spike train associations. The STDP + N learning is further modified for use in recurrent networks of spiking neurons, giving the STDP + N Type 2 learning methodology. An experiment is devised which demonstrates that the Type 2 method of applying learning to the synapses of a recurrent network (effectively a randomly shifting locality of learning) can enable the network to learn firing patterns that the typical application of learning is unable to learn. The resulting networks could, in theory, be used to create the simple structures discussed in the first chapter of original work.
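
    The abstract names the STDP + N rule but does not state it. As a rough illustration of what STDP combined with weight normalisation can look like, the Python sketch below applies a standard pair-based STDP rule (exponential pre- and post-synaptic traces) to a single LIF neuron, then rescales the weight vector to a fixed total after every step. All parameter values, and the multiplicative normalisation itself, are illustrative assumptions rather than the thesis's STDP + N method.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative parameters (not taken from the thesis)
        dt = 1.0          # ms, simulation time step
        tau_m = 20.0      # ms, membrane time constant
        v_thresh = 1.0    # firing threshold (arbitrary units)
        v_reset = 0.0     # reset potential after a spike
        tau_pre = 20.0    # ms, pre-synaptic trace time constant
        tau_post = 20.0   # ms, post-synaptic trace time constant
        a_plus = 0.01     # potentiation step size
        a_minus = 0.012   # depression step size
        n_inputs = 50
        T = 1000          # number of time steps

        w = rng.uniform(0.0, 0.1, n_inputs)  # synaptic weights
        w_sum = w.sum()                      # target total for normalisation

        v = 0.0
        pre_trace = np.zeros(n_inputs)
        post_trace = 0.0

        for t in range(T):
            # Poisson input: each afferent spikes with probability 0.02 per ms
            pre_spikes = rng.random(n_inputs) < 0.02

            # Exponential decay of traces and membrane potential
            pre_trace *= np.exp(-dt / tau_pre)
            post_trace *= np.exp(-dt / tau_post)
            v *= np.exp(-dt / tau_m)

            # Integrate weighted input and update pre-synaptic traces
            v += w @ pre_spikes
            pre_trace[pre_spikes] += 1.0

            # Depression: a pre spike arriving soon after a post spike
            w[pre_spikes] -= a_minus * post_trace

            if v >= v_thresh:          # post-synaptic spike
                v = v_reset
                post_trace += 1.0
                # Potentiation: post spike following recent pre spikes
                w += a_plus * pre_trace

            # Normalisation step: clip to non-negative weights and rescale
            # to a fixed total, so potentiation at one synapse competes
            # with the weights of all the others
            np.clip(w, 0.0, None, out=w)
            w *= w_sum / (w.sum() + 1e-12)

        print("final weights (first 10):", np.round(w[:10], 4))

    The normalisation is what makes synapses compete: with the total weight held constant, strengthening one input necessarily weakens the others.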

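    The thesis likewise develops its own spike train metric, which is not reproduced here. For orientation, the sketch below implements the well-known van Rossum distance, one standard way of measuring how far apart two spike trains are: each train is filtered with a causal exponential kernel, and the L2 distance between the filtered signals is taken.

        import numpy as np

        def van_rossum_distance(train_a, train_b, tau=10.0, dt=0.1, t_max=None):
            """Distance between two spike trains (spike times in ms).

            Each train is convolved with a causal exponential kernel
            exp(-t / tau); the distance is the L2 norm of the difference
            between the two filtered signals.
            """
            if t_max is None:
                t_max = max(max(train_a, default=0.0),
                            max(train_b, default=0.0)) + 5 * tau
            t = np.arange(0.0, t_max, dt)

            def filtered(train):
                s = np.zeros_like(t)
                for spike in train:
                    mask = t >= spike
                    s[mask] += np.exp(-(t[mask] - spike) / tau)
                return s

            diff = filtered(train_a) - filtered(train_b)
            return np.sqrt(np.sum(diff**2) * dt / tau)

        # Identical trains give 0; the distance grows as spikes shift apart
        a = [10.0, 50.0, 90.0]
        b = [12.0, 55.0, 88.0]
        print(van_rossum_distance(a, a))  # 0.0
        print(van_rossum_distance(a, b))  # > 0

    A distance of this kind is zero for identical trains and grows smoothly as spikes shift in time; a metric with that property can serve as an error signal when training a neuron to reproduce a temporally precise target spike train.
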
    Details

    Item Type: Thesis (Doctor of Philosophy (PhD))
    Creators: Carnell, A. R.
    Uncontrolled Keywords: liquid state machine, spike time dependent plasticity (STDP), spiking neurons
    Departments: Faculty of Science > Computer Science
    Status: Published
    ID Code: 12159
