Subject: Introduction to Computational Neuroscience (VU-CSC 323)
In simple terms, a neuron model is a mathematical formula or a computer program that mimics how a biological brain cell (neuron) works. It is designed to take in information, process it, and decide whether to "fire" (send a signal) to other connected neurons.
The Basic Concept
Think of a single neuron as a calculator or a "smart" node in a network. It doesn't think, but it processes information using simple math:
- Inputs (x): Data it receives from other neurons.
- Weights (w): The importance assigned to each input (some inputs are more important than others).
- Activation Function: A rule that decides if the combined input is strong enough to trigger an output, often an "all-or-nothing" pulse (also called a spike or action potential).
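The inputs, weights, and activation function above can be sketched in a few lines of code. This is a minimal illustration, not part of the course material; the input values, weights, and threshold are made-up example numbers.

```python
# A minimal artificial neuron: a weighted sum of inputs passed through
# an all-or-nothing (step) activation function.
# The inputs, weights, and threshold are made-up example values.

def neuron(inputs, weights, threshold=1.0):
    """Return 1 ("fire") if the weighted input sum reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))  # combine inputs by importance
    return 1 if total >= threshold else 0

# Three inputs; the second one carries the most weight.
print(neuron([1, 1, 0], [0.2, 0.9, 0.5]))  # weighted sum 1.1 >= 1.0, so it fires: 1
print(neuron([1, 0, 0], [0.2, 0.9, 0.5]))  # weighted sum 0.2 < 1.0, stays silent: 0
```

Notice that the neuron itself does no "thinking": changing the weights changes which input patterns make it fire, and that is exactly what learning adjusts in artificial networks.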
1. Hodgkin–Huxley (HH) model
The HH model is like writing a full report on every visitor at the gate of our house:
- It tracks how sodium and potassium gates open and close.
- It calculates the exact voltage changes.
- Very realistic, but complicated.
Imagine NEPA (electric company) monitoring every single wire, transformer, and switch in Lagos. That’s HH — very detailed.
The Hodgkin–Huxley model is simply a way scientists explain how a nerve cell sends an electrical message by opening and closing those doors at the right time.
What Hodgkin and Huxley did
They didn't just say that doors open and close; they measured, counted, and described how fast each door opens and closes using mathematics.
So the model answers questions like:
- How strong is the nerve signal?
- How fast does it travel?
- Why does it rise suddenly and fall smoothly?
A very human analogy
Think of a stadium wave:
- People (ions) stand up (move) in a specific order.
- One section stands quickly (sodium).
- Another follows to calm things down (potassium).
- The wave travels smoothly around the stadium.
The Hodgkin–Huxley model explains the rules of that wave: nerve cells send electrical messages through the carefully timed opening and closing of tiny ion doors in the cell wall.
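To make the "full report at the gate" concrete, here is a minimal sketch of the HH equations integrated with simple Euler steps. The constants are the classic squid-axon values from the original model; the stimulus current, simulation length, and spike-counting rule are illustrative assumptions, not part of the lecture.

```python
import math

# A minimal Hodgkin–Huxley simulation (Euler integration).
# Constants are the classic squid-axon values (mV, mS/cm^2, uF/cm^2);
# the stimulus current and duration below are made-up examples.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Gate-rate functions: how fast each "door" opens (alpha) or closes (beta)
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Simulate one HH neuron; return how many spikes it fires."""
    V = -65.0                   # resting potential
    m, h, n = 0.05, 0.6, 0.32   # approximate resting gate values
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium: stands up quickly
        I_K = g_K * n**4 * (V - E_K)         # potassium: calms things down
        I_L = g_L * (V - E_L)                # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        if V > 0 and not above:              # count upward zero-crossings as spikes
            spikes += 1
        above = V > 0
    return spikes

print(simulate())  # a steady input current makes the neuron fire repeatedly
```

Note how much bookkeeping is needed even for one neuron: a voltage plus three gate variables, each with its own opening and closing rates. That is why HH is called "very realistic, but complicated."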
2. Leaky Integrate-and-Fire (LIF)
The Leaky Integrate-and-Fire (LIF) model is a simplified, intuitive way to describe how a neuron works, without heavy biology or math.
Example
Think of a neuron (a brain cell) as a little bucket that collects water (signals).
- Integrate: The bucket collects drops of water (incoming signals).
- Leaky: The bucket has a tiny hole, so water slowly drips out (the signal fades over time).
- Fire: When the bucket gets too full (reaches a threshold), it spills over — that’s the neuron sending a “spike” (an electrical signal).
The LIF model is like saying:
- “Forget the details. Just track whether the house voltage crosses a threshold.”
- If it does, fire a spike.
- Then reset.
How the LIF neuron behaves
- Integrate (collect input): Incoming signals increase the neuron’s voltage.
- Leak (lose charge over time): If no new input arrives, the voltage slowly drops back toward rest.
- Fire (spike): When the voltage reaches a threshold, the neuron fires a spike.
- Reset (refractory period): After firing, the voltage is reset and the neuron briefly cannot fire again.
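The integrate/leak/fire/reset loop above fits in a few lines of code. This is a hedged sketch, not a prescribed implementation: the time constant, threshold, and input drive are made-up example values.

```python
# A tiny leaky integrate-and-fire neuron: the "leaky bucket" in code.
# All numbers (time constant, threshold, input drive) are made-up examples.

def lif(inputs, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Run an LIF neuron over a list of input currents; return spike times."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(inputs):
        # Integrate the input while the voltage leaks back toward rest
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:    # fire...
            spikes.append(t)
            v = v_reset      # ...then reset
    return spikes

# Constant drive of 0.15 per step: the bucket fills, spills, resets, refills
print(lif([0.15] * 30))
```

Compare this with the HH model: one variable instead of four, no ion channels at all, yet it still captures the essential fill-up-and-spike behavior.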
Why It’s Useful
Scientists use the LIF model because it’s:
- Simple (easy math).
- Realistic enough (captures the main behavior of neurons).
- Efficient (good for simulating big networks of brain cells).
3. Artificial Models
These are simplified versions used in Artificial Intelligence (machine learning). They are less about biology and more about computing and solving problems (like recognizing images).
- Inspired by the brain but simplified.
- Made of nodes (neurons) connected in layers.
- Used to learn patterns from data — like recognizing faces, translating languages, or predicting prices.
They are the foundational mathematical models used in artificial neural networks (ANN) to simulate the functioning of biological neurons. They are computational, non-biological components that receive, process, and transmit information, acting as the building blocks of AI systems.
Types
1. Feedforward Neural Network: the basic model, where data flows one way (input → output).
2. Convolutional Neural Network (CNN): used for image and video recognition.
3. Recurrent Neural Network (RNN): used for sequences like speech or text.
4. Transformer Models: powerful for language tasks (used in ChatGPT, translation tools, etc.).
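A feedforward network (type 1 above) is just the simple neuron from earlier, stacked into layers. The sketch below shows a forward pass only; the layer sizes and weights are made-up numbers, not trained values.

```python
import math

# A minimal two-layer feedforward network (forward pass only), showing how
# artificial neurons stack into layers. Weights are made-up, untrained values.

def sigmoid(x):
    """A smooth activation function that squashes any number into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each output neuron: sigmoid of a weighted sum of all the inputs."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Hidden layer: 2 inputs -> 2 neurons; output layer: 2 neurons -> 1
    h = layer(x, [[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2])
    y = layer(h, [[1.2, -0.7]], [0.05])
    return y[0]

print(forward([1.0, 0.0]))  # a single number between 0 and 1
```

Data flows strictly one way, input → hidden → output, which is exactly what "feedforward" means; CNNs, RNNs, and Transformers add extra structure on top of this same idea.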