What is Machine Learning?

Machine Learning is a statistical science whose goal is to find the statistical regularities in an environment and to build a system that performs as well as, or better than, a physical system would have performed in that environment.

Just as any intelligent living being needs to be aware of its environment in order to learn, a Machine Learning (ML) system also needs to understand its environment to learn these regularities. We provide this information to the ML system as a set of vectors called the input pattern vectors. Input pattern vectors are a subset of the feature space, and the feature space is a vector space containing all the events in the environment in a transformed representation. Such feature transformations are important because in many cases they reduce the dimensionality of the original vector space.

We call the output or action of the ML system on the environment the output pattern vector, and the output we expect from the ML system the desired output vector or desired output response.

Now we have the inputs to the system and we know what to expect out of it. So how do we know if the system is working as expected? One good way would be to find the difference between the output of our system and the desired output. Had it been a static system, things would be simple and you would only have to do literally what the previous sentence says. But in a dynamic system, things are a bit different.

So for the time being, let’s consider our ML system to be a black box. Let’s also assume that the system’s action is determined by a parameter vector \theta. Given an input pattern vector \mathbf{s}, we can then write our ML system’s output response as \hat{y}(\mathbf{s}, \theta).

Here comes the idea of the empirical risk minimization framework. Given the input and output vectors, we can define a loss function that measures the error between the predicted response and the desired response; its expected value over the true data distribution is called the true risk. But in real-world scenarios we never have access to the whole population of data. So we assume that the data we have at hand follows the same distribution as the population, and we approximate the true risk by an average over this sample, hence the term empirical. We then try to find a function that minimizes this risk (error) between the output response and the desired response. This process is called empirical risk minimization.

So if \mathbf{c}(\mathbf{s}, \theta) is the loss function that computes the error between the predicted response and the desired response, we can define our empirical risk function \hat{l}_n (\theta) as

\displaystyle \hat{l}_n (\theta) = \frac{1}{n} \sum_{i=1}^n \mathbf{c}(\mathbf{s}_i, \theta)
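As a concrete sketch of this definition, suppose the loss is the squared error of a one-parameter linear model \hat{y}(s, \theta) = \theta s. The model, the data and the loss choice here are my own assumptions for illustration, not part of the framework above:

```python
import numpy as np

def loss(s, y, theta):
    """Squared-error loss c for one sample, using the model y_hat = theta * s.
    (The desired response y is written explicitly here, although the text
    folds it into c(s, theta).)"""
    return (theta * s - y) ** 2

def empirical_risk(S, Y, theta):
    """l_hat_n(theta): the average loss over the n samples at hand."""
    return np.mean([loss(s, y, theta) for s, y in zip(S, Y)])

S = np.array([1.0, 2.0, 3.0])
Y = 2.0 * S                       # desired responses; the best theta is 2.0
print(empirical_risk(S, Y, 2.0))  # 0.0 at the optimal parameter
print(empirical_risk(S, Y, 1.0))  # larger risk away from the optimum
```

Note that the risk is averaged over the sample we have, not over the whole population; that is exactly what makes it "empirical".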

Our objective here is to find a \theta that minimizes the above function. To start with, we give \theta a random value, call it \theta_{0}, and compute the loss function. By monitoring the loss function we can see whether we are getting closer to our optimal \theta. The change in the loss function with respect to \theta can be calculated by taking its derivative, i.e. \displaystyle \frac{d\hat{l}_n(\theta)}{d\theta}.

So given an initial parameter \theta_{0} (remember, we choose this value), we can compute the parameter at iteration k+1 as

\displaystyle \theta_{k+1} = \theta_{k} - \gamma_{k} \frac{d\hat{l}_n(\theta_k)}{d\theta}

where \gamma_k is called the learning rate (the iteration index is written as k to avoid confusion with the sample size n).

This idea is called the method of gradient descent and is the essence of a huge number of practical machine learning algorithms.
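A minimal sketch of the update rule above, again for an assumed one-parameter least-squares problem (the data, the starting value and the fixed learning rate are illustrative choices, not prescriptions):

```python
import numpy as np

S = np.array([1.0, 2.0, 3.0, 4.0])   # input pattern vectors (scalars here)
Y = 3.0 * S                          # desired responses; the best theta is 3.0

def risk_gradient(theta):
    """Derivative of l_hat_n(theta) = mean((theta*s - y)^2) w.r.t. theta."""
    return np.mean(2.0 * (theta * S - Y) * S)

theta = 0.0      # theta_0: the arbitrary starting value we choose
gamma = 0.05     # learning rate, kept constant here for simplicity
for _ in range(100):
    theta = theta - gamma * risk_gradient(theta)  # theta_{k+1} = theta_k - gamma * grad

print(round(theta, 4))  # converges to 3.0
```

Each step moves \theta against the gradient of the empirical risk, so the risk decreases until \theta settles near the minimizer.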

Public Datasets for Neuroscience Research

Here is a list of publicly available datasets for your next neuroscience project:

PS: I will try to update this list as I find more resources.

What is Computational Neuroscience?

It is not uncommon for me to be asked ‘What is Computational Neuroscience?’, as I’m pursuing a degree in the Computational Modelling track of Applied Cognitive Neuroscience, and most people I have come across outside this domain have hardly heard of such a field. Explaining what it is in layman’s terms is challenging, as the field itself is sparsely defined, and reading about it on Wikipedia might not help everyone either. It took me a while myself to understand what the field actually is. In this blog post I will try to explain what Computational Neuroscience is, from what I have understood. Maybe I will come up with a better explanation in the future as I go deeper.

Although the name Computational Neuroscience may have emerged only recently, its central motivation goes back to somewhere between the late 1940s and the 1980s, during the rise of conventional AI. In the most flamboyant language, Computational Neuroscience can be called an interdisciplinary field spanning Neuroscience, Cognitive Science, Computer Science and Psychology. In more abstract language, we can define Computational Neuroscience as a field that aims to explain the neural mechanisms behind the cognitive abilities of living beings by developing quantitative models (neural networks) of those mechanisms; in other words, it is a methodological branch that exploits the resourcefulness of computational research to explain how neural structures achieve their effects.

One widely held belief that has contributed to the development of this field is that understanding how the human mind works will help us design machines that can outperform today’s computer programs for visual recognition, language processing and artificial intelligence. And there is the other half of the field, trying to explain how the brain works by taking inspiration from Computer Science.

So is the brain a computer? The answer is yes and no. Though the brain was the motivation for most computer architectures in the early half of the computer revolution, today, as I said in the previous paragraph, we are coming the other way around, trying to explain the functioning of our nervous system in computational terms. But then why can’t we accept the brain as a computer? There are far more discrepancies than similarities between a brain and a computer. Unlike manufactured computers, our brains are highly plastic: they grow, develop, learn and change. And though we have managed to develop processors with higher raw performance than our brain, even the most expensive supercomputer fails before the efficiency of the massively parallel processing of the human mind (our neurons communicate on the timescale of milliseconds, whereas the newest computers switch on the timescale of nanoseconds and picoseconds).

The study of the brain within Computational Neuroscience is loosely structured by Marr’s levels of analysis. Marr defined three levels for any system that performs computations. The topmost level is (1) the computational level, where we analyze the abstract problem and break down its main components; underneath it comes (2) the algorithmic level, where we define the formal procedures; and then the last level, (3) the hardware level, or the physical implementation of the computation.

We know what happens, and how it happens, millions and billions of miles away from our planet Earth, but not yet how our own mind works. Unlike in many other branches, neither a pure bottom-up approach nor a pure top-down approach is going to be helpful here, which makes this both the most challenging and the most exciting field to be in right now.

Getting Started in Computational Neuroscience

The demand for Computational Neuroscientists has been on the rise since the inception of the BRAIN Initiative and the Human Connectome Project. And today, getting into a graduate program is not the only way to kickstart your journey in Computational Neuroscience. Even if your plan is to get into a graduate program, I would strongly recommend auditing the following MOOCs:

  1. Computational Neuroscience (Coursera): The most comprehensive introductory course on Computational Neuroscience available on the internet. The course starts from the basics of neurobiology and takes you all the way up to building learning algorithms.
  2. Machine Learning by Andrew Ng (Coursera): This one requires no explanation. If you have used Coursera or have googled about Machine Learning, you must have come across this name at least once. Andrew Ng is a co-founder of Coursera, and his Machine Learning course, the first ever offered on the platform, remains one of the most taken and most recommended courses there.
  3. Neural Network Mathematics (Coursera): Get ready to bleed. Once you complete this course, be ready to call yourself an expert in Machine Learning.
  4. Deep Learning by Google (Udacity)
  5. Principles of fMRI 1 & fMRI 2

Other resources:

  1. [Ebook] Python in Neuroscience (Frontiers in Neuroinformatics)

Prefrontal Modulation of Visual Processing In Humans

This is the transcript of the talk I gave in the Systems Neuroscience class at UT Dallas.

Disclaimer: The paper being discussed in this post is not my work. You can download the original paper from here.

After reading this paper, I was actually super excited to find that this study was only the tip of the iceberg.

Now, since I have got your ATTENTION, let me ask you: what is attention? Attention is defined as the cognitive and behavioral process of selectively concentrating on a discrete aspect of information. In this paper, we will be studying visual attention and how various neuronal mechanisms are affected by it.


Before going into the details of this paper, it is important to know the big picture: what problem is the entire field trying to solve? Understanding visual processing helps us understand visual attention. And an understanding of visual attention, when combined with an understanding of motor and auditory attention, helps us better understand working memory, learning and decision making, and ultimately human behavior. The evidence from this study for intra-hemispheric control of visual processing was in accordance with findings on intra-hemispheric control of auditory processing.

In addition to this, some studies have also demonstrated the plasticity of this region, which is now helping experts to optimize cognitive therapy techniques for rehabilitating patients with prefrontal cortical damage.

Now let’s look at summaries of some of the research done before this study.

  1. Visual attention modifies your sensory inputs to improve your perception
  2. There is a correlation between attention and changes in connectivity
  3. Attention affects the sensitivity of neuronal populations

Though all these studies were important for understanding visual attention, they failed to consider the temporal dynamics of the prefrontal-extrastriate interaction.

And the solution the authors proposed was to include theory-based behavioral testing along with physiological recording techniques.

So, what exactly were the authors trying to answer with this specific research? Well, their goal was to find anatomical, electrophysiological and behavioral evidence for their hypothesis that the prefrontal cortex regulates neuronal activity in the extrastriate cortex during visual attention. They used the temporal resolution of ERPs, coupled with lesion analysis, to gather evidence supporting this hypothesis.

Now let’s look in detail what these researchers did.

For this experiment, 10 patients were selected who had a unilateral focal lesion of the dorsolateral prefrontal cortex, due either to a single stroke or to craniotomy. The patients were matched for age, sex and education with 10 controls free of neurological and psychiatric disease.

Each of the subjects so selected was seated in a chair 1.6 m away from a video monitor in a sound-attenuated chamber, instructed to fixate on a yellow crosshair and to press a button upon detecting randomly occurring targets embedded in streams of task-irrelevant stimuli directed to both visual hemifields. If the subject responded within 300-800 ms after the target was presented, it was counted as a hit; otherwise, a miss.


While the subject was performing the task, brain electrical activity was recorded from electrodes placed at these following sites: Fp1, Fpz, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1 & O2

Target-related mean and peak amplitudes of the ERP components were measured relative to a 200 ms prestimulus baseline. The mean ERP amplitudes so obtained were subjected to a series of ANOVAs with group (patients, controls) as the between-subject factor and visual hemifield and electrode position as the repeated-measures factors.
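As a rough sketch of what measuring amplitudes "relative to a 200 ms prestimulus baseline" means in practice (the sampling rate, epoch length and component window below are my own illustrative assumptions, not values from the paper):

```python
import numpy as np

fs = 250                                  # assumed sampling rate, Hz
n_base = int(0.200 * fs)                  # samples in the 200 ms baseline

rng = np.random.default_rng(0)
# One simulated single-channel epoch: 200 ms before + 600 ms after onset.
epoch = rng.normal(loc=1.5, scale=0.2, size=n_base + int(0.600 * fs))

baseline = epoch[:n_base].mean()          # mean prestimulus voltage
corrected = epoch - baseline              # amplitudes relative to baseline

# Mean amplitude over an assumed 300-500 ms post-stimulus component window:
w0 = n_base + int(0.300 * fs)
w1 = n_base + int(0.500 * fs)
mean_amplitude = corrected[w0:w1].mean()
```

These baseline-corrected mean amplitudes, computed per subject and electrode, are what would then enter the ANOVAs.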

The researchers also obtained difference waveforms by subtracting the standard ERPs from the target ERPs and performed a finer temporal analysis of the attention effects on these difference waves.




From this study, it was observed that controls had a hit rate of 93.9% and patients 82.3%. An interaction between group and field of presentation showed a target detection deficit in the contralesional field in patients.


The reaction time was also prolonged in the patients.

When the subjects were inattentive, both patients and controls showed the normal pattern of larger early (P1) ERPs over TOc and TOi. Here TOi stands for temporo-occipital ipsilesional and TOc for temporo-occipital contralesional.


But for standard stimuli, the P1 response over TOi was found to be reduced; the N1 response, however, remained unaffected.


And for target stimuli, the N2 components were abolished and the P3b response was reduced. If you don’t know what a P3b is, the only thing you need to understand is that the more unexpected a stimulus is, the larger the P3b response it evokes.

So the two messages I want you to take away from this post are:

  1. Prefrontal cortex damage reduces your visual discrimination ability and hence attention
  2. Studying prefrontal cortex damage complements our current knowledge about information processing in humans, and thereby helps us build better models for Computer Vision

On Publishing Research Articles

These are the notes I scribbled down in my notepad, in the same order as they were mentioned (so some points might be misplaced), from the panel discussion that took place at the University of Texas at Dallas as part of its Graduate Professional Week 2016.

Ingredients for Publishable Article:

— Have a subject worth writing about
— Study something novel or new methodology or both
— Tell a nice story — Where this fits?
— In the introduction, explain why the study is important
— Abstract should have a great appeal to general audience

— Good Knowledge of the field
— Find the primary audience of the journal before you start writing the article, and address it accordingly

Advice for the 1st Research Article:

— Be ready to be rejected (expect high rejection). Don’t take it personally; rejection is the norm
— Write every day. Get into the habit of writing
— Don’t publish the 1st one, publish the 2nd one. You can publish the 1st one later (pun)
— Writing a good paper is difficult. You can/will/should improve with time
— Break the process down and take one step at a time (creating figures, making charts, creating the story-line)
— Get organized with references (make use of RefWorks)
— Keep plagiarism in check
— Feedback is important. Try to get adequate feedback before sending your paper to journals
— Have multiple levels of feedback (peers/colleagues, mentors)
— Take comments from editors seriously
— Revise your paper. Revision is key
— Have a fabulous abstract/introduction/1st paragraph
— Don’t submit sloppy work. It affects your reputation as well as your institute’s
— Use RefWorks [UTD students have lifelong free access]
— Accept criticism
— Start writing only after understanding the work completely

Problems students face while trying to write the article:

— Perfectionism
— Procrastination

Advice for finding the right journals:

— Read & Read more
— 1 day of reading = 1 week at the lab
— Read what people are writing about in your field, understand where the field is going
— The more you read, the better writer you become
— Keep up with “methods”
— Look at citations to find its place in your field
— Find who else is working in your field
— The most cited journal in your bibliography is the best journal to publish your article in
— Writing has to be specific as well as generic; it should make sense to experts and newbies alike

For Read & Review sessions:

— Always thank your reviewer
— Never take an argumentative tone
— Spend time to understand the reviewer’s comment

Contents from the handout

Books on Academic Writing:

  1. Academic Writing: A Handbook for International Students by Stephen Bailey
  2. Destination Dissertation: A Traveler’s Guide to a Done Dissertation by Sonja K Foss
  3. From Inquiry to Academic Writing: A Practical Guide by Stuart Greene
  4. Academic Writing and Publishing: A Practical Guide by James Hartley
  5. Handbook for Academic Authors by Beth Luey
  6. Academic Writing: A Guide for Management Students & Researchers by Mathukutty M Monippally 
  7. Handbook of Academic Writing: A Fresh Approach by Rowena Murray
  8. How to Write A Lot: A Practical Guide to Productive Academic Writing by Paul J Silvia
  9. Academic Writing for Graduate Students: Essential Tasks and Skills by John M Swales
  10. Stylish Academic Writing by Helen Sword 

Panelists:
  • Dr. Marion Underwood, Dean of Graduate Studies
  • Dr. Ellen Safley, Dean of Libraries 
  • Dr. Yves Chabal, Professor of Material Science & Engineering
  • Dr. Julia Chan, Professor of Chemistry
  • Dr. Frank Dufour, Professor of Arts and Technology
  • Dr. John Goosh, Associate Dean of Graduate Studies
  • Dr. Shayla Holub,  Associate Professor of Developmental Psychology 
  • Dr. Alex Piquero, Associate Dean of Graduate Programs
  • Dr. Karen Prager, Professor of Psychology
  • Dr. Sumit Sarkar, Professor of Information Systems 

Why this Blog?

Ideas are worthless unless they can be communicated clearly and persuasively to others, and the best place to communicate them is the internet. I believe in an open culture where people from different disciplines and different nations share their ideas and contribute to each other’s work. In this fast, ever-changing world, the best way to accelerate your research is to keep your work open and accessible to everyone, so that others can learn from, share and comment on it!

This blog is a scientific journal of my learnings, findings, ideas and research work in the field of Computational Neuroscience. Other than my own ideas and work, I will also be sharing my reviews of things that excite me!