Category: A Resource

Neural Networks and Python Code: Be Careful with the Array Indices!

Our Special Topics class on Deep Learning (Northwestern University, Master of Science in Predictive Analytics program, Winter 2017) starts off with very basic neural networks: the backpropagation learning method applied to the classic X-OR problem. I’m writing Python code to go with this class, and the result by the end of the quarter should be five to six solid pieces of code, involving either the backpropagation or Boltzmann machine learning algorithm, with various network configurations. The following figure shows the dependence of…
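
As a flavor of what the class code involves, here is a minimal sketch of backpropagation on X-OR (illustrative only, not the course code: the 2-2-1 network size, learning rate, and variable names are my own choices). The comments track the array shapes at every step, since mismatched indices are the usual failure mode:

```python
import numpy as np

# Minimal backprop-on-XOR sketch (illustrative; not the course code).
rng = np.random.default_rng(seed=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs, shape (4, 2)
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets, shape (4, 1)

W1 = rng.normal(scale=0.5, size=(2, 2))  # input -> hidden weights
b1 = np.zeros((1, 2))
W2 = rng.normal(scale=0.5, size=(2, 1))  # hidden -> output weights
b2 = np.zeros((1, 1))

eta = 0.5  # learning rate
for epoch in range(20000):
    # Forward pass: keep the shapes straight at every step.
    h = sigmoid(X @ W1 + b1)    # hidden activations, shape (4, 2)
    out = sigmoid(h @ W2 + b2)  # network outputs,   shape (4, 1)

    # Backward pass: squared-error loss with sigmoid derivatives.
    delta_out = (out - y) * out * (1.0 - out)     # shape (4, 1)
    delta_h = (delta_out @ W2.T) * h * (1.0 - h)  # shape (4, 2)

    # Gradient-descent weight updates.
    W2 -= eta * h.T @ delta_out
    b2 -= eta * delta_out.sum(axis=0, keepdims=True)
    W1 -= eta * X.T @ delta_h
    b1 -= eta * delta_h.sum(axis=0, keepdims=True)

# If training converged, outputs approach the X-OR targets [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 3))
```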

Read More

Deep Learning: The First Layer

It’s been something of a challenging week. We’ve kicked off our new PREDICT 490 Special Topics Course in Deep Learning at Northwestern University. I’ve got a full house; there’s been a waiting list since Thanksgiving, and everyone is acutely aware of the business climate surrounding deep learning. However (and I’m not terribly surprised here), most people who want to learn Deep Learning (DL) don’t yet have a solid foundation in neural networks. Thus, what we’re really doing is…

Read More

Approximate Bayesian Inference

Variational Free Energy

I spent some time trying to figure out the derivation for the variational free energy, as expressed in some of Friston’s papers (see citations below). While I had made an intuitive justification, I just found this derivation (Kokkinos; see the reference and link below). Other discussions about variational free energy put it this way: whereas maximum a posteriori methods optimize a point estimate of the parameters, in ensemble learning an ensemble is optimized, so that it approximates the entire posterior probability distribution…
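
For reference, the standard decomposition that such derivations arrive at, written here in my own notation (which may differ from Kokkinos’s), expresses the variational free energy of an approximating distribution q(θ) as:

$$
F(q) \;=\; \int q(\theta)\,\ln\frac{q(\theta)}{p(\theta, D)}\,d\theta
\;=\; D_{\mathrm{KL}}\big(q(\theta)\,\|\,p(\theta \mid D)\big) \;-\; \ln p(D).
$$

Minimizing F over q thus drives q toward the full posterior (the ensemble-learning view above) while tightening the bound $-\ln p(D) \le F$.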

Read More

Brain-Based Computing: Foundation for Deep Learning

Three Key Brain Strategies Used in Deep Learning for Artificial Intelligence

References for Brain-Based Computing (Methodologies for Deep Learning and Artificial Intelligence):

Maren, A.J. (2015). How the Brain Solves Tough Problems. In Making Sense: Extracting Meaning from Text by Matching Entities and Terms to Ontologies and Concepts, Chapter 2 Draft (Dec. 31, 2015). (PDF)

Maren, A.J. (2015). Brain-Based Computing. (PPT slide deck)

Brain Networks and the Cluster Variation Method: Testing a Scale-Free Model

Surprising Result Modeling a Simple Scale-Free Brain Network Using the Cluster Variation Method

One of the primary research thrusts that I suggested in my recent paper, The Cluster Variation Method: A Primer for Neuroscientists, was that we could use the 2-D Cluster Variation Method (CVM) to model the distribution of configuration variables in different brain network topologies. Specifically, I was expecting that the h-value (which measures the interaction enthalpy strength between nodes in a 2-D CVM grid) would change in a…

Read More

The Cluster Variation Method: A Primer for Neuroscientists

Single-Parameter Analytic Solution for Modeling Local Pattern Distributions

The cluster variation method (CVM) offers a means for characterizing both 1-D and 2-D local pattern distributions. The paper referenced at the end of this post provides neuroscientists and BCI researchers with a CVM tutorial that will help them understand how the CVM statistical thermodynamics formulation can model 1-D and 2-D pattern distributions expressing structural and functional dynamics in the brain. The equilibrium distribution of local patterns, or configuration…
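
To make "configuration variables" concrete, here is a toy tally of local-pattern fractions in a 1-D binary activation pattern (purely illustrative: the paper defines its precise configuration variables and their analytic equilibrium solution, which this sketch does not reproduce):

```python
import numpy as np
from collections import Counter

# Illustrative only: tally simple local-pattern fractions in a 1-D binary
# pattern. The names x, y, z follow the spirit of CVM configuration
# variables (units, nearest-neighbor pairs, triplets) but are not the
# paper's formal definitions.

rng = np.random.default_rng(seed=1)
pattern = rng.integers(0, 2, size=1000)  # random binary "activation" pattern
n = len(pattern)

# x: fraction of units in each state (active / inactive)
x_frac = {state: count / n for state, count in Counter(pattern.tolist()).items()}

# y: fraction of nearest-neighbor pairs in each configuration (00, 01, 10, 11)
pairs = list(zip(pattern[:-1], pattern[1:]))
y_frac = {pair: count / (n - 1) for pair, count in Counter(pairs).items()}

# z: fraction of triplets, the next local pattern in the 1-D hierarchy
triplets = list(zip(pattern[:-2], pattern[1:-1], pattern[2:]))
z_frac = {t: count / (n - 2) for t, count in Counter(triplets).items()}

print("x:", x_frac)
print("y:", y_frac)
print("z:", z_frac)
```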

Read More

Making Sense: Extracting Meaning from Text

Making Sense: Extracting Meaning from Text by Matching Terms and Entities to Ontologies and Concepts

Text analytics is the means by which computer algorithms extract meaning and useful insights from raw text sources. This can have enormous impact in realms such as marketing, business intelligence, and political campaigns. However, text analytics is one of the toughest challenges in predictive analytics. Why is it so hard? Because, when done right, text analytics must effectively connect…
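
As a toy illustration of that connection step (the ontology, surface forms, and example sentence below are all invented; real pipelines layer tokenization, entity recognition, and disambiguation on top of this idea), term-to-concept matching can be sketched as:

```python
# Illustrative sketch: map surface terms found in text onto concepts in a
# toy, invented ontology. Not a production text-analytics pipeline.

ontology = {
    "PoliticalParty": {"democratic party", "republican party", "gop"},
    "Election": {"election", "primary", "caucus"},
    "Candidate": {"candidate", "nominee"},
}

def match_terms(text: str) -> dict:
    """Return concepts whose known surface forms appear in the text."""
    lowered = text.lower()
    matches = {}
    for concept, surface_forms in ontology.items():
        hits = sorted(form for form in surface_forms if form in lowered)
        if hits:
            matches[concept] = hits
    return matches

print(match_terms("The GOP nominee swept the primary."))
# -> {'PoliticalParty': ['gop'], 'Election': ['primary'], 'Candidate': ['nominee']}
```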

Read More

Novelty Detection in Text Corpora

Detecting Novelty Using Text Analytics

Detecting novel events (new words signaling new events) is one of the most important text analytics tasks, and is an important step toward predictive analytics using text mining. On July 24, 2015, The New York Times (and many other news sources) published an article identifying potential inclusion of classified information in the emails which Hillary Clinton had sent via private email and stored on her private email server. How would we use text…
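
A minimal sketch of the lexical side of this task (the texts below are invented, and a production system would use dated corpora, stop-word filtering, and significance statistics rather than a bare set difference): flag terms in an incoming document that never appear in the baseline corpus.

```python
from collections import Counter

# Illustrative lexical novelty detection: which terms in a new document
# are unseen in the baseline corpus? All texts here are invented.

baseline_docs = [
    "officials reviewed routine correspondence on the public record",
    "the department archives official correspondence for the record",
]
new_doc = "officials found classified material on a private email server"

def vocab(texts):
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

baseline = vocab(baseline_docs)
incoming = vocab([new_doc])

# Novel terms: present in the new document, absent from the baseline.
novel = sorted(term for term in incoming if term not in baseline)
print(novel)
# -> ['a', 'classified', 'email', 'found', 'material', 'private', 'server']
```

Note that function words such as "a" slip through, which is exactly why real novelty detectors filter stop words and weight candidate terms statistically.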

Read More

Brain-Computer Interfaces, Kullback-Leibler, and Mutual Information: Case Study #1

In the previous blogpost, I introduced the Kullback-Leibler divergence as an essential information-theoretic tool for researchers, designers, and practitioners interested not just in Brain-Computer Interfaces (BCIs), but specifically in Brain-Computer Information Interfaces (BCIIs). The notion of Mutual Information (MI) is also fundamental to information theory, and it can be expressed in terms of the Kullback-Leibler divergence. Mutual Information is given as:

$$
I(x,y) \;=\; \sum_{x,y} p(x,y)\,\ln\frac{p(x,y)}{p(x)\,p(y)} \;=\; D_{\mathrm{KL}}\big(p(x,y)\,\|\,p(x)\,p(y)\big)
$$

where I(x,y) is the mutual information of two…
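
To see the MI-as-KL identity numerically, here is a short sketch (the joint distribution is invented for demonstration):

```python
import numpy as np

# Mutual information as the KL divergence between the joint p(x, y)
# and the product of its marginals p(x) p(y). Joint table is invented.

p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])  # joint distribution; rows: x, cols: y

p_x = p_xy.sum(axis=1, keepdims=True)  # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)  # marginal p(y)

# I(x, y) = sum over x, y of p(x, y) * ln( p(x, y) / (p(x) p(y)) ), in nats
mi = np.sum(p_xy * np.log(p_xy / (p_x * p_y)))
print(f"I(x, y) = {mi:.4f} nats")
```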

Read More

The Single Most Important Equation for Brain-Computer Information Interfaces

The Kullback-Leibler Divergence Equation for Brain-Computer Information Interfaces

The Kullback-Leibler equation is arguably the best place to start our thinking about information theory as applied to Brain-Computer Interfaces (BCIs), or Brain-Computer Information Interfaces (BCIIs). The Kullback-Leibler equation is given as:

$$
D_{\mathrm{KL}}\big(P\,\|\,Q\big) \;=\; \sum_{x} p(x)\,\ln\frac{p(x)}{q(x)}
$$

We seek to express how well our model of reality matches the real system. Or, just as usefully, we seek to express the information difference when we have two different models for the same underlying real phenomena or data. The K-L…
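
A quick numerical sketch of the equation (the distributions are invented; P plays "reality" and Q the model of it):

```python
import numpy as np

# KL divergence between an observed distribution P and a model Q, in nats.
p = np.array([0.50, 0.30, 0.20])  # "reality": observed distribution
q = np.array([0.40, 0.40, 0.20])  # our model of reality

d_kl = np.sum(p * np.log(p / q))
print(f"D_KL(P || Q) = {d_kl:.4f} nats")  # zero only when the model matches exactly
```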

Read More