Abstract: In computational complexity theory, Håstad's switching lemma is a vital analytical tool for proving lower bounds on the size of constant-depth Boolean circuits. Using the switching lemma, Håstad showed in 1986 that Boolean circuits of depth $k$, in which only AND, OR, and NOT gates are allowed, require size $\exp\left(\Omega\left(n^{\frac{1}{k-1}}\right)\right)$ to compute the PARITY function on $n$ variables. In essence, the switching lemma says that, given an arbitrary formula in disjunctive normal form, if we randomly fix all but a small fraction of the variables, then with high probability the restricted function can be computed by a decision tree of small depth. In 2012, Impagliazzo, Matthews, and Paturi formulated an extended switching lemma, which says that, given a sequence of formulas in conjunctive and/or disjunctive normal form over the same set of variables, if all of them are hit by the same random restriction, then it is exponentially unlikely that there is a large subset of formulas in which each formula contributes a large number of variables to their joint decision tree. In the first part of my talk, we will discuss the extended switching lemma and its proof with the help of an illustrative example.
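As a concrete illustration of the random-restriction step, the following sketch applies a $p$-random restriction to a DNF formula and simplifies it. The representation (a DNF as a list of terms, each term a list of `(variable, sign)` literal pairs) and the function name `restrict_dnf` are illustrative choices, not taken from any of the cited works; the sketch only models the restriction itself, not the decision-tree construction in the lemma's conclusion.

```python
import random

def restrict_dnf(terms, p, rng):
    """Apply a p-random restriction to a DNF formula.

    Each variable independently stays free with probability p and is
    otherwise fixed to 0 or 1 uniformly at random, then the formula is
    simplified under that assignment.

    terms: list of terms; each term is a list of (variable, sign) pairs,
           where sign=True means a positive literal.
    Returns (restricted_formula_or_constant, restriction), where the first
    component is True/False if the formula collapses to a constant.
    """
    variables = {v for term in terms for v, _ in term}
    rho = {}                              # the partial assignment
    for v in variables:
        if rng.random() >= p:             # variable gets fixed
            rho[v] = rng.random() < 0.5
    restricted = []
    for term in terms:
        new_term = []
        killed = False
        for v, sign in term:
            if v in rho:
                if rho[v] != sign:        # literal falsified: term vanishes
                    killed = True
                    break
                # literal satisfied: drop it from the term
            else:
                new_term.append((v, sign))
        if killed:
            continue
        if not new_term:                  # every literal satisfied: constant 1
            return True, rho
        restricted.append(new_term)
    if not restricted:                    # every term falsified: constant 0
        return False, rho
    return restricted, rho
```

With small $p$ (most variables fixed), repeated trials typically collapse the formula to a constant or to a formula depending on very few free variables, which is the qualitative behavior the lemma quantifies.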
In the second part of the talk, we will critique the neuroidal model for cognition proposed by Valiant. We will begin by discussing the motivation behind Valiant's work. We will then explore the physiology of the brain and some insights from cognitive psychology that informed Valiant's formulation of the neuroidal model. Next, we will describe the model itself in detail and, as a case study, present the algorithm Valiant puts forward for implementing unsupervised memorization within it. Finally, we will conclude by discussing the relevance of the neuroidal model to current efforts to build cognitive computing systems.