Our group specializes in theoretical and experimental studies of exciting and innovative research fields such as machine learning, extracellular and intracellular stimulation and recording of neurons in vitro, synchronization of neural networks and chaotic lasers, physical random number generators, and advanced protocols for secure communication.
Ultrafast physical random bit generation at rates of hundreds of Gb/s, with verified randomness, is a crucial ingredient in secure communication and has recently been achieved using optics-based physical systems. Here we examine the inverse problem and measure the ratio of information bits that can be systematically embedded in a random bit sequence without degrading its certified randomness. These ratios exceed 0.01 in experimentally obtained long random bit sequences. Based on these findings, we propose a high-capacity private-key cryptosystem with a finite key length, in which both the existence and the content of the communication are concealed in the random sequence. Our results call for a rethinking of the current quantitative definition of practical classical randomness, as well as of the measure of randomness generated by quantum methods, both of which must include bounds obtained using the proposed inverse information-embedding method.
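As a toy illustration of the inverse problem (not the scheme used in the paper), one can embed message bits at key-selected positions of a long random sequence, at a ratio far below 0.01, and then check that a standard frequency (monobit) test still does not distinguish the sequence from random. The function names and the position-selection scheme below are illustrative assumptions.

```python
import math
import random

def embed(bits, message, key_seed):
    """Overwrite bits at sparse, key-derived positions with message bits (toy scheme)."""
    rng = random.Random(key_seed)
    positions = sorted(rng.sample(range(len(bits)), len(message)))
    out = list(bits)
    for pos, m in zip(positions, message):
        out[pos] = m
    return out

def extract(bits, n_message, key_seed):
    """Recover the message using the same shared key to regenerate the positions."""
    rng = random.Random(key_seed)
    positions = sorted(rng.sample(range(len(bits)), n_message))
    return [bits[p] for p in positions]

def monobit_p_value(bits):
    """Frequency (monobit) test in the style of NIST SP 800-22."""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

n = 100_000
source = random.Random(0)
bits = [source.getrandbits(1) for _ in range(n)]
message = [1, 0, 1, 1, 0, 0, 1, 0]   # embedding ratio 8 / 100,000, well below 0.01
stego = embed(bits, message, key_seed=42)
recovered = extract(stego, len(message), key_seed=42)
p = monobit_p_value(stego)
```

Because the embedding positions are derived from the shared key, only a key holder can even locate the message, concealing its existence as well as its content; a real system would of course need the full battery of randomness tests, not a single monobit check.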
Recently, deep learning algorithms have outperformed human experts in various tasks across several domains; however, their characteristics are distant from current knowledge of neuroscience. The simulation results of the biological learning algorithms presented here outperform state-of-the-art optimal learning curves in supervised learning of feedforward networks. The biological learning algorithms comprise asynchronous input signals with decaying input summation, weight adaptation, and multiple outputs per input signal. In particular, the generalization error of such biological perceptrons decreases rapidly with an increasing number of examples and is independent of the input size. This is achieved either through synaptic learning or solely through dendritic adaptation, using a mechanism of swinging between reflecting boundaries, without discrete learning steps. The proposed biological learning algorithms outperform the optimal scaling of the learning curve of a traditional perceptron. They also confer considerable robustness to disparity between the weights of two networks with very similar outputs in biological supervised learning scenarios. The simulation results indicate the potency of neurobiological mechanisms and open opportunities for developing a superior class of deep learning algorithms.
Experimental and theoretical results reveal a new underlying mechanism for fast learning in the brain, dendritic learning, in contrast to decades of misdirected research in neuroscience based solely on slow synaptic plasticity. The presented paradigm indicates that learning occurs closer to the neuron, the computational unit; that dendritic strengths are self-oscillating; and that weak synapses, which comprise the majority of the brain and were previously assumed to be insignificant, play a key role in plasticity. These new learning sites in the brain call for a reevaluation of current treatments for disordered brain function, and for a better understanding of the chemical drugs and biological mechanisms that maintain, control, and enhance learning.