Our group specializes in theoretical and experimental studies of exciting and innovative research fields: machine learning; extracellular and intracellular stimulation and recording of neurons in vitro; synchronization of neural networks and chaotic lasers; physical random number generators; and advanced protocols for secure communication.

Recent Publications


Attempting to imitate the brain’s functionalities, researchers have bridged neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning (ML). Here, using neuronal cultures, we demonstrate that increased training frequency accelerates neuronal adaptation processes. We implemented this mechanism in artificial neural networks, where a local learning step size increases over coherent consecutive learning steps, and tested it on MNIST, a simple dataset of handwritten digits. In online learning with only a few handwritten examples, the brain-inspired algorithm achieved substantially higher success rates than commonly used ML algorithms. We speculate that this emerging bridge from slow brain function to ML will promote ultrafast decision making under limited examples, a reality in many aspects of human activity, robotic control, and network optimization.

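To illustrate the step-size mechanism described above, here is a minimal sketch, assuming the adaptation resembles a per-weight learning rate that grows while consecutive gradients stay coherent (same sign) and shrinks on a sign flip; the function name and constants are illustrative assumptions, not the published implementation.

```python
import numpy as np

def adaptive_step(w, grad, prev_grad, lr,
                  grow=1.1, shrink=0.5, lr_min=1e-4, lr_max=1.0):
    """One gradient step with per-weight learning rates that grow while
    consecutive gradients stay coherent (same sign) and shrink when the
    sign flips. All constants here are illustrative assumptions."""
    coherent = np.sign(grad) == np.sign(prev_grad)
    lr = np.clip(np.where(coherent, lr * grow, lr * shrink), lr_min, lr_max)
    return w - lr * grad, lr
```

In an online setting, the function would be called once per presented example, carrying the updated per-weight rates from step to step.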

Power-law scaling, a central concept in critical phenomena, is found to be useful in deep learning, where optimized test errors on handwritten-digit examples converge to zero as a power law of database size. For rapid decision making with one training epoch, in which each example is presented only once to the trained network, the power-law exponent increases with the number of hidden layers. For the largest dataset, the obtained test error was estimated to be in the proximity of state-of-the-art algorithms trained for large numbers of epochs. Power-law scaling addresses key challenges in current artificial intelligence applications and enables an a priori estimate of the dataset size needed to achieve a desired test accuracy. It establishes a benchmark for measuring training complexity and a quantitative hierarchy of machine learning tasks and algorithms.

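As an illustration of such an a priori estimate, the sketch below fits a power law ε(N) ≈ c·N^(−β) to test-error measurements by linear regression in log-log space and inverts it to predict the dataset size needed for a target error; the (N, error) values are synthetic placeholders, not the published results.

```python
import numpy as np

# Synthetic (N, test error) pairs; replace with real measurements.
N   = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
err = np.array([0.12, 0.075, 0.045, 0.028, 0.017])

# Fit err ~ c * N**(-beta) as a straight line in log-log space.
slope, intercept = np.polyfit(np.log(N), np.log(err), 1)
beta, c = -slope, np.exp(intercept)
print(f"power-law exponent beta ~ {beta:.2f}")

# Invert the fit: dataset size needed for a desired test error.
target = 0.01
N_needed = (c / target) ** (1 / beta)
print(f"estimated N for {target:.0%} test error ~ {N_needed:.0f}")
```
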
Refractoriness is a fundamental property of excitable elements, such as neurons, characterizing the probability of re-excitation at a given time lag, and is typically linked to the neuronal hyperpolarization following an evoked spike. Here we measured the refractory periods (RPs) in neuronal cultures and observed that the average anisotropic absolute RP could exceed 10 ms, with a tail extending to 20 ms, independent of the stimulation frequency over a large range. This is an order of magnitude longer than anticipated and comparable with the decaying membrane-potential time scale, and it is followed by a sharp rise time (relative RP) of merely ∼1 ms to complete responsiveness. Extracellular stimulations result in longer absolute RPs than solely intracellular ones, and a pair of extracellular stimulations from two different routes exhibits distinct absolute RPs, depending on their order. Our results indicate that a neuron is an accurate excitable element, where the diverse RPs cannot be attributed solely to the soma, and imply fast mutual interactions between different stimulation routes and dendrites. Further elucidation of neuronal computational capabilities and their interplay with adaptation mechanisms is warranted.

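A minimal sketch of how a response-probability curve of this kind can be estimated from paired stimulation and response records; the input arrays are hypothetical recordings, not the paper's data. Lags with near-zero probability correspond to the absolute RP, and the sharp rise marks the relative RP.

```python
import numpy as np

def response_vs_lag(stim_times, responded, bins):
    """Estimate response probability as a function of the lag since the
    previous stimulation. `stim_times` are stimulation times in ms and
    `responded` is a boolean array marking evoked spikes (hypothetical
    recordings). `bins` are lag-bin edges in ms."""
    lags = np.diff(stim_times)   # time since the previous stimulation
    hit = responded[1:]          # response to the later stimulation
    idx = np.digitize(lags, bins)
    return np.array([hit[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])
```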