For decades, one of the most fundamental questions in neuroscience remained unanswered: How does the brain transform raw information arriving from the eyes — lines, angles and contrasts — into a rich world of objects, space and meaning?
A new study published in the journal Science now provides the first direct evidence for a theory proposed in 1962. By mapping all inputs to a single neuron in the visual cortex with unprecedented precision, researchers validated the “feedforward model” developed by Nobel Prize winners David Hubel and Torsten Wiesel.
From light to the brain
Human vision begins well before an image is formed in the brain. At its earliest stage, visual information is nothing more than raw light, which must be converted and processed before it acquires meaning.
Light is detected in the retina at the back of the eye and converted into electrical signals — the brain’s internal language. From there, the information passes through an intermediate station known as the thalamus before reaching the visual cortex at the back of the brain.
The path toward understanding how the brain processes visual information has developed gradually over more than a century of research. In 1905, Japanese physician Tatsuji Inouye discovered that damage to the back of the brain led to loss of vision, identifying the visual cortex as critical for sight.
Only decades later did scientists begin to understand the biological mechanisms involved. By the 1950s, it was established that the brain is composed of neurons that communicate through electrical and chemical signals.
From dots to lines
A major breakthrough came in the early 1960s, when Hubel and Wiesel discovered that neurons in early stages of the visual system — in the retina and thalamus — respond to small points of light. In contrast, neurons in the visual cortex respond selectively to lines at particular orientations.
This marked the first time researchers could identify what is known as a neural “computation” — a situation in which a neuron responds to something different from its direct inputs.
The discovery raised a key question: How does the brain transform simple signals from small points of light into the perception of oriented lines?
Hubel and Wiesel proposed that this occurs because each neuron in the visual cortex receives multiple inputs from neurons detecting points arranged along a straight line. Over the decades, many theories were developed to explain this transformation, but direct proof remained out of reach.
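Hubel and Wiesel's wiring scheme can be illustrated with a toy simulation: a cortical-like unit that simply sums point detectors arranged along a line already prefers bars at the matching orientation. This is a hypothetical sketch for intuition, not the study's actual model or analysis; all function names and numbers here are invented for the example.

```python
import numpy as np

def bar_image(size, angle_deg):
    """Binary image of a one-pixel-wide bar through the center at the given angle."""
    ys, xs = np.mgrid[:size, :size] - size // 2
    angle = np.deg2rad(angle_deg)
    # Distance of each pixel from a line through the center at that angle.
    dist = np.abs(xs * np.sin(angle) - ys * np.cos(angle))
    return (dist < 1.0).astype(float)

def cortical_response(image):
    """Feedforward sum of point detectors arranged along the horizontal midline.

    Each pixel on the midline stands in for one thalamic 'point of light'
    detector; the cortical-like unit just adds up their activity.
    """
    mid = image.shape[0] // 2
    return image[mid, :].sum()

# The summed response is largest for the bar aligned with the detector row.
responses = {a: cortical_response(bar_image(21, a)) for a in (0, 45, 90)}
```

Because the point detectors lie along a horizontal row, a horizontal bar (0°) activates all of them at once, while a vertical bar (90°) activates only one — orientation selectivity emerges from the spatial arrangement of otherwise untuned inputs, which is exactly the prediction the new study set out to test.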
From hypothesis to direct evidence
The main obstacle was technical. To demonstrate how the transformation occurs, scientists needed to measure all the inputs a single neuron receives and how it integrates them. Each neuron in the visual cortex receives hundreds of inputs from the thalamus and thousands more from other brain regions.
Until recently, technology did not allow researchers to measure all these inputs simultaneously.
In the new study, researchers used advanced methods that made this possible for the first time. These included two-photon microscopy, which allows imaging at the level of individual synapses — the tiny connections between neurons — and genetically engineered proteins that emit light when they bind to glutamate, a key neurotransmitter.
This combination enabled scientists to observe, in real time, how neurons communicate within a living brain.
Over several days, the researchers mapped the input connections to a single neuron and identified nearly 90% of its active excitatory inputs. They were also able to distinguish which of those inputs originated in the thalamus.
What the study found
The findings showed that neurons in the visual cortex that are sensitive to orientation receive input from thalamic neurons that are not themselves tuned to orientation. In contrast, connections within the cortex were largely orientation-specific.
Crucially, the spatial arrangement of inputs matched the pattern proposed by Hubel and Wiesel, demonstrating that the brain combines inputs from multiple points to detect lines.
The study also identified distinct properties of thalamic synapses, including the absence of certain calcium signals, highlighting differences between thalamic and cortical inputs that are important for how the brain processes information and adapts over time.
While researchers noted that the feedforward model does not explain all aspects of visual processing, the results provide clear confirmation of its central prediction.
Looking deeper
The study reflects a broader trend of international collaboration in brain research and represents a significant methodological advance. Researchers emphasized that its main contribution lies not only in answering a long-standing question but also in providing a new tool for exploring brain function.
Rather than delivering immediate practical applications, the findings open the door to deeper investigation of how neurons operate and interact in the cortex.
Broader implications
Beyond the scientific breakthrough, the research touches on a deeper question: how the brain translates physical signals into conscious experience.
Scientists say that a better understanding of these processes could eventually help address a wide range of brain-related conditions, including neurodegenerative diseases such as Alzheimer’s and Parkinson’s, as well as psychiatric disorders like schizophrenia and depression.
Each new detail about the structure and function of the brain contributes to understanding these conditions.
The researchers describe the study as a milestone rather than a final answer — a demonstration that it is now possible to map all inputs to a single neuron and understand how they work together.
It also underscores the complexity of the brain, where the transformation from physical signals to perception remains one of the most profound challenges in science.