You are standing in Times Square, about to cross the street. As the walk sign switches from the orange hand signaling "Halt!" to a white figure in mid-stride, an ambulance, its siren blaring, speeds by; the aroma of honey-roasted nuts wafts from a nearby food vendor; and images of snowboarders and supermodels illuminate high-definition digital billboards.
We live in a chaotic environment in which we are constantly bombarded with new sensory information. Our brains must rapidly filter this information to prevent sensory overload. Although neuroscientists know where in the brain different types of sensory cues are processed (thanks to functional imaging techniques such as fMRI), the problem of how the brain sorts through all of this information—a process known as selective attention—is still a mystery.
For decades, neuroscientists have been developing models of selective attention. It is generally accepted that the brain extracts sensory information from its environment by constructing cognitive maps—internal representations of the experienced world—that are constantly updated by new experiences.
In this spotlight, we explore how the brain navigates the constant barrage of sensory input from its environment and the difficulties inherent in studying problems as fundamental as selective attention. The films presented in this series reveal the complexities of the brain and the extent to which its most fundamental algorithms remain unsolved.
The first film, Into Noise, documents Finnish film director Janna Kyllastinen's journey through the New York soundscape—from her Brooklyn rooftop to Finntown (a Brooklyn neighborhood where a large number of Finnish immigrants lived in the early 20th century), to a cemetery, the ocean, and the inside of a flotation tank—in search of silence. As Kyllastinen attempts to escape the noise of New York City, she delves into the science behind how sound is processed by the brain, asking why we perceive some sounds as sound and others as noise.
Returning to her homeland of Finland, Kyllastinen realizes that her attempts to find silence are in vain. “I have heard that everything in the world is made out of waves, that everything resonates with a certain frequency,” she says. Through the film’s soundtrack of amplified natural, urban, and industrial sounds played against the backdrop of Stravinsky’s Rite of Spring and a minimalist score of basses and percussion instruments, the audience, too, becomes aware of the endless layers of sounds that surround us. Kyllastinen’s quest for silence is really a quest for harmony, for the familiar sounds of her homeland.
Into Noise (Janna Kyllastinen, 2013)
In the second film in this series, director Jason Chew attempts to elucidate the poorly understood sensory experiences of a synesthete. Synesthesia is a neurological phenomenon in which stimulation of one sensory pathway leads to involuntary and simultaneous stimulation of another. The subject of Chew's short film, entitled Colorcondition, has chromesthesia, a form of synesthesia that causes him to perceive sounds as colors. Synesthesia, as he explains, is attributed to an overabundance of synaptic connections between brain regions that process sensory information.
Through scenes of his subject painting and playing "Simon Says," a game that involves repeating sequences of colored lights, Chew gives his audience a unique window into the world of a synesthete, depicting how different sensory modalities converge inside the brain to enhance sensory experiences.
Chew, like Kyllastinen, is intrigued by the concept of selective attention. In one scene, as Chew's subject walks down a street nestled between rows of brownstones and trees assuming their fall colors, a siren can be heard in the distance. Suddenly, the trees and brownstones go blurry, and the subject stops. As he turns around, his image fades and the camera focuses on the cars and pedestrians behind him. By manipulating the focus, Chew draws the audience's attention away from the subject, who is undergoing a powerful sensory experience, and toward the stimulus. The audience is left to wonder whether this sensory experience is sufficient to guide the subject's attention to the sound of the siren and circumvent some of the complex computations that govern selective attention.
One obstacle to understanding how the human brain filters out information is the lack of non-invasive tools for studying how large populations of neurons communicate with each other. While animal models have their value, no one really knows to what extent the human brain functions like that of other species. If the ultimate goal of neuroscience is to unlock the secrets of the human brain, then neuroscientists need to develop a toolset for studying it at the microcircuit level.
Colorcondition (Jason Chew & Rodrigo Valles, 2016)
In the final film, entitled Bluebrain: Markram's Vision, documentary filmmaker Noah Hutton follows a group of scientists involved in an ambitious project to do just that. The film presented here is the first of a ten-part series on Bluebrain, a Swiss initiative, led by Henry Markram, to simulate the human brain on IBM supercomputers. Hutton began the project in 2009 after seeing Markram give a TED talk in which Markram promised to upload the entire brain to a computer within a decade. Since then, the project has prompted significant criticism from the scientific community, with many arguing that Bluebrain is over-ambitious and unrealistic, given the field's still relatively limited understanding of how the approximately 100 billion neurons in the human brain are connected. In the segment below, Hutton interviews Markram and other scientists involved in the project about their goals, the challenges of implementing these goals, and criticism from the scientific community. Interwoven throughout are snapshots of some of the imaging and electrophysiological techniques that the team uses to reconstruct the brain.
Although nearly ten years have passed since the TED talk and Markram's team is still nowhere near realizing its vision, the work has not been entirely fruitless. In 2015, Bluebrain published the first reconstruction and simulation of part of the rat somatosensory cortex, a brain region that receives sensory information from the rest of the body. The model, which comprises around 31,000 neurons, is intended to be used in place of a slice of neocortical tissue. According to Markram, manipulating the model, for instance by simulating sensory input from the whiskers, will yield the same results as real experiments. While this is only a minor step toward the eventual goal of modeling the entire brain, it perhaps demonstrates the viability of such a project at some point in the future.
Bluebrain Project (Noah Hutton, 2009-2017)
Though the last several decades have seen some major breakthroughs in our understanding of the brain as well as the development of new tools for studying it, our understanding of its most fundamental principles nonetheless remains primitive. As former President Barack Obama once said, "we can identify galaxies light years away, we can study particles smaller than an atom, but we still haven't unlocked the mystery of the three pounds of matter that sit between our ears."
About the Author
Rachel Field is a neuroscientist at New York University's Skirball Institute of Biomolecular Medicine, where she studies the plasticity of the brain. Her writing on science and other topics has appeared in The Forward, The Forverts, and The Riverdale Press.