From Green Lights to Red Flags



We make countless decisions throughout the day—some as small as picking what to eat, others as big as accepting a new job offer. Some decisions, like choosing when to cross a busy street or stepping aside to avoid bumping into someone on the sidewalk, feel automatic. Recently, researchers at Princeton University developed a mathematical model that sheds new light on how our brain makes split-second decisions. This study brings us one step closer to understanding the processes in the brain that guide our choices.

Decisions, Decisions

Imagine biking through a city. You approach an intersection where the light has just turned green, signaling you to go. However, from the corner of your eye, you spot a car speeding toward the red light. Instantly, you hit the brakes. Within a fraction of a second, your brain has processed the danger, overridden the urge to keep pedaling, and triggered an immediate response that helps you avoid a dangerous situation.

The ability to process conflicting information quickly, like a green light signaling “go” and a speeding car signaling “stop,” is part of what makes the human brain so remarkable. Your brain does not simply follow a set of programmed rules—it pulls from past experiences, intuition, and split-second sensory processing to make the best judgment. Until now, how the brain processes and sorts conflicting signals has not been well understood.

Before making a decision, your brain juggles multiple pieces of information at the same time. Sights, smells, sounds, and other senses send signals to the brain’s thalamus, which acts like a relay center and directs the information to processing centers throughout the brain. As different regions process these signals, the prefrontal cortex steps in to assess the situation logically. The prefrontal cortex, located in the front part of the brain, is often referred to as the brain’s “executive center.” It is responsible for higher-order cognitive functions, including decision-making and impulse control.

The prefrontal cortex uses past experiences and logic to make sense of what we sense. However, not all the information our brain receives is useful or necessary in a given moment. To help amplify important information and filter out distractions, the brain deploys extensive networks of excitatory and inhibitory neurons.

Programming Decision-Making

Scientists have attempted to replicate the brain’s decision-making networks in artificial intelligence using mathematical models. These programs often rely on recurrent neural networks, an approach that makes step-by-step predictions from a sequence of inputs. Voice assistants such as “Siri” and “Alexa” use recurrent neural networks for speech recognition. As you speak, the program processes each word sequentially, remembering context from previous words to better understand the meaning. For example, if you say, “I went to the store to buy…”, the program retains the earlier words to predict that the next word might be “groceries” rather than something unrelated like “mountains.”
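For readers who like to see the idea in code, below is a minimal sketch of how a recurrent network processes words one at a time, carrying a hidden state (a kind of memory of earlier words) into each new prediction. It is purely illustrative: the tiny vocabulary, the random weights, and the function names are invented for this example and are not the actual software behind any voice assistant.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["i", "went", "to", "the", "store", "buy", "groceries", "mountains"]
word_index = {w: i for i, w in enumerate(vocab)}

hidden_size = 16
W_in = rng.normal(scale=0.1, size=(hidden_size, len(vocab)))    # input-to-hidden weights
W_rec = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden (recurrent) weights
W_out = rng.normal(scale=0.1, size=(len(vocab), hidden_size))   # hidden-to-output weights

def one_hot(word):
    vec = np.zeros(len(vocab))
    vec[word_index[word]] = 1.0
    return vec

def predict_next_word(words):
    # Feed the words in order; the hidden state accumulates context as it goes.
    h = np.zeros(hidden_size)
    for w in words:
        h = np.tanh(W_in @ one_hot(w) + W_rec @ h)
    scores = W_out @ h
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the vocabulary
    return vocab[int(np.argmax(probs))]

print(predict_next_word(["i", "went", "to", "the", "store", "to", "buy"]))

Because the weights here are random and untrained, the printed guess is arbitrary; the point is only that the hidden state carries context from earlier words into the prediction of the next one.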

While these models provide some insight into how changes in brain connectivity may affect behavior, they simplify the brain’s complexity and struggle to account for the millions of neurons involved in decision-making. The new model developed by Princeton’s Christopher Langdon and Tatiana Engel proposes that a few key neurons in the prefrontal cortex act as the decision-making “ringleaders.” Rather than trying to replicate the whole forest, this approach homes in on a small group of trees.

How the Model Works

This idea came from observing how neurons in the prefrontal cortex of monkeys light up during a context-dependent decision-making task. Each trial begins with a context cue indicating whether color or motion is relevant. After a short pause, a second stimulus appears that has both a color and a direction of motion. The animal must then report either the color or the motion of that stimulus, and the correct answer depends on the context cue given at the start. If color was cued, the decision should be based on the stimulus’s color; if motion was cued, the decision should be based on its motion, and its color should be ignored.
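In programming terms, the rule the animal must apply fits in a few lines. The sketch below is a toy rendition for illustration only; the stimulus values and function names are invented here and are not taken from the study.

def correct_response(context_cue, stimulus):
    # context_cue is either "color" or "motion"; the stimulus has both features.
    if context_cue == "color":
        return stimulus["color"]    # judge color, ignore motion
    if context_cue == "motion":
        return stimulus["motion"]   # judge motion, ignore color
    raise ValueError("unknown context cue")

stimulus = {"color": "red", "motion": "leftward"}
print(correct_response("color", stimulus))   # prints 'red'
print(correct_response("motion", stimulus))  # prints 'leftward'

The hard part for the brain is not the rule itself but applying it while both features are streaming in at once, which is where inhibition comes in.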

The team found that when color was the important cue, specialized cells in the prefrontal cortex suppressed the activity of neurons that pay attention to motion. For motion, the opposite was true. Completing this task successfully seems to depend on how well the prefrontal cortex can block distracting cues.


Langdon and Engel replicated these findings in artificial circuits by engineering a new mathematical framework. Dubbed the latent circuit model, this approach builds on the equations of traditional recurrent neural networks. The team then trained the model to perform the same context-dependent decision-making task, where it significantly outperformed traditional recurrent neural networks. The investigators found that its built-in inhibitory mechanism suppressed irrelevant sensory signals based on the preceding contextual cue, much like how our brain makes decisions.
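The paper’s full mathematical framework is beyond the scope of this article, but the core intuition, a context signal damping down the irrelevant sensory channel before the decision is read out, can be sketched in a few lines. This is a loose conceptual illustration, not the published latent circuit model; the gain values, names, and numbers below are arbitrary.

import numpy as np

def gated_decision(color_evidence, motion_evidence, context):
    # Inhibitory gating: the cued context suppresses the other sensory channel.
    color_gain = 1.0 if context == "color" else 0.1    # 0.1 stands in for strong suppression
    motion_gain = 1.0 if context == "motion" else 0.1
    evidence = color_gain * color_evidence + motion_gain * motion_evidence
    return np.sign(evidence)  # +1 or -1, e.g. a choice between two responses

# Color evidence strongly favors +1 while motion evidence weakly favors -1.
print(gated_decision(color_evidence=0.8, motion_evidence=-0.3, context="color"))   # 1.0
print(gated_decision(color_evidence=0.8, motion_evidence=-0.3, context="motion"))  # -1.0

The same pair of inputs yields opposite choices depending on which channel the context lets through, which mirrors the suppression the researchers observed in the monkeys’ prefrontal cortex.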

Conclusion

The brain’s ability to make decisions quickly and accurately is one of its most impressive feats. The new latent circuit model brings us one step closer to understanding the vast neural networks behind it. As research delves deeper into the brain’s inner workings, the insights gained from this model may shed light on why decision-making is impaired in conditions such as bipolar disorder and attention-deficit/hyperactivity disorder (ADHD). These findings may also open new doors for improving the accuracy of artificial intelligence applications, such as those that power voice-recognition software and self-driving cars.

