“Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn’t be any hardwired mechanisms that make us good programmers,” Ivanova says.
There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.
The two programming languages that the researchers focused on in this study are known for their readability — Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.
The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.
“It does pretty much anything that’s cognitively challenging, that makes you think hard,” Ivanova says.
Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.
“This ability to account for mistakes could be crucial for building machines that robustly infer and act in our interests,” says Tan Zhi-Xuan, PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS) and the lead author on a new paper about the research. “Otherwise, AI systems might wrongly infer that, since we failed to achieve our higher-order goals, those goals weren't desired after all. We've seen what happens when algorithms feed on our reflexive and unplanned usage of social media, leading us down paths of dependency and polarization. Ideally, the algorithms of the future will recognize our mistakes, bad habits, and irrationalities and help us avoid, rather than reinforce, them.”
To create their model, the team used Gen, a new AI programming platform recently developed at MIT, to combine symbolic AI planning with Bayesian inference. Bayesian inference provides an optimal way to combine uncertain beliefs with new data, and is widely used for financial risk evaluation, diagnostic testing, and election forecasting.
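To give a feel for the Bayesian-update idea behind this kind of goal inference — this is only a toy Python sketch, not the team’s Gen-based model, and all names and numbers are hypothetical — consider an agent moving toward one of several candidate goals while occasionally slipping:

```python
# Toy illustration of Bayesian goal inference (not the Gen-based model from the paper).
# An agent moves along a line toward one of several candidate goal positions; each
# observed step updates a posterior over which goal it is pursuing, even when some
# steps are mistakes.

def normalize(weights):
    total = sum(weights.values())
    return {goal: w / total for goal, w in weights.items()}

def step_likelihood(position, step, goal, error_rate=0.2):
    """Probability of an observed step (+1 or -1) under a hypothesized goal.

    With probability (1 - error_rate) the agent moves toward the goal;
    with probability error_rate it slips and moves the wrong way.
    """
    toward = 1 if goal > position else -1
    return (1 - error_rate) if step == toward else error_rate

def infer_goal(start, steps, candidate_goals):
    # Uniform prior over candidate goals.
    posterior = {g: 1.0 / len(candidate_goals) for g in candidate_goals}
    position = start
    for step in steps:
        # Bayes' rule: posterior is proportional to prior times the likelihood
        # of the newly observed step.
        posterior = normalize({
            g: p * step_likelihood(position, step, g) for g, p in posterior.items()
        })
        position += step
    return posterior

if __name__ == "__main__":
    # Agent starts at 0; candidate goals sit at -5 and +5.
    # It mostly moves right, with one slip to the left along the way.
    observed_steps = [+1, +1, -1, +1, +1]
    print(infer_goal(0, observed_steps, candidate_goals=[-5, +5]))
```

Even with the single wrong-way step, the posterior still favors the goal at +5 — the slip lowers the probability a little rather than flipping the inferred goal, which is the “accounting for mistakes” behavior the researchers describe.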
The team’s model performed 20 to 150 times faster than an existing baseline method called Bayesian Inverse Reinforcement Learning (BIRL), which learns an agent’s objectives, values, or rewards by observing its behavior, and attempts to compute full policies or plans in advance. The new model was accurate 75 percent of the time in inferring goals.

Very few of us who play video games or watch computer-generated image-filled movies ever take the time to sit back and appreciate all the handiwork that makes their graphics so thrilling and immersive.
One key aspect of this is texture. The glossy pictures we see on our screens often appear seamlessly rendered, but they require huge amounts of work behind the scenes. When effects studios create scenes in computer-assisted design programs, they first 3D model all the objects that they plan to put in the scene, and then give a texture to each generated object: for example, making a wood table appear to be glossy, polished, or matte.
If a designer is trying to recreate a particular texture from the real world, they may find themselves digging around online trying to find a close match that can be stitched together for the scene. But most of the time you can’t just take a photo of an object and use it in a scene — you have to create a set of “maps” that quantify different properties like roughness or light levels.
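In practice, those “maps” are just images interpreted as per-pixel material properties. As a rough sketch of how a material might be organized as data — the file names, fields, and values here are illustrative, not taken from any particular tool — it could look something like this in Python:

```python
# Illustrative sketch of a material described by a set of texture "maps"
# (file names and channel choices are hypothetical, not tied to a specific tool).
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    albedo_map: str        # base color image
    roughness_map: str     # grayscale: 0 = mirror-glossy, 1 = fully matte
    normal_map: str        # encodes fine surface bumps without extra geometry
    metallic: float = 0.0  # scalar fallback when no metallic map is provided

# A "polished wood table" is less about the photo of the wood than about
# the accompanying maps: a low-roughness map is what makes it read as glossy.
wood_table = Material(
    name="polished_wood",
    albedo_map="wood_albedo.png",
    roughness_map="wood_roughness.png",
    normal_map="wood_normal.png",
)
print(wood_table)
```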
There are programs that have made this process easier than ever before, like the Adobe Substance software that helped create the photorealistic ruins of Las Vegas in “Blade Runner 2049”. However, these so-called “procedural” programs can take months to learn, and still involve painstaking hours or even days to create a particular texture.

Since the 1940s, classical computers have improved at breakneck speed. Today you can buy a wristwatch with more computing power than the state-of-the-art, room-sized computer from half a century ago. These advances have typically come through electrical engineers’ ability to fashion ever smaller transistors and circuits, and to pack them ever closer together.
But that downsizing will eventually hit a physical limit — as computer electronics approach the atomic level, it will become impossible to control individual components without impacting neighboring ones. Classical computers cannot keep improving indefinitely using conventional scaling.
Quantum computing, an idea spawned in the 1980s, could one day carry the baton into a new era of powerful high-speed computing. The method uses quantum mechanical phenomena to run complex calculations not feasible for classical computers. In theory, quantum computing could solve problems in minutes that would take classical computers millennia. Already, Google has demonstrated quantum computing’s ability to outperform the world’s best supercomputer for certain tasks.
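The phenomenon at the heart of that speedup is superposition: n qubits carry amplitudes over 2^n basis states at once. The following minimal NumPy sketch (a classical simulation for illustration only, not a quantum program or any specific MIT system) shows the idea:

```python
# Minimal state-vector sketch of the superposition quantum computers exploit
# (a classical simulation for illustration only).
import numpy as np

# A qubit state is a length-2 complex vector; |0> is [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

one_qubit = H @ ket0  # amplitudes (1/sqrt(2), 1/sqrt(2))

# Three qubits need a 2**3 = 8-entry state vector: applying H to each qubit
# spreads equal amplitude over all 8 basis states at once, which is the
# resource quantum algorithms manipulate.
three_qubits = np.kron(np.kron(H @ ket0, H @ ket0), H @ ket0)

print(np.round(one_qubit, 3))
print(np.round(np.abs(three_qubits) ** 2, 3))  # uniform probabilities over 8 outcomes
```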
But it’s still early days — quantum computing must clear a number of science and engineering hurdles before it can reliably solve practical problems. More than 100 researchers across MIT are helping develop the fundamental technologies necessary to scale up quantum computing and turn its potential into reality.