Despite the sarcastic title, this work is pretty neat. In a recent Scientific Reports paper (open access, yay!), researchers from the University of Padua in Italy found that fish pretty much see the world as we do, at least when it comes to motion illusions. If you spent any time as a child, you’re probably familiar with optical illusions (personally, I was obsessed with Magic Eye books; maybe I shouldn’t say was). Motion illusions are a type of optical illusion that makes the brain perceive motion in a static image (see picture below).
Their version of the classic Rotating Snakes illusion, abbreviated RSI in the paper because all academic papers need more abbreviations.
Why fish? It turns out that fish don’t have a visual cortex like humans and other mammals. We know fish can see (they need sight to hunt and to escape predators), but we don’t know exactly what they see. We do know they see changes in light, but can they see texture and contrast and form? In mammals, this additional sight comes from our visual cortex. If fish do get additional visual information, then they must do so in a manner completely distinct from us. That’s why fish were chosen: to see if they perceive an illusion that, in mammals, arises from our visual cortex.
To find out this interesting piece of scientific information, they crammed a fish tank between two computer monitors. On one monitor was the RSI (the allure of abbreviations has not yet left me). The other monitor displayed a subtly different version of the image that produces no motion illusion. The fish were trained to spot motion to get a food reward (tasty, tasty brine shrimp).
After all was said and done, 18 out of 24 fish were confused (that’s 75%). They thought the illusion was real and tried to get their food reward (their… just desserts). This compares fairly well with the percentage of humans who can see the illusion (that’s 84%).
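Those two percentages are close, but with only 24 fish the comparison deserves a grain of salt. A quick back-of-the-envelope check (my own arithmetic, not from the paper) shows the human figure sits comfortably inside a rough confidence interval for the fish:

```python
from math import sqrt

# Fish that responded to the Rotating Snakes illusion, as reported
successes, n = 18, 24
p_fish = successes / n  # 0.75

# Rough 95% Wald confidence interval for the fish proportion
se = sqrt(p_fish * (1 - p_fish) / n)
low, high = p_fish - 1.96 * se, p_fish + 1.96 * se

p_human = 0.84  # reported share of humans who see the illusion
print(f"fish: {p_fish:.0%}, 95% CI roughly {low:.0%} to {high:.0%}")
print(f"human figure inside that interval: {low <= p_human <= high}")
```

With 24 fish the interval spans roughly 58% to 92%, so the 84% human figure is statistically indistinguishable from the fish result, which is the sense in which the numbers "compare fairly well."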
The experiment didn’t explain how fish, with their lack of visual cortex, saw the motion. If anything, it threw more questions into the mix, which I think is a good thing. The goal of a good scientific paper shouldn’t be to answer all the questions but to ask more… unless you’re trying for a Theory of Everything (the answer to it all, the mack daddy of theories, the big ToE).
Robert Platt from Northeastern has used a new technology created by Edward Adelson from MIT to build a robot that plugs in USBs. This is more difficult than it sounds (unless you’ve had experience with fourth-dimensional USBs; then it’s exactly as difficult as it sounds). If a robot is not pre-programmed, like these on-the-fly USB pluggers, its external sensors must be highly precise: a centimeter off and your drink will get cold without your USB drink warmer. Or worse. Your pet rock may not charge.
In the unspoken scientific agreement to make robots increasingly human, the sensor system relies on vision. One side of the robot’s rubber gripper is coated with metallic paint. The rest of the gripper is surrounded by a translucent box. Each side of the box emits a different-colored light. When the robot grips, the sides light up depending on how the gel inside the box deforms. Computer algorithms that monitor the color and intensity of the light can then “see” the three-dimensional structure of the gripped surface. This system worked well. The robot was able to find a dangling USB plug, grab it, and plug it into the port.
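The paper’s actual algorithms aren’t spelled out here, but the core trick of recovering shape from differently colored lights is classic photometric stereo: with three lights from known directions, the brightness each light produces at a point pins down the surface normal there. A toy sketch of that idea (the light directions are made up for illustration, not taken from the sensor):

```python
import numpy as np

# Three lights from known directions, one per color channel.
# Under a simple Lambertian model, measured intensity I = L @ n,
# where n is the (albedo-scaled) surface normal.
L = np.array([
    [0.0, 0.0, 1.0],   # light shining straight down
    [0.8, 0.0, 0.6],   # light from one side
    [0.0, 0.8, 0.6],   # light from another side
])

def normal_from_intensities(I):
    """Recover the surface normal from three intensity measurements."""
    g = np.linalg.solve(L, I)      # g = albedo * n
    return g / np.linalg.norm(g)   # drop the albedo, keep the direction

# Simulate a known normal, "measure" its intensities, then recover it.
true_n = np.array([0.3, 0.1, 0.9])
true_n /= np.linalg.norm(true_n)
I = L @ true_n                     # perfect, noiseless measurements
print(normal_from_intensities(I))  # matches true_n
```

Do this at every pixel of the gel image and you get a field of surface normals, which can be integrated into the three-dimensional shape of whatever the gripper is squeezing.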
The more important discovery here is that the robot can insert the USB correctly on the first try. Technology has truly surpassed our human limitations.
A decade ago, scientists at the University of Florida taught a Petri dish rat brain to fly a flight simulator. They grew a culture of 25,000 rat neurons and, using 60 electrodes, hooked it up to a common desktop computer. At first, the neurons were simply scattered in the dish, but they quickly started to form connections. “You see one extend a process, pull it back, extend it out – and it may do that a couple of times, just sampling who’s next to it, until over time the connectivity starts to establish itself,” Thomas DeMarse, the biomedical engineer who led the work, explained in a ScienceDaily release. When the neural network was joined to the computer, more connections formed as the “brain” learned to control the simulated F-22. Eventually, the “brain” could control the pitch and roll of the aircraft in a variety of conditions, including hurricane-force winds.
Would a Petri dish brain get motion sickness?
According to the release, “As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.” A prescient statement for a time before drones (or at least before the public knew). Who knows, maybe the next generation of war will be fought by rat brains.
(For anyone who doesn’t understand the title of this post, I thought I’d bring back some early 2000s references. Remember this?)