
The 5 Coolest Things On Earth This Week


A robot in Singapore assembled an IKEA chair, new MRI sensors can peer deep inside the brain, and noise-canceling tech for windows can cut street noise in half. This week’s science will keep people talking.

 

 

DIY Robot

What is it? Move over, Roomba, and make space for the coolest home robot yet. Engineers at Nanyang Technological University (NTU) in Singapore have built a robot that can assemble IKEA furniture. The robot, which has a 3D camera and two arms with grippers capable of six-axis motion, took a little more than 11 minutes to plan out its motions and just 8 minutes and 55 seconds to assemble IKEA’s Stefan chair. It needed a superhuman 3 seconds to locate all the parts.

Why does it matter? The NTU team believes the robot could eventually work in workshops and perform “specific tasks with precision in industries where tasks are varied and do not merit specialized machines or assembly lines.” The university reported that the team is now working to “deploy the robot to do glass bonding that could be useful in the automotive industry, and drilling holes in metal components for the aircraft manufacturing industry. Cost is not expected to be an issue as all the components in the robotic setup can be bought off the shelf.”

How does it work? The robot first snapped 3D photos of all the parts laid out on the floor in a way that replicated “as much as possible, the cluttered environment after humans unbox and prepare to put together a build-it-yourself chair.” Next, the robot used an algorithm developed by the team to plan out the assembly. “The job of assembly, which may come naturally to humans, has to be broken down into different steps, such as identifying where the different chair parts are, the force required to grip the parts, and making sure the robotic arms move without colliding into each other,” said Assistant Professor Pham Quang Cuong, who led the team. He said the researchers were now “looking to integrate more artificial intelligence into this approach to make the robot more autonomous so it can learn the different steps of assembling a chair through human demonstration or by reading the instruction manual, or even from an image of the assembled product.”
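For a rough sense of the planning steps the paragraph describes, here is a minimal Python sketch. The function, the part data and the 1.5 safety factor on grip force are all invented for illustration; this is not NTU’s actual code.

```python
# Toy sketch of the assembly-planning step: order the detected parts and
# estimate a grip force for each one. All names and numbers are illustrative.

def plan_assembly(parts_detected):
    """Order assembly steps and attach an estimated grip force to each."""
    plan = []
    for step in sorted(parts_detected, key=lambda p: p["order"]):
        # Hold the part's weight with a 1.5x safety margin (assumed).
        grip_force = step["mass_kg"] * 9.81 * 1.5
        plan.append({"part": step["name"], "grip_n": round(grip_force, 2)})
    return plan

parts = [
    {"name": "leg", "order": 1, "mass_kg": 0.2},
    {"name": "seat", "order": 0, "mass_kg": 0.8},
]
print(plan_assembly(parts))
```

A real planner would also have to solve the collision-avoidance problem Pham mentions, checking every candidate arm trajectory against the other arm’s path.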

 

Noise Be Gone

What is it? The adroit robot wasn’t the only technology from NTU’s labs that grabbed us this week. Another team at the university has developed a noise-canceling device for windows that could reduce indoor noise caused by street racket by as much as 50 percent.

Why does it matter? Noise pollution is a modern scourge that can cause sleeping disorders and lead to high blood pressure, stress and other medical issues. “We are currently finding ways to improve the technology further so that it can be used not only at window grilles with large openings, but also provide a cost-effective solution that can be easily installed and replaced,“ said Gan Woon Seng, who teaches at NTU’s School of Electrical & Electronic Engineering. “Ultimately, we aim to integrate this technology into window grilles that can help mitigate urban noise pollution conveniently.”

How does it work? Users can install a small array of the devices, which use “active noise control” technology and work like noise-canceling headphones, on window grilles and bars. A microphone inside each one detects incoming noise before it reaches the window, and “a special sound emitting mechanism which works like a speaker and is hooked up to a processing unit” quickly emits “anti-noise” that cancels out the outside din’s sound waves. “Our innovation not only computes the right amount and type of ‘anti-noise’ to emit, but also does it faster than the detected noise can reach inside the building,” Gan said. “Compared to noise cancellation headphones, what we have achieved is far more technically challenging as we needed to control the noise in a large open area, instead of just around the ear,” he explained.
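The core “anti-noise” idea can be shown in a few lines: summing a sound wave with its phase-inverted copy cancels it out. This toy example skips the hard real-time part Gan describes, namely computing the anti-noise before the sound reaches the window.

```python
import math

# Toy illustration of active noise control: a tone plus its 180-degree
# phase-inverted copy sums to silence. Sample rate and frequency are arbitrary.

SAMPLE_RATE = 8000
noise = [math.sin(2 * math.pi * 100 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
anti_noise = [-s for s in noise]                 # phase inversion
residual = [n + a for n, a in zip(noise, anti_noise)]

print(max(abs(r) for r in residual))             # 0.0: complete cancellation
```

In practice the cancellation is partial, since the anti-noise must be predicted and emitted over a large open area rather than computed from a perfect copy of the signal.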

 

Computer Beats Doctors, Again

Top and above: “It’s not so much that we were able to ‘beat’ the pathologist or the radiologist, but rather that the machine was able to add value to what they can offer,” said Anant Madabhushi, a professor of biomedical engineering at Case Western. Images credit: Getty Images.

What is it? News reports of AI systems that can outperform humans in diagnosing disease are quickly becoming common, but researchers at Case Western Reserve University have gone a step further. Their “deep learning” algorithms can not only spot heart failure and cancer, but also “[help] identify those patients with less aggressive disease who may not need more aggressive therapy.”

Why does it matter? “It’s not so much that we were able to ‘beat’ the pathologist or the radiologist, but rather that the machine was able to add value to what they can offer,” said Anant Madabhushi, a professor of biomedical engineering at Case Western. “There is desperate need for better decision-support tools that allows them to serve patients, especially in places where there are very few pathologists or radiologists.” He said that the system can “help them become more efficient. For instance, the tools could help reduce the amount of time spent on cases with no obvious disease or obviously benign conditions and instead help them focus on the more confounding cases.”

How does it work? Computers at Madabhushi’s lab can read, log, compare and contrast “hundreds of slides of tissue samples in the amount of time a pathologist might spend on a single slide,” the university reported. “Then, they rapidly and completely catalogue characteristics like texture, shape and structure of glands, nuclei and surrounding tissue to determine the aggressiveness and risk associated with certain diseases.” But Madabhushi says that radiologists shouldn’t worry about their jobs. “I always use the example of Botswana, where they have a population of 2 million people—and only one pathologist that we [are] aware of,” he told The Daily, a blog published by the school. “From that one example alone, you can see that this technology can help that one pathologist be more efficient and help many more people.”
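As a toy illustration of the kind of shape statistics the passage mentions (not the Case Western pipeline): given the areas of segmented nuclei in a slide region, a system can catalogue simple summary features. The feature names and sample values below are invented.

```python
# Illustrative sketch: summarize nucleus sizes across one slide region.
# Real systems compute many such features over hundreds of slides.

def nucleus_features(areas):
    """Return count, mean and variance of nucleus areas (in pixels^2)."""
    n = len(areas)
    mean = sum(areas) / n
    variance = sum((a - mean) ** 2 for a in areas) / n
    return {"count": n, "mean_area": mean, "variance": variance}

print(nucleus_features([52.0, 48.0, 61.0, 39.0]))
```

High variance in nucleus size is one classic hand-engineered signal of abnormal tissue, which is the sort of characteristic the article says the system catalogues automatically.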

 

The Butterfly Effect

Artist’s depiction of an ocular implant. Image and caption credits: Choo Lab/Caltech.

What is it? A researcher at Caltech has developed a tiny eye implant, inspired by butterfly wings, that could continuously measure pressure inside the eye. Doctors believe that sudden spikes in eye pressure can damage the optic nerve and lead to glaucoma, the most common cause of blindness in adults after diabetes.

Why does it matter? There are many drugs that can relieve high eye pressure, but they work best when taken right after a pressure spike. That’s easier said than done. Patients must typically visit an ophthalmologist to get their eye pressure checked, and the test involves an unwieldy instrument. “Right now, eye pressure is typically measured just a couple times a year in a doctor’s office,” said the researcher, Caltech’s Hyuck Choo. “Glaucoma patients need a way to measure their eye pressure easily and regularly.”

How does it work? Choo developed an eye implant the width of a few strands of hair that looks like a tiny drum. “When inserted into an eye, its surface flexes with increasing eye pressure, narrowing the depth of the cavity inside the drum,” according to Caltech. Users would measure the depth of the cavity with a handheld reader to arrive at a pressure reading. But since the measurements depend on the angle of the reader, they could be prone to error. That’s where the butterfly comes in. Working with another team at Caltech studying the wings of the longtail glasswing butterfly, Choo embedded in his implant tiny columns that imitate the nanostructures found on the wings, making the device less “angle-sensitive” and reducing reading errors threefold. “The nanostructures unlock the potential of this implant, making it practical for glaucoma patients to test their own eye pressure every day,” he said.
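A toy model makes the readout idea concrete: if the cavity depth shrinks with intraocular pressure, the handheld reader can invert that relationship to recover the pressure. The linear model and both constants below are invented for illustration; they are not from the Caltech study.

```python
# Toy model of the drum-style implant: cavity depth decreases with
# intraocular pressure. Constants are assumed, not measured values.

REST_DEPTH_UM = 5.0    # cavity depth in microns at zero pressure (assumed)
UM_PER_MMHG = 0.1      # depth change per mmHg of pressure (assumed)

def depth_from_pressure(pressure_mmhg):
    """What the implant's drum does: flex inward as pressure rises."""
    return REST_DEPTH_UM - UM_PER_MMHG * pressure_mmhg

def pressure_from_depth(depth_um):
    """What the handheld reader computes: invert depth back to pressure."""
    return (REST_DEPTH_UM - depth_um) / UM_PER_MMHG

print(pressure_from_depth(depth_from_pressure(20.0)))  # recovers ~20.0 mmHg
```

The angle problem the article describes enters because the reader measures depth optically: the butterfly-inspired nanostructures keep that depth measurement stable as the viewing angle changes.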

 

Something On The Brain

An MRI image of the brain. Image credit: GE Healthcare.

What is it? Researchers at MIT have developed a tiny injectable sensor that works in conjunction with magnetic resonance imaging (MRI) and allows them to directly monitor signals deep inside the brain. The device could one day “reveal patterns underlying behavior.” The sensor tracks the flow of calcium ions, which enter brain cells when they are active.

Why does it matter? Scientists typically use functional MRI (fMRI) to track brain activity. But traditional fMRI imaging tracks changes in blood flow, which may be caused by many different stimuli. On the other hand, “concentrations of calcium ions are closely correlated with signaling events in the nervous system,” Alan Jasanoff, a professor of biological engineering, brain and cognitive sciences at MIT and the senior author of the study, told MIT News. “You could imagine measuring calcium activity in different parts of the brain and trying to determine, for instance, how different types of sensory stimuli are encoded in different ways by the spatial pattern of neural activity that they induce,” he said.

How does it work? The team designed the sensor from two components: a naturally occurring calcium-binding protein and a magnetic iron oxide nanoparticle. The two clump together in the presence of calcium, making the spot appear darker on an MRI image. “High levels of calcium outside the neurons correlate with low neuron activity; when calcium concentrations drop, it means neurons in that area are firing electrical impulses,” MIT News reported. So far the sensor must be injected directly into the brain, but the team is looking for ways to make it cross the blood-brain barrier, “which would make it possible to deliver the particles without injecting them directly to the test site.”
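As a back-of-the-envelope sketch of the readout logic described above: lower extracellular calcium reads as higher neural activity. The threshold value is invented for illustration and is not from the study.

```python
# Toy readout rule for the MIT calcium sensor: high extracellular calcium
# means quiet neurons; a drop in calcium means neurons nearby are firing.
# The 1.0 mM cutoff is an assumed, illustrative threshold.

def activity_from_calcium(ca_mM):
    """Map an extracellular calcium concentration to a coarse activity label."""
    return "firing" if ca_mM < 1.0 else "quiet"

readings_mM = [1.8, 0.6]
print([activity_from_calcium(c) for c in readings_mM])  # ['quiet', 'firing']
```

On an actual scan the signal is indirect: the clumped protein-nanoparticle sensors darken the MRI image where calcium is high, so the map above would be applied to contrast changes rather than to concentrations read out directly.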

