Along with medicinal drugs, medical technology has improved by leaps and bounds in recent decades: devices such as MRIs, CT scanners, and ultrasound machines have become commonplace in hospitals. Even today, medical science continues to refine and improve its equipment. Over the last few years, a new type of device has emerged that, if successfully developed and mass produced, may revolutionize hearing aid technology.
From Trumpets to Processing Chips
In the past, patients had few options when trying to cope with declining hearing. From the 1600s up through the turn of the 20th century, people were forced to make do with large, handheld ear trumpets. This began to change in the early 1900s, with the advent of electronic hearing aids that utilized microphones. In the 1950s, transistor hearing aids were introduced, eventually becoming small enough to be worn behind the ear and later within the ear canal itself. The next evolution of the hearing aid relied upon digital technology to filter and amplify various sounds in the user’s ear.
The products available to those with poor hearing have certainly improved over the last century. Today, cochlear implants are used to treat patients with severe hearing loss, as well as those who are completely deaf. As effective as these devices are, however, their days might be numbered.
In early 2014, researchers from two of the most prominent American universities announced that they had developed a new type of hearing aid implant. Crafted by researchers from Massachusetts Institute of Technology (MIT) and Harvard Medical School, this new microchip bears little resemblance to the cochlear implants currently used to treat deaf or hard-of-hearing patients.
Overview of the Cochlear Implant
To better understand the importance of this new chip, it is helpful to compare it with current medical technology. A modern cochlear device consists of several key parts, each of which enables the implant to function properly:
- Microphone
- Long Cable
- Short Cable
- Speech Processor
- Transmitting Coil
- Receiver/Stimulator
- Electrode Array
The process by which cochlear implants transmit information to the brain involves several steps. First, the microphone picks up sound, transferring its auditory input to the speech processor via the long cable. The processor then interprets these sounds and converts them into an electrical signal, which travels through the long and short cables to the transmitting coil. In turn, the transmitting coil directs electric signals to the receiver/stimulator.
The receiver/stimulator uses this data to determine not only the strength of the electric current it releases, but also which electrodes on the array actually receive an electrical charge. The activated electrodes then stimulate the nerve endings of the cochlea, a section of the inner ear that resembles a snail's shell. Finally, the ear's cochlear nerve transports these signals to the brain, which analyzes the data to determine what the person is hearing.
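The signal chain described above can be sketched in code. The sketch below is purely illustrative: the function names, the four-channel filter bank, and the activation threshold are assumptions made for demonstration, not details of any real implant's design.

```python
# Illustrative sketch of the cochlear implant signal chain (not a real device API).

def speech_processor(samples):
    """Convert raw microphone samples into per-channel signal strengths."""
    # Split the input into frequency channels (here, a toy 4-channel scheme
    # using interleaved samples; real processors use proper filter banks)
    # and measure each channel's energy.
    bands = [samples[i::4] for i in range(4)]
    return [sum(abs(s) for s in band) for band in bands]

def receiver_stimulator(channel_strengths, threshold=1.0):
    """Decide which electrodes fire, and with how much current."""
    # Only channels above the threshold activate their electrode, mirroring
    # how the stimulator selects electrodes on the array and sets current.
    return {i: strength
            for i, strength in enumerate(channel_strengths)
            if strength > threshold}

# A short burst of "sound" picked up by the microphone.
mic_samples = [0.9, -0.8, 0.1, 0.05, 0.7, -0.6, 0.0, 0.02]

# Microphone -> speech processor -> (cables and coil) -> receiver/stimulator.
strengths = speech_processor(mic_samples)
active_electrodes = receiver_stimulator(strengths)
print(active_electrodes)
```

In this toy run, only the two loud, low-numbered channels cross the threshold, so only their electrodes would stimulate the cochlea; the quiet channels stay inactive.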
A Smaller Alternative?
One of the drawbacks of cochlear implants is that they are readily visible, to the point where they could easily be mistaken for a large hearing aid. In addition, the external parts must be removed in certain situations, such as before the patient showers or goes swimming. In contrast, the device created by the MIT/Harvard researchers would include no external machinery whatsoever. Instead, the chip would be implanted inside the ear.
The device would rely upon eardrum vibrations to function. Once installed, the chip would use a sensor to detect these vibrations, converting them into electrical signals. Similar to the receiver/stimulator used in cochlear implants, the microchip would then route these signals to electrodes embedded in the cochlea. The cochlear nerve would then perform its regular role of transmitting messages to the brain to analyze and decipher.
In addition to the chip itself, the MIT and Harvard researchers have also built a charging device for the new implant. This charger can supply the chip with eight hours' worth of power in only two minutes. Furthermore, the charger does not need to be plugged into an outlet to do its job; instead, it can draw its energy from a common cell phone.
As encouraging as this new technology might be, it's not the first cutting-edge hearing implant to make use of microchips. A research team led by University of Utah faculty developed a similar device in 2012. In fact, this prototype operated in virtually the same manner as the MIT/Harvard chip: first, a sensor transmitted vibrations to a small chip, which transformed them into electrical signals. These signals were then directed to the electrodes in the cochlea, which passed them on to the brain.
The main difference between the two devices is how their batteries are recharged. Like the chip built by the MIT and Harvard teams, the implant created in 2012 would run on an implanted battery. Rather than drawing a two-minute charge from a cell phone, however, this battery's power would be replenished overnight by a charger attached to the patient's ear. This slower method may have its own advantage: the researchers hope that the charger will provide the implant with several days' worth of power.
Hearing devices have come a long way since the days of portable ear trumpets. If microchip implants do become a reality, patients may soon enjoy greatly improved hearing without the inconveniences of current cochlear machinery.