"That doesn't sound right!" No matter how advanced machines get, this line will always represent the baseline of human ability to identify mechanical issues. Whether it is a $100 blender, a $1,000 air conditioner, a $100,000 sports car or even a $10,000,000 aircraft, revving it up must not only feel good but also sound good to give you full assurance that it is working perfectly. If it sounds a bit off, the logical conclusion is that something is amiss and you need to take a closer look.
Interestingly, this is the basis for a growing diagnostics model in which sensors literally 'listen' to the sounds coming from machines. The recordings are then compared with a database of sounds associated with known conditions for that machine, enabling early diagnosis of damage and wear and tear. Researchers at the Polytechnic University of Catalonia Centre for Innovation Electronics in Spain claimed significant success in acoustic sensing as part of their MOSYCOUSIS project in 2014, and have since been evaluating its biomedical applications.
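The matching step described above can be sketched as a nearest-neighbour lookup over spectral fingerprints. The sketch below is purely illustrative: the signal frequencies, band counts and condition labels are invented for the example, not taken from MOSYCOUSIS or any real product, but it shows the principle of comparing a fresh recording against a database of sounds with known conditions.

```python
import numpy as np

def spectral_signature(samples, n_bins=64):
    """Reduce a waveform to a coarse, normalised magnitude-spectrum fingerprint."""
    spectrum = np.abs(np.fft.rfft(samples))
    # Pool the spectrum into n_bins bands so small pitch drift matters less.
    sig = np.array([band.mean() for band in np.array_split(spectrum, n_bins)])
    return sig / (np.linalg.norm(sig) + 1e-12)

def diagnose(samples, reference_db):
    """Return the known condition whose signature best matches the recording."""
    sig = spectral_signature(samples)
    scores = {label: float(sig @ ref) for label, ref in reference_db.items()}
    return max(scores, key=scores.get)  # cosine similarity, highest wins

# Toy reference database: a healthy 50 Hz hum, and a "worn bearing" recording
# with a strong extra 180 Hz harmonic (all frequencies are made up).
t = np.linspace(0, 1, 8000, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
worn = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 180 * t)
db = {
    "healthy": spectral_signature(healthy),
    "worn bearing": spectral_signature(worn),
}

# A fresh recording resembling the worn-bearing profile is flagged as such.
recording = np.sin(2 * np.pi * 50 * t) + 0.7 * np.sin(2 * np.pi * 180 * t)
print(diagnose(recording, db))  # → worn bearing
```

A real system would of course use far richer features and far larger databases, but the lookup logic is the same: the closest known sound wins.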
Today, the idea gains relevance from a commercial perspective as companies scramble for cost-cutting innovation, and start-ups in this space are garnering substantial investment. While listening to vibrations is well established as an underlying principle, the sensitivity of the instrument and its cost are the deciding factors when it comes to industrial applications. With the application of AI, platforms are now able to formulate their own rules and draw conclusions. This is now a segment to note in industrial maintenance, repair and overhaul (MRO), and it has the potential to help airlines save millions in repair costs through early diagnosis of issues. Considering that these sensors pick up ultrasonic vibrations as well, they are arguably more sensitive than a keen-eared human technician could ever be!
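What "formulating their own rules" can mean in practice is illustrated by a toy anomaly detector: rather than an engineer hand-coding limits, the system records many healthy cycles, learns per-band energy statistics, and flags recordings that deviate. Everything here, the frequencies, noise levels and z-score threshold, is an invented example rather than any vendor's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def band_energies(samples, n_bins=32):
    """Coarse spectral feature vector for one recording."""
    spectrum = np.abs(np.fft.rfft(samples))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bins)])

# "Training": record many cycles of a healthy machine and learn, per band,
# what normal energy looks like -- the platform derives its own limits.
t = np.linspace(0, 1, 4000, endpoint=False)
normal_runs = [
    np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(t.size)
    for _ in range(50)
]
feats = np.array([band_energies(run) for run in normal_runs])
mean, std = feats.mean(axis=0), feats.std(axis=0) + 1e-9

def is_anomalous(samples, z_limit=6.0):
    """Flag a recording whose loudest band deviates far from the learned normal."""
    z = np.abs((band_energies(samples) - mean) / std)
    return bool(z.max() > z_limit)

print(is_anomalous(normal_runs[0]))  # a known-good run
print(is_anomalous(np.sin(2 * np.pi * 60 * t)
                   + 0.5 * np.sin(2 * np.pi * 900 * t)))  # a new 900 Hz rattle
```

The appeal is that the "acceptable" envelope is learned from the machine itself; the risk, as discussed below, is that the learned envelope is only as good as the recordings it was trained on.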
On the other hand, one must wonder: how massive would this database have to be before we could call it reliable? Even in an industry like aircraft MRO, the complexity of each moving part and the range of sounds it could possibly emit pose a huge challenge. Something as simple but non-standard as a coat of paint on the tail could substantially alter the vibrations of the body without posing any risk to the machine itself. New, undisclosed improvements to existing machines may throw the database of "acceptable" sounds out of gear. A couple of false alarms could set off what one might call "engineer's hypochondria": a nagging feeling in the back of your mind that something is wrong with a machine, without being able to put a finger on it. This could lengthen repair cycle times and add the cost of checks and rechecks without adequate returns to show for it. So, are we asking for trouble by trying to listen to machines?
Like every new piece of innovation, this 'deep learning' technique depends heavily on the evolution of technology infrastructure to match the broad range of demands it will be expected to serve. Perhaps the age of driverless cars will also coincide with that of mechanic-less garages!