A new sound and vibration detection technology has been developed by scientists at Carnegie Mellon University. Sound and vibration recognition could advance context-aware computing: the research enables smart devices to figure out where they are and what people are doing around them by analysing audio from their microphones, the university announced. Smart devices can seem oblivious when they don't know where they are or what people around them are doing. The system, called Ubicoustics, adds this missing layer of context to smart-device interaction. It allows a smart speaker to know its location, and a smart sensor to recognize whether you are in a tunnel or on the open road.
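The basic pipeline such a system implies, turning a short microphone clip into a context label, can be illustrated with a minimal sketch. This is not the CMU implementation: the label set, the one-second log-mel-spectrogram input, and the pretrained Keras-style `model` are all assumptions for demonstration.

```python
import numpy as np
import librosa  # audio loading and feature extraction

# Hypothetical context labels; the real system recognizes far more.
LABELS = ["kitchen", "bathroom", "workshop", "street", "office"]

def classify_clip(path, model, sr=16000):
    """Classify a short audio clip into a context/activity label."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    # Log-mel spectrograms are a common input representation
    # for deep audio-event classifiers.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    logmel = librosa.power_to_db(mel)
    # Normalize and add the batch/channel dims a CNN expects.
    x = (logmel - logmel.mean()) / (logmel.std() + 1e-6)
    x = x[np.newaxis, ..., np.newaxis]
    probs = model.predict(x)[0]  # `model` assumed pretrained
    return LABELS[int(np.argmax(probs))]
```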

According to the researchers, this environmental awareness can be deepened by complementary methods for analysing sound and vibration. A smart speaker sitting on a kitchen countertop has no way of knowing that it is in a kitchen, said Chris Harrison. The first implementation of the system uses a device's built-in microphone to perform sound-based activity recognition. What makes the approach appealing is how the team trained it. Gierad Laput, a PhD student, said the main idea is to leverage professional sound-effect libraries used in the entertainment industry. They are clean, properly labelled, well organized and diverse. Additionally, the researchers can transform and project them into hundreds of different variations, creating a huge volume of data well suited to training deep-learning models.
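Projecting clean sound effects into many variants, as Laput describes, is standard audio data augmentation. The sketch below shows one plausible version of that step; the specific transforms, parameter ranges, and the clip filename `door_knock.wav` are illustrative assumptions, not the paper's recipe.

```python
import numpy as np
import librosa

def augment(y, sr, rng):
    """Produce one randomized variant of a clean sound-effect clip."""
    # Randomly shift pitch by up to +/- 2 semitones.
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=rng.uniform(-2, 2))
    # Randomly stretch or compress time by up to 20%.
    y = librosa.effects.time_stretch(y, rate=rng.uniform(0.8, 1.2))
    # Vary loudness and mix in a little background noise.
    y = y * rng.uniform(0.5, 1.5)
    y = y + rng.normal(0, 0.005, size=y.shape)
    return y

# Expand each labelled library clip into many training examples.
rng = np.random.default_rng(0)
y, sr = librosa.load("door_knock.wav", sr=16000)  # hypothetical clip
variants = [augment(y, sr, rng) for _ in range(100)]
```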

A companion system uses an energy-efficient laser and reflective tags to sense vibration, detecting whether an appliance is switched on or off, or whether a chair or other object has been moved. The tags attached to objects work without electricity, so a single laser can monitor several things at once across a room, and objects in other rooms can also be monitored when a line of sight is available. The plug-and-play system is designed to work in a range of settings: it might notify the user when someone knocks on the door, or advance to the next step in a recipe when it senses the corresponding activity. The research is still in development, but the team anticipates devices, and eventually robots, that can hear what the user is doing and, based on that, hide or offer assistance.
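Deciding whether a tagged object is running typically reduces to checking whether its measured vibration contains energy above an ambient noise floor. Below is a minimal sketch of that decision step under assumed values; the threshold, sample format, and synthetic test signals are illustrative, not taken from the CMU system.

```python
import numpy as np

def is_running(vibration, noise_floor=1e-5):
    """Decide if an object is active from a window of vibration samples.

    `vibration` is a 1-D array of displacement readings reflected back
    from the object's tag; a running appliance shows sustained
    oscillation well above the ambient noise floor.
    """
    # Remove the DC offset, then compare mean signal power to the floor.
    v = vibration - vibration.mean()
    power = np.mean(v ** 2)
    return power > noise_floor

# Example with synthetic data: a 60 Hz appliance hum versus near-silence.
t = np.linspace(0, 1, 4000)
print(is_running(0.01 * np.sin(2 * np.pi * 60 * t)))   # True
print(is_running(1e-4 * np.random.randn(t.size)))      # False
```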
