Hopefully people more clued-up than I am will respond, but as I understand it, no filtering is critical beyond a low-pass filter to prevent aliasing. The aim would be to capture a recording using a suitable technique, e.g. a stethoscope-type sensor (there was a discussion on this recently here: Sensing 5 dB and lower sounds) or any other microphone or sensor that will capture the sound or vibrations of interest. Breathing and coughs would have to be differentiated and used as separate training groups, I think, or detected automatically (coughs being louder, for instance).
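To illustrate the "coughs being louder" idea, here is a rough Python sketch that labels audio frames by short-time RMS energy. The frame length and threshold are illustrative assumptions, not tuned values, and would need adjusting against real recordings.

```python
import math

def short_time_rms(samples, frame_len=1024):
    """Return one RMS value per non-overlapping frame of the signal."""
    rms = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms.append((sum(x * x for x in frame) / frame_len) ** 0.5)
    return rms

def label_frames(rms_values, cough_threshold=0.3):
    """Label each frame 'cough' if its RMS exceeds the threshold, else 'breath'."""
    return ["cough" if r > cough_threshold else "breath" for r in rms_values]

# Synthetic test signal: quiet "breathing" followed by a loud burst.
quiet = [0.05 * math.sin(2 * math.pi * 100 * t / 16000) for t in range(4096)]
loud = [0.8 * math.sin(2 * math.pi * 300 * t / 16000) for t in range(2048)]
print(label_frames(short_time_rms(quiet + loud)))
# → ['breath', 'breath', 'breath', 'breath', 'cough', 'cough']
```

In practice one would segment on runs of consecutive "cough" frames rather than labelling frames individually, but the principle is the same.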
Then, as I understand it, the recording is reduced by 'feature extraction'. A typical example is LPC (linear predictive coding), which essentially finds a way to reproduce the sound using things like a noise source and filters. I'm not knowledgeable on it in any detail, but a whole range of audio codecs work in a similar way, GSM's original codec included; it is always surprising how little data is needed to recreate sounds. The 'adjustments', or coefficients, which allow the sound to be reproduced are then used to train the model (hundreds or thousands of example files from the same and from different patients). Any filtering, say to remove 50 or 60 Hz mains hum or anything else not expected to be of interest, would be done digitally prior to the feature extraction.
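As a concrete sketch of what the LPC step boils down to, here is a textbook autocorrelation-method implementation (the Levinson-Durbin recursion) in plain Python. This is not claimed to be what MATLAB or any codec does internally verbatim, and the test signal and order are purely illustrative.

```python
# Rough LPC sketch: the returned coefficients a[1..p] predict each
# sample from the previous p samples:
#   x[n] ~ a[1]*x[n-1] + ... + a[p]*x[n-p]
# One such coefficient vector per short frame of audio would form the
# 'features' fed to the classifier.

def autocorrelation(x, max_lag):
    """Autocorrelation r[0..max_lag] of an (already filtered) frame."""
    return [sum(x[n] * x[n + lag] for n in range(len(x) - lag))
            for lag in range(max_lag + 1)]

def lpc(x, order):
    """Levinson-Durbin recursion; returns 'order' predictor coefficients."""
    r = autocorrelation(x, order)
    a = [0.0] * (order + 1)   # a[0] is unused
    err = r[0]
    for i in range(1, order + 1):
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= 1.0 - k * k
    return a[1:]

# First-order example: the decay x[n] = 0.9 * x[n-1] is recovered.
decay = [0.9 ** n for n in range(200)]
print(lpc(decay, 1))   # close to [0.9]
```

Any 50/60 Hz notch filtering would be applied to the samples before this step, as described above.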
MATLAB is good for this (i.e. for trying things out before coding it up): there is a 'Deep Network Designer' and an example training set consisting of LPC coefficients from audio. In the MATLAB example, the audio is speech of a vowel sound. I've played with it briefly but haven't got around to generating or training with my own files for any purpose yet (though I plan to, to get up to speed with at least some of it).
For a scalable solution, some way to upload recordings to a cloud service would probably be needed, running the data preparation and deep/machine learning there, then re-attempting with different data preparation, different feature extraction methods and different classification designs (MATLAB provides a palette of these, for instance) until good results are obtained on separate test data.
I would say a very high-sensitivity microphone can be used, and with the help of ML you can train on a dataset of breathing from healthy and sick people (the dataset must be large and accurate, so unless someone else has already collected one, you will have to do it yourself). Based on minute differences in the sound and pattern of breathing, it may then be possible to test for covid-19.
A great idea overall!
Coughing is a highly plosive sound. It can probably be detected by the initial volume slope.
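That suggestion could be prototyped with something like the following minimal Python sketch, which flags a steep rise in short-time energy between consecutive frames. The frame size and rise ratio are guesses that would need tuning on real cough recordings.

```python
# Minimal plosive-onset sketch: a cough's initial burst shows up as a
# large jump in energy from one frame to the next.

def frame_energy(samples, frame_len=256):
    """Energy of each non-overlapping frame."""
    return [sum(x * x for x in samples[i:i + frame_len])
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def plosive_onsets(samples, frame_len=256, rise_ratio=4.0, floor=1e-6):
    """Return frame indices where energy jumps by at least rise_ratio."""
    e = frame_energy(samples, frame_len)
    return [i for i in range(1, len(e))
            if e[i] / max(e[i - 1], floor) >= rise_ratio]

# Quiet signal followed by a sudden loud burst: onset at frame 4.
print(plosive_onsets([0.01] * 1024 + [0.5] * 512))   # → [4]
```

A real detector would also want a decay check (coughs die away quickly), but the slope of the initial attack is the distinguishing feature.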
Thanks for all the very helpful replies.
When starting off, I like to aim at developing a low cost product/system that just works.
In this case I think it is not that straightforward, as a mobile phone or tablet could readily be used as a substitute to achieve the same goal, assuming the software can be developed. In my view, phones/tablets are probably more useful for cough analysis, since they can take a one-off sample of a cough, while for breathing assessment a phone/tablet might not be that practicable when someone is sleeping, for example, as that requires more continuous monitoring (although if a person is sitting up it would probably work).
If it is going to be a new bit of kit, which I quite like, then what components could be added (besides a mic and amplifier) to our new sensor breakout board that would make it easier and more reliable for the software to come up with the correct answers?
It is the hardware side that I always struggle with. So, could a decent circuit design, specific to this application, make life easier? What components could be added, etc.? This is the first question to look at, in my opinion.
A great discussion. I wonder if Alexa, Google and Siri could be brought in to do this? After some changes and re-training perhaps they could listen out for coughs, analyse them and make suggestions to the owner as appropriate?
Yes indeed. That was one idea. I certainly felt that one would at least need to start by capturing data, and to do so you would need something that is reliable and cannot be influenced by noise, etc., which would corrupt the data set.
Following on from the valuable feedback I received on my last acoustics-related question, on developing a high-pass filter, I wanted to open up this highly topical acoustics question.
As we probably all know by now, the covid-19 coronavirus has some fairly specific symptoms, one of which is a dry cough and/or breathing difficulties.
As I had acoustics projects on the brain, I wondered what techniques and sensors could be used to detect a cough, and whether, with the use of machine learning and/or AI, there is a means of differentiating different coughs (as well as breathing patterns). Maybe this could then eventually form some type of data set for automated monitoring.
I did a quick online search and this academic paper (published 2018) popped up on my search list, which is a rather useful reference: https://www.hindawi.com/journals/js/2018/9845321/
So, I thought this could make for an interesting community related project and hence I'm putting it out there.
Where does one start, for example? Does this require a low-pass filter or a band-pass filter? What other acoustic techniques could be used, etc.?
Similarly with breathing: what techniques and sensors would be best suited to monitoring the speed and depth of breathing?
Any thoughts and suggestions will be welcomed.