So that's how they do it!
(1) strenuously avoid words with labial consonants, which are the only "visible" consonants, but
(2) if they must, pronounce labial consonants (mostly word-initial ones, since later consonants can be slurred more) by making substitutions, such as:
(according to one site - different people seem to have different ideas about which substitutions are appropriate)
/g/ for /b/
/θ/ for /f/ and /v/
/n/ for /m/
/kl/ for /p/
/ku/ for /kw/ (e.g. in quality)
/u/ for /w/
(3) prime the audience to expect a problematic word by saying it beforehand in one's original voice, then having the dummy say the word with the problematic consonants substituted.
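The substitution table above can be sketched in a few lines of Python. This is just an illustration, not a claim about how ventriloquists actually encode the rules: the table itself comes from the list above, the word-initial restriction from point (2), and everything else (function name, phoneme spellings, examples) is my own invention.

```python
# Substitution table from the list above: each labial (or labialized)
# onset maps to a visually similar non-labial replacement.
SUBSTITUTIONS = {
    "b": "g",
    "f": "θ",
    "v": "θ",
    "p": "kl",
    "m": "n",
    "kw": "ku",   # e.g. the onset of "quality"
    "w": "u",
}

def ventriloquize(phonemes):
    """Replace a word-initial labial with its non-labial substitute.

    Takes a word as a list of phoneme strings; only the first phoneme is
    touched, per the "mostly initial" rule in point (2) above.
    """
    if phonemes and phonemes[0] in SUBSTITUTIONS:
        return [SUBSTITUTIONS[phonemes[0]]] + list(phonemes[1:])
    return list(phonemes)

# "mother" comes out starting with /n/; "dog" is untouched.
print(ventriloquize(["m", "ʌ", "ð", "ɚ"]))
print(ventriloquize(["d", "ɒ", "g"]))
```

The point of keeping it a lookup table is that it matches how the trick is described: a small, fixed set of confusable pairs, relied on because the listener's lexicon does the error correction.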
Given the success of ventriloquism over the centuries (it's been around since the Greeks, and the Zulus are supposed to have (had?) it too), its techniques would appear to be good confirmation that there's a lot of room in speech for errors that nevertheless make no difference to the comprehensibility of the speech stream. Fitting to the lexicon, with the benefit of context, is enough to smooth over things like wrong consonants.
throwing one's voice
I wonder how people speech-read with any success, since they would be unable to observe featural differences such as [+/- voice]. I'll have to get a book out on it.
All these issues must be related to the issue of the entropy of natural languages. As I understand it, a language with a large amount of entropy could have basically no phonotactics at all - any letter (considering the written language) could follow any other letter. So mishearing a single consonant could be particularly bad: with no constraints, the misheard sequence is just as legal as the intended one, so nothing signals that an error occurred. But with less entropy, there's more room for mistakes. I wonder whether different languages differ in their entropy values. What does having a different phoneme inventory and system of phonotactics do to the entropy value of a language? Here's some interesting linguistics research that's been carried out regarding the entropy of natural language: link. I still can't find any estimates of entropy for any natural language besides English, however.
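The entropy idea is concrete enough to compute. Here's a rough sketch, assuming we measure written letters on a toy text: per-letter Shannon entropy versus conditional (bigram) entropy, i.e. how uncertain the next letter is once you've seen the current one. A serious estimate (Shannon got down to roughly 1 bit per letter for English) needs huge corpora and longer contexts; this only demonstrates the computation and the direction of the effect - context lowers per-letter entropy, and that slack is the "room for mistakes" above.

```python
import math
from collections import Counter

def unigram_entropy(text):
    """Shannon entropy (bits) of single letters, ignoring context."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def conditional_entropy(text):
    """H(next letter | current letter), estimated from bigram counts."""
    pairs = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    n = len(text) - 1
    h = 0.0
    for (a, b), count in pairs.items():
        p_pair = count / n          # P(a, b)
        p_cond = count / firsts[a]  # P(b | a)
        h -= p_pair * math.log2(p_cond)
    return h

sample = "the quick brown fox jumps over the lazy dog " * 20
print(unigram_entropy(sample))      # letters in isolation: more bits
print(conditional_entropy(sample))  # given the previous letter: fewer bits
```

The gap between the two numbers is exactly the redundancy that phonotactics (or here, English spelling patterns) buys you; a hypothetical maximum-entropy language would show no gap at all, and a single misheard segment would be unrecoverable.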
I wonder, also, how well a speech recognition system would pick up on these ventriloquistic substitutions. Probably too well - it would hear the substituted consonants faithfully and then be unable to make any sense of them.