I'm a software developer currently working on an automated LDS captioning system. I have approval from our stake to test the software in our ward, and I'm hoping to have the system up and running in the next few weeks. If it goes well, I'll certainly share it with anyone interested.
There's actually a very good chance that automated captioning could work well in a sacrament meeting setting: the sound could be piped directly into the captioning device (e.g. a tablet), so there would be very little background noise, and a custom language model could be built around LDS language.
Speech recognition relies on statistical language models, and those models are built from a corpus: a body of text intended to represent typical speech patterns. The problem is that LDS language is quite distinctive. Not only does the Book of Mormon contain a massive set of unique proper nouns, but we also have unique speech patterns. For example, the words "bear" and "testimony" rarely appear together in everyday speech, so a generic language model would find it statistically unlikely for them to be near each other, while an LDS-specific model would find it very likely. If only there were a massive corpus of LDS language...
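To make that concrete, here's a minimal sketch (in Swift, since the plan is to build on iOS) of the bigram counting at the heart of a simple statistical language model. It's illustrative only; real recognizers use higher-order n-grams with smoothing, and the function names here are just ones I made up:

```swift
// Minimal bigram-model sketch: counts word pairs in a corpus so we can
// estimate P(next word | previous word). Real recognizers use higher-order
// n-grams with smoothing; this just illustrates the statistics involved.
import Foundation

func bigramCounts(in corpus: [String]) -> [String: [String: Int]] {
    var counts: [String: [String: Int]] = [:]
    for document in corpus {
        let words = document.lowercased()
            .components(separatedBy: .whitespacesAndNewlines)
            .filter { !$0.isEmpty }
        for (previous, next) in zip(words, words.dropFirst()) {
            counts[previous, default: [:]][next, default: 0] += 1
        }
    }
    return counts
}

/// Maximum-likelihood estimate of P(next | previous) from raw counts.
func probability(of next: String, given previous: String,
                 in counts: [String: [String: Int]]) -> Double {
    guard let following = counts[previous], !following.isEmpty else { return 0 }
    let total = following.values.reduce(0, +)
    return Double(following[next] ?? 0) / Double(total)
}
```

Fed an LDS corpus, `probability(of: "testimony", given: "bear", in: model)` would come out far higher than it would from a generic English corpus, which is exactly the bias the recognizer needs.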
I believe a very good LDS language model can be built from the thousands of General Conference talks that are freely available, along with the standard works. The plan is to develop on iOS: you would plug the house sound directly into the tablet and then hook the tablet up to a TV, which in most chapels could be placed opposite the sacrament table (down low, so that only the first couple of rows can see the captions and they don't become a distraction for everyone else).
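On the capture side, here's a rough sketch of how the tablet could turn incoming audio into captions using Apple's Speech framework, assuming the house sound reaches the tablet as its audio input (e.g. through a line-in adapter). It uses Apple's stock recognizer rather than a custom LDS model, and it assumes speech-recognition authorization has already been granted via `SFSpeechRecognizer.requestAuthorization`:

```swift
import AVFoundation
import Speech

final class CaptionEngine {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private let audioEngine = AVAudioEngine()

    // onCaption receives the running transcription to display on the TV.
    func start(onCaption: @escaping (String) -> Void) throws {
        request.shouldReportPartialResults = true  // update captions as words arrive

        // Tap the device's audio input (the house sound, via line-in) and
        // feed each buffer to the recognition request.
        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            self.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        recognizer.recognitionTask(with: request) { result, _ in
            if let result = result {
                onCaption(result.bestTranscription.formattedString)
            }
        }
    }
}
```

The captioning view itself would just render that string on the TV; swapping in the LDS-specific language model is the piece the General Conference corpus makes possible.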
I'll keep you updated!