Visual speech information is known to improve the accuracy and noise robustness of automatic speech recognizers. However, to date, all audio-visual ASR work has concentrated on "visually clean" data, with limited variation in the speaker's frontal pose, lighting, and background. In this paper, we investigate audio-visual ASR in two practical environments that present significant challenges to robust visual processing: (a) typical offices, where data are recorded by means of a portable PC equipped with an inexpensive web camera, and (b) automobiles, with data collected at three approximate driving speeds. The performance of all components of a state-of-the-art audio-visual ASR system is reported on these two sets and benchmarked against "visually clean" data recorded in a studio-like environment. Not surprisingly, both audio-only and visual-only ASR degrade in these environments, more than doubling their respective word error rates. Nevertheless, visual speech remains beneficial to ASR.