- Ph.D., The University of Pennsylvania, Electrical Engineering, 1970
During his 32-year career at AT&T Bell Labs (after 1998, AT&T Labs), Allen specialized in nonlinear cochlear modeling, auditory and cochlear speech processing, and speech perception. While at AT&T, Allen wrote more than 50 sole-authored journal articles on hearing, cochlear modeling, signal processing, room acoustics, speech perception, and the articulation index (AI; a.k.a. the speech transmission index (STI) and the speech intelligibility index (SII)).
From 1982 to 1987, Allen had primary responsibility for the development of the first commercial multiband wide dynamic range compression (WDRC) hearing aid, later sold as the ReSound hearing aid. During these five years he worked closely with clinical audiologists, speech and hearing scientists, and several hearing aid manufacturers (Starkey, Phonak, Etymotic), who subsequently funded Allen's work. During this period Allen wrote the first DSP code and developed the first fitting system, based on loudness growth in 1/2-octave bands (LGOB), which ReSound used as their commercial fitting system for many years. He was also responsible for the first analog compression circuits used in the primary product, which AT&T produced for ReSound at its Allentown, PA silicon foundry.
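The core idea of multiband WDRC — split the signal into frequency bands, then apply more gain reduction to loud bands than to quiet ones — can be sketched as below. This is a toy illustration only: the band edges, compression threshold, and ratio are arbitrary assumptions, not the parameters of the Bell Labs/ReSound design, and the frame-by-frame FFT band split stands in for the analog filter banks of the original product.

```python
import numpy as np

def wdrc_gain_db(level_db, threshold_db=-40.0, ratio=2.0):
    """Static compression curve: unity gain below threshold_db,
    ratio:1 compression above it.  Returns gain in dB."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

def multiband_wdrc(x, fs, band_edges=(250.0, 1000.0, 4000.0),
                   frame=256, threshold_db=-40.0, ratio=2.0):
    """Toy multiband compressor: each frame is FFT-split into bands at
    band_edges (Hz); each band's level sets its gain; then resynthesize.
    All parameter defaults are illustrative assumptions."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % frame          # drop the ragged tail
    y = np.zeros(n)
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    edges = [0.0, *band_edges, fs / 2.0 + 1.0]
    for start in range(0, n, frame):
        X = np.fft.rfft(x[start:start + frame])
        for lo, hi in zip(edges[:-1], edges[1:]):
            idx = (freqs >= lo) & (freqs < hi)
            if not idx.any():
                continue
            # crude per-band level estimate (dB re full scale)
            rms = np.sqrt(np.mean(np.abs(X[idx]) ** 2)) / frame + 1e-12
            level_db = 20.0 * np.log10(rms)
            X[idx] *= 10.0 ** (wdrc_gain_db(level_db, threshold_db, ratio) / 20.0)
        y[start:start + frame] = np.fft.irfft(X, frame)
    return y
```

A loud band is attenuated toward the threshold while a quiet band passes at unity gain — the behavior that lets a hearing-impaired listener with loudness recruitment hear soft sounds without loud sounds becoming painful. A real fitting system such as LGOB would set per-band thresholds and ratios from the listener's measured loudness growth.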
Starting in 1986, Allen developed one of the first systems for noninvasively evaluating cochlear hearing using distortion product otoacoustic emissions (DPOAEs), known as the CubeDis measurement system, which for several years (1988-1995) was sold commercially by Etymotic Research, and after that by Mimosa Acoustics, for which Allen serves as Chief Technology Officer (CTO).
From 1998 to 2003, while at AT&T Labs, a spin-off from Bell Labs, Allen worked on loudness and consonant perception, a problem closely related to AI theory.
In August 2003 he joined the ECE faculty of the University of Illinois at Urbana-Champaign, where he teaches and works with his students on noninvasive objective diagnostic testing of cochlear and middle-ear function, based on acoustic reflectance (a.k.a. impedance) methods for the middle ear; auditory psychophysics; speech processing for hearing aid applications (noise reduction and multiband compression); speech and music coding (bit-rate reduction); speech perception (models of loudness and masking); and hearing aid transducer modeling.
He is most actively working on the theory and practice of human speech recognition, for both normal-hearing and hearing-impaired listeners, with the goals of improving hearing aid signal processing as well as the robustness of automatic speech recognition in the presence of noise and filtering. From 2003 to the present, Allen has had a number of students active in various projects on speech perception, middle-ear models, and hearing aid signal processing (Allen's Research Group). In the last 10 years Allen and his students have collected several large databases of speech perception in noise, by normal and impaired human subjects. This work has resulted in many publications on human speech perception. Since 2005 Allen has also studied reading disabilities in young children, in collaboration with Prof. Cynthia Johnson of the UIUC Speech and Hearing Science Department (and many of her students).
A third major research topic is the diagnosis of middle-ear disorders, based on acoustic impedance measurements. This work is well documented in the publications from 1974-2015 (http://auditorymodels.web.engr.illinois.edu//index.php/Main/Publications).
Allen has successfully developed several complex and innovative research programs, first at Bell Labs in 1975 (cochlear modeling), followed by the development of the Bell Labs multiband compression hearing aid (1985-88) (now sold as GN ReSound, en.wikipedia.org/wiki/ReSound), followed by his speech perception research at UIUC, begun in 2003 with his group of highly productive students. This research has provided many deep insights into difficult, significant, and challenging problems of speech perception. Specifically, Allen and his students have identified the basic features of many plosive and fricative speech sounds, which has allowed them to manipulate the perception of speech with surgical precision.
Allen is well-versed in cochlear modeling, auditory neurophysiology, speech perception, speech processing, psychophysics, audiology as well as musical, speech and middle ear acoustics, acoustic impedance and reflectance, analog and digital signal processing, and clinical audiology. Allen has more than 20 US patents on hearing aids, signal processing and middle ear measurement diagnostics.
He teaches courses in mathematical physics (http://auditorymodels.web.engr.illinois.edu/index.php/Courses/ECE493-2013AdvEngMath), speech processing (http://auditorymodels.web.engr.illinois.edu/index.php/Courses/ECE537-2014SpeechProcessing), analog and digital signal processing, clinical audiology, electroacoustics, and transducer design (http://auditorymodels.web.engr.illinois.edu/index.php/Courses/ECE403-2013AudioEngineering). His special interest is speech perception, which brings together many of these fields in a relevant way. Additionally, Allen has co-taught ECE-545 (Advanced Physical Acoustics) (https://courses.illinois.edu/schedule/2013/fall/ECE/545).
This coming semester he is introducing a new math course for undergraduates, "ECE 298 JA - Concepts in Engineering Math" (http://www.ece.illinois.edu/academics/courses/profile/ECE298JA-120158, http://auditorymodels.web.engr.illinois.edu/index.php/Courses/ECE298JA-F15).
Since the early 1990s, Allen has been a visiting scientist in the Departments of Otolaryngology of Columbia University, the City University of New York, and the University of Calgary, and was an Osher Fellow at the Exploratorium Museum, San Francisco (www.exploratorium.edu/). He has been very active in the IEEE and the ASA, running both major conferences (IEEE ICASSP 1988, New York) and many small workshops.
- Music perception
- Musical Acoustics (guitars, fiddles, some wind instruments)
- Evanescent wave propagation in horns
- Transducer physics and modeling (Loudspeakers)
- Wave propagation in inhomogeneous media. Acoustic horns.
- Models of the outer hair cells of the cochlea. Biophysical model of hair cell membrane mechanical properties, as a function of membrane voltage. (with Paul Fahey, Univ. Scranton, physics dept.)
- Modeling the middle ear in the time domain, with wave models. How does the eardrum transform the acoustic energy and funnel it into the cochlea?
- Robust human speech recognition
- Speech and music coding
- Articulation index modeling of confusion matrix measurements, of consonant vowel sounds, in noise
- Speech processing for hearing aid applications: Special signal processing techniques for removing reverberation and noise; multiband compression for loudness recruitment abatement;
- Auditory psychophysics: Intensity just noticeable difference (JND), speech psychophysics; confusion matrices; information processing by the auditory system
- Human speech recognition: Reverse engineering, measuring and modeling speech cues used by the human auditory system, when recognizing speech in large amounts of noise and with filtering; Articulation index; confusion matrices; information processing by the auditory system with speech as the signal
- Noninvasive diagnostic testing of the cochlear and middle ear: Otoacoustic emissions measured in the ear canal; noninvasive diagnostics; distortion product measurements; SFOAE; impedance; power reflectance of the ear canal
- Cochlear modeling: Mathematical models of cochlear function, including basilar membrane motion, biophysical models of outer hair cells; models of the micromechanics, including the tectorial membrane and cilia motions.
- Adaptive signal processing
- Analog integrated circuits
- Antennas for communication and wireless sensing
- Biomedical Imaging, Bioengineering, and Acoustics
- Computed imaging systems
- Electromagnetic theory
- Embedded, real-time, and hybrid systems
- Information theory
- Microwave devices and circuits
- Nonlinear systems and control
- Radio and optical wave propagation
- Signal detection and estimation
- Speech processing
- Speech recognition and processing
- System modeling and measurement
Books Authored or Co-Authored (Original Editions)
- Allen, J. B. (2005); "Articulation and Intelligibility," Morgan and Claypool Inc., LaPorte, CO 80535. Peer-reviewed monograph, ISBN 1598290088; 130 pages of original material, including a literature survey of speech perception work back to 1900 and a model of how the auditory system processes speech.
Selected Articles in Journals
- Riya Singh and Jont Allen (2012); "Sources of stop consonant errors in low-noise environments," JASA, 131(4), Apr, p. 3051-3068.
- Yoon, Y., Allen, J.B. and Gooler, D. (2012); "Relationship between Consonant Recognition in Noise and Hearing Threshold," J. of Speech, Language and Hearing Research, doi: 10.1044/1092-4388(2011/10-0239), April 2012.
- Daniel Rasetshwane, Stephen Neely, Jont Allen, and Christopher Shera (2012); "Reflectance of acoustic horns and solution of the inverse problem," J. Acoust. Soc. Am., Vol 131(3), March, pp 1863-1873.
- Feipeng Li and Jont B. Allen (2011); "Manipulation of Consonants in Natural Speech," IEEE Trans. Audio, Speech and Language Processing, pp. 496-504.
- Kapoor, Abhinauv and Allen, Jont B. "Perceptual Effects of Plosive Feature Modification," JASA v131(1) Jan. pp 478–491 (2012)
- Lobdell, Bryce, Allen, Jont and Hasegawa-Johnson, Mark; "Intelligibility predictors and neural representation of speech," Speech Communication 53, pp. 185-194 (2011).
- Weece, R. and Allen, J. B., "A clinical method for calibration of bone conduction transducers to measure the mastoid impedance," Hearing Research, 263:216--223 (2010)
- Parent, Pierre and Allen, Jont; "Wave model of the human tympanic membrane," (2010) Hearing Research 263:152--167;
- Li, Feipeng and Menon, Anjali and Allen, Jont B., A psychoacoustic method to find the perceptual cues of stop consonants in natural speech J. Acoust. Soc. Am. Apr 127(4) pp 2599-2610, (2010).
- Withnell, R.H., Parent, P., Jeng, P.S. and Allen, J.B.; "Using wideband reflectance to measure the impedance of the middle ear," The Hearing Journal, (2009), 62(10), pp 36-41.
- Feipeng Li and Jont Allen. Speech perception and cochlear signal processing. IEEE Signal Processing Magazine, 26(4), pp 73-77 July 2009.
- Feipeng Li and Jont B. Allen. Additivity law of frequency integration for consonant identification in white noise. J. Acoust. Soc. Am. 126(1) pp 347-353, Aug 2009
- Trevino, A., Coleman, T., Allen, J.; "A Dynamical Point Process Model of Auditory Nerve Spiking in Response to Complex Sounds," Journal of Computational Neuroscience, Springer, March 2009.
- S. A. Phatak, Y. Yoon, D. M. Gooler, and J. B. Allen. Consonant loss profiles for hearing impaired listeners. J. Acoust. Soc. Am., 126(5), pp 2683-2694, Nov. 2009.
- RH Withnell, PS Jeng, Kelly Waldvogel, Kari Morgenstein, and Jont B. Allen. An in-situ calibration for hearing thresholds. J. Acoust. Soc. Am., 125(3), 1605-11, March (2009).
- PS Jeng, Jont Allen, JA Miller, and Harry Levitt., "Wideband power reflectance and power transmittance as tools for assessing middle-ear function," Perspectives on Hearing and Hearing Disorders in Childhood, 18(2):44-57, 2008. ASHA Journal, (http://journals.asha.org/perspectives/terms.dtl).
- S. Phatak, Andrew Lovitt, and Jont B. Allen., "Consonant confusions in white noise," J. Acoust. Soc. Am., 124(2):1220-33, 2008.
- Marion S. Regnier and Jont B. Allen., "A method to identify noise-robust perceptual features: application for consonant /t/." J. Acoust. Soc. Am., 123(5):2801- 2814, 2008.
- Phatak, S. and Allen, J.B., "Consonant and vowel confusions in speech-weighted noise," J. of the Acoust. Soc. Am., 121(4):2312-26, April 2007.
- Sen, D. and Allen, Jont B., "Functionality of cochlear micromechanics--as elucidated by the upward spread of masking and two tone suppression," Acoustics Australia, 34, 43-51, 2006.
- Allen, Jont B., Jeng, Patricia S. and Levitt, Harry; "Evaluating Human Middle Ear Function via an Acoustic Power Assessment," J. Rehabil. Res. Dev., July, vol 42(4), pp 63-78 (2005).
- J. B. Allen, "Harvey Fletcher's role in the creation of communication acoustics," Journal of the Acoustical Society of America, vol. 99, no. 4, pp. 1825-1839, April 1996.
- J. B. Allen, "How do humans process and recognize speech?" IEEE Transactions on Speech and Audio, vol. 2, no. 4, pp. 567-577, October 1994.
- J. B. Allen and P. F. Fahey, "Using acoustic distortion products to measure the cochlear amplifier gain on the basilar membrane," Journal of the Acoustical Society of America, vol. 92, no. 1, pp. 178-188, July 1992.
- S. Puria and J. B. Allen, "A parametric study of cochlear input impedance," Journal of the Acoustical Society of America, vol. 89, no. 1, pp. 287-309, January 1991.
- J. B. Allen and M. M. Sondhi, "Cochlear macromechanics: Time-domain solutions," Journal of the Acoustical Society of America, vol. 66, no. 1, pp. 120-132, July 1979.
- J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," Journal of the Acoustical Society of America, vol. 65, pp. 943-950, 1979.
- S. T. Neely and J. B. Allen, "Invertability of a room impulse response," Journal of the Acoustical Society of America, vol. 66, pp. 165-169, 1979.
- J. B. Allen and L. R. Rabiner, "A unified approach to short-time Fourier analysis, synthesis," Proceedings of the IEEE, vol. 65, no. 11, pp. 1558-1564, November 1977.
- J. B. Allen. "Cochlear micromechanics - A mechanism for transforming mechanical to neural tuning within the cochlea," Journal of the Acoustical Society of America, vol 62, pp. 930-939, 1977.
- J. B. Allen, "Short-time spectral analysis, synthesis, with modifications, by discrete Fourier transform," (This article was the basis for IEEE Fellow Award), IEEE Transactions on Acoustical Speech and Signal Processing, vol. 25, pp. 235-238, 1977.
- Member of the editorial board of: EURASIP Journal on Audio, Speech and Music Processing http://www.hindawi.com/journals/asmp/editors.html
- IEEE speech and language technical committee (SLTC) representative in Speech Perception (2004-present)
- Acoustical Soc. of Am.: Active member on Publication Policy (2003-present)
- Acoustical Soc. of Am : Active member of Archives and History committee (2004-present)
- ICASSP General chair 1988 New York City
- IEEE Life Fellow (2009)
- Phonak Faculty Award (2007, 2008)
- Visiting Professor, King's College London, Aug. 2007
- IBM Faculty Award (2005)
- IEEE Third Millennium Award (2000)
- SoundID Scientific Advisory Board (California 1995-2000)
- 1991, International Distinguished Lecturer for the IEEE Signal Processing Society
- 1990 Osher Fellow, Exploratorium Museum San Francisco
- 1986-1995 ReSound Hearing Aid Scientific Advisory Board member (ReSound was the first company to introduce wide dynamic range multiband compression into the hearing aid market)
- 1986, IEEE Acoust., Speech, Signal Processing (ASSP) Meritorious Service Award
- 1985, Fellow, IEEE
- 1981, Fellow, Acoustical Society of America (ASA)
- International Symposium on Middle Ear Mechanics in Research and Otology (MEMRO), Kyungpook National University (June 28, 2012)
Public Service Honors
- IEEE ASSP Society Award 1986 (1983-1985)
- ECE 298 - Concepts in Engineering Math
- ECE 310 - Digital Signal Processing
- ECE 311 - Digital Signal Processing Lab
- ECE 403 - Audio Engineering
- ECE 420 - Embedded DSP Laboratory
- ECE 493 - Advanced Engineering Math
- ECE 537 - Speech Processing Fundamentals
- MATH 487 - Advanced Engineering Math