Publications

A modular architecture for articulatory synthesis from gestural specification

Abstract

This paper proposes a modular architecture for articulatory synthesis from a gestural specification, comprising relatively simple models for the vocal tract, the glottis, aero-acoustics, and articulatory control. The vocal tract module combines a statistical midsagittal articulatory model, derived by factor analysis of air-tissue boundaries in real-time magnetic resonance imaging data, with an αβ model for converting the midsagittal section to an area function specification. The aero-acoustics and glottis models were based on a software implementation of classic work by Maeda. Inspired by the task dynamics model, the articulatory control module uses dynamical systems that implement articulatory gestures to animate the statistical articulatory model. Results on synthesizing vowel-consonant-vowel sequences with plosive consonants, using models that were built on data from, and simulate the behavior of, two …
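
The αβ conversion and the gesture dynamical systems mentioned in the abstract can be sketched as follows. This is a minimal Python illustration, not the paper's implementation: it assumes the widely used power law A(x) = α(x)·d(x)^β(x) for converting midsagittal distance to cross-sectional area, and a critically damped second-order system driving each tract variable toward its target, as in the task dynamics literature. The function names and parameter values (alpha, beta, omega) are chosen here purely for illustration.

# Hypothetical sketch of two components described in the abstract:
# (1) alpha-beta midsagittal-to-area conversion, (2) a critically damped
# second-order gesture driving a tract variable toward its target.
import numpy as np

def midsagittal_to_area(d, alpha, beta):
    """Convert midsagittal distances d (cm) to areas (cm^2) via A = alpha * d**beta."""
    return alpha * np.power(d, beta)

def gesture_step(z, z_dot, target, omega, dt):
    """One Euler step of a critically damped gesture:
    z'' = -2*omega*z' - omega**2 * (z - target)."""
    z_ddot = -2.0 * omega * z_dot - omega**2 * (z - target)
    z_dot = z_dot + dt * z_ddot
    z = z + dt * z_dot
    return z, z_dot

# Example: a constriction gesture closing an aperture of 1.0 cm toward 0.0 cm
# (plosive closure), simulated for 200 ms at a 1 ms time step.
z, z_dot = 1.0, 0.0
for _ in range(200):
    z, z_dot = gesture_step(z, z_dot, target=0.0, omega=40.0, dt=0.001)

# Example: map a 44-section midsagittal distance profile to an area function
# (alpha and beta held constant here; in practice they vary along the tract).
areas = midsagittal_to_area(np.linspace(0.2, 2.0, 44), alpha=1.5, beta=1.4)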

Date
December 1, 2019
Authors
Rachel Alexander, Tanner Sorensen, Asterios Toutios, Shrikanth Narayanan
Journal
The Journal of the Acoustical Society of America
Volume
146
Issue
6
Pages
4458-4471
Publisher
AIP Publishing