## Resonant frequencies and the vocal tract length

This post is about the resonant frequencies of a tube, in the context of speech and the neutral vocal tract configuration. Two formulas are given: the first calculates the resonant frequencies when the length is known, and the second calculates the length when the frequency of a formant is known. Finally, there is a real-life example: a calculation of a speaker’s vocal tract length after measuring the formants in schwa.

The speech mechanism in vowels is described by a model that uses the physical properties of tubes. A tube is a simple apparatus that, if attached to a source of sound, can emit harmonic frequencies. When a loudspeaker is attached at one end, the tube acts as a resonator that “has an infinite number of resonances, located at frequencies given by odd-quarter wavelength” (Kent and Read 14). The resonant frequencies of a tube closed at one end are calculated with the following formula (Johnson 96):

$F_n=\frac{(2n-1)c}{4L}$

where n is a positive integer (the formant number), L is the length of the tube, and c is the speed of sound (about 35,000 cm/s).

This was very interesting to me, so I decided to experiment with the formula in the R language. The purpose was to calculate the average resonant frequencies of a vocal tract in the neutral configuration (a position of the vocal organs in which a tube without obstacles is formed from the larynx to the lips). Written in R, the formula above looks like this:

freq <- ((2*i-1)*35000)/(4*tract.len)

For a given speed of sound c = 35000, the formant number i and the tract length, we can calculate estimated formant values. As an example, we can insert L = 17.5 cm in the formula, the average length of the human vocal tract from glottis to lips (Kent and Read 15). In this case the first formant, or the first resonance frequency, occurs at 500 Hz, the second at 1500 Hz, the third at 2500 Hz, and so on. Here is the output from the R code located here:

> Resonance(17.5)
Tract length is 17.5 cm.
formant 1: 500 Hz
formant 2: 1500 Hz
formant 3: 2500 Hz
formant 4: 3500 Hz
formant 5: 4500 Hz
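
The original code is linked above; as a reference, here is a minimal sketch of what a Resonance function producing this kind of output might look like (the function name matches the call above, but the argument names and the five-formant limit are my assumptions):

```r
# Minimal sketch (assumed implementation): resonances of a tube closed at one end,
# F_n = (2n - 1) * c / (4L), with the speed of sound c = 35,000 cm/s and length in cm.
Resonance <- function(tract.len, n.formants = 5, speed = 35000) {
  cat("Tract length is ", tract.len, " cm.\n", sep = "")
  for (i in 1:n.formants) {
    freq <- ((2 * i - 1) * speed) / (4 * tract.len)
    cat("formant ", i, ": ", freq, " Hz\n", sep = "")
  }
}
```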

Of course, we can reverse the calculation: by entering a formant frequency and the number of the formant, we can calculate an estimated length:

prep <- 35000*((formant/2)-0.25)
length <- (prep/freq)
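
For completeness, here is a minimal sketch of how these two lines could be wrapped into the Length function used below (the wrapper itself is my assumption; only the two core lines come from the original):

```r
# Minimal sketch (assumed wrapper): estimate tube length from a formant frequency
# and its number, by inverting F_n = (2n - 1) * c / (4L).
Length <- function(freq, formant, speed = 35000) {
  prep <- speed * ((formant / 2) - 0.25)   # equals (2n - 1) * c / 4
  len <- prep / freq                       # L = (2n - 1) * c / (4 * F_n)
  cat("Estimated tract length is ", round(len, 2), " cm, where formant number ",
      formant, " has value of ", round(freq, 4), " Hz.\n", sep = "")
  invisible(len)
}
```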

This is the result of the Length function in the code:

> Length(1000, 1)
Estimated tract length is 8.75 cm, where formant number 1 has value of 1000 Hz.

This length corresponds to vocal tract lengths measured in infants.

To make the calculations even more interesting, we can measure the frequency of the first formant of real speakers and then “calculate” the lengths of their vocal tracts. Here is an example: we recorded a speaker and examined the sound data. Since the schwa sound is pronounced in (approximately) the neutral configuration, we measured the formants where this sound (IPA: ə) was articulated. In this case, that was near the end of the word abjured /əbˈdʒʊəd/. The first three formant values for the sample female speaker were:

| Time (s) | F1 (Hz) | F2 (Hz) | F3 (Hz) |
|----------|---------|---------|---------|
| 4.633178 | 549.304326 | 1750.098455 | 2915.885791 |

If we enter 549.3 Hz in the second formula, we get:

> Length(549.304326,1)
Estimated tract length is 15.92 cm, where formant number 1 has value of 549.3043 Hz.

This is, it seems, an acceptable value for this speaker.

The measurements and the image were obtained using Praat, a free phonetics program. The calculations and the code example were written in the R programming language.

## Physics of Speech

Once set in vibratory motion, the vocal folds create a series of movements within the vocal tract above the larynx (the rate at which the vocal folds vibrate is recognized as the fundamental frequency of the sound). An object will amplify vibrations [2] whose frequencies are close to its own natural frequencies. In speech, some frequencies are dampened and others enhanced, in accordance with the resonant properties of the tract (its cavities and tissues).

The discussion of resonance in speech production leads us to a well-known theory about the production of vowels: the source-filter theory, which postulates that the vocal folds are the source of the sound; after the sound is made, it passes through a filter shaped by the vocal tract cavities (Ladefoged, Elements 103). This filter is “frequency-selective and constantly modifies the spectral characteristics of sound sources during articulation” (Clark, Introduction 217), and it changes as articulation unfolds (218).

However, the vocal tract is not the only filter involved: after the sound leaves the vocal tract, it is further modified by the air in the “outside world” [3] and by the physical properties of the head, which “functions as a … reflecting surface … [,] a spherical baffle of about 9 cm radius” (Clark, Introduction 221).

The currently valid theory [1] of phonation is the aerodynamic myoelastic theory: the creation of sound is explained by taking into account aerodynamic forces, muscle activity, tissue elasticity and “the mechanically complex nature of the vocal fold tissue structure” (Clark, Introduction 37).

The speech mechanism in vowels can be described by a model that uses the physical properties of tubes. [4] A tube is a simple apparatus that, if attached to a source of sound, can emit harmonic frequencies [5]. When a loudspeaker is attached at one end, the tube acts as a resonator that “has an infinite number of resonances, located at frequencies given by odd-quarter wavelength” (Kent and Read 14). The resonant frequencies of a tube closed at one end are calculated by using the following formula (Johnson 96):

$F_n = \frac{(2n-1)c}{4L}$

where n is a positive integer, L is the length of the tube, and c is the speed of sound (about 35,000 cm/s). This formula is derived from the definition of frequency (f), which in our case is the speed of sound (c) divided by the wavelength [6] (∆):

$f = \frac{c}{\Delta}$
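
To make the derivation explicit: a tube closed at one end resonates when its length equals an odd number of quarter wavelengths, and combining this condition with the definition of frequency above yields the formant formula:

$L = (2n-1)\frac{\Delta_n}{4} \;\Rightarrow\; \Delta_n = \frac{4L}{2n-1} \;\Rightarrow\; F_n = \frac{c}{\Delta_n} = \frac{(2n-1)c}{4L}$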

A tube is an approximation of the shape of the vocal tract, from larynx to lips. The acoustic energy is supplied by the vocal cords, which are located at the lower, closed end of the apparatus. This model is used to calculate average resonant frequencies in a configuration of the vocal tract that has a “uniform cross-sectional area” (Kent and Read 15), as in the vowel schwa [ə] (see Johnson 97). Of course, this is an idealised and simplified representation, but it is useful because in this example “the configuration of the vocal tract approximates a parallel-side tube … closed at one end (the larynx) and open at the other (the lips)” (Clark, Introduction 218).

As an example, we can insert L = 17.5 cm in the formula, the average length of the human vocal tract [7] from glottis to lips (Kent and Read 15). In this case the first formant, or the first resonance frequency, occurs at 500 Hz, the second at 1500 Hz, the third at 2500 Hz, and so on. Stevens cites Goldstein’s estimate of vocal tract length, stating that the average length in females is 14.1 cm (25). The calculated results for this sample length are then F1 = 620.5 Hz, F2 = 1861.7 Hz and F3 = 3102.8 Hz (more about formant calculation/synthesis).
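
For reference, substituting the 14.1 cm figure directly into the formula reproduces these values:

$F_1 = \frac{35000}{4 \times 14.1} \approx 620.5\ \mathrm{Hz}, \quad F_2 = 3 \times 620.5 \approx 1861.7\ \mathrm{Hz}, \quad F_3 = 5 \times 620.5 \approx 3102.8\ \mathrm{Hz}$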

However, this neutral position of the vocal tract can account for only a small number of sounds. Extended, the model of vowel production becomes more complicated, but it explains the basic physics behind vowel production. For example, in the pronunciation of the back vowel /ɑ/, the tongue divides the vocal tract and creates two tubes above the larynx. The first tube extends from the pharynx to the glottis, where it is closed, and the second from the pharynx to the lips – and the tubes are roughly the same length (Ladefoged, Elements 123). The resonant frequency of each idealised tube will be double the resonant frequency of the whole tube. If we take our example of 14.1 cm for females and enter half of that length into the formula, the first resonant frequency will be at about 1241 Hz, which is also (for the sake of convenience) the same frequency as for the second tube. However, the air outside the mouth cavity will interact with the sound, and the configuration of the pharynx will affect the first tube’s frequencies, which means that one resonance will be lower and the other higher – resembling the results measured in samples of the spoken vowel.
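
The doubling can be checked with the same formula: each idealised tube is half the original length, so its lowest resonance is twice as high:

$F_1 = \frac{c}{4(L/2)} = \frac{35000}{4 \times 7.05} \approx 1241\ \mathrm{Hz} \approx 2 \times 620.5\ \mathrm{Hz}$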

In other vowels the configuration of the tract becomes even more complex, because the tongue moves and changes the shape of the cavities, which calls for additional calculations, such as treating part of the tract as a Helmholtz resonator. For example, in the production of [i], the tongue makes a small-diameter constriction between the tubes, in which a volume of air significantly contributes to the overall “shape” of a vowel. This volume of air must be taken into account when calculating the frequencies of the tubes (126).
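
The text does not give the parameters of this constriction, but as a reference point the standard Helmholtz resonator formula relates the resonant frequency to the cross-sectional area A and length l of the constriction and the volume V of the cavity behind it:

$f = \frac{c}{2\pi}\sqrt{\frac{A}{V\,l}}$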

Although simplified, the calculations from acoustic theories provide strong evidence in favour of the working principles, the proof being the general correlation between the calculated and the measured results (Kent and Read 22).

[1] According to Clark and his book published in 1990.

[2] In speech the origin of vibration is usually the vocal folds, but when a person is unable to produce sound with the vocal folds, usually because of illness, other means can be employed (Pinker, Instincts 165).

[3] This is the “radiation factor/impedance”, a filter that intensifies high frequencies by 6 dB per octave. Within the pulse coming from the vocal folds, frequency peaks decrease by about 12 dB per octave. Thus, “these two … factors account for a -6dB/octave slope in the output spectrum” (Ladefoged, Elements 105). Such a sharp fall of the energy peak also means that “the intensity of the harmonics falls quite rapidly at high frequencies” (Clark, Introduction 212). It is then logical that most of the significant data in a sound signal lies below 5,000 Hz, given that the upper hearing limit in humans is 20,000 Hz.

[4] In the 1960s Fant devised “nomographs” – diagrams that can be used to calculate the first four formants by using “the lengths of the resonators and their cross-sectional areas” (Clark, Introduction 222). The “nomographs” are quite famous in the history of acoustic research, but we will not expound them in detail in this paper. However, it is worth noting that “the two tube representation is only a crude approximation of the complex resonant cavity system of the human vocal tract during vowel production” (222).

[5] Mathematically, harmonics are the integer multiples of the fundamental frequency. Harmonics that coincide with the natural frequencies of a resonating object are reinforced; these natural frequencies are the resonance frequencies. Various parts of the vocal tract act as resonators, so some frequencies of the sound are enhanced or dampened by the resonant properties of the tissues and vocal cavities. The enhanced frequencies of the sound are called formants, and they are visible in the spectrogram as dark bands. [?]

[6] “The distance, measured in the direction of propagation, between two points in the same phase in consecutive cycles of a wave. Symbol: ∆” (Trask, Dictionary 1995). Ladefoged (Elements 115) gives an insightful example: a sound with a frequency of 350 Hz produces 350 cycles in one second; since sound propagates at about 350 m/s (in common conditions), those 350 peaks are spread over 350 m, so the wavelength of the sound is 1 m.

[7] Lass gives 15 cm as an average distance in males (Lass, Experimental 33). Clark (Introduction) provides insightful results reached by Pickett: “The length of a woman’s tract is about 80–90 per cent of a man’s, while child’s, depending on age, may be around 50 per cent of a man’s” (219).

This post is based on a draft for one of the introductory chapters in my paper.
Previous text: Sound (Related to Speech)

## Sound (Related to Speech)

Sound is a form of energy (Crystal 32). It is a series of pressure fluctuations in a medium (Johnson 4). In speech the medium is usually air, although sound can also propagate through solid objects and water, for example. Once the air particles are energised by the vibration of the vocal folds, a series of rarefaction and compression events begins. Compression occurs when particles are pushed closer to each other, which results in increased density within the medium. Rarefaction is the opposite: particles move apart, so the density in the medium decreases.

Compression, rarefaction, and other terms related to acoustics are often explained through a simple device – a pendulum. A pendulum, or a swing, is “a weight hung from a fixed point so that it can swing freely” (Oxford Dictionary). Once set in motion it will oscillate between two maximum points and its central, equilibrium, position.

Here is a graphical representation of a pendulum. The point E is the equilibrium, while the points M1 and M2 mark the maximum points on both sides of the pendulum. The swinging motion from E to M1, then back to E and up to M2, can be shown in the coordinate system as a sinusoid. The figure shows such a sinusoid, with a series of maximum and minimum swinging points. The crossing point of the sinusoid and the line shows the phase in the oscillation when the pendulum reaches its starting point E. Particles do not travel through a medium; instead, they create a propagating pressure fluctuation: “A sound wave is a travelling pressure fluctuation that propagates through any medium that is elastic enough to allow molecules to crowd together and move apart” (Johnson 3). In other words, while each particle moves back and forth and acts “like the bob of a pendulum … the waves of compression move steadily outward” (Ladefoged, Elements 8). Here is an animation of the air molecules during sound wave propagation.

Combined, a pendulum and a sinusoid illustrate the properties of sound waves, and they help explain the terminology related to the physics of speech. For example, the distance between points E and M1 (or E and M2) is the amplitude. It shows the maximum oscillation points of the particles or, in sound, “the extent of maximum variation in air pressure” (Ladefoged, Elements 14). A pendulum’s period (or cycle) is one trajectory from E to M1, to M2 and back to E. The number of such periods in a second is the frequency, and it is measured in hertz (Hz). A pendulum with one oscillation per second has a frequency of 1 Hz (equation 1). A sound of 100 Hz has an identifiable part that repeats once in a hundredth of a second.

1 Hz = 1/s
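
Equivalently, the period T of one cycle is the reciprocal of the frequency, which is where the figure for the 100 Hz example above comes from:

$T = \frac{1}{f}, \quad f = 100\ \mathrm{Hz} \Rightarrow T = \frac{1}{100}\ \mathrm{s} = 10\ \mathrm{ms}$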

The energy of a sound wave depends on the force that created it. The greater the energy used in making the sound wave, the greater the pressure level it creates in the medium. The energy of a sound wave is related to its amplitude: a very strong wave will have a large amplitude, and vice versa. The sound pressure, or intensity, is measured in decibels (dB).

The human ear is very sensitive to pressure variations, estimated at $10^{13}$ units of intensity (Crystal 36). For easier reference, a logarithmic scale is used. Thus, $10^{13}$ units are scaled to 130 dB (36).

A simple sinusoid below is an abstraction of a simple periodic sine wave. To describe it, three items are needed: amplitude, frequency and phase [1] (Johnson 7). From the picture we see that the frequency of the sound is 1 per unit of time, while the amplitude reaches its peaks at 2 and -2 on the vertical scale. Unlike simple periodic waves, complex periodic waves “are composed of at least two sine waves” (8). Such a complex wave has a pressure oscillation (an amplitude) that is the result of the pressure oscillations of at least two waves (Ladefoged, Elements 37) and, of course, of the phases of the waves involved. Every complex wave can be seen as composed of several simple waves, and the merit of such a model is that “any complex waveform can be decomposed into a set of sine waves having particular frequencies, amplitudes and phase relations” (Johnson 11). The process of “breaking a complex wave down into its sinusoidal components” (Clark 203) is well known in physics and is called Fourier analysis, named after the scientist who “developed its mathematical basis” (203) in the 19th century.
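
As an illustration of the idea (a sketch only: the sampling rate and the two component frequencies are arbitrary choices, not values from the text), R’s built-in fft function can recover the sinusoidal components of a complex periodic wave:

```r
# Sketch: build a complex wave from two sine components and recover them with the FFT.
fs <- 1000                                   # sampling rate in Hz (arbitrary)
t <- seq(0, 1 - 1/fs, by = 1/fs)             # one second of samples
wave <- 2 * sin(2 * pi * 100 * t) +          # component 1: 100 Hz, amplitude 2
        1 * sin(2 * pi * 250 * t)            # component 2: 250 Hz, amplitude 1

amp <- Mod(fft(wave)) / (length(wave) / 2)   # amplitude spectrum
freqs <- 0:(fs - 1)                          # frequency axis in Hz
freqs[amp > 0.5]                             # peaks at 100 and 250 Hz (plus their mirror images)
```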

The second group of waves is aperiodic waves. They are characterised by the lack of a repetitive pattern. Two types of waves are grouped under the term aperiodic: white noise and transients. White noise has a completely random waveform, while the waveform of a transient does not repeat; in speech, an example of white noise is a fricative such as [s] (Johnson 12). Aperiodic sounds can also be subjected to Fourier analysis.

Sometimes pressure fluctuations in the form of sound that hit an object cause the object to vibrate. The vibrations occur if the acting frequency is within the “effective frequency range”, or resonator bandwidth (Ladefoged, Elements 68). Such induction of vibrations by another vibrating object is called resonance. Every object has a specific range of frequencies that it can respond to, and those frequencies correspond to the dominant frequencies of the sound the object itself can create – or, as Ladefoged explains it: “… [T]he resonance curve of a body has the same shape as its spectrum” (65). In speech, the speech organs function as resonators: they filter (enhance and dampen) the properties of waves, which we recognise as speech sounds.

[1] Phase is “the timing of the waveform relative to some reference point” (Johnson 8).

You can get SVG versions of the images (click for the pendulum or for the sinusoid).

This post is based on a draft for one of the introductory chapters in my paper.
Previous text: The Speech Organs and Airstream