
FORMANT PLOTTER

Buy on the App Store: https://itunes.apple.com/us/app/formant-analyzer/id799183655?mt=8&uo=4&at=11l6hc&ct=fnd

This is an iOS project to analyze formants. The user speaks a single vowel syllable, and the formants are plotted on the screen immediately. The app tries to isolate the vowel sound from any surrounding consonants when it can.

Formant Research

Other related tools and formant information

Vowel formant chart (all frequencies in Hz):

Vowel   Speaker   F1     F2     F3
ee      male      270    2290   3010
        female    310    2790   3310
        child     370    3200   3730
e       male      530    1840   2480
        female    610    2330   2990
        child     690    2610   3570
ae      male      660    1720   2410
        female    850    2050   2850
        child     1030   2320   3320
ah      male      730    1090   2440
        female    590    1220   2810
        child     680    1370   3170
oo      male      300    870    2240
        female    370    950    2670
        child     430    1170   3260
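
As an illustration of how these reference values can be used, here is a minimal Swift sketch that stores the adult-male formants from the chart above and reports which reference vowel a measured (F1, F2) pair is closest to. The function name and the distance metric are assumptions for illustration only; they are not part of the app's code.

```swift
import Foundation

/// Reference formants (Hz) for adult male speakers, taken from the chart above.
/// Hypothetical helper data, not part of the app.
let maleReferenceFormants: [String: (f1: Double, f2: Double)] = [
    "ee": (270, 2290),
    "e":  (530, 1840),
    "ae": (660, 1720),
    "ah": (730, 1090),
    "oo": (300, 870),
]

/// Returns the reference vowel whose (F1, F2) pair is closest (Euclidean
/// distance) to the measured formants. Purely illustrative.
func nearestVowel(f1: Double, f2: Double) -> String? {
    maleReferenceFormants.min { a, b in
        let da = hypot(a.value.f1 - f1, a.value.f2 - f2)
        let db = hypot(b.value.f1 - f1, b.value.f2 - f2)
        return da < db
    }?.key
}

// Example: formants near (300, 2300) map to "ee".
print(nearestVowel(f1: 300, f2: 2300) ?? "none")
```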

The Formant Plotter

The program starts in the ready (green) state. When the user starts talking (i.e., the RMS level stays above 0 dBm for at least 0.1 seconds), the program enters the listening state and records the sound. When the user stops talking (i.e., the RMS level stays below 0 dBm for at least 0.1 seconds), the program returns to the ready state and draws the graphs.
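
A minimal sketch of that level-gated state machine, assuming audio arrives in fixed-size buffers whose RMS level and duration are already known; the type and property names here are hypothetical and do not mirror the app's classes.

```swift
import Foundation

/// Hypothetical level-gated recorder state machine. Feed it one audio buffer
/// at a time with the buffer's RMS level in dB; thresholds follow the
/// description above, not the app's actual code.
enum CaptureState { case ready, listening }

final class SpeechGate {
    private(set) var state: CaptureState = .ready
    private var timeAcrossThreshold: TimeInterval = 0

    let thresholdDB: Double = 0      // the 0 dBm threshold from the text
    let holdTime: TimeInterval = 0.1 // level must persist for 0.1 s

    /// Call once per audio buffer with its RMS level (dB) and duration (s).
    /// Returns true when a complete utterance has just finished.
    func process(rmsDB: Double, bufferDuration: TimeInterval) -> Bool {
        switch state {
        case .ready:
            // Speech starts when the level stays above threshold long enough.
            timeAcrossThreshold = rmsDB > thresholdDB ? timeAcrossThreshold + bufferDuration : 0
            if timeAcrossThreshold >= holdTime {
                state = .listening
                timeAcrossThreshold = 0
            }
            return false
        case .listening:
            // Speech ends when the level stays below threshold long enough.
            timeAcrossThreshold = rmsDB < thresholdDB ? timeAcrossThreshold + bufferDuration : 0
            if timeAcrossThreshold >= holdTime {
                state = .ready
                timeAcrossThreshold = 0
                return true   // time to trim, analyze, and draw the graphs
            }
            return false
        }
    }
}
```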

Graph drawing is done as follows: the recorded sound is truncated to remove the first and last 10% of the data, and a Fast Fourier Transform (FFT) with autocorrelation is performed on what remains. The result is plotted on a linear X axis from 0 to 4000 Hz and a logarithmic Y axis from -60 to 0 dB.
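
A minimal sketch of the trim-then-FFT step using Accelerate, assuming a Float sample buffer; the function name is hypothetical, the autocorrelation step is omitted, and the spectrum is normalized so its peak sits at 0 dB to match the plot range above.

```swift
import Foundation
import Accelerate

/// Hypothetical helper: drops 10% from each end of the recording and returns
/// a peak-normalized magnitude spectrum in dB, floored at -60 dB.
func magnitudeSpectrumDB(of samples: [Float]) -> [Float] {
    guard samples.count >= 16 else { return [] }

    // Trim the first and last 10% of the data.
    let margin = samples.count / 10
    var trimmed = Array(samples[margin..<(samples.count - margin)])

    // Use the largest power-of-two length that fits the trimmed buffer.
    let log2n = vDSP_Length(floor(log2(Float(trimmed.count))))
    let n = 1 << Int(log2n)
    trimmed = Array(trimmed.prefix(n))

    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var power = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real signal into split-complex form and run a real FFT.
            trimmed.withUnsafeBufferPointer { buf in
                buf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            // Squared magnitudes of the positive-frequency bins.
            vDSP_zvmags(&split, 1, &power, 1, vDSP_Length(n / 2))
        }
    }

    // Convert to dB relative to the strongest bin, floored at -60 dB.
    let dB = power.map { 10 * log10(max($0, .leastNormalMagnitude)) }
    let peak = dB.max() ?? 0
    return dB.map { max($0 - peak, -60) }
}
```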

The second graph is drawn as follows: a pre-made image of the vowel chart is placed in the background, and two dots are plotted on it, representing the highest and lowest sample values from the recording. That is all it currently does.

The proper algorithm, which takes the FFT results plotted above and produces the vowel plot, is discussed under Formant Research above.

Some potential next steps include:

  • Use autocorrelation to increase trimming accuracy
  • Apply a window to the truncated sound buffer so that edge samples have an attenuated effect
  • Root polishing. The code has been written but is commented out (see PlotView.m). If we can test and refine this part, we will get better estimates of the roots of the LPC polynomial, and hence of the formant frequencies. We may not need very accurate formant estimates, in which case root polishing is unnecessary.
  • Elimination of weak roots (those far from the unit circle). They do not produce a peak in H(ω) and should be ignored. Reducing the LPC filter order may remove such weak roots, so this should be investigated after the order reduction. A sketch of the root-to-formant conversion follows this list.
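
As background for the last two items, here is a minimal Swift sketch of the usual LPC-root-to-formant conversion, assuming the polynomial roots have already been computed; the struct, the sample rate, and the magnitude threshold are illustrative assumptions, not values taken from PlotView.m.

```swift
import Foundation

/// A complex LPC root. Hypothetical type for illustration.
struct LPCRoot {
    var re: Double
    var im: Double
    var magnitude: Double { hypot(re, im) }
    var angle: Double { atan2(im, re) }
}

/// Converts LPC roots to candidate formant frequencies (Hz), discarding weak
/// roots that sit far from the unit circle and therefore produce no clear
/// peak in H(ω). The threshold is illustrative, not the app's value.
func formantFrequencies(from roots: [LPCRoot],
                        sampleRate: Double = 44_100,
                        minMagnitude: Double = 0.9) -> [Double] {
    roots
        // Keep one root of each conjugate pair (positive angle only).
        .filter { $0.im > 0 }
        // Drop weak roots far from the unit circle.
        .filter { $0.magnitude >= minMagnitude }
        // Angle (rad) maps to frequency: f = angle * fs / (2π).
        .map { $0.angle * sampleRate / (2 * Double.pi) }
        .sorted()
}

// Example: a strong root near 700 Hz is kept, a weak root is discarded.
let fs = 44_100.0
let strong = LPCRoot(re: 0.98 * cos(2 * .pi * 700 / fs),
                     im: 0.98 * sin(2 * .pi * 700 / fs))
let weak = LPCRoot(re: 0.5, im: 0.5)
print(formantFrequencies(from: [strong, weak]))   // ≈ [700.0]
```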
