This article is about the voice encoder. For the dictation machine, see voice recorder.
File:Kraftwerk Vocoder custom made in early1970s.JPG

Early 1970s vocoder, custom-built for the electronic music band Kraftwerk

A vocoder (short for voice encoder) is a category of voice codec that analyzes and synthesizes the human voice signal for audio data compression, multiplexing, voice encryption, voice transformation, and other applications.

The earliest type of vocoder, the channel vocoder, was originally developed as a speech coder for telecommunications applications in the 1930s, the idea being to code speech so as to reduce its bandwidth (i.e., audio data compression) for multiplexed transmission. The channel vocoder algorithm keeps only the amplitude component of the analytic signal and simply ignores the phase component, which tends to result in an unclear voice; for methods of rectifying this, see phase vocoder.

In the encoder, the input is passed through a multiband filter, then each band is passed through an envelope follower, and the control signals from the envelope followers are transmitted to the decoder. The decoder applies these (amplitude) control signals to corresponding filters for re-synthesis. Since these control signals change only slowly compared to the original speech waveform, the bandwidth required to transmit speech can be reduced. This allows more speech channels to share a single communication channel, such as a radio channel or a submarine cable (i.e. multiplexing).
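The encoder stage described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular standard: the band center frequencies, filter Q, frame rate, and envelope time constant are arbitrary choices, and a real channel vocoder would also transmit pitch and voicing information alongside the band envelopes.

```python
import math

def biquad_bandpass(x, fc, q, fs):
    """Constant 0 dB peak-gain bandpass biquad (RBJ audio EQ cookbook design)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b2 = alpha, -alpha                      # b1 is zero for this design
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:                                 # direct form I
        yn = (b0 * xn + b2 * x2 - a1 * y1 - a2 * y2) / a0
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

def envelope(x, fs, tau=0.01):
    """Envelope follower: full-wave rectification plus a one-pole lowpass."""
    a = math.exp(-1.0 / (fs * tau))
    out, s = [], 0.0
    for xn in x:
        s = a * s + (1 - a) * abs(xn)
        out.append(s)
    return out

def encode(speech, fs, centers=(300, 600, 1200, 2400), q=4, frame=80):
    """Analyze each band, then keep only one envelope value per frame.

    Transmitting a few band envelopes at 100 frames/s instead of 8000
    waveform samples/s is where the bandwidth saving comes from."""
    return [envelope(biquad_bandpass(speech, fc, q, fs), fs)[::frame]
            for fc in centers]
```

At the decoder, each transmitted envelope would be interpolated back up to the sample rate and applied as a gain on the corresponding band of a locally generated excitation (tone or noise), re-synthesizing the spectral shape.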

By encrypting the control signals, voice transmission can be secured against interception. Its primary use in this fashion is for secure radio communication. The advantage of this method of encryption is that none of the original signal is sent, only envelopes of the bandpass filters. The receiving unit needs to be set up in the same filter configuration to re-synthesize a version of the original signal spectrum.

The vocoder has also been used extensively as an electronic musical instrument (see #Uses in music). The decoder portion of the vocoder, called a voder, can be used independently for speech synthesis (see #History).


Theory[]

The human voice consists of sounds generated by the opening and closing of the glottis by the vocal cords, which produces a periodic waveform with many harmonics. This basic sound is then filtered by the nose and throat (a complicated resonant piping system) to produce differences in harmonic content (formants) in a controlled way, creating the wide variety of sounds used in speech. Another set of sounds, known as the unvoiced and plosive sounds, is created or modified by the mouth in different fashions.

The vocoder examines speech by measuring how its spectral characteristics change over time. This results in a series of signals representing the energy in each frequency band at any particular time as the user speaks. In simple terms, the signal is split into a number of frequency bands (the larger this number, the more accurate the analysis), and the level of signal present at each band gives the instantaneous representation of the spectral energy content. Thus, the vocoder dramatically reduces the amount of information needed to store speech, from a complete recording to a series of numbers. To recreate speech, the vocoder simply reverses the process, passing a broadband noise source through a stage that filters the frequency content based on the originally recorded series of numbers. Information about the instantaneous frequency of the original voice signal (as distinct from its spectral characteristic) is discarded; it was not important to preserve this for the vocoder's original use as an encryption aid. It is this "dehumanizing" aspect of the vocoding process that has made it useful in creating special voice effects in popular music and audio entertainment.

Since the vocoder process sends only the parameters of the vocal model over the communication link, instead of a point-by-point recreation of the waveform, the bandwidth required to transmit speech can be reduced significantly.

Analog vocoders typically analyze an incoming signal by splitting it into a number of tuned frequency bands or ranges. Both the modulator and the carrier signal are sent through a series of these tuned bandpass filters. In the example of a typical robot voice, the modulator is the voice signal from a microphone and the carrier is noise or a sawtooth waveform. There are usually between eight and 20 bands.

The amplitude of the modulator for each of the individual analysis bands generates a voltage that is used to control amplifiers for each of the corresponding carrier bands. The result is that frequency components of the modulating signal are mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands. (See Modulation.)

Often there is an unvoiced band or sibilance channel. This is for frequencies that are outside the analysis bands for typical speech but are still important in speech. Examples are words that start with the letters s, f, ch or any other sibilant sound. These can be mixed with the carrier output to increase clarity. The result is recognizable speech, although somewhat "mechanical" sounding. Vocoders often include a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency.
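As a sketch of this band-by-band amplitude mapping, here is a toy two-band vocoder in Python. The band split (a single one-pole lowpass, with the highpass taken as the residual), the crossover frequency, and the envelope time constant are all illustrative choices; a real analog vocoder uses 8–20 proper bandpass filters per side, plus the unvoiced/sibilance channel described above.

```python
import math

def lowpass(x, fc, fs):
    """One-pole lowpass; the matching highpass band is the residual x - lowpass."""
    a = math.exp(-2 * math.pi * fc / fs)
    y, s = [], 0.0
    for xn in x:
        s = a * s + (1 - a) * xn
        y.append(s)
    return y

def env_follow(x, fs, tau=0.02):
    """Rectify and smooth to get the band's amplitude envelope."""
    a = math.exp(-1.0 / (fs * tau))
    out, s = [], 0.0
    for xn in x:
        s = a * s + (1 - a) * abs(xn)
        out.append(s)
    return out

def vocode(modulator, carrier, fs, split=1000.0):
    """The modulator's per-band envelopes control the carrier's band amplitudes."""
    mod_lo = lowpass(modulator, split, fs)
    mod_hi = [m - l for m, l in zip(modulator, mod_lo)]
    car_lo = lowpass(carrier, split, fs)
    car_hi = [c - l for c, l in zip(carrier, car_lo)]
    gains_lo = env_follow(mod_lo, fs)
    gains_hi = env_follow(mod_hi, fs)
    return [cl * gl + ch * gh
            for cl, gl, ch, gh in zip(car_lo, gains_lo, car_hi, gains_hi)]

# Robot-voice setup: a stand-in "voice" modulator and a sawtooth carrier.
fs = 8000
saw = [2.0 * ((110.0 * n / fs) % 1.0) - 1.0 for n in range(8000)]    # carrier
voice = [math.sin(2 * math.pi * 220 * n / fs) for n in range(8000)]  # modulator
out = vocode(voice, saw, fs)
```

The design choice to take the highpass band as `x - lowpass` keeps the two bands exactly complementary, so silence in the modulator produces exact silence at the output.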



History[]

SIGSALY (1943–1946) speech encipherment system
The HY-2 vocoder (designed in 1961) was the last generation of channel vocoder in the US.[1]

Development of the vocoder was started in 1928 by Bell Labs engineer Homer Dudley,[2] who was granted patents for it: U.S. Patent 2,151,091 on March 21, 1939,[3] and U.S. Patent 2,098,956 on November 16, 1937.[4]

Then, to demonstrate the speech synthesis ability of its decoder section, the Voder (Voice Operation Demonstrator, U.S. Patent 2,121,142[5]) was introduced to the public at the AT&T building at the 1939–1940 New York World's Fair.[6] The Voder consisted of an electronic oscillator and a noise generator, switchable as sound sources of pitched tone and hiss; 10-band resonator filters with variable-gain amplifiers as a vocal tract; and manual controllers, including a set of pressure-sensitive keys for filter control and a foot pedal for pitch control of the tone.[7] The filters, controlled by the keys, converted the tone and the hiss into vowels, consonants, and inflections. This was a complex machine to operate, but a skilled operator could produce recognizable speech.[6][media 1]

Dudley's vocoder was used in the SIGSALY system, which was built by Bell Labs engineers in 1943. SIGSALY was used for encrypted high-level voice communications during World War II. Later work in this field was conducted by James Flanagan.


Applications[]

  • Terminal equipment for Digital Mobile Radio (DMR)-based systems
  • Digital Trunking
  • Digital Voice Scrambling and Encryption
  • Digital WLL
  • Voice Storage and Playback Systems
  • Messaging Systems
  • VoIP Systems
  • Voice Pagers
  • Regenerative Digital Voice Repeaters
  • Cochlear Implants
  • Musical and other artistic effects

Modern implementations[]

Main articles: Speech codec and Audio codec

Even with the need to record several frequency bands and the additional unvoiced sounds, the compression of vocoder systems is impressive. Standard speech-recording systems capture frequencies from about 500 Hz to 3,400 Hz, where most of the frequencies used in speech lie, typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate). The sampling resolution is typically at least 12 bits per sample (16 is standard), for a final data rate in the range of 96–128 kbit/s, but a good vocoder can provide a reasonably good simulation of voice with as little as 2.4 kbit/s of data.
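The figures above work out as follows (a back-of-the-envelope check, taking 2.4 kbit/s as the vocoder rate):

```python
fs_hz = 8_000            # telephone-band sampling rate
bits_per_sample = 16     # standard sample resolution
pcm_bitrate = fs_hz * bits_per_sample    # 128,000 bit/s = 128 kbit/s
vocoder_bitrate = 2_400                  # bit/s, a good low-rate vocoder
compression = pcm_bitrate / vocoder_bitrate
print(pcm_bitrate, round(compression, 1))  # 128000 53.3
```

That is, a 2.4 kbit/s vocoder carries intelligible speech in roughly a fiftieth of the raw PCM bandwidth.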

"Toll quality" voice coders, such as ITU G.729, are used in many telephone networks. G.729 in particular has a final data rate of 8 kbit/s with superb voice quality. G.723 achieves slightly worse quality at data rates of 5.3 kbit/s and 6.4 kbit/s. Many vocoder systems use even lower data rates, but below 5 kbit/s voice quality begins to drop rapidly.

Several vocoder systems are used in NSA encryption systems:

  • LPC-10, FIPS Pub 137, 2400 bit/s, which uses linear predictive coding
  • Code-excited linear prediction (CELP), 2400 and 4800 bit/s, Federal Standard 1016, used in STU-III
  • Continuously variable slope delta modulation (CVSD), 16 kbit/s, used in wide band encryptors such as the KY-57.
  • Mixed-excitation linear prediction (MELP), MIL STD 3005, 2400 bit/s, used in the Future Narrowband Digital Terminal FNBDT, NSA's 21st century secure telephone.
  • Adaptive Differential Pulse Code Modulation (ADPCM), former ITU-T G.721, 32 kbit/s used in STE secure telephone

(ADPCM is not a proper vocoder but rather a waveform codec. The ITU has since consolidated G.721, along with some other ADPCM codecs, into G.726.)

Vocoders are also currently used in research on psychophysics, linguistics, computational neuroscience, and cochlear implants.

Modern vocoders that are used in communication equipment and in voice storage devices today are based on the following algorithms:

  • Algebraic code-excited linear prediction (ACELP 4.7 kbit/s – 24 kbit/s)[8]
  • Mixed-excitation linear prediction (MELPe 2400, 1200 and 600 bit/s)[9]
  • Multi-band excitation (AMBE 2000 bit/s – 9600 bit/s)[10]
  • Sinusoidal-Pulsed Representation (SPR 600 bit/s – 4800 bit/s)[11]
  • Robust Advanced Low-complexity Waveform Interpolation (RALCWI 2050 bit/s, 2400 bit/s and 2750 bit/s)[12]
  • Tri-Wave Excited Linear Prediction (TWELP 600 bit/s – 9600 bit/s)[13]
  • Noise Robust Vocoder (NRV 300 bit/s and 800 bit/s)[14]

Linear prediction-based[]

Main article: Linear predictive coding

Since the late 1970s, most non-musical vocoders have been implemented using linear prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-pole IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum) and again at the decoder to re-apply the spectral shape of the target speech signal.

One advantage of this type of filtering is that the location of the linear predictor's spectral peaks is entirely determined by the target signal, and can be as precise as allowed by the time period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks, where spectral peaks can generally only be determined to be within the scope of a given frequency band. LP filtering also has disadvantages in that signals with a large number of constituent frequencies may exceed the number of frequencies that can be represented by the linear prediction filter. This restriction is the primary reason that LP coding is almost always used in tandem with other methods in high-compression voice coders.
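A minimal sketch of the estimation step follows, assuming the standard autocorrelation method with the Levinson–Durbin recursion; the predictor order, the synthetic test signal, and the constants are illustrative rather than taken from any particular coder.

```python
import random

def autocorrelation(x, order):
    """Biased autocorrelation estimates r[0..order]."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / n
            for k in range(order + 1)]

def levinson_durbin(r, order):
    """Solve the normal equations for the all-pole predictor A(z).

    Returns coefficients a (with a[0] = 1) and the residual energy e;
    filtering the signal through A(z) "whitens" it, so e < r[0]."""
    a = [1.0] + [0.0] * order
    e = r[0]
    for m in range(1, order + 1):
        k = -sum(a[i] * r[m - i] for i in range(m)) / e
        a = [a[i] + k * a[m - i] if 0 < i < m else a[i]
             for i in range(order + 1)]
        a[m] = k
        e *= 1.0 - k * k
    return a, e

# Synthesize a known all-pole ("AR(2)") signal and recover its coefficients.
random.seed(1)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.9 * x[-1] - 0.5 * x[-2] + random.gauss(0.0, 1.0))
r = autocorrelation(x, 2)
a, e = levinson_durbin(r, 2)   # a comes out near [1, -0.9, 0.5]
```

In a real LPC vocoder the order is higher (e.g. 10 for LPC-10), the analysis is done per short frame, and what gets transmitted is the quantized coefficients plus pitch and voicing information, not the waveform.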


Waveform-interpolative[]

The waveform-interpolative (WI) vocoder was developed at AT&T Bell Laboratories around 1995 by W.B. Kleijn, and a low-complexity version was subsequently developed by AT&T for the DoD secure vocoder competition. Notable enhancements to the WI coder were made at the University of California, Santa Barbara. AT&T holds the core patents related to WI, and other institutes hold additional patents. Using these patents as part of a WI coder implementation requires licensing from all IPR holders.[15][16][17]

Artistic effects[]

See also: List of vocoders

Uses in music[]


Channel vocoder setting as a musical application

Main article: Synthesizer

For musical applications, a source of musical sounds is used as the carrier, instead of extracting the fundamental frequency. For instance, one could use the sound of a synthesizer as the input to the filter bank, a technique that became popular in the 1970s.


Werner Meyer-Eppler, a German scientist with a special interest in electronic voice synthesis, published a thesis in 1948 on electronic music and speech synthesis from the viewpoint of sound synthesis.[18] Later he was instrumental in the founding of the Studio for Electronic Music of WDR in Cologne, in 1951.[19]

File:DM Recording Studio.jpg

Siemens Synthesizer (c.1959) at Siemens Studio for Electronic Music was one of the first attempts to use a vocoder to create music

One of the first attempts to use a vocoder in creating music was the "Siemens Synthesizer" at the Siemens Studio for Electronic Music, developed between 1956 and 1959.[20][21][media 2]

In 1968, Robert Moog developed one of the first solid-state musical vocoders for the electronic music studio of the University at Buffalo.[22]

In 1968, Bruce Haack built a prototype vocoder, named "Farad" after Michael Faraday.[23] It was first featured on "The Electronic Record For Children" released in 1969 and then on his rock album The Electric Lucifer released in 1970.[24][media 3]

In 1970, Wendy Carlos and Robert Moog built another musical vocoder, a ten-band device inspired by the vocoder designs of Homer Dudley. It was originally called a spectrum encoder-decoder, and later referred to simply as a vocoder. The carrier signal came from a Moog modular synthesizer, and the modulator from a microphone input. The output of the ten-band vocoder was fairly intelligible, but relied on specially articulated speech. Later improved vocoders[citation needed] use a high-pass filter to let some sibilance through from the microphone; this ruins the device for its original speech-coding application, but it makes the "talking synthesizer" effect much more intelligible.

Carlos and Moog's vocoder was featured in several recordings, including the soundtrack to Stanley Kubrick's A Clockwork Orange, in which the vocoder sang the vocal part of Beethoven's "Ninth Symphony". Also in the soundtrack was a piece called "Timesteps", which featured the vocoder in two sections. "Timesteps" was originally intended as merely an introduction to vocoders for the "timid listener", but Kubrick chose to include the piece on the soundtrack, much to the surprise of Wendy Carlos.[citation needed]

Kraftwerk's Autobahn (1974) was one of the first successful albums to feature vocoder vocals. Another of the early songs to feature a vocoder was "The Raven" on the 1976 album Tales of Mystery and Imagination by progressive rock band The Alan Parsons Project; the vocoder was also used on later albums such as I Robot. Following Alan Parsons' example, vocoders began to appear in pop music in the late 1970s, for example, on disco recordings. Jeff Lynne of Electric Light Orchestra used the vocoder in several albums, such as Time (featuring the Roland VP-330 Plus MkI). ELO songs such as "Mr. Blue Sky" and "Sweet Talkin' Woman", both from Out of the Blue (1977), use the vocoder extensively, as does "The Diary of Horace Wimp" from the album Discovery (1979). Featured on the album are the EMS Vocoder 2000W MkI, and the EMS Vocoder (-System) 2000 (W or B, MkI or II). Giorgio Moroder made extensive use of the vocoder on the 1975 album Einzelgänger and 1977 album From Here to Eternity.

Another example of its use is Pink Floyd's song "Dogs", from their album Animals (1977), where the band put the sound of a barking dog through the device.

A vocoder was used by Jo Partridge to produce the Martian's unearthly exultations of "Ulla" in the 1978 concept album Jeff Wayne's Musical Version of The War of the Worlds.

Since 1979, a vocoder has been used at the start and end of the Main Street Electrical Parade at Disneyland and Walt Disney World.

Phil Collins used a vocoder to provide a vocal effect for his 1981 international hit single "In the Air Tonight".[25]

Vocoders are often used to create the sound of a robot talking, as in the Styx song "Mr. Roboto" (1983).

Roger Taylor of Queen used the Vocoder on two songs on Queen's eleventh studio album The Works, "Radio Ga Ga" and "Machines (Or 'Back to Humans')". He also used the device on the song "I Cry For You" from his solo album Strange Frontier.

Vocoders have appeared on pop recordings from time to time ever since, most often simply as a special effect rather than a featured aspect of the work. However, many experimental electronic artists of the new-age music genre often utilize vocoder in a more comprehensive manner in specific works, such as Jean Michel Jarre (on Zoolook, 1984) and Mike Oldfield (on QE2, 1980 and Five Miles Out, 1982).

Mike Oldfield's use of a vocoder module can be clearly seen on the track "Sheba" on his Live At Montreux 1981 DVD.

There are also some artists who have made vocoders an essential part of their music, overall or during an extended phase. Examples include the German synthpop group Kraftwerk, Stevie Wonder ("Send One Your Love", "A Seed's a Star") and jazz/fusion keyboardist Herbie Hancock during his late 1970s period. In 1982, Neil Young used a Sennheiser Vocoder VSM201 on six of the nine tracks on Trans.[26] Tommy James used a vocoder in the production of his group the Shondells' 1968 number-one hit "Crimson and Clover".[citation needed] Perhaps the most heard, yet often unrecognized, example of the use of a vocoder in popular music is on Michael Jackson's 1982 album Thriller, in the song "P.Y.T. (Pretty Young Thing)". During the first few seconds of the song, the background voicings "ooh-ooh, ooh, ooh" behind his spoken words exemplify the heavily modulated sound of his voice through a vocoder.[27] The bridge also features a vocoder ("Pretty young thing/You make me sing"), courtesy of session musician Michael Boddicker.

Coldplay have used a vocoder in some of their songs. For example, in "Major Minus" and "Hurts Like Heaven", both from the album Mylo Xyloto (2011), Chris Martin's vocals are mostly vocoder-processed. "Midnight", from Ghost Stories (2014), also features Martin singing through a vocoder;[28] in "O", from the same album, Martin can be heard repeating "Don't ever let go" into a vocoder. The hidden track "X Marks The Spot" from "A Head Full of Dreams" has also been recorded through a vocoder.

Noisecore band Atari Teenage Riot have used vocoders in a variety of their songs and live performances, such as Live at the Brixton Academy (2002), alongside other digital audio technology both old and new.

Among the most consistent users of the vocoder in emulating the human voice are Daft Punk, who have used this instrument from their first album Homework (1997) to their latest work Random Access Memories (2013) and consider the convergence of the technological and the human voice "the identity of their musical project".[29] For instance, the lyrics of "Around the World" (1997) are integrally vocoder-processed, "Get Lucky" (2013) features a mix of natural and processed human voices, and "Instant Crush" (2013) features Julian Casablancas singing into a vocoder.

Voice effects in other arts[]

See also: Robotic voice effects, Talk box, and Auto-Tune

"Robot voices" became a recurring element in popular music during the 20th century. Apart from vocoders, several other methods of producing variations on this effect include the Sonovox, Talk box, and Auto-Tune,[media 4] linear prediction vocoders, speech synthesis,[media 5][media 6] ring modulation, and comb filtering. Vocoders are used in television production, filmmaking and games, usually for robots or talking computers. The robot voices of the Cylons in Battlestar Galactica were created with an EMS Vocoder 2000.[26] The 1980 version of the Doctor Who theme, as arranged and recorded by Peter Howell, has a section of the main melody generated by a Roland SVC-350 vocoder. A vocoder was also used to create the iconic voice of Soundwave, a character from the Transformers series.

In 1967, the Supermarionation series Captain Scarlet and the Mysterons used a vocoder[citation needed] to supply the deep, eerie, threatening voice of the disembodied Mysterons, as well as the bass tones for the Spectrum agent Captain Black when seized under their telepathic control. It was also used in the closing credits theme of the first 13 episodes to provide the synthetic repetition of the words "Captain Scarlet".[citation needed]

In 1972, Isao Tomita's first electronic music album Electric Samurai: Switched on Rock was an early attempt at applying speech synthesis techniques[citation needed] in electronic rock and pop music. The album featured electronic renditions of contemporary rock and pop songs, with synthesized voices in place of human voices. In 1974, he used synthesized voices in his popular classical music album Snowflakes are Dancing, which became a worldwide success and helped to popularize electronic music. Emerson, Lake and Palmer used a vocoder on the album Brain Salad Surgery (1973).[30]

See also[]

  • Homer Dudley
  • Voder
  • Phase vocoder
  • Silent speech interface
  • Werner Meyer-Eppler
  • List of vocoders
  • Auto-Tune
  • Audio timescale-pitch modification


  1. "HY-2 Vocoder". Crypto Machines.
  2. Mills, Mara (2012). "Media and Prosthesis: the Vocoder, the Artificial Larynx, and the History of Signal Processing". Qui parle. 21 (1): 107–149.
  3. US application 2151091, Dudley, Homer W., "Signal Transmission", published May 21, 1939, assigned to Bell Telephone Laboratories, Inc.  (filed October 30, 1935)
  4. US application 2098956, Dudley, Homer W., "Signaling system", published November 16, 1937, assigned to Bell Telephone Laboratories, Inc.  (filed December 2, 1936)
  5. US application 2121142, Dudley, Homer, "Signal Transmission", published June 21, 1938, assigned to Bell Telephone Laboratories, Inc.  (filed April 7, 1937)
  6. 6.0 6.1 "The 'Voder' & 'Vocoder', Homer Dudley, USA, 1940". 120 Years of Electronic Music: "The Vocoder (Voice Operated reCorDER) and Voder (Voice Operation DEmonstratoR) developed by the research physicist Homer Dudley, ... The Voder was first unveiled in 1939 at the New York World Fair (where it was demonstrated at hourly intervals) and later in 1940 in San Francisco. There were twenty trained operators known as the 'girls' who handled the machine much like a musical instrument such as a piano or an organ, ... This was done by manipulating fourteen keys with the fingers, a bar with the left wrist and a foot pedal with the right foot."
  7. "The Voder (1939)". Talking Heads: Simulacra. Haskins Laboratories.; based on James L. Flanagan (1965). "Speech Synthesis". Speech Analysis, Synthesis and Perception. Springer-Verlag. pp. 172–173.
    See: schematic diagram of the Voder synthesizer.
  8. "Voice Age" (licensing). VoiceAge Corporation.
  9. "MELPe – FAQ". Compandent Inc.
  10. "IMBE and AMBE®". Digital Voice Systems, Inc.
  11. "SPR Vocoders". DSP Innovations Inc.
  12. "RALCWI Vocoder IC's". CML Microcircuits. CML Microsystems Plc.
  13. "TWELP Vocoder". DSP Innovations Inc.
  14. "Noise Robust Vocoders". Raytheon BBN Technologies. Archived from the original on 2014-04-02.
  15. Kleijn, W.B.; Haagen, J. (AT&T Bell Labs, Murray Hill, NJ). "A speech coder based on decomposition of characteristic waveforms". IEEE 1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95). doi:10.1109/ICASSP.1995.479640.
  16. Kleijn, W.B.; Shoham, Y.; Sen, D.; Hagen, R. (AT&T Bell Labs, Murray Hill, NJ). "A low-complexity waveform interpolation coder". IEEE ICASSP 1996. doi:10.1109/ICASSP.1996.540328.
  17. Gottesman, O.; Gersho, A. (Dept. of Electr. & Comput. Eng., California Univ., Santa Barbara, CA). "Enhanced waveform interpolative coding at low bit-rate". IEEE Transactions on Speech and Audio Processing (November 2001). doi:10.1109/89.966082.
  18. Meyer-Eppler, Werner (1949), Elektronische Klangerzeugung: Elektronische Musik und synthetische Sprache, Bonn: Ferdinand Dümmlers
  19. Diesterhöft, Sonja (2003), "Meyer-Eppler und der Vocoder", Seminars Klanganalyse und -synthese (in German), Fachgebiet Kommunikationswissenschaft, Institut für Sprache und Kommunikation, Berlin Institute of Technology, archived from the original on 2008-03-05
  20. "Das Siemens-Studio für elektronische Musik von Alexander Schaaf und Helmut Klein" (in German). Deutsches Museum.
  21. Holmes, Thom (2012). "Early Synthesizers and Experimenters". Electronic and Experimental Music: Technology, Music, and Culture (4th ed.). Routledge. pp. 190–192. ISBN 978-1-136-46895-7.
    (See also excerpt of pp. 157–160 from the 3rd edition in 2008 (ISBN 978-0-415-95781-6))
  22. Bode, Harald (October 1984). "History of Electronic Sound Modification" (PDF). J. of Audio Engineering Society. 32 (10): 730–739.
  23. Bruce Haack – Farad: The Electric Voice. Bruce Haack. Stones Throw Records LLC. 2010.
  24. "Bruce Haack's Biography 1965–1974". Bruce Haack Publishing.
  25. Flans, Robyn (5 January 2005). "Classic Tracks: Phil Collins' "In the Air Tonight"". Mix Online. Retrieved 25 February 2015.
  26. 26.0 26.1 Tompkins, Dave (2010–2011). How to Wreck a Nice Beach: The Vocoder from World War II to Hip-Hop, The Machine Speaks. Melville House. ISBN 978-1-61219-093-8.
  27. "The Vocoder: From Speech-Scrambling To Robot Rock". NPR Music. May 13, 2010.
  28. "Midnight is amazing! But it sounds like Chris's voice has autotune in some parts. I thought Coldplay doesn't use autotune?". Coldplay "Oracle". 5 March 2014. Retrieved 25 March 2014.
  29. "Daft Punk: "La musique actuelle manque d'ambition"" (interview). Le Figaro. May 3, 2013.
  30. Jenkins, Mark (2007), Analog synthesizers: from the legacy of Moog to software synthesis, Elsevier, pp. 133–4, ISBN 0-240-52072-6, retrieved 2011-05-27
Multimedia references
  1. One Of The First Vo[co]der Machine (Motion picture). c. 1939.
      A demonstration of the Voder (not the Vocoder).
  2. Siemens Electronic Music Studio in Deutsches Museum (multi part) (Video).
      Details of the Siemens Electronic Music Studio, exhibited at the Deutsches Museum.
  3. Bruce Haack (1970). Electric to Me Turn – from "The Electric Lucifer" (Phonograph). Columbia Records.
      A sample of an early vocoder.
  4. T-Pain (2005). I'm Sprung (CD Single/Download). Jive Records.
      A sample of Auto-Tune effect (a.k.a. T-Pain effect).
  5. Earlier Computer Speech Synthesis (Audio). AT&T Bell Labs. c. 1961.
      A sample of early computer-based speech and song synthesis by John Larry Kelly, Jr. and Louis Gerstman at Bell Labs, using an IBM 704 computer. The demo song "Daisy Bell", with musical accompaniment by Max Mathews, impressed Arthur C. Clarke, who later used it in the climactic scene of his screenplay for 2001: A Space Odyssey.
  6. TI Speak & Spell (Video). Texas Instruments. c. 1980.
      A sample of speech synthesis.

External links[]