
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs.

It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the very first experiments and innovations with electronic instruments at the turn of the 20th century.

In the 2000s, with the widespread availability of relatively affordable home computers with fast processing speeds, and the growth of home recording using digital audio recording and digital audio workstation systems ranging from GarageBand to Pro Tools, the term is sometimes used to describe music that has been created using digital technology.

Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship which has been noted since the Ancient Greeks described the "harmony of the spheres".
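The mathematical relationship the Greeks observed can be made concrete: consonant intervals correspond to small integer frequency ratios. The following is a minimal illustrative sketch, not drawn from the text above, using standard just-intonation ratios and assuming a reference pitch of A4 = 440 Hz:

```python
def interval_frequency(base_hz: float, ratio: tuple[int, int]) -> float:
    """Return the frequency of a note a just-intonation interval above base_hz."""
    num, den = ratio
    return base_hz * num / den

A4 = 440.0
intervals = {
    "octave": (2, 1),  # 2:1 ratio
    "fifth": (3, 2),   # 3:2 ratio
    "fourth": (4, 3),  # 4:3 ratio
}

for name, ratio in intervals.items():
    print(f"{name}: {interval_frequency(A4, ratio):.1f} Hz")
# octave: 880.0 Hz, fifth: 660.0 Hz, fourth: 586.7 Hz
```

The simpler the ratio, the more consonant the interval sounds; this is the arithmetic behind the "harmony of the spheres" idea.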

There were newspaper reports from America and England, both early and more recently, suggesting that computers may have played music earlier, but thorough research has debunked these stories: there is no evidence to support them, and some were obviously speculative.

Research has shown that people speculated about computers playing music, possibly because computers made audible noises, [1] but there is no evidence that they actually did so.

In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for the purpose. The music was never recorded, but it has been accurately reconstructed. However, the CSIR Mark 1 played standard repertoire and was not used to extend musical thinking or composition practice, as Max Mathews later did, which is current computer-music practice.

The first music to be performed in England was a performance of the British National Anthem, programmed by Christopher Strachey on the Ferranti Mark 1 late in 1951. This recording can be heard at the Manchester University site. Researchers at the University of Canterbury, Christchurch declicked and restored this recording in 2016, and the results may be heard on SoundCloud.

Two further major 1950s developments were the origins of digital sound synthesis by computer, and of algorithmic composition programs beyond rote playback. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented during Osaka Expo '70 and "Panoramic Sonore" by music critic Akimichi Takeda. Ezaki also published an article called "Contemporary Music and Computers" around this time. Since then, Japanese research in computer music has largely been carried out for commercial purposes in popular music, though some of the more serious Japanese musicians used large computer systems such as the Fairlight in the 1970s.

Early computer-music programs typically did not run in real time, although the first experiments on CSIRAC and the Ferranti Mark 1 did operate in real time. From the late 1950s, with increasingly sophisticated programming, programs would run for hours or days, on multimillion-dollar computers, to generate a few minutes of music.

In the late 1970s these systems became commercialised, notably by systems like the Roland MC-8 Microcomposer, a microprocessor-based system controlling an analog synthesizer, released in 1977. Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.

Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. Current-generation micro-computers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches.
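As an illustration of how directly a modern computer can synthesize audio, here is a minimal digital-synthesis sketch. The function name and parameters are invented for illustration; it renders a sine tone as raw floating-point samples at the conventional 44.1 kHz rate, which a real system would then hand to an audio device or DSP library:

```python
import math

def sine_wave(freq_hz: float, duration_s: float, sample_rate: int = 44100,
              amplitude: float = 0.5) -> list[float]:
    """Render a sine tone as a list of float samples in [-1.0, 1.0]."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

# 10 ms of A440: 441 samples at 44.1 kHz
samples = sine_wave(440.0, 0.01)
print(len(samples))  # 441
```

More sophisticated synthesis (additive, FM, granular, physical modeling) builds on exactly this primitive: computing sample values one by one from a mathematical model.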

Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception.

Despite the ubiquity of computer music in contemporary culture, there is considerable activity in the field, as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches.

Later, composers such as Gottfried Michael Koenig and Iannis Xenakis had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalisation of his own serial composition practice.

This differs from Xenakis' work, in which he used mathematical abstractions and examined how far he could explore them musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. These could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly.
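The translation step can be illustrated abstractly. The following is a hypothetical sketch, not Koenig's actual code: it maps the numeric output of some compositional calculation onto pitch-class names that a copyist could then write out as notation by hand:

```python
# Hypothetical mapping from algorithm output to pitch names (for illustration).
PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def numbers_to_notes(values: list[int]) -> list[str]:
    """Map arbitrary integers onto the 12 pitch-class names via modular arithmetic."""
    return [PITCH_NAMES[v % 12] for v in values]

# e.g. output of some serial or stochastic calculation:
print(numbers_to_notes([0, 14, 7, 21]))  # ['C', 'D', 'G', 'A']
```

The point is the separation of concerns: the algorithm works in an abstract numeric domain, and a fixed encoding converts its results into performable notation.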

SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1960s and 1970s. Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present exponent of this technique is David Cope, who wrote computer programs that analyse works of other composers to produce new works in a similar style.

He has used this approach to great effect with composers such as Bach and Mozart (his program Experiments in Musical Intelligence is famous for creating "Mozart's 42nd Symphony"), and also within his own pieces, combining his own creations with those of the computer.

Iamus composed a full album in 2012, appropriately named Iamus, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra". Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use.

The label computer-aided composition lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer.

The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design.

Machine improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern-matching algorithms to analyze existing musical examples.

The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic reinjection. This is different from other improvisation methods with computers that use algorithmic composition to generate new music without performing analysis of existing music examples.
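The recombination idea can be sketched with the simplest possible statistical model: a first-order Markov chain over notes. The corpus, seed, and function names below are invented for illustration; real improvisation systems use far richer representations, but the principle is the same — every transition in the generated variation is one that occurred in the source material:

```python
import random
from collections import defaultdict

def learn_transitions(notes):
    """Build a table mapping each note to the notes that followed it in the corpus."""
    table = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        table[a].append(b)
    return table

def improvise(table, start, length, rng):
    """Generate a new phrase by walking the learned transitions."""
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: no observed continuation
        out.append(rng.choice(choices))
    return out

corpus = ["C", "E", "G", "E", "C", "G", "E", "C"]  # invented example phrase
table = learn_transitions(corpus)
out = improvise(table, "C", 8, random.Random(0))
print(out)
```

Because the generator only ever follows observed transitions, the output stays "in the style" of the corpus while still differing from it — a toy version of stylistic reinjection.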

Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data.

Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis' use of Markov chains and stochastic processes.

Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, string searching, and more.
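The incremental-parsing idea from lossless compression can be shown in a few lines: each new phrase extends a previously seen phrase by one symbol, so the dictionary accumulates progressively longer repeated patterns — exactly the kind of structure a style model exploits. A minimal LZ78-style sketch over an invented note sequence (not taken from any of the cited systems):

```python
def incremental_parse(symbols):
    """Split `symbols` into phrases, each a previously seen phrase plus one symbol."""
    dictionary = set()
    phrases = []
    current = ()
    for s in symbols:
        current = current + (s,)
        if current not in dictionary:  # first time this pattern appears
            dictionary.add(current)
            phrases.append(current)
            current = ()
    if current:  # trailing phrase that repeats a known pattern
        phrases.append(current)
    return phrases

notes = ["C", "C", "E", "C", "E", "G", "C", "E", "G", "G"]
print(incremental_parse(notes))
# [('C',), ('C', 'E'), ('C', 'E', 'G'), ('C', 'E', 'G', 'G')]
```

The growing phrases record which patterns of the input recur; a generator can then recombine dictionary entries to produce new sequences with similar redundancy structure.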

OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch. The Variable Markov Oracle (VMO), available as a Python implementation, [36] uses an information rate criterion to find the optimal or most informative representation.

Live coding [38] (sometimes known as 'interactive programming', 'on-the-fly programming', [39] or 'just in time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live.

From Wikipedia, the free encyclopedia.


ISBN 13: 9780028646824

Computers and analogue synthesizers brought with them a new kind of musical revolution for composers and musicians: the ability to compose their own sonic material. Around 1957, Lejaren A. Hiller and Leonard M. Isaacson presented the first music composed with the help of a computer: the Illiac Suite for string quartet, realized at the studio of the University of Illinois's Computer Research Center.


Computer music

This text reflects the current state of computer technology and music composition. The authors offer clear, practical overviews of program languages, real-time synthesizers, digital filtering, artificial intelligence, and much more. Contents: Preface to the Second Edition; Preface to the First Edition; Fundamentals of Computer Music.
