This notation language was originally designed for transcribing Irish folk tunes, but has since evolved into a considerably richer language allowing, for example, polymetric output on multiple staves. This music notation format has the advantage of being extremely concise and fairly readable.
The AC Toolbox is a Macintosh PPC application to assist the algorithmic composition of music. A legacy version for older, 68K-based computers is also available. Several models for defining musical events are included. They can be used by defining objects such as sections, shapes, masks, or note structures. It is also possible to play, plot, modify, and examine objects in a number of ways. Extensive online help is available. In addition to MIDI input and output, the AC Toolbox can also produce text files suitable for use as data in other programs. In particular, score files for Csound, note list files for Common Lisp Music, and tables for MAX can be produced. An important method of creating data in the Toolbox is the use of generators. A number of generators have been included reflecting various approaches to the creation of musical material, including tendency masks, stochastic functions, chaotic systems, transition tables, recursive subdivisions, metric indispensabilities, morphological mutations, etc. The AC Toolbox is implemented in Lisp and its input syntax often reflects the conventions of this language. It is also possible for a user to extend the Toolbox by adding Lisp functions. For example, additional generators can be defined in Lisp to use with the Toolbox.
Alphanumeric Language for Music Analysis, implemented at the Institute for Computer Research in the Humanities (NYU). The idea was to handle more than Western staff notation. They implemented some crude translators, proof-listeners (analogous to proof-reading), etc.
AML consisted of an interpreter and a compiler. The compiler was written in, and used, a lobotomized version of the Digital Research MAC assembler. The compiler generated code that was read by the interpreter. The AML interpreter was written in Intel 8080 assembly language. The interpreter created up to 8 virtual machines that drove analog synthesizers using various D/A and A/D hardware. Each virtual machine was a stack-oriented computer that processed code specifically designed for generating music. The instruction set consisted of various operators for manipulating the stacks, reading note lists, computing note lists in real time, and drawing pseudo-random numbers, including fractals. A version supporting MIDI was the final development; it ran on an Apple II computer equipped with an Intel 8080 board which talked to a Roland MPU-401 MIDI interface. A wide range of students and composers in the LA area used this system, many through the UCLA extension program. LA composer Jeff Rona wrote several works for AML, and it was first demonstrated in Denton, TX with his "Step Music", with dancers Sean Green and Dianna McNeil. There is an article in the ICMC proceedings for the Denton conference.
Forth-like stack-based language, in which the letters a-g are the musical notes. A change to capitals indicates a change up an octave, a change to lowercase is down an octave (the symbols < and > do this explicitly). The language is also multitasking.
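The octave convention described above can be sketched in Python. This is an illustrative parser only, not the language itself; the function name and the base octave are made up for the example.

```python
def parse_notes(source, base_octave=4):
    """Return (note, octave) pairs for a tiny note string.

    Sketch of the convention above: a-g are notes, a switch to capitals
    moves up an octave, a switch to lowercase moves down, and the
    symbols '>' and '<' shift the octave explicitly.
    """
    octave = base_octave
    prev_upper = None          # case of the previous note letter, if any
    events = []
    for ch in source:
        if ch == '>':          # explicit shift up
            octave += 1
        elif ch == '<':        # explicit shift down
            octave -= 1
        elif ch.lower() in 'abcdefg':
            is_upper = ch.isupper()
            if prev_upper is not None and is_upper != prev_upper:
                octave += 1 if is_upper else -1   # case change = octave change
            prev_upper = is_upper
            events.append((ch.lower(), octave))
    return events

print(parse_notes("aBc"))   # → [('a', 4), ('b', 5), ('c', 4)]
```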
MARS is a development system for realtime digital signal processing techniques: sound synthesis, filters, and sound effects. Sound and MIDI environments can be developed which allow it to be used as a MIDI musical instrument. MARS is a system for audio research, musical production, and computer music education, for people who want a programmable and flexible sound machine with realtime performance.
A lisp-like language that can manipulate MIDI data and do other sequencer-related operations (creating new tracks, etc.) within the Cakewalk sequencer. See http://www.cakewalk.com/devxchange/cal.asp for examples of CAL programs.
Common Lisp Music (CLM) is a sound synthesis package in the Music V family written primarily in Common Lisp. The instrument design language is a subset of Lisp, extended with a large number of generators: oscil, env, table-lookup, and so on. The run-time portion of an instrument can be compiled into C or Lisp code. Since CLM instruments are lisp functions, a CLM note list is just a lisp expression that happens to call those functions. Recent additions to CLM include support for real-time interactions and integration with the Snd sound editor.
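The point that a CLM note list is just code can be sketched in Python: instruments are ordinary functions, and a score is simply a function that calls them. The names here (simple_osc, render, my_score) are illustrative, not CLM's; a real CLM instrument would synthesize samples rather than record events.

```python
notes = []

def simple_osc(start, dur, freq, amp):
    """A stand-in 'instrument': record a note event instead of synthesizing."""
    notes.append((start, dur, freq, amp))

def render(score):
    """Run the score (a plain function) and return events sorted by start time."""
    notes.clear()
    score()
    return sorted(notes)

def my_score():
    # Because the note list is ordinary code, loops and arithmetic are free:
    for i in range(4):
        simple_osc(start=i * 0.5, dur=0.4, freq=220 * 2 ** (i / 12), amp=0.3)

events = render(my_score)
```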
Cmix is a package of routines for editing, processing, and creating soundfiles. It also includes a library of routines designed to make it easier to write C programs which deal with soundfiles. A version for Linux, called RTcmix, is maintained by Dave Topper.
Common Music Notation (CMN) is a music notation package written in Common Lisp, using CLOS and the Sonata font. It provides for all the usual needs of music notation in a fully customizable, programmable environment.
Common Music (CM) is an object-oriented music composition environment. It produces sound by transforming a high-level representation of musical structure into low-level control statements for a number of different synthesis targets: MIDI, CSound, Common Lisp Music (CLM), Music Kit, CMix, CMusic, RT, Mix and Common Music Notation (CMN). Common Music provides an extensive library of compositional objects and encourages the user to modify and extend the system through subclassing and specialization. Common Music is implemented in Common Lisp and runs on a variety of computers, including NeXT, Macintosh, SGI, SUN, and 386.
Provides a C++ class library for representing music scores. Works with scores in ALMA, *kern, NIFF, and Esac. Provides a visualisation and analysis paradigm for music. A huge corpus is available, with translators from practically all major encodings.
Csound is a popular and widely used software synthesis package in the tradition of the so-called Music-N languages, among which the best known is Music V. It consists of an orchestra- and score-driven executable, written in C for portability. In essence, Csound reads an orchestra file and a score file and renders the result as a soundfile on disk or, on faster machines, in real time through a DAC.
CYBIL is a compositional language for the efficient specification of arbitrarily complex Csound scores. It is integrated into CECILIA and can be used to generate scores for any kind of Csound orchestra. The syntax of CYBIL owes much to Leland Smith's SCORE language and, to an extent, to Heinrich Taube's Common Music. Scores are specified as parameter lines with the help of a number of data generators such as sequences, masks, lines, and exponentials. These generators can be further modified by the use of functions such as random, urns and constrained random, and parameter cross-referencing.
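A mask generator of the kind mentioned above can be sketched in Python: random values drawn between a lower and an upper boundary that both change over time. This is illustrative only; CYBIL's actual syntax and generator names differ.

```python
import random

def tendency_mask(n, low_start, low_end, high_start, high_end, rng=random.random):
    """Yield n values, each drawn between linearly interpolated low/high bounds."""
    values = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0          # position along the mask, 0..1
        low = low_start + t * (low_end - low_start)
        high = high_start + t * (high_end - high_start)
        values.append(low + rng() * (high - low))   # random value inside the band
    return values

# A widening pitch band: starts confined near 60, opens up to 48..72.
pitches = tendency_mask(16, 58, 48, 62, 72)
```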
JSFX audio effects for Reaper are written in EEL2, a scripting language that is compiled on the fly and allows you to modify and/or generate audio and MIDI, as well as draw custom vector-based UI and analysis displays. EEL2 is based on AVS's EEL; AVS is a programmable visualization plugin for Winamp.
FAUST (Functional Audio Stream) is a functional programming language specifically designed for real-time signal processing and synthesis. A distinctive characteristic of FAUST is that it is fully compiled: the FAUST compiler translates DSP specifications into very efficient C++ code that works at the sample level. The generated code is self-contained and doesn't depend on any library or runtime. Moreover, the same FAUST specification can be used to generate native implementations for most operating systems (Linux, OSX, Android, iOS) or platforms (LV2, LADSPA, VST, PD, Csound, SC, ...). The FAUST distribution can be downloaded at http://sourceforge.net/projects/faudiostream and the Git repository can be cloned with the following command: git clone git://git.code.sf.net/p/faudiostream/code faust
The Foo environment consists of the Foo Kernel layer and the Foo Control layer. The Foo Kernel layer is implemented in Objective-C and is made accessible to Scheme through a set of types and primitives added to the Elk Scheme interpreter. The Foo Control layer is implemented in Scheme and OOPS, an object-oriented extension to Scheme. Whereas the Foo Kernel layer implements the generic sound synthesis and processing modules as well as a patch description and execution language, the Foo Control layer offers a symbolic interface to the kernel and implements musically salient control abstractions. The user interacts with the Foo environment by writing Scheme programs which eventually will define and execute synthesis patches in non-real-time.
FORMULA is a language/multitasking OS for the Atari ST and Mac. It is based on (and built on top of) Forth. The basic idea of FORMULA is to represent music as cooperating processes. For instance, each part of a symphony might be a different process. Also, the generation of pitches, durations, velocities, and tempo can similarly be controlled by separate processes.
Haskore is a collection of Haskell modules designed for expressing musical structures in the high-level, declarative style of functional programming. In Haskore, musical objects consist of primitive notions such as notes and rests, operations to transform musical objects such as transpose and tempo-scaling, and operations to combine musical objects to form more complex ones, such as concurrent and sequential composition. From these simple roots, much richer musical ideas can easily be developed.
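The combinator style Haskore describes can be sketched in Python: primitive notes, plus operations that transform and combine them into larger structures. Haskore itself is a set of Haskell modules, so this is only an analogy, with made-up names.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int      # MIDI pitch number
    start: float    # onset time in beats
    dur: float      # duration in beats

def note(pitch, dur):
    """A primitive musical object: a single note starting at time 0."""
    return [Note(pitch, 0.0, dur)]

def seq(a, b):
    """Sequential composition: b starts when a ends."""
    offset = max((n.start + n.dur for n in a), default=0.0)
    return a + [Note(n.pitch, n.start + offset, n.dur) for n in b]

def par(a, b):
    """Concurrent composition: a and b start together."""
    return a + b

def transpose(m, semitones):
    """Transform a musical object by shifting every pitch."""
    return [Note(n.pitch + semitones, n.start, n.dur) for n in m]

c_major = par(note(60, 1.0), par(note(64, 1.0), note(67, 1.0)))
phrase = seq(note(60, 0.5), transpose(note(60, 0.5), 7))
```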
Hyperlisp is a real-time MIDI programming environment embedded in Macintosh Common Lisp. The environment was developed specifically for the Hyperinstruments project at the MIT Media Laboratory, and is optimized for interactive systems which require fast response times. Hyperlisp provides two main services for the music programmer: routines for MIDI processing and primitives for scheduling the application of functions. Programs written in Macintosh Common Lisp can use these services for a wide variety of real-time MIDI applications.
JFugue is a set of Java classes for music programming. It uses simple strings to represent musical data, including notes, chords, and instrument changes. JFugue also allows you to define music using patterns, and you can perform interesting transformations on those patterns to come up with new musical segments derived from existing pieces of music. JFugue can write MIDI files. The JFugue webpage is full of clear examples and instructions. JFugue makes music programming incredibly easy!
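The flavor of representing music as simple strings can be sketched in Python by mapping note names in a string to MIDI pitch numbers. JFugue itself is Java and its string syntax is far richer than this toy parser.

```python
# Semitone offsets of the natural note names within an octave.
SEMITONE = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def to_midi(music_string, octave=5):
    """Convert a space-separated note string like 'C D E' to MIDI pitches."""
    pitches = []
    for token in music_string.split():
        name, accidental = token[0], token[1:]
        pitch = 12 * octave + SEMITONE[name]   # octave 5 puts C at MIDI 60
        if accidental == '#':
            pitch += 1
        elif accidental == 'b':
            pitch -= 1
        pitches.append(pitch)
    return pitches

print(to_midi("C D E F G"))   # → [60, 62, 64, 65, 67]
```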
Java Music Specification Language (JMSL) is a programming environment for experiments in music performance, composition, and intelligent instrument design. Based on HMSL (Hierarchical Music Specification Language), JMSL is a Java package which affords the composer all the functionality of the Java programming language as well as the hierarchical structuring, scheduling, and philosophy of HMSL.
JSyn uses native methods written in 'C' to provide real-time audio synthesis for Java programmers. It is based on the traditional model of unit generators which can be connected together to form complex sounds. For example, you could connect a white noise generator to a low pass filter that is modulated by random ramp generators to create a wind sound.
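The unit-generator model described above can be sketched in Python: each generator produces samples, and generators are connected by feeding one's output into another's input. JSyn itself does this with native code behind a Java API; the class names below are made up for illustration, and the filter is a simple one-pole smoother rather than any particular JSyn unit.

```python
import random

class WhiteNoise:
    """A source unit generator: one random sample per call."""
    def next(self):
        return random.uniform(-1.0, 1.0)

class OnePoleLowPass:
    """A filter unit generator: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    def __init__(self, source, a=0.05):
        self.source, self.a, self.y = source, a, 0.0

    def next(self):
        x = self.source.next()      # pull a sample from the upstream unit
        self.y += self.a * (x - self.y)
        return self.y

# Patch: white noise into a low-pass filter, then pull a block of samples.
patch = OnePoleLowPass(WhiteNoise(), a=0.05)
samples = [patch.next() for _ in range(1024)]
```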
Interpreted multi-tasking awk-like language designed for algorithmic and realtime MIDI manipulation. A multi-window GUI with pull-off menus and buttons is implemented using the language, and includes a multi-track sequencer and drum pattern editor. Source code for all tools is included and can be customized easily.
Object-oriented compositional environment based on Nasal, a clean and flexible dynamically typed scripting language with garbage collection. Suitable for live performance (real-time recompiling of objects), algorithmic composition and experiments.
A language for specifying and manipulating sound. It is a visual language and is based on units called "sound objects" rather than the "notes" of standard music notation. Structures specified in Kyma can be compiled for real-time sample generation on a digital signal processor. Kyma is described in an issue of CMJ devoted to object-oriented music applications.
Loki is a text to MIDI converter. It was developed to help transcribe music to MIDI. It contains facilities to harmonise and manipulate melodies as well. The next release will provide interactive facilities. Shareware and Fun.
A graphical, object-oriented language in which precompiled input/output primitives of specific function can be 'patched' together graphically onscreen to create large interactive systems. Primarily but not exclusively MIDI-oriented. User primitives can be compiled in C.
Macro language for configuration and control of analog oscillators, filters, VCAs, LFOs, amp, mixer, etc., music notation, SMPTE sync, but especially rich in traditional compositional vocabulary. (Nestable) macros could be triggered by name, parameter value (e.g. pitch), time (abs or rel in min:sec:frames or measures:beats:ticks), controller activity, alphanumeric input. (No published documentation or description.)
MPL is a collection of functions in APL that manipulate note and conductor data in matrices. Developed at Oberlin. Used extensively at the University of Melbourne in Australia. Currently used only by the author on Mac OS X with APLX from MicroAPL in the UK. Listen to Goss at http://www.timara.oberlin.edu/~gnelson/mp3s/Long.mp3s.html
THE MSQ PROJECT provides an open, easily readable and editable file format suited to algorithmic manipulation and composition as well as to real-time control of MIDI instruments. The MSQ file format, a plain ASCII text file format, represents sequences of MIDI commands in strictly chronological order even when multiple MIDI tracks are present. It not only allows the translation of MIDI files into readable text but is also a well-defined, MIDI-compatible file format in itself.
8-bit sampling, graphical additive synthesis, and command-line sequencing. Information can be entered through the keyboard or a light-pen. A 'Film Music Processor' allowed editing to be carried out in a variety of time codes, working in frames, music cues, and sync points.
MusicDNA Composer is a web-based application for the creation of tonal music. Composers use a simple API to create melodies, harmonies and counterpoint, from the simplest single-voice piano song all the way up to a symphony. Code is compiled into both MIDI and sheet music in PNG and PDF format, which can be downloaded or displayed directly through the browser. The output can therefore be easily synthesized or performed by instrumentalists. The language "knows" basic harmony and counterpoint, and can take care of much of the calculation necessary in programming if the composer chooses. Themes or melodies can be pulled out into functions and shared among composers, thus allowing them to collaborate in the same way that programmers do. Since the songs are stored on the MusicDNA servers, you can also edit your compositions from anywhere. The site also allows you to store MP3s, MIDIs, sheet music or other media, so that you may give out the URL to your creations.
The Music Kit is an object-oriented software system for building music, sound, signal processing, and MIDI applications in the NEXTSTEP programming environment. It has been used in such diverse commercial applications as music sequencers, computer games, and document processors. Professors and students have used the Music Kit in a host of areas, including music performance, scientific experiments, computer-aided instruction, and physical modeling.
MusicXML is a universal translator for common Western musical notation from the 17th century onwards. It is designed as an interchange format for notation, analysis, retrieval, and performance applications.
There was an Algol-based language, MUSIGOL, fashioned after the Bell Labs MUSIC I-V programs. It was created at the University of Virginia. MUSIGOL ran on a Burroughs B5500 and used an Adage Ambilog 200 as a DAC.
NetSound is a structured-audio compositor and synthesizer which renders sound in real-time using a variety of synthesis algorithms. It is being developed by the Machine Listening Group at the Media Lab.
A functional programming language for composition and sound synthesis. Uses a Lisp syntax, a signal processing and signal representation core, and a rich semantics dealing with time and transformations.
OpenMusic is a highly visual environment for the composer on the Macintosh. While drawing benefit from the huge amount of knowledge and experience gathered around the PatchWork software, OpenMusic implements a set of radically new features that make it a second-generation compositional software. Based on Digitool Macintosh Common Lisp, OpenMusic provides a visual programming interface to Lisp programming as well as to CLOS (the Common Lisp Object System); thus OpenMusic is an object-oriented (OO) environment. Objects are symbolized by icons that may be dragged and dropped, and most operations are performed by dragging an icon from one place and dropping it at another. These places include the OpenMusic Workspace as well as the Macintosh Finder itself. Many classes implementing musical data and behaviour are provided. They are associated with graphical editors and may be extended by the user to meet specific needs. Different representations of a musical process are handled, among them common notation, MIDI piano-roll, and sound signal. High-level in-time organisation of the musical material is proposed through the "maquette" concept.
Opusmodus is aimed at composers of all kinds - of art music, concert music, choral music, film music, jazz, electroacoustic music, music for games and new media - and songwriters. Opusmodus is a comprehensive computer-aided environment for the whole work of music composition: a virtual space where a composer can develop ideas and experiments for projects large and small. Opusmodus allows you to explore more than one structure at the same time. It also allows the composer to study the interaction between the different structures with more meaningful outcomes.
Open Sound World, or OSW, is a scalable, extensible programming environment that allows musicians, sound designers and researchers to process sound in response to expressive real-time control. OSW combines a familiar visual patching paradigm with solid programming-language features such as a strong type system and hierarchical name spaces. OSW also includes an intuitive model for specifying new components using a graphical interface and high-level C++ expressions, making it easy to develop and share new music and signal-processing algorithms.
Patchwork is a graphical interactive environment for computer assisted composition which is aimed at helping composers generate, represent and manipulate musical material. Its general, extendible environment can be easily adapted to suit radically different aesthetic needs.
pcmusic takes a text (ASCII) file written in the pcmusic input language and creates a soundfile (in .wav format) that corresponds to it. For this reason, pcmusic is a member of the class of programs sometimes known as "acoustic compilers". pcmusic is a version of the cmusic sound synthesis program for the IBM PC and compatibles.
PMML is a musical event description/manipulation language designed for computer-controlled performances with MIDI instruments. Direct music description, algorithmic composition, and music transformation are all supported.
A Python module for generating and manipulating musical events. Output is currently only in Csound .sco format, but the goal of pysco is to support MIDI streams and files as well, with user-extensible support for other kinds of output.
Q is a modern functional programming language based on the term rewriting calculus. Programs are simply collections of equations which are used to evaluate expressions in a symbolic fashion. Q offers an elaborate interface to Grame's MidiShare, and also has a basic audio interface. The latter will be improved over time (additional modules for doing modular synth and dsp stuff are in the planning stage). The MidiShare interface already makes Q a nice environment for (realtime) MIDI programming.
Quasimodo is an advanced, real-time, extensible, MIDI-controllable environment for generating and processing audio and MIDI data. Quasimodo supports the Csound programming language, plugin opcodes, themeable graphics, a simple scripting language for user-interface design, and an intuitive graphical user interface for real-time manipulation. It supports Csound scorefiles, real-time MIDI input, and its own user interface for playing audio and MIDI compositions.
This library - a collection of patches for MAX (an interactive graphical programming environment for multimedia, music, and MIDI) - offers the possibility to experiment with a number of compositional techniques, such as serial procedures, permutations, and controlled randomness. Most of these objects are geared towards straightforward processing of data. By using these specialized objects together in a patch, programming becomes much clearer and easier. Many functions that are often useful in algorithmic composition are provided with this library, so the composer can concentrate on the composition rather than on programming. The Real Time Composition Library (RTC-lib) was developed during my extensive work on Lexikon-Sonate (1992 ff.), an interactive realtime composition for computer-controlled MIDI piano. Although it was conceived for a concrete project, it became more and more obvious that its functionality is open and generic enough to be used by other composers in different compositional contexts. Although it is based, from the theoretical point of view, on paradigms extracted from serial thinking and its further developments up to the present day, it does not force a certain aesthetic on the user, but provides a programming environment for testing and developing musical strategies. Please note that "serial" has another connotation here than it normally has (especially in the US): "serial" refers to a certain way of musical thinking rather than to dodecaphonic techniques, which were abandoned by serial theory itself (cf. Gottfried Michael Koenig and Karlheinz Stockhausen).
SAOL is a music-synthesis and effects-processing language which is a component of the MPEG-4 standard (ISO/IEC 14496-3). It follows a Music-N paradigm, but has a number of novel extensions, most notably the ability to define new unit generators within the language. In MPEG-4, SAOL is used to transmit synthesis descriptions controllable with MIDI or by a new lightweight score format called SASL, and to transmit effects-processing algorithms which apply to natural (waveform-encoded) audio within the MPEG-4 audio scene.
An acoustic compiler - a program which takes a source file written in the sapphire programming language and generates a sample. Sapphire can create sounds of arbitrary complexity, although it may take several hours for it to do so. Think of it as a ray-tracer for noises.
Scala is an editor, librarian, and analysis tool for musical tunings. One can create, manipulate and combine scales in many ways using the Scala command language. It can also tune synthesizers and retune MIDI files.
The Sound Description Interchange Format (SDIF) is a recently adopted standard that can store a variety of sound representations including spectral, time-domain, and higher-level models. SDIF consists of a data format and a set of standard sound descriptions and their official representation. SDIF is flexible in that new sound descriptions can be represented, and new kinds of data can be added to existing sound descriptions.
Silence is an extensible system for making music on computers by means of software alone. It implements Music Modeling Language (MML), which represents music as a directed acyclic graph of nodes that can be notes, groups of notes, transformations of notes, or processes generating notes. MML is to sounds as VRML is to pictures. Silence currently uses its own Java interface to Csound as a synthesis engine.
Synthesis toolKit Instrument Network Interface - a language designed to be MIDI-compatible and to extend MIDI in incremental but profound ways. It uses text-based messages. SKINI was designed to be extensible and hackable for a number of applications: embedded synthesis in a game or VR simulation, scoring and mixing tasks, real-time and non-real-time applications which could benefit from controllable sound synthesis, Java-controlled synthesis, or eventually maybe synthesis in Java itself.
The Standard Music Description Language (SMDL) is defined (in ISO/IEC Draft International Standard 10743) as "an architecture for the representation of music information, either alone, or in conjunction with text, graphics, or other information needed for publishing or business purposes. Multimedia time sequencing information is also supported." SMDL is a HyTime application conforming to International Standard ISO/IEC 10744 Hypermedia/Time-based Structuring Language.
The Smalltalk music object kernel (Smoke) music representation language facilitates the formal description of low-level musical data such as note events, and also of higher-level structures such as chord progressions and musical form "objects." In object-oriented software terms, the representation is described in terms of software class hierarchies of objects that share state and behavior and implement the description language as their protocol.
SOUL is a language for audio development focused on portability and speed, but also on accessibility for audio enthusiasts and professionals. Conceived as "an audio equivalent of OpenGL's GLSL or OpenCL", its goal is to produce audio programs that can run not only on the CPU but also on heterogeneous or remote hardware, without adding complexity for the developer. Arguing that the CPU is not the best place to run these programs, the authors are confident that domain-specific hardware is the future of sound programming, and that accessible and portable tools to exploit it will be necessary. Like graphics shaders, SOUL programs run within another "host" application, which can be written in any language or framework without affecting the SOUL program's performance. SOUL is currently in beta.
Common Composer's Programming Language (CCPL) is a computer programming environment aimed at composers and researchers in the field of electroacoustic music. SoundModel is part of CCPL; it does not depend on any particular synthesis method and can be interfaced to many different methods as long as the parameters of the method are closely related to the acoustical result. The language can be used to describe a large universe of sonic structures with complex dynamic behaviour and complicated interdependencies, which makes it useful in a computer music environment.
A successor to SuperCollider 2 for Macintosh OS 9. Synthesis takes place in a separate process (the server), controlled by OSC messages generated by a client; any software that can produce OSC messages can act as a client, however. The library of sound-processing functions is now implemented as C++ plugins to the server, allowing users to write their own using the server API. SC Server includes a client whose language is similar to SC 2, but extended with some borrowings from functional programming notation. It features powerful data manipulation classes and multi-threading for algorithmic composition. GUI, MIDI in/out, HID (joystick and other) input, and Wacom tablet input are also supported. It is optimized for real-time, live-performance scenarios.
SuperCollider is an environment for real time audio synthesis which runs on a Power Macintosh with no additional hardware. SuperCollider features: a built in programming language with real time incremental garbage collection, first class functions/closures, a small object oriented class system, a mini GUI builder for creating a patch control panel, a graphical interface for creating wave tables and breakpoint envelopes, MIDI control, and a large library of signal processing and synthesis functions. It is an extended version of Pyrite, but no longer runs under Max.
SC allows the user to create music from almost any source: text, fractals, association-structures... in almost any style.... It is quite a difficult language to learn, but it offers almost endless possibilities.
VMM is a programming language that allows you to send and receive raw MIDI messages and has built-in libraries for higher-level MIDI functions. VMM makes multithreading super easy, and that's what makes it so powerful.
Zel is a computer language for creating MIDI data. Its features include:
- low language overhead ("a b c" plays a b c)
- powerful macro capabilities with parameter passing
- automatic distribution of notes into multiple tracks
- file inclusion
- controller/tempo/velocity sequence generation
- automatic pitch-bend generation
- integer/fractional/decimal/MBT/SMPTE duration formats
- fine control of note displacement
- unlimited tracks
- attribute inheritance (track -> chord -> note)
- random or sequential pick from a list of weighted macros
- automatic macro application based on note timing
- sysex file inclusion and sub-parser
- musical thread isolation using parentheses
- looping
- definition, transposition, and referencing of note sets
- support for MIDI text and meta-events