
SOFTWARE-O-PHONES:
Homemade Software Instruments

By Henry Lowengard


First published in Experimental Musical Instruments Volume IX #4, June 1994.
Copyright 1994 Henry Lowengard

EXPERIMENTAL SOFTWARE INSTRUMENTS
Computer music is a widening field which encompasses commercial synthesizers and samplers, their associated sequencers, effects devices and digital recording editors. In the old "rough-and-ready" age of electronic music, there was a tendency to explore sound as sound, mostly because the level of the tools allowed little else. Now the pendulum has swung the other way -- using modern electronic instruments, the tendency is to call up some imitative program and treat it as a glorified organ stop. The MIDI standard also led to the "pianification" of commercial synthesizer controllers. Yes, there are alternate MIDI controllers: wind controllers, guitar controllers, voice trackers -- I have some of them myself. Yes, cheaper memory and higher computing speed have led to some amazing-sounding devices. However, the element of exploration seems to be gone.

A few years ago, after having spent many years building oscillators, taping them, and tearing them apart, years of steady computer hacking, and even a little home customizing of a commercial keyboard, I decided that the personal computer industry had matured enough that I could write my synthesizers instead of building them. The machine which best allowed this was the Commodore Amiga. All of the following programs run ONLY on Amigas. These programs are also in the public domain and can be found in the software section of my webpage at http://www.echonyc.com/~jhhl/binary.html or can be obtained from me. Send a blank disk and an idea of what you want (and maybe a nice kitschy Colortone™ postcard or two) and I'll send it back. Remember: in my case you are only fully licensed if you give a copy to someone else!

After this was published in EMI, I made a recording of a number of the instruments mentioned in the article, which was excerpted on an EMI compilation tape. Here is an MP3 of the uncut original demo recording, broken up into sections:

RGS, A SONOGRAM PAINTING PROGRAM

Spectral analysis is a powerful method for discovering underlying structures in sounds. This analysis separates a sound into an orchestra of sine waves, each at a constant frequency but varying amplitude. This analysis leaves you with a lot of numbers, which fortunately make more sense when expressed visually. A sonogram represents amplitude as a range of color or a gray scale in a flat plane. Timbral aspects of the sound become clearer in a sonogram: for example, variations in harmonic content and frequency show up as wiggly lines and blobs. My program RGS (Real-time Graphical Synthesis) develops from a simple idea: to "paint" a sonogram directly, in the manner of other computer painting programs, while simultaneously synthesizing the sound that sonogram represents. In RGS, you are painting amplitudes (using a mouse or graphics tablet) into a "canvas" of fixed frequencies organized in a series of skinny time frames. The sound which corresponds to the painted sonogram plays in an endless loop, at a speed based on the sample playback rate and the number of sound samples each time frame controls. The same sonogram can be reinterpreted with different time scales and frequency ranges.
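
To make the mapping from painted pixels to sound concrete, here is a minimal sketch of the idea in Python/NumPy (not the Amiga code): one row per fixed frequency, one column per time frame, and an additive sum of sine waves rendered as a loop. The sample rate, frame length, and harmonic frequency assignment are assumptions for illustration only.

import numpy as np

SAMPLE_RATE = 22050          # assumed playback rate
SAMPLES_PER_FRAME = 256      # samples each time frame controls
N_FREQS = 128                # one row per paintable frequency
N_FRAMES = 64                # width of the canvas (number of time frames)

# one frequency per row; RGS lets you choose the range -- a harmonic series
# over a 55 Hz fundamental is just one possible assignment
freqs = 55.0 * np.arange(1, N_FREQS + 1)

canvas = np.zeros((N_FREQS, N_FRAMES))   # "paint" amplitudes 0..1 in here
canvas[8, :] = 1.0                       # e.g. a horizontal line = a steady sine

def render_loop(canvas):
    """Additively synthesize one pass over the painted sonogram."""
    t = np.arange(N_FRAMES * SAMPLES_PER_FRAME) / SAMPLE_RATE
    phases = 2 * np.pi * np.outer(freqs, t)             # (N_FREQS, total samples)
    amps = np.repeat(canvas, SAMPLES_PER_FRAME, axis=1) # hold each frame's value
    out = np.sum(amps * np.sin(phases), axis=0)
    return out / max(1.0, np.max(np.abs(out)))          # normalize for playback

loop = render_loop(canvas)   # play this buffer end to end, forever

Reinterpreting the same canvas with a different SAMPLES_PER_FRAME or a different freqs assignment is what gives one sonogram many time scales and frequency ranges.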

The painting tools in RGS are different from the ones used in other computer paint programs, because time is different from frequency. The tools are more horizontally oriented, in order to put correlated sound into time. There are special tools which draw harmonics -- lines equally spaced in frequency -- and others which erase everything BUT harmonics from a complex sonogram. There are time and frequency constraints which let you build rhythms (constrained time) and very pure harmonic material (constrained frequency). RGS can analyze a sound sample into its sonogram as well, which you are then free to edit and resynthesize. You can save out the synthesized sample (in the Amiga's IFF 8SVX format) for use by other programs.
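
As a hedged sketch of those two harmonic tools (reusing the canvas and freqs arrays from the example above, and not taken from the RGS source), something like this captures the idea; the frequency tolerance and 1/n amplitude roll-off are invented for illustration:

def draw_harmonics(canvas, freqs, fundamental, amp=1.0):
    """Paint every row whose frequency sits on a multiple of `fundamental`."""
    for row, f in enumerate(freqs):
        n = round(f / fundamental)
        if n >= 1 and abs(f - n * fundamental) < 1.0:   # assumed 1 Hz tolerance
            canvas[row, :] = amp / n                    # assumed 1/n roll-off

def keep_only_harmonics(canvas, freqs, fundamental):
    """Erase everything BUT rows that sit on the harmonic series."""
    for row, f in enumerate(freqs):
        n = round(f / fundamental)
        if n < 1 or abs(f - n * fundamental) >= 1.0:
            canvas[row, :] = 0.0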

Because it works immediately, it can also be used as a musical instrument, although I'd have to say it's hard to synchronize with other processes because of the looping nature of the sound. RGS is also a MIDI controller where each of the 128 frequencies corresponds to a MIDI note. You select which time frame gets played by dragging the cursor over the sonogram with the mouse. When used this way, the program filters out all but the loudest 16 or so of the harmonics in each frame so the attached MIDI synthesizers won't overflow. If the synthesizer is retuned microtonally to a "scale" of harmonics, it can do a crude reconstruction of the drawn spectrogram! You could capture this MIDI stream and bring it into a scoring program (to make parts for a very gifted orchestra) or simply use RGS as an immediate, graphically based controller for a variety of MIDI peripherals.
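
A rough sketch of that MIDI mode (again only an illustration, reusing the canvas grid from the first sketch): take the frame under the cursor, keep only the loudest ~16 rows, and turn each surviving row into a note-on, with amplitude mapped to velocity. The row-to-note mapping and the message handling are assumptions; note-offs are omitted for brevity.

import numpy as np

MAX_VOICES = 16   # keep the attached synthesizers from overflowing

def frame_to_midi_events(canvas, frame_index):
    column = canvas[:, frame_index]
    loudest = np.argsort(column)[-MAX_VOICES:]           # indices of the top rows
    events = []
    for row in loudest:
        if column[row] <= 0.0:
            continue
        note = int(row)                                   # one MIDI note per row
        velocity = int(np.clip(column[row] * 127, 1, 127))
        events.append((0x90, note, velocity))             # note-on message
    return events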

RGS can resynthesize a portion of the spectrogram in "non-real time" at a higher quality than the real-time synthesis. This way you can isolate the sound of individual formants in speech, or otherwise build up a complex sound in a piecewise manner.

People always ask me what some picture would sound like if treated as a sonogram and synthesized this way -- but they don't realize that a vertical line is the equivalent of mashing the keys of a 128-key pipe organ, and that dense horizontal lines become distorted throbbings as the loud, close harmonics create a loud, low difference tone. Well, maybe in certain contexts it wouldn't be out of place ... but RGS is primarily meant for developing a sense of how sounds can be constructed out of sine waves, and of the equivalence between timbres, chords, beats, rhythms and entire compositions.
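
A quick numerical illustration of why those dense horizontal lines throb: two loud sines only 30 Hz apart beat at their 30 Hz difference frequency (the frequencies here are chosen arbitrarily for the example).

import numpy as np

sr = 22050
t = np.arange(sr) / sr                       # one second of time
mix = np.sin(2*np.pi*500*t) + np.sin(2*np.pi*530*t)
# the envelope of `mix` rises and falls 30 times per second, heard as a low
# throb; add distortion or more closely spaced partials and it gets rougher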

You can peruse RGS' documentation file in the convenient HTML fashion; it has lots of chit chat involving tips and the evolution of the program over the years.

Here are some sound examples of RGS.

A very powerful system similar to RGS -- in some ways more powerful, and of course running on more modern equipment -- is Metasynth by UI Software for the Mac. For Windows machines, there is Coagula by Rasmus Ekman, who seems to have a lot of the same interests that I do.

HARM, A SOUND EFFECT INSTRUMENT
There are a lot of good quality sound effects devices available now. Some of them allow the properties of their effects to be controlled by MIDI devices in a live, interactive way. However, there are many times when I've wanted to explore effects which either don't exist yet or aren't in the machine I personally own, or when I just want to try out effect ideas for myself. That is the idea behind my program HARM. HARM provides a framework for creating sound effects digitally so that new effect experiments can quickly be written. Usually some aspects of the effect are controlled by the computer's mouse and keyboard -- or even MIDI -- but the range of options is kept simple so they may be changed easily in performance. Some of the effects HARM can do are:

independent stereo pitch shifting,
playing a short sample as if it were autoharp strings,
real-time backwards pitch shifting,
variable amplitude modulation,
real-time "slowing down" of incoming sound,
pitch detection combined with pitch shifting to "monotonize" an incoming sound,
MIDI-controlled transposition of live sound,
MIDI-transposed playing of sequences of phrases from a short sample,
overlapping and interpolation of short clips of sound,

and other noise-making effects. Because of the unusual nature of the effects, HARM even makes interesting feedback noises. All of these effects also throw related bands of color on the computer screen, which can be sent to a projection television or other video processor. Some of my effects only provide video! HARM effects react rapidly enough to give the feel of a musical instrument. I've developed playing techniques involving subtle trackball wiggling and carefully controlling the input volume and feedback levels by cupping my hand over the microphone. Listen to it here. The MIDI controlled effects take on new powers when controlled by a MIDI guitar or Casiohorn.
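
As a minimal sketch of one effect from that list -- variable amplitude modulation of live input -- here is how a block of incoming samples might be processed, with the modulation rate standing in for a mouse or trackball axis. The audio I/O is abstracted to "process one block"; the block size and rates are assumptions, and this is not HARM's actual code.

import numpy as np

SAMPLE_RATE = 22050
BLOCK = 512
phase = 0.0

def process_block(in_block, mod_hz):
    """Multiply one block of input samples by a sine LFO running at mod_hz."""
    global phase
    n = len(in_block)
    lfo = np.sin(phase + 2*np.pi*mod_hz*np.arange(n)/SAMPLE_RATE)
    phase = (phase + 2*np.pi*mod_hz*n/SAMPLE_RATE) % (2*np.pi)
    return in_block * lfo    # route some of this back to the input for feedback noises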
I went on the air on WBAI-FM on Jan 11, 1993 at the graveyard hours of 1:30-3:30 AM and HARMed listener phone calls. The phone lines were quite jammed. You can hear it here: Part I, Part II

LYR, AN EXTENSIBLE MIDI AUTOHARP

The first instrument I wrote was called LYR (pronounced "lyre"). LYR is a MIDI controller which acts like an extended autoharp. A series of strings appears on the screen and the mouse pointer turns into a flatpick. The keys on the computer keyboard act like the chord bars on a real autoharp -- they disable sets of strings from sounding. I can have some 92 chords set up this way. The strummed notes keep playing until they are either damped by the appropriate chord bar or recycled for use in another chord. The same chords can "finger" virtual frets on the strings, so fretted instruments can also be simulated and strange hybrids experimented with. Most synthesizers do not have 128-note polyphony (the maximum that LYR can play simultaneously), so LYR provides recycling to keep the synthesizer from making its own decision about which notes to keep playing when it overflows. When a string is plucked, additional information can be sent: the position of the pick can determine the MIDI velocity, pitchbend or control value associated with the note it plays. This gives a lot of control over MIDI parameters in an easily accessible way. However, a single strum over the strings is the extent of mouse control -- there's no way to simulate fingerpicks! I've tried to remedy this with a tiny sequencer which sequences the strums and allows them to be drawn over the strings in their proper places. It does not sequence the chords, though, which are chosen "live". Musically, the paradigm of autoharpery translates into stable, formant-like ranges of properly inverted chords -- or, when controlling a MIDI drum kit, a very strange drummer! Its explicit control over note assignment and velocity makes for very cleanly controlled and visualized clouds of sound.
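
As a rough illustration of the chord-bar damping and note recycling described above (a hedged sketch, not the LYR source), the voice bookkeeping might look like this; the voice limit, the chord representation as pitch classes, and the send callback are all assumptions made for the example.

from collections import deque

MAX_VOICES = 32               # assumed per-synth limit; LYR itself allows up to 128
A_MAJOR = {9, 1, 4}           # pitch classes A, C#, E

class Autoharp:
    def __init__(self, strings, send):
        self.strings = strings    # list of MIDI note numbers, low to high
        self.sounding = deque()   # notes in the order they were plucked
        self.send = send          # callable that takes a raw MIDI message tuple

    def pluck(self, index, chord, velocity=100):
        note = self.strings[index]
        if note % 12 not in chord:            # the chord bar damps this string
            return
        if len(self.sounding) >= MAX_VOICES:  # recycle: note-off the oldest voice
            self.send((0x80, self.sounding.popleft(), 0))
        self.send((0x90, note, velocity))     # pluck the string
        self.sounding.append(note)

A strum is then just a sweep of pluck() calls across consecutive string indices, with velocity derived from the pick position.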

The ever-resourceful Rasmus Ekman has written a Windows program similar in philosophy but very different in sound and technique: his interactive granular resynthesis instrument Granulab.

It turns out that Jeff Harrington made an entire album, Obliterature, using LYR. You can pick up a copy from the Internet!

BITE, AN AUDIO TRAVESTY PROGRAM

One of the interesting back alleys of computer art is the travesty program. This is a program which analyzes some text and generates variations of that text which are similar in style to it. Some examples are "Racter," which "wrote" the book The Policeman's Beard is Half Constructed in the early 80s, and, in the musical world, the Mozart pastiches done by David Cope. I've written a program, Sound Bite, which does the same thing for audio material. A longish sample is taken and chopped up into word-like "bites". These bites are crudely keyed by average wavelength and amplitude, and then matched to a growing vocabulary of other sound bites. BITE then creates a simple sound grammar by keeping track of which bites follow which. When the whole sample is consumed, the program generates a new sound stream using the grammar and vocabulary. BITE can be put into a cycle where it listens for a while, then generates, then listens, and so on in an unattended manner. Left listening to talk radio or television, it sometimes comes up with a bizarre précis. The generated sound usually skips and stutters like a broken record; with music it is similar to having a phrase stuck in your head, repeating from different points, skipping around to the "hooks" or even picking out small sections obsessively for no apparent reason. BITE sometimes produces (uncovers?) strange "hidden meanings" in speeches and other vocal material.
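
Here is a hedged sketch of that "sound grammar" idea (not BITE's actual code): chop a sample into bites, key each bite crudely by average amplitude and zero-crossing rate (standing in for "average wavelength"), record which key follows which, then wander the resulting first-order chain to generate a new stream. The bite length and the quantization buckets are assumptions.

import random
import numpy as np
from collections import defaultdict

def bite_key(bite):
    amp = np.mean(np.abs(bite))
    crossings = np.count_nonzero(np.diff(np.sign(bite)))
    return (int(amp * 8), int(crossings / 32))            # crude quantization

def learn(sample, bite_len=1024):
    bites = [sample[i:i+bite_len] for i in range(0, len(sample)-bite_len, bite_len)]
    keys = [bite_key(b) for b in bites]
    vocab = defaultdict(list)      # key -> all bites that share that key
    grammar = defaultdict(list)    # key -> keys observed to follow it
    for k, b in zip(keys, bites):
        vocab[k].append(b)
    for a, b in zip(keys, keys[1:]):
        grammar[a].append(b)
    return vocab, grammar

def generate(vocab, grammar, n_bites=200):
    k = random.choice(list(grammar))
    out = []
    for _ in range(n_bites):
        out.append(random.choice(vocab[k]))                # speak a bite with this key
        k = random.choice(grammar[k]) if grammar[k] else random.choice(list(grammar))
    return np.concatenate(out)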

I don't save any of the computer data generated with BITE because it takes a lot of memory to store, and it's meant to be an effect and not something predictable. The idea is to let it develop its own personality.
Hear some Sound Bite examples!

BUZZ, AN AREXX-CONTROLLED SOFTWARE SYNTHESIZER
For various purposes, I thought it would be nice to have a synthesizer server which could have a number of oscillators and be patched together via the Amiga's AREXX batch language. Because the Amiga is not too speedy, I use double-buffering techniques and allow for a variable sample rate (which can also be used for an aliasing effect). The oscillators can have the usual array of waveforms (sine, square, triangle, up-saw, down-saw) as well as a programmable waveform (although it takes a long time to set up). The oscillators can AM- and FM-modulate each other in software, and have precise frequency adjustments, so that, for example, a drone with a slowly changing FM or AM modulation can be set up. It also has a MIDI capability which allows entire patches to be associated with each MIDI key. Note that there is no envelope support here! It's really designed to drone and act as a frequency source. The AREXX control also means that it can be running while other programs send it changes.
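
A sketch of the patching idea (not BUZZ's AREXX interface): a bank of oscillators rendered a block at a time, where one oscillator's output block can frequency-modulate another's phase. The double buffering is reduced here to "compute the next block while the last one plays"; the waveform set, rates and depths are illustrative.

import numpy as np

SAMPLE_RATE = 11025          # BUZZ allowed a variable (and low) sample rate
BLOCK = 1024

def osc_block(freq, phase, shape="sine", fm=None, fm_depth=0.0):
    """Render one block; `fm` is another oscillator's block used as a modulator."""
    inc = 2*np.pi*freq/SAMPLE_RATE
    ph = phase + inc*np.arange(BLOCK)
    if fm is not None:
        ph = ph + fm_depth * fm                  # simple phase/FM modulation
    if shape == "sine":
        out = np.sin(ph)
    elif shape == "square":
        out = np.sign(np.sin(ph))
    elif shape == "up-saw":
        out = (ph / np.pi) % 2.0 - 1.0
    else:                                        # triangle, down-saw, etc. omitted
        out = np.sin(ph)
    return out, (phase + inc*BLOCK) % (2*np.pi)

# a slowly FM-modulated drone: oscillator B (0.2 Hz) modulates oscillator A (110 Hz)
pa = pb = 0.0
b, pb = osc_block(0.2, pb)
a, pa = osc_block(110.0, pa, fm=b, fm_depth=2.0)
drone_block = a              # hand this block to the audio device, then compute the next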

Buzz docs are found here.

About the Author: Henry Lowengard is a New York-based computer programmer, animator, microtonal musician and autoharpoholic. Visit Henry's web page at http://www.echonyc.com/~jhhl. Send queries to jhhl-at-panix.com or 324 Wall St. Apt 5 / Kingston NY 12401.
Include a SASE, floppy disk, postcards or other enticements for a copy of the software described in the article or information about other projects.


This article was originally published in Experimental Musical Instruments, and has since been revised. The magazine is no longer publishing, but you can get all the back issues on CD-ROM from Bart Hopkin's webpage. I cannot recommend it highly enough; go order the back issues and cassettes!


© 1994-2024 Henry Lowengard