Nyquist is always growing with new functions. Functions that are most fundamental are added to the core language. These functions are automatically loaded when you start Nyquist, and they are documented in the preceding chapters. Other functions seem less central and are implemented as lisp files that you can load. These are called library functions, and they are described here.
To use a library function, you
must first load the library, e.g. (load "pianosyn")
loads the piano synthesis
library. The libraries are all located in the lib
directory, and you
should therefore include this directory on your XLISPPATH
variable. (See
Section Installation.) Each library is documented in one of the following
sections. When you load the library described by the section, all functions
documented in that section become available.
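Loading can also be done in SAL syntax with the load command; for example, the same piano library load looks like this:

```sal
;; load a library from Nyquist's lib directory (found via XLISPPATH)
load "pianosyn"
```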
The file statistics.lsp
defines a class and functions to compute simple statistics, histograms, correlation, and some other tests. See the source code for complete details.
The Nyquist IDE has a simple facility to plot signals. For more advanced plotting, you can use gnuplot.sal to generate plots for gnuplot, a separate, free program. See the source for details.
The labels.sal
program can convert lists to label files and label files to lists. Label files can be loaded along with audio in Audacity to show metadata. See the source for details.
See regression.sal
for simple linear regression functions.
See vectors.lsp
for a simple implementation of vector arithmetic and other vector functions.
The piano synthesizer (library name is pianosyn.lsp
) generates
realistic piano tones using a multiple wavetable implementation by Zheng (Geoffrey)
Hua and Jim Beauchamp, University of Illinois. Please see the notice about
acknowledgements that prints when you load the file. Further information and
example code can be found in
demos/piano.htm
.
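As a minimal sketch (the pitch constant c4 and the dynamic value here are merely illustrative), a single tone can be played like this:

```sal
load "pianosyn"
;; duration 2.0 seconds, pitch C4, dynamic (like a MIDI velocity) 100
play piano-note(2.0, c4, 100)
```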
There are several useful functions in this library. These functions auto-load the pianosyn.lsp library if it is not already loaded:
piano-note(duration, step, dynamic) [SAL]
(piano-note duration step dynamic) [LISP]

piano-note-2(step, dynamic) [SAL]
(piano-note-2 step dynamic) [LISP]
Like piano-note except the duration is nominally 1.0.

piano-midi(midi-file-name) [SAL]
(piano-midi midi-file-name) [LISP]

piano-midi2file(midi-file-name, sound-file-name) [SAL]
(piano-midi2file midi-file-name sound-file-name) [LISP]

These functions implement a compressor originally intended for noisy speech audio, but usable in a variety of situations.
There are actually two compressors that can be used in
series. The first, compress
, is
a fairly standard one: it detects signal level with an RMS
detector and uses table-lookup to determine how much gain
to place on the original signal at that point. One bit of
cleverness here is that the RMS envelope is “followed” or
enveloped using snd-follow
, which does look-ahead to anticipate
peaks before they happen.
The other interesting feature is compress-map
, which builds
a map in terms of compression and expansion. For speech, the recommended
procedure is to figure out the noise floor on the signal you are compressing
(for example, look at the signal where the speaker is not talking).
Use a compression map that leaves the noise alone and boosts
signals that are well above the noise floor. Alas, the compress-map
function is not written in these terms, so some head-scratching is
involved, but the results are quite good.
The second compressor is called agc
, and it implements automatic gain
control that keeps peaks at or below 1.0. By combining compress
and
agc
, you can process poorly recorded speech for playback on low-quality
speakers in noisy environments. The compress
function modulates the
short-term gain to minimize the total dynamic range, keeping the speech at
a generally loud level, and the agc
function rides the long-term gain
to set the overall level without clipping.
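This chain can be sketched in SAL as follows. The map parameters and the library file name "compress" are assumptions for illustration only; see compress-map below for the parameter meanings:

```sal
load "compress"  ; assumed file name for this library
define function process-speech(snd)
  begin
    ;; illustrative map: 2:1 compression above -12 dB,
    ;; 2:1 downward expansion below -40 dB
    with map = compress-map(2.0, -12.0, 2.0, -40.0)
    ;; compress with 20 ms rise and 100 ms fall, then ride the
    ;; long-term gain (up to 10 dB) to keep peaks at or below 1.0
    return agc(compress(snd, map, 0.02, 0.1), 10.0, 0.5, 1.0)
  end
```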
compress-map(compress-ratio, compress-threshold, expand-ratio, expand-threshold, limit: limit, transition: transition, verbose: verbose) [SAL]
(compress-map compress-ratio compress-threshold expand-ratio expand-threshold :limit limit :transition transition :verbose verbose) [LISP]
Limiting above 0dB is enabled by setting limit: (a keyword parameter) to T. This effectively changes the compression ratio to infinity at 0dB. If limit: is nil (the default), then the compression ratio continues to apply above 0dB. The transition: parameter sets the amount below the thresholds (in dB) at which a smooth transition starts. The default is 0, meaning that there is no smooth transition. The smooth transition is a 2nd-order polynomial that matches the slopes of the straight-line compression curve and interpolates between them. The result is a sound intended for use with the shape function. The sound maps input dB to gain. Time 1.0 corresponds to 0dB, time 0.0 corresponds to -100dB, and time 2.0 corresponds to +100dB, so this is a 100 Hz “sample rate” sound. The sound gives gain in dB.

db-average(input) [SAL]
(db-average input) [LISP]

compress(input, map, rise-time, fall-time [, lookahead]) [SAL]
(compress input map rise-time fall-time [lookahead]) [LISP]
Compresses input using map, a compression curve such as one constructed with compress-map (see above). Adjustments in gain have the given rise-time and fall-time. Lookahead tells how far ahead to look at the signal, and is rise-time by default.

agc(input, range, rise-time, fall-time [, lookahead]) [SAL]
(agc input range rise-time fall-time [lookahead]) [LISP]

This library, in soften.lsp
, was written to improve the quality of
poorly recorded speech. In recordings of speech, extreme clipping generates
harsh high-frequency noise. This can sound particularly bad on small speakers
that will emphasize high frequencies. This problem can be ameliorated by
low-pass filtering regions where clipping occurs. The effect is to dull the
harsh clipping. Intelligibility is not affected by much, and the result can
be much more pleasant on the ears. Clipping is detected simply by looking for
large signal values. Assuming 8-bit recording, this level is set to 126/127.
The function works by cross-fading between the normal signal and a filtered signal as opposed to changing filter coefficients.
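A hypothetical call is shown below; the input file name is invented, and treating cutoff as the low-pass cutoff frequency in Hz is an assumption (check the source in soften.lsp):

```sal
load "soften"
;; low-pass filter the clipped regions of a recorded speech file
play soften-clipping(s-read("speech.wav"), 2000.0)
```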
soften-clipping(snd, cutoff) [SAL]
(soften-clipping snd cutoff) [LISP]

There's nothing really “graphical” about this library (grapheq.lsp), but
this is a common term for multi-band equalizers. This implementation uses
Nyquist's eq-band
function to split the incoming signal into different
frequency bands. Bands are spaced geometrically, e.g. each band could be one
octave, meaning that each successive band has twice the bandwidth. An interesting
possibility is using computed control functions to make the equalization change
over time.
nband-range(input, gains, lowf, highf) [SAL]
(nband-range input gains lowf highf) [LISP]
Applies band-by-band gains to input (a SOUND). The gain controls and number of bands are given by gains, an ARRAY of SOUNDs (in other words, a Nyquist multichannel SOUND). Any sound in the array may be replaced by a FLONUM. The bands are geometrically equally spaced from the lowest frequency lowf to the highest frequency highf (both are FLONUMs).

nband(input, gains) [SAL]
(nband input gains) [LISP]
Same as nband-range with a range of 20 to 20,000 Hz.
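For example, a hypothetical 4-band equalizer that boosts the second band by 6 dB might look like this (constant gains are used here, but any element could instead be a time-varying SOUND):

```sal
load "grapheq"
define function eq-demo(snd)
  ;; four geometrically spaced bands over 20-20,000 Hz;
  ;; band 2 is boosted by 6 dB, the others pass unchanged
  return nband(snd, vector(1.0, db-to-linear(6.0), 1.0, 1.0))
```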
The reverse.lsp
library implements functions to play sounds in reverse.
s-reverse(snd) [SAL]
(s-reverse snd) [LISP]
Reverses snd (a SOUND). The sound must be shorter than *max-reverse-samples*, which is currently initialized to 25 million samples. Reversal allocates about 4 bytes per sample. This function uses XLISP in the inner sample loop, so do not be surprised if it calls the garbage collector a lot and runs slowly. The result starts at the starting time given by the current environment (not necessarily the starting time of snd). If snd has multiple channels, a multiple-channel, reversed sound is returned.

s-read-reverse(filename, time-offset: offset, srate: sr, dur: dur, nchans: chans, format: format, mode: mode, bits: n, swap: flag) [SAL]
(s-read-reverse filename :time-offset offset :srate sr :dur dur :nchans chans :format format :mode mode :bits n :swap flag) [LISP]
Like s-read (see Section Sound File Input and Output), except it reads the indicated samples in reverse. Like s-reverse (see above), it uses XLISP in the inner loop, so it is slow. Unlike s-reverse, s-read-reverse uses a fixed amount of memory that is independent of how many samples are computed. Multiple channels are handled.
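For example (the sound file name here is hypothetical):

```sal
load "reverse"
;; play a short sound file backwards
play s-reverse(s-read("a-snd.wav"))
```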
The time-delay-fns.lsp
library implements chorus, phaser, and flange effects.
phaser(snd) [SAL]
(phaser snd) [LISP]
A phaser effect applied to snd (a SOUND). There are no parameters, but feel free to modify the source code of this one-liner.

flange(snd) [SAL]
(flange snd) [LISP]

stereo-chorus(snd, delay: delay, depth: depth, rate1: rate1, rate2: rate2, saturation: saturation) [SAL]
(stereo-chorus snd :delay delay :depth depth :rate1 rate1 :rate2 rate2 :saturation saturation) [LISP]
The input snd is a (monophonic) SOUND. The output is a stereo sound with out-of-phase chorus effects applied separately for the left and right channels. See the chorus function below for a description of the optional parameters. The rate1 and rate2 parameters are rate parameters for the left and right channels.

chorus(snd, delay: delay, depth: depth, rate: rate, saturation: saturation, phase: phase) [SAL]
(chorus snd :delay delay :depth depth :rate rate :saturation saturation :phase phase) [LISP]
A chorus effect is produced by mixing snd with a delayed copy of itself. The delay (nominally delay seconds, a FLONUM) is varied by a sinusoid oscillating at rate Hz (a FLONUM). The sinusoid is scaled by depth (a FLONUM). The delayed signal is mixed with the original, and saturation (a FLONUM) gives the fraction of the delayed signal (from 0 to 1) in the mix. Default values are delay 0.03, depth 0.003, rate 0.3, saturation 1.0, and phase 0.0 (degrees).
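A sketch of applying these effects (the input file name is hypothetical; all keyword parameters are optional):

```sal
load "time-delay-fns"
set voice = s-read("voice.wav")
play chorus(voice, rate: 0.5)  ; slower modulation than the default 0.3 Hz
play stereo-chorus(voice)      ; stereo version with default parameters
```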
The bandfx.lsp
library implements several effects based on multiple
frequency bands. The idea is to separate a signal into different frequency
bands, apply a slightly different effect to each band, and sum the effected
bands back together to form the result. This file includes its own set of
examples. After loading the file, try f2()
, f3()
, f4()
,
and f5()
to hear them.
Further discussion and examples can be found in
demos/bandfx.htm
.
There is much room for expansion and experimentation with this library. Other effects might include distortion in certain bands (for example, there are commercial effects that add distortion to low frequencies to enhance the sound of the bass), separating bands into different channels for stereo or multichannel effects, adding frequency-dependent reverb, and performing dynamic compression, limiting, or noise gate functions on each band. There are also opportunities for cross-synthesis: using the content of bands extracted from one signal to modify the bands of another. The simplest of these would be to apply amplitude envelopes of one sound to another. Please contact us ([email protected]) if you are interested in working on this library.
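After loading the library, the included examples can be run directly:

```sal
load "bandfx"
exec f2()  ; one of the example functions defined in bandfx.lsp
```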
apply-banded-delay(s, lowp, highp, num-bands, lowd, highd, fb, wet) [SAL]
(apply-banded-delay s lowp highp num-bands lowd highd fb wet) [LISP]
Splits the SOUND s into FIXNUM num-bands bands from a low frequency of lowp to a high frequency of highp (these are FLONUMs that specify steps, not Hz), and applies a delay to each band. The delay for the lowest band is given by the FLONUM lowd (in seconds) and the delay for the highest band is given by the FLONUM highd. The delays for other bands are linearly interpolated between these values. Each delay has feedback gain controlled by FLONUM fb. The delayed bands are scaled by FLONUM wet, and the original sound is scaled by 1 - wet. All are summed to form the result, a SOUND.

apply-banded-bass-boost(s, lowp, highp, num-bands, num-boost, gain) [SAL]
(apply-banded-bass-boost s lowp highp num-bands num-boost gain) [LISP]
Splits the SOUND s into FIXNUM num-bands bands from a low frequency of lowp to a high frequency of highp (these are FLONUMs that specify steps, not Hz), and scales the lowest num-boost (a FIXNUM) bands by gain, a FLONUM. The bands are summed to form the result, a SOUND.

apply-banded-treble-boost(s, lowp, highp, num-bands, num-boost, gain) [SAL]
(apply-banded-treble-boost s lowp highp num-bands num-boost gain) [LISP]
Splits the SOUND s into FIXNUM num-bands bands from a low frequency of lowp to a high frequency of highp (these are FLONUMs that specify steps, not Hz), and scales the highest num-boost (a FIXNUM) bands by gain, a FLONUM. The bands are summed to form the result, a SOUND.

Some granular synthesis functions are implemented in the
gran.lsp
library file. There are many variations and control
schemes one could adopt for granular synthesis, so it is impossible to
create a single universal granular synthesis function. One of the
advantages of Nyquist is the integration of control and synthesis
functions, and users are encouraged to build their own granular
synthesis functions incorporating their own control schemes. The
gran.lsp
file includes many comments and is intended to be a
useful starting point. Another possibility is to construct a score
with an event for each grain. Estimate a few hundred bytes per score
event (obviously, size depends on the number of parameters) and avoid
using all of your computer's memory.
sf-granulate(filename, grain-dur, grain-dev, ioi, ioi-dev, pitch-dev [, file-start, file-end]) [SAL]
(sf-granulate filename grain-dur grain-dev ioi ioi-dev pitch-dev [file-start file-end]) [LISP]
Stereo granulation can be obtained by using two copies of sf-granulate together. (See the gran-test function in gran.lsp.)
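A hypothetical call follows; the file name and all parameter values are purely illustrative, and the comments in gran.lsp should be consulted for real control schemes:

```sal
load "gran"
;; 50 ms grains (deviation 10 ms), onsets every 60 ms (deviation 20 ms),
;; no pitch deviation
play sf-granulate("a-snd.wav", 0.05, 0.01, 0.06, 0.02, 0.0)
```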
John Chowning developed voice synthesis methods using FM to simulate
resonances for his 1981 composition "Phone." He later recreated the
synthesis algorithms in Max, and Jorge Sastre ported these to SAL.
See demos/FM-voices-Chowning.sal
for more details.
Jorge Sastre contributed demos/atonal-melodies.sal
, code
that generates atonal melodies. You can find links to an example
score and audio file in the code and also at
http://algocompbook.com/examples.html
.
The midishow.lsp library has functions that can print the contents of MIDI files. This is intended as a debugging aid.

midi-show-file(file-name) [SAL]
(midi-show-file file-name) [LISP]

midi-show(the-seq [, out-file]) [SAL]
(midi-show the-seq [out-file]) [LISP]

The reverb.lsp
library implements artificial reverberation.
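For example (the input file name is hypothetical, and treating time as the reverberation time in seconds is an assumption):

```sal
load "reverb"
play reverb(s-read("dry.wav"), 3.0)
```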
reverb(snd, time) [SAL]
(reverb snd time) [LISP]

The dtmf.lsp
library implements DTMF encoding. DTMF is the
“touch tone” code used by telephones.
dtmf-tone(key, len, space) [SAL]
(dtmf-tone key len space) [LISP]
The key is a digit (a FIXNUM from 0 through 9) or the atom STAR or POUND. The duration of the tone is given by len (a FLONUM) and the tone is followed by silence of duration space (a FLONUM).

speed-dial(thelist) [SAL]
(speed-dial thelist) [LISP]
Dials a sequence of keys given by thelist (a LIST of keys as described above under dtmf-tone). The duration of each tone is 0.2 seconds, and the space between tones is 0.1 second. Use stretch to change the “dialing” speed.
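For example:

```sal
load "dtmf"
;; a single "5" keypress: 0.2 s tone followed by 0.1 s silence
play dtmf-tone(5, 0.2, 0.1)
;; "dial" a sequence of keys ending with star
play speed-dial(list(4, 1, 2, quote(star)))
```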
The spatial.lsp
library implements various functions for stereo
manipulation and spatialization. It also includes some functions for
Dolby Pro-Logic panning, which encodes left, right, center, and surround
channels into stereo. The stereo signal can then be played through
a Dolby decoder to drive a surround speaker array. This library has
a somewhat simplified encoder, so you should certainly test the
output. Consider using a high-end encoder for critical work. There
are a number of functions in spatial.lsp
for testing. See the
source code for comments about these.
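As a sketch (the input sound file is hypothetical), a source can be placed mostly to the left and slightly to the rear using pl-pan2d, one of the functions described below:

```sal
load "spatial"
;; x: 0 = left, 1 = right; y: 0 = front, 1 = rear
play pl-pan2d(s-read("fx.wav"), 0.2, 0.7)
```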
stereoize(snd) [SAL]
(stereoize snd) [LISP]

widen(snd, amt) [SAL]
(widen snd amt) [LISP]
The amt parameter may be a SOUND or a number.

span(snd, amt) [SAL]
(span snd amt) [LISP]
The amt parameter may be a SOUND or a number.

swapchannels(snd) [SAL]
(swapchannels snd) [LISP]

prologic(l, c, r, s) [SAL]
(prologic l c r s) [LISP]
The parameters are (monophonic) SOUNDs representing the front-left, front-center, front-right, and rear channels, respectively. The return value is a stereo sound, which is a Dolby-encoded mix of the four input sounds.

pl-left(snd) [SAL]
(pl-left snd) [LISP]
The input snd is a SOUND, encoded as the front left channel.

pl-center(snd) [SAL]
(pl-center snd) [LISP]
The input snd is a SOUND, encoded as the front center channel.

pl-right(snd) [SAL]
(pl-right snd) [LISP]
The input snd is a SOUND, encoded as the front right channel.

pl-rear(snd) [SAL]
(pl-rear snd) [LISP]
The input snd is a SOUND, encoded as the rear, or surround, channel.

pl-pan2d(snd, x, y) [SAL]
(pl-pan2d snd x y) [LISP]
pl-pan2d
provides not only left-to-right
panning, but front-to-back panning as well. The function
accepts three parameters: snd is the (monophonic) input SOUND
,
x is a left-to-right position, and y is a front-to-back position.
Both position parameters may be numbers or SOUND
s. An x value
of 0 means left, and 1 means right. Intermediate values map linearly
between these extremes. Similarly, a y value of 0 causes the sound
to play entirely through the front speaker(s), while 1 causes it to play
entirely through the rear. Intermediate values map linearly.
Note that, although there are usually two rear speakers in Pro-Logic systems,
they are both driven by the same signal. Therefore any sound that is
panned totally to the rear will be played over both rear speakers. For
example, it is not possible to play a sound exclusively through the
rear left speaker.

pl-position(snd, x, y, config) [SAL]
(pl-position snd x y config) [LISP]
Like pl-pan2d, it accepts a (monaural) input
sound as well as left-to-right (x) and front-to-back (y) coordinates,
which may be FLONUM
s or SOUND
s. A fourth parameter config
specifies the distance from listeners to the speakers (in meters). Current
settings assume this to be constant for all speakers, but this assumption
can be changed easily (see comments in the code for more detail).
There are several important differences between pl-position
and
pl-pan2d
. First, pl-position
uses a Cartesian coordinate
system that allows x and y coordinates outside of the
range (0, 1). This model assumes a listener position of (0,0). Each speaker
has a predefined position as well. The input sound's position,
relative to the listener, is given by the vector (x,y).

pl-doppler(snd, r) [SAL]
(pl-doppler snd r) [LISP]

The drum machine software in demos/plight
deserves further explanation.
To use the software, load the code by evaluating:
load "../demos/plight/drum.lsp"
exec load-props-file(strcat(*plight-drum-path*, "beats.props"))
exec create-drum-patches()
exec create-patterns()
Drum sounds and patterns are specified in the beats.props
file (or
whatever name you give to load-props-file
). This file
contains two types of specifications. First, there are sound file specifications.
Sound files are located by a line of the form:
set sound-directory = "kit/"
This gives the name of the sound file directory, relative to the
beats.props
file. Then, for each sound file, there should be a line of
the form:
track.2.5 = big-tom-5.wav
This says that on track 2, a velocity value of 5 means to play the sound file
big-tom-5.wav
. (Tracks and velocity values are described below.)
The beats.props
file contains specifications for all the sound files
in demos/plight/kit
using 8 tracks. If you make your own specifications
file, tracks should be numbered consecutively from 1, and velocities should be
in the range of 1 to 9.
The second set of specifications is of beat patterns. A beat pattern is given by a line in the following form:
beats.5 = 2--32--43-4-5---
The number after beats
is just a pattern number. Each pattern
is given a unique number. After the equal sign, the digits and dashes are
velocity values where a dash means “no sound.” Beat patterns should be
numbered consecutively from 1.
Once data is loaded, there are several functions to access drum patterns and
create drum sounds (described below). The demos/plight/drums.lsp
file
contains an example function plight-drum-example
to play some drums.
There is also the file demos/plight/beats.props
to serve as an
example of how to specify sound files and beat patterns.
drum(tracknum, patternnum, bpm) [SAL]
(drum tracknum patternnum bpm) [LISP]
Creates a drum track from pattern patternnum at tempo bpm; e.g., if patternnum is 10, the pattern is the one specified by beats.10. If the third character of this pattern is 3 and tracknum is 5, then on the third beat, play the soundfile assigned to track.5.3. This function returns a SOUND.

drum-loop(snd, duration, numtimes) [SAL]
(drum-loop snd duration numtimes) [LISP]
Repeats snd numtimes times, with each repetition offset by duration; a SOUND is returned.

length-of-beat(bpm) [SAL]
(length-of-beat bpm) [LISP]
Returns the duration of one beat, in seconds, at tempo bpm. The result is a FLONUM.
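Continuing the loading example above, a pattern can be played and looped like this (the track and pattern numbers are illustrative and must exist in your beats.props file; a 16-beat pattern length is assumed):

```sal
;; play pattern 5 on track 2 at 120 beats per minute
set d = drum(2, 5, 120.0)
;; loop it 4 times, each repetition offset by the 16-beat pattern length
play drum-loop(d, length-of-beat(120.0) * 16.0, 4)
```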