In Nyquist, all functions are subject to transformations. You can think of transformations as additional parameters to every function, and functions are free to use these additional parameters in any way. The set of transformation parameters is captured in what is referred to as the transformation environment. (Note that the term environment is heavily overloaded in computer science. This is yet another usage of the term.)
Behavioral abstraction is the ability of functions to adapt their behavior to the transformation environment. This environment may contain certain abstract notions, such as loudness, stretching a sound in time, etc. These notions will mean different things to different functions. For example, an oscillator should produce more periods of oscillation in order to stretch its output. An envelope, on the other hand, might only change the duration of the sustain portion of the envelope in order to stretch. Stretching a sample could mean resampling it to change its duration by the appropriate amount.
Thus, transformations in Nyquist are not simply operations on signals. For example, if I want to stretch a note, it does not make sense to compute the note first and then stretch the signal. Doing so would cause a drop in the pitch. Instead, a transformation modifies the transformation environment in which the note is computed. Think of transformations as making requests to functions. It is up to the function to carry out the request. Since the function is always in complete control, it is possible to perform transformations with “intelligence”; that is, the function can perform an appropriate transformation, such as maintaining the desired pitch and stretching only the “sustain” portion of an envelope to obtain a longer note.
The transformation environment consists of a set of special variables. These variables should not be read directly and should never be set directly by the programmer. Instead, there are functions to read them, and they are automatically set and restored by transformation operators, which will be described below.
The transformation environment consists of the following elements. Although each element has a “standard interpretation,” the designer of an instrument or the composer of a complex behavior is free to interpret the environment in any way. For example, a change in *loud* may change timbre more than amplitude, and *transpose* may be ignored by percussion instruments:

*warp*
*warp* is interpreted as a function from logical (local score) time to physical (global real) time. Do not access *warp* directly. Instead, use local-to-global(t) to convert from a logical (local) time to real (global) time. Most often, you will call local-to-global(0). Several transformation operators operate on *warp*, including at (@), stretch (~), and warp. See also get-duration() and get-warp().

*loud*
*loud* is a loudness level expressed in decibels. Do not access *loud* directly. Instead, use get-loud() to get the current value of *loud* and either loud or loud-abs to modify it.

*transpose*
*transpose* is a pitch transposition expressed in half steps. Do not access *transpose* directly. Instead, use get-transpose() to get the current value of *transpose* and either transpose or transpose-abs to modify it.

*sustain*
*sustain* is an articulation factor: staccato playing might be expressed with a *sustain* of 0.5, while very legato playing might be expressed with a *sustain* of 1.2. Specifically, *sustain* stretches the duration of notes (sustain) without affecting the inter-onset time (the rhythm). Do not access *sustain* directly. Instead, use get-sustain() to get the current value of *sustain* and either sustain or sustain-abs to modify it.

*start*
*start* has a precise interpretation: no sound should be generated before *start*. This is implemented in all the low-level sound functions, so it can generally be ignored. You can read *start* directly, but use extract or extract-abs to modify it. Note: due to some internal confusion between the specified starting time and the actual starting time of a signal after clipping, *start* is not fully implemented.

*stop*
Like *start*, *stop* has a precise interpretation: no sound should be generated after this time. *start* and *stop* allow a composer to preview a small section of a work without computing it from beginning to end. You can read *stop* directly, but use extract or extract-abs to modify it. Note: due to some internal confusion between the specified starting time and the actual starting time of a signal after clipping, *stop* is not fully implemented.

*control-srate*
*control-srate* is the sample rate of control signals. You can read *control-srate* directly, but use control-srate or control-srate-abs to modify it.

*sound-srate*
*sound-srate* is the sample rate of audio signals. You can read *sound-srate* directly, but use sound-srate or sound-srate-abs to modify it.
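To make this concrete, the accessor functions above can be called from inside any behavior. The following sketch (the function name show-env is our own illustration, not part of Nyquist) prints what a behavior sees in the environment, first untransformed and then under a stretch:

```
; hypothetical helper: report the environment as this behavior sees it
define function show-env()
  begin
    print local-to-global(0) ; real (global) start time
    print get-duration(1)    ; stretched duration of a nominal 1
    print get-loud()         ; current value of *loud* in dB
    return s-rest(1)         ; return silence so this remains a behavior
  end

play show-env()       ; get-duration(1) reports 1
play (show-env() ~ 3) ; get-duration(1) reports 3: the request is visible
```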
Previous examples have shown the use of seq, the sequential behavior operator. We can now explain seq in terms of transformations. Consider the simple expression:

play seq(my-note(c4, q), my-note(d4, i))
The idea is to create the first note at time 0, and to start the next
note when the first one finishes. This is all accomplished by manipulating
the environment. In particular, *warp*
is modified so that what is
locally time 0 for the second note is transformed, or warped, to the logical
stop time of the first note.
One way to understand this in detail is to imagine how it might be executed: first, *warp* is set to an initial value that has no effect on time, and my-note(c4, q) is evaluated. A sound is returned and saved. The sound has an ending time, which in this case will be 1.0 because the duration q is 1.0. This ending time, 1.0, is used to construct a new *warp* that has the effect of shifting time by 1.0. The second note is evaluated, and will start at time 1.0. The sound that is returned is now added to the first sound to form a composite sound, whose duration will be 2.0. *warp* is restored to its initial value.
Notice that the semantics of seq can be expressed in terms of transformations. To generalize, the operational rule for seq is: evaluate the first behavior according to the current *warp*. Evaluate each successive behavior with *warp* modified to shift the new note's starting time to the ending time of the previous behavior. Restore *warp* to its original value and return a sound which is the sum of the results.
In the Nyquist implementation, audio samples are only computed when they are
needed, and the second part of the seq
is not evaluated until the
ending time (called the logical stop time) of the first part. It is still
the case that when the second part is evaluated, it will see *warp*
bound to the ending time of the first part.
A language detail: even though Nyquist defers evaluation of the second part of the seq, the expression can reference variables according to ordinary Lisp/SAL scope rules. This is because the seq captures the expression in a closure, which retains all of the variable bindings.
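The logical stop time of a sound can also be set explicitly with set-logical-stop, which makes the rule above easy to observe. In this sketch, the first tone sounds for its full duration, but the second tone enters after only 0.5 seconds, because that is the first tone's logical stop time:

```
; overlap tones in a seq by shortening the logical stop time
play seq(set-logical-stop(osc(c4), 0.5), osc(d4))
```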
Another operator is sim, which invokes multiple behaviors at the same time. For example,

play 0.5 * sim(my-note(c4, q), my-note(d4, i))

will play both notes starting at the same time.
The operational rule for sim is: evaluate each behavior at the current *warp* and return the sum of the results. (In SAL, the sim function applied to sounds is equivalent to adding them with the infix + operator.) The following section illustrates two concepts: first, a sound is not a behavior, and second, the sim operator and the at transformation can be used to place sounds in time.
The following example loads a sound from a file in the current directory and stores it in a-snd:

; load a sound
set a-snd = s-read(strcat(current-path(), "demo-snd.aiff"))
; play it
play a-snd
One might then be tempted to write the following:
play seq(a-snd, a-snd) ;WRONG!
Why is this wrong? Recall that seq works by modifying *warp*, not by operating on sounds. So, seq will proceed by evaluating a-snd with different values of *warp*. However, the result of evaluating a-snd (a variable) is always the same sound, regardless of the environment; in this case, the second a-snd should start at time 0.0, just like the first. After the first sound ends, Nyquist is unable to “back up” to time zero, so this will in fact play two sounds in sequence, but that is a result of an implementation detail rather than correct program execution. A future version of Nyquist might (correctly) stop and report an error when it detects that the second sound in the sequence has a real start time before the requested one.
How then do we obtain a sequence of two sounds properly? What we really need here is a behavior that transforms a given sound according to the current transformation environment. That job is performed by cue. For example, the following will behave as expected, producing a sequence of two sounds:

play seq(cue(a-snd), cue(a-snd))

This example is correct because the second expression will shift the sound stored in a-snd to start at the end time of the first expression.
The lesson here is very important: sounds are not behaviors! Behaviors are computations that generate sounds according to the transformation environment. Once a sound has been generated, it can be stored, copied, added to other sounds, and used in many other operations, but sounds are not subject to transformations. To transform a sound, use cue, sound, or control. The differences between these operations are discussed later. For now, here is a “cue sheet” style score that plays 4 copies of a-snd:
; use sim and at to place sounds in time
play sim(cue(a-snd) @ 0.0,
         cue(a-snd) @ 0.7,
         cue(a-snd) @ 1.0,
         cue(a-snd) @ 1.2)
The second concept introduced by the previous example is the @ operation, which shifts the *warp* component of the environment. For example,

cue(a-snd) @ 0.7

can be explained operationally as follows: modify *warp* by shifting it by 0.7 and evaluate cue(a-snd). Return the resulting sound after restoring *warp* to its original value. Notice how @ is used inside a sim construct to locate copies of a-snd in time. This is the standard way to represent a note-list or a cue-sheet in Nyquist.
This also explains why sounds need to be cue'd in order to be shifted in time or arranged in sequence. If this were not the case, then sim would take all of its parameters (a set of sounds) and line them up to start at the same time. But cue(a-snd) @ 0.7 is just a sound, so sim would “undo” the effect of @, making all of the sounds in the previous example start simultaneously, in spite of the @! Since sim respects the intrinsic starting times of sounds, a special operation, cue, is needed to create a new sound with a new starting time.
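A short experiment makes the distinction concrete. Assuming a-snd holds a sound as above, only the cue'd version actually moves in time:

```
play cue(a-snd) @ 2.0 ; a behavior: this copy starts at time 2.0
play a-snd @ 2.0      ; just a sound: the shift is ignored and it
                      ; keeps its intrinsic start time
```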
In addition to At (denoted in SAL by the @ operator), the Stretch transformation is very important. It appeared in the introduction, and it is denoted in SAL by the ~ operator (or in LISP by the stretch special form). Stretch also operates on the *warp* component of the environment. For example,

osc(c4) ~ 3

does the following: modify *warp*, scaling the degree of “stretch” by 3, and evaluate osc(c4). The osc behavior uses the stretch factor to determine the duration, so it will return a sound that is 3 seconds long. Restore *warp* to its original value. Like At, Stretch only affects behaviors. a-snd ~ 10 is equivalent to a-snd because a-snd is a sound, not a behavior. Behaviors are functions that compute sounds according to the environment and return a sound.
Transformations can be combined using nested expressions. For example,

sim(cue(a-snd), loud(6.0, cue(a-snd) @ 3))

scales the amplitude as well as shifts the second entrance of a-snd. Why use loud instead of simply multiplying a-snd by some scale factor? Using loud gives the behavior the chance to implement the abstract property loudness in an appropriate way, e.g. by including timbral changes. In this case, the behavior is cue, which implements loudness by simple amplitude scaling, so the result is equivalent to multiplication by db-to-linear(6.0).
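That equivalence can be written out directly. Because cue interprets *loud* as plain amplitude scaling, the two expressions below should produce the same result (6 dB corresponds to roughly a factor of 2):

```
play loud(6.0, cue(a-snd))          ; loudness handled by the cue behavior
play db-to-linear(6.0) * cue(a-snd) ; explicit amplitude scaling
```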
Transformations can also be applied to groups of behaviors:
loud(6.0, sim(cue(a-snd) @ 0.0, cue(a-snd) @ 0.7))
Groups of behaviors can be named using define (we already saw this in the definitions of my-note and env-note). Here is another example of a behavior definition and its use. The definition has one parameter:
define function snds(dly)
  return sim(cue(a-snd) @ 0.0,
             cue(a-snd) @ 0.7,
             cue(a-snd) @ 1.0,
             cue(a-snd) @ (1.2 + dly))

play snds(0.1)
play loud(0.25, snds(0.3) ~ 0.9)
In the last line, snds is transformed: the transformations will apply to the cue behaviors within snds. The loud transformation will scale the sounds by 0.25, and the stretch (~) will apply to the shift (@) amounts 0.0, 0.7, 1.0, and 1.2 + dly. The sounds themselves (copies of a-snd) will not be stretched because cue never stretches sounds.
Section Transformations describes the full set of transformations.
In Nyquist, behaviors are the important abstraction mechanism. A behavior represents a class of related functions or sounds. For example, a behavior can represent a musical note. When a note is stretched, it usually means that the tone sustains for more oscillations, but if the “note” is a drum roll, the note sustains by more repetitions of the component drum strokes. The concept of sustain is so fundamental that we do not really think of different note durations as being different instances of an abstract behavior, but in a music programming language, we need a way to model these abstract behaviors. As the tone and drum roll examples show, there is no one right way to “stretch,” so the language must allow users to define exactly what it means to stretch. By extension, the Nyquist programmer can define how all of the transformations affect different behaviors.
To make programming easier, almost all Nyquist sounds are constructed from primitive behaviors that obey the environment in obvious ways: Stretch transformations make things longer and At transformations shift things in time. But sometimes you have to override the default behaviors. Maybe the attack phase of an envelope should not stretch when the note is stretched, or maybe when you stretch a trill, you should get more notes rather than a slower trill.
To override default behaviors, you almost always follow the same programming pattern: first, capture the environment in a local variable; then, use one of the absolute transformations to “turn off” the environment's effect and compute the sound as desired. The following example creates a very simple envelope with a fixed rise time to illustrate the technique.
define function two-phase-env(rise-time)
  begin
    with dur = get-duration(1)
    return pwl(rise-time, 1, dur) ~~ 1.0
  end
To “capture the environment in a local variable,” a with construct is used to create the local variable dur and set it to the value of get-duration(1), which answers the question: “If I use the environment to stretch something whose nominal duration is 1, what is the resulting duration?” (Since time transformations can involve continuous time deformations, this question is not as simple as it may sound, so please use the provided function rather than peeking inside the *warp* structure and trying to do it yourself.) Next, we “turn off” stretching using the stretch-abs form, which in SAL is denoted by the ~~ operator.
Finally, we are ready to compute the envelope using pwl. Here, we use absolute durations. The first breakpoint is at rise-time, so the attack time is given by the rise-time parameter. The pwl decays back to zero at time dur, so the overall duration matches the duration expected from the environment encountered by this instance of two-phase-env. Note, however, that since the pwl is evaluated in a different environment established by ~~, it is not stretched (or perhaps more accurately, it is stretched by 1.0). This is good because it means rise-time will not be stretched, but we must be careful to extend the envelope to dur so that it has the expected duration.
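To see the fixed rise time in action, apply two-phase-env to a tone and then stretch the whole expression. (This usage example is ours; it assumes the definition above has been loaded.)

```
; the attack stays 0.05 s; only the rest of the envelope grows
play two-phase-env(0.05) * osc(c4)       ; nominal 1-second note
play (two-phase-env(0.05) * osc(c4)) ~ 4 ; 4-second note, same attack
```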
The global environment contains *sound-srate* and *control-srate*, which determine the sample rates of sounds and control signals. These can be overridden at any point by the transformations sound-srate-abs and control-srate-abs; for example,

sound-srate-abs(44100.0, osc(c4))

will compute a tone using a 44.1 kHz sample rate even if the default rate is set to something different.
As with other components of the environment, you should never change *sound-srate* or *control-srate* directly.
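You can confirm the effect of these transformations by inspecting the sample rate of the resulting sound with snd-srate (reading *sound-srate* directly is also allowed, as noted above):

```
print *sound-srate*                                ; the current default rate
print snd-srate(sound-srate-abs(22050.0, osc(c4))) ; prints 22050
```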
The global environment is determined by two additional variables: *default-sound-srate* and *default-control-srate*. You can add lines like the following to your init.lsp file to change the default global environment:
(setf *default-sound-srate* 44100.0)
(setf *default-control-srate* 1102.5)
You can also do this using preferences in NyquistIDE. If you have already started Nyquist and want to change the defaults, the preferences or the following functions can be used:
exec set-control-srate(1102.5)
exec set-sound-srate(22050.0)
These modify the default values and reinitialize the Nyquist environment.