Goyder’s Line v.3 in the works

I’m currently in the process of revising the Goyder’s Line Max/MSP patch with the intention of streamlining the drawing process and adding some additional features to the interface.

In addition to this, the work will be expanded with the incorporation of a video component for a potential exhibition/performance of the work in the future. A summary and audio of v.2 (2015) can be found below.

A snippet of the revised Max/MSP patch

Goyder’s Line Version 2 release notes (accompanying the Maurilia Sound Studio Volume 4 edition):

“Goyder’s Line” – recorded in April 2015 – is a composition for Max/MSP, vocoder and effects modules. For its structure and form, the work references the geographical boundary (or isopleth) pioneered by George Goyder in the mid-1860s to denote and determine patterns of rainfall in South Australia. The work’s sonic character (derived from sawtooth waves and the feedback of a Moog MF-108M module) results in a continuous drone, consisting of rich, wavering harmonic tones and textures which are intended to be evocative of the colours, climate, topography and relative stillness of the landscapes that Goyder’s Line passes through.


Goyder’s Line (Maurilia Sound Studio Vol. 2) Preview

The second edition of my Maurilia Sound Studio imprint will consist of three versions of my sawtooth-vocoder work, Goyder’s Line (2014-2015), which is a drone-based work realised using Max/MSP, MicroKorg, Moog MF-108M and Electro-Harmonix Memory Man. The work is inspired by George Goyder’s line of South Australian rainfall, using the original line to determine the frequency of a sawtooth wave over a set duration, which is then routed as the carrier signal to the MicroKorg’s vocoder. A second line (based upon future climate projections) determines the frequency of the vocoder’s modulation signal. This results in instances of phase cancellation and harmonic overtones, with a predominant drone throughout (provided by the base notes of the vocoder).
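The mapping from a line of data to a sawtooth frequency over a set duration can be sketched outside Max/MSP. A minimal Python sketch, assuming linear interpolation across the data points and phase-accumulation synthesis (the function name and control values are my own illustration, not the original patch):

```python
def sawtooth_sweep(freq_points, duration_s, sr=44100):
    """Render a sawtooth whose frequency follows a control line
    (e.g. rainfall-line data mapped to Hz) across duration_s."""
    n = int(duration_s * sr)
    samples = []
    phase = 0.0
    for i in range(n):
        # Linearly interpolate the control line across the duration
        pos = i / n * (len(freq_points) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(freq_points) - 1)
        frac = pos - lo
        freq = freq_points[lo] * (1 - frac) + freq_points[hi] * frac
        # Phase-accumulation sawtooth in the range [-1, 1)
        phase = (phase + freq / sr) % 1.0
        samples.append(2.0 * phase - 1.0)
    return samples

# Hypothetical control values in Hz, rendered over a tenth of a second
carrier = sawtooth_sweep([104.0, 110.0, 98.0], duration_s=0.1)
```

In the actual work this signal would be routed to the vocoder as the carrier; a second call with the climate-projection line would produce the modulation signal.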

Review: Ross Bencina at EMU (15/8/2014)

Ross Bencina. Image source: http://blogs.adelaide.edu.au/emu/files/2014/08/ross2.jpg

The Electronic Music Unit at the University of Adelaide hosted Melbourne-based composer and software developer Ross Bencina on Friday night. Bencina is the creator of the music software program AudioMulch, which combines a patching paradigm with extended applications in composition, interfacing and live performance.

Adelaide-based composer Christian Haines opened, presenting an iterative feedback work clearly influenced by Alvin Lucier’s seminal I am sitting in a room (1969). Haines’ approach was similar to Lucier’s, albeit confined to a relatively straightforward patch on his laptop – I learned after the performance that Haines was using AudioMulch to realise this work. With Haines seated behind the laptop, an audio snippet of a familiar voice began, uttering a statement which sounded at once rhetorical, pretentious and yet vaguely knowledgeable. Yes, it must be Kevin Rudd – the former Prime Minister of Australia; an erudite individual with a formidable vocabulary but a frequent partiality to impenetrable language for its own sake. The statement, which alluded to the anticipated outcomes of an enquiry, repeated a few times before any change was noticeable. Then, gradually, the voice became reverberant as the audio file underwent an iterative process through a series of crude reverberation modules in the software patch. The native resonant frequencies of those modules became increasingly apparent, until only the rhythmic quality of the speech was perceivable amidst a texture of smeared resonances and harmonics. Eventually the speech’s rhythm was lost altogether, and at this point (or shortly thereafter) the performance was brought to a gentle fade. Though this technical process of resonant feedback is well-worn, the subject of the work brought an enjoyable dimension to an otherwise familiar performance approach. There was a pleasure in listening to such a pretentious statement – uttered by an equally pretentious individual – being periodically annihilated by the iterative procedure of a simple software patch.

Bencina took the stage with a set-up of laptop, MIDI controller, pedalboard and a metal bowl fitted with two contact mics. The first half of his performance consisted of a work solely for laptop and MIDI controller. It began with very short blips of noise rebounding between the left and right speakers, slowly shifting back and forth between sparse and complex densities. A frequency spectrum steadily became apparent as the blips lengthened in duration and discrete pitches could be heard. Amidst the mid-to-high-end activity, low-end pulses gradually filled the space – assisted by a subwoofer – with Bencina carefully managing the pacing and overall dynamic with minute adjustments to the faders of his MIDI controller. A musical quality appeared as the duration of what were now obviously tones lengthened further, and the timbre of plucked strings could be heard forming rich and detailed harmonies. It was an elegant and well-executed performance, with careful and thoughtful consideration given to the material and its evolution over a well-paced duration.

The second part of Bencina’s set incorporated the pedalboard, metal bowl, contact mics and laptop. Bencina explained that this part of his set would be divided into three sections, and began the first by holding the metal bowl (with contact mics attached) and gently dropping what appeared to be very small ball bearings into it. With the bowl tilted slightly towards the audience, the contact mics picked up the minute movements of the ball bearings rolling and eventually settling. As this activity was recorded into the laptop, processed sounds emerged emphasising the movement of the ball bearings in the bowl – sonically magnified in great detail, with rumbling, metallic textures. Bencina would occasionally tilt the bowl from side to side to assist the movement of the ball bearings, still recording and superimposing layers of live and processed sound on top of each other. By design of the software patch, the sound built to a crescendo of textures before dramatically fading back to the unprocessed sound of the ball bearings’ movement in the bowl. This development was unexpected and very impressive; as with the first part of his set, Bencina’s understanding of pacing and structure was remarkable. The second section began with Bencina again holding the bowl (with the contact mics in a slightly different position) and activating a couple of switches on the pedalboard before carefully dropping what appeared to be dried beans into the bowl. The processed effect was much more apparent this time around, with the initial impact of the beans striking the bowl followed by an eruption of noise and deep resonances. Similar to the first section, these sounds progressively layered against each other.

For the final section Bencina put aside the bowl, removed the contact mics and held one in each hand. Once again activating several switches on the pedalboard, he proceeded to gently squeeze, tap and scrape the surface of the contact mics, activating a rich musical texture which evaded precise identification. As with the previous two sections, the layering and creation of dense textures was the main approach here; however, unlike the performances with the ball bearings and the beans, Bencina’s micro-gestures of squeezing, tapping and scraping resulted in such diverse and unexpected behaviours in the resulting texture that it became apparent he had considerably more control over the expressive and dynamic elements of this process.

Overall, Bencina’s set was a hugely enjoyable performance of live electro-acoustics, frequently demonstrating a masterful ability for gesture, timing and the morphology of sound.

Weekly Beats 2014 #19: 5/3 “Goyder’s Line and Several Distant Figures”

The Vocoder is being controlled by two sawtooth waves generated in Max/MSP. These sawtooth waves are routed to the Vocoder as the Carrier and Modulation signals. 

The frequency value for the sawtooth waves is produced by a light sensor which reads the fluctuating (single) value of reflected light in my studio space. Once received by a Serial~ object in Max/MSP, the first sawtooth frequency value remains unchanged whilst the second frequency value (Modulation signal) is multiplied by 3.51, resulting in a scaled frequency value.

E.g. if the Carrier’s frequency value is 104 Hz, the Modulation frequency value will be 104 * 3.51 ≈ 365 Hz.

On the MicroKorg, a perfect fifth is held (E1-B1) and the incoming Carrier and Modulation signals undergo processing via an Electro-Harmonix Memory Man set to a reverberant delay setting with feedback which is gradually increased.
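The Carrier/Modulation scaling described above amounts to a single multiplication. A minimal Python sketch of the mapping (the function name `vocoder_frequencies` is hypothetical; the 3.51 ratio and 104 Hz example are from the post):

```python
def vocoder_frequencies(sensor_hz, ratio=3.51):
    """Map one light-sensor frequency value to the two sawtooth
    frequencies: the Carrier passes through unchanged, while the
    Modulator is scaled up by the fixed ratio."""
    carrier = sensor_hz
    modulator = sensor_hz * ratio
    return carrier, modulator

# The worked example from the post: a 104 Hz sensor reading
c, m = vocoder_frequencies(104.0)
# c = 104.0 Hz carrier; m = 365.04 Hz modulator (~365 Hz)
```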

MicroKorg Vocoder, light control and feedback.
Weekly Beats 2014 #18: 5/2 “Goyder’s Line And A Shadow Passes”

Freetronics light sensor

For the past couple of weeks, I’ve been experimenting with a light sensor and an Arduino Eleven board to control parameters in Max/MSP – at this stage just simple stuff, like controlling the frequency of a waveform.

At the same time, I’ve been exploring the vocoder of my Microkorg, using basic Carrier and Modulation inputs to affect the vocoder’s oscillator. Last week’s submission (Goyder’s Line And A Shadow Passing) utilised two rising sine waves as the Carrier and Modulation inputs. Whilst the result was as I anticipated – very subtle and imperceptible – it was later pointed out to me that there would probably not be any actual effect on the vocoder’s oscillator, since the Modulator imposes its harmonic characteristics on the Carrier, and since we’re talking about two sine waves, well… this should have occurred to me. I do have a tendency to get distracted by technology and overlook the basics from time to time, and I think I’ll be re-learning the rudiments of all the things to my grave.
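The point about sine versus sawtooth sources can be shown numerically: a sine wave carries energy only at its fundamental, while a sawtooth’s partials roll off roughly as 1/n, giving the vocoder’s filter bank harmonics to shape. A minimal Python sketch (not part of the patch – the naive one-period DFT and `partial_magnitudes` helper are my own illustration):

```python
import math

def partial_magnitudes(wave, n_partials):
    """Naive DFT over one period: magnitude of each harmonic partial,
    normalised so a unit-amplitude sine reads 0.5 at its fundamental."""
    n = len(wave)
    mags = []
    for k in range(1, n_partials + 1):
        re = sum(wave[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(wave[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

N = 512
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]
saw = [2.0 * (i / N) - 1.0 for i in range(N)]

# A sine has energy only at the fundamental; the sawtooth's partials
# decay gradually, so a sawtooth Modulator actually shapes the Carrier.
sine_mags = partial_magnitudes(sine, 5)
saw_mags = partial_magnitudes(saw, 5)
```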

So, this time around I used two sawtooth waves as the Carrier and Modulator (harmonic range = good!) and raised the Modulation frequency above the Carrier frequency using a simple Max multiplication object ([*]). I’ll go into further detail with the Max/MSP patch in a later post.

A perfect fifth is held on the Microkorg (E2-B2) and, as the light level changes, the relationship between the Carrier and Modulation frequencies shifts, resulting in a change to the overall structure of the sound heard through the vocoder. Since I recorded this track in the late afternoon (the full version is 20 minutes long), the light level gradually falls as the reflected light read by the sensor diminishes.