Thursday, 20 November 2008

Creative Computing - Electroacoustic - Performance

EDIT 2012 : has been rejigged see

Inspired by modular synthesisers and Morton Subotnick's ghost boxes, this project is a set of modular processes that can be routed in any which way (which can occasionally involve oscillators with an output range of +3 to +76).

Ever excited by feedback, the performance piece is inspired by Robert Ashley's The Wolfman, utilising much lower volumes but generally harsher tones.

Unfortunately I found it difficult to create a standalone application. After I finally nutted out various issues, such as the correct use of capitalisation when referring to filenames, the Collective Editor decided to express a stream of previously unseen errors - as such, the existing standalone is an early version with a number of non-working components.

Performance piece :
Project documentation :

Here are the Max/MSP patches :

and the slightly working standalone version - OS X

Haines, Christian. 2008. CC2 - ElectroAcoustic - Performance.pdf

AA2 - sound design project - game audio

Rocks'N'Diamonds - Game Audio.

RocksNDiamonds (RnD) is an emulation of a 1980s aesthetic, an amalgamation of several games rolled into one. I have reworked one of the tutorial levels. The level itself has been slightly modified to make gameplay much easier; otherwise certain areas of the map, and therefore sonic environments, would be time consuming and prone to inadvertent player death.

Sound creation aesthetics were governed by a high-quality lo-fi sound approach. Using the "Analog" instrument in Ableton Live 7 for the majority of sounds made such an approach easy: simple oscillator sounds shaped with automation, plus a host of integrated effects (such as reverb and bit-rate reduction) for on-the-spot creation.

For some sounds which I felt needed a richer tonality (such as the player digging, where I took a real-world mentality to sound creation), foley was needed: generally putting a sound snippet into the Live "Sampler" plug-in and making use of its integrated settings, automation and other available plug-ins.

- edited version of Process.doc

Associated production files here :


Haines, Christian. 2008. AA2 - Sound Design Project - Game Audio.pdf

Monday, 27 October 2008

cc2 - week 10 - delay

Made a fairly basic delay with feedback, wet/dry, delay time and buffer size settings.

Incorporated an input for modulation which adds incoming signal to the internal settings... it works quite well.
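For the record, the signal flow of that basic delay can be sketched in a few lines of Python (a conceptual sketch only, not the Max patch itself; the buffer size, feedback and mix values are illustrative):

```python
# Minimal feedback delay sketch: a circular buffer of delay_samples,
# with feedback and a wet/dry mix. All parameter values are illustrative.

def feedback_delay(signal, delay_samples=4, feedback=0.5, mix=0.5):
    buffer = [0.0] * delay_samples   # the delay line (buffer size setting)
    pos = 0                          # read/write position in the circular buffer
    out = []
    for x in signal:
        delayed = buffer[pos]                      # read the delayed sample
        buffer[pos] = x + delayed * feedback       # write input plus feedback
        pos = (pos + 1) % delay_samples            # advance circularly
        out.append(x * (1 - mix) + delayed * mix)  # wet/dry mix
    return out

# An impulse makes the echo train visible: the first echo arrives after
# delay_samples, and each repeat is scaled by the feedback amount.
print(feedback_delay([1.0] + [0.0] * 11, delay_samples=4, feedback=0.5, mix=1.0))
```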

I played with using poly~, and got it to take a user-input number of taps, with that number as a multiplier for delay time. Each tap was at full level, or again dependent on the number of taps; nothing that the other delay didn't achieve.

I couldn't quite think of a useful way to use it. I would like to set individual tap times (which seems to necessitate a pre-determined number of input variables, hence taps), which doesn't seem much more useful than not using poly~, except maybe a slight reduction in objects used if you turn unused taps off.

Curiosity of the week :
running a float through a [-~ 1] object and getting the output to wrap around zero.


Haines, Christian. 2008. 10. Processing - Delay - CC2 - Music and Signal Processing.pdf

Cycling'74 2006, MSP Tutorials and Topics,

Saturday, 25 October 2008

aa2 - week 11 - game music

Creating game music was quite fun. The main challenge was to ensure repetition maintained interest (although this is relative to the amount of time one spends on the same level !!).

I created two separate tracks, one fast and one slow, for comparative consideration. Both work reasonably well on the level I tested with - although individual instrument levels could be slightly modified for transparency of game sounds.

Other thoughts: the music shipped with the game is incredibly lo-fi (8-bit, 22 kHz mono), and while I added various forms of retro technique, the new sounds are a bit nice. Whether this will matter to someone with no point of comparison is a potential question. I'm thinking that using low bit-rate contemporary music compression (i.e. MP3) may add an interesting sheen, but this will be explored later (the MP3 as linked, at 128 kbps, may be appropriate).

Upon reconsideration of the task set, I tried to extend the slow piece beyond a 9-second loop but found it purposeless. Because of their shortness I have included the two tracks, each looped twice.

zip file at;


Haines, Christian. 2008. AA2 - Week 11 - Game Audio Design - Music - Planner.pdf

Hannigan, James. 2004, "Changing Our Tune", 2007,

Brandon, Alexander. 2005, "Chapter 7 - Ideal Production", Audio for games: planning, process, and production, New Riders Games, Berkeley, Calif. pp 149 - 152.

Childs, G. W. 2006, Creating Music and Sound for Games, Thomson Course Technology. pp 246 - 252.

Irish, Dan. 2005, The Game Producer's Handbook, Course Technology

Friday, 24 October 2008

mtf - week 11 - composition

audio-visual presentations of collaborations between sound and video artists.


mmmm, not a lot to say really.


Whittington, Steven. Steven Whittington plays various sections of a DVD. Workshop presented at EMU space, level 5 Schultz building, University of Adelaide, 23 October 2008.

Sunday, 19 October 2008

cc2 - week09 - processing - FFT

This week I extended the fft crossover to encompass 5 bands.

I was interested in extracting frequency specific information for use as a controller but don't quite understand what information the fft process actually creates.

I found it interesting that you can create dead spots in the crossover by setting them in the middle of the same bin, so I have done a fix on this - although it does require the DSP to be switched on as it uses the FFT fundamental frequency.
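As a side note, the "FFT fundamental frequency" is just the analysis bin width: sample rate divided by FFT size. A hypothetical sketch of the dead-spot fix, snapping a crossover frequency onto the nearest bin boundary (the 44.1 kHz / 1024-point figures are assumptions, not the patch's actual settings):

```python
# Each FFT bin spans sr / fft_size Hz (the "FFT fundamental frequency").
# A crossover point set in the middle of a bin can leave a dead spot
# between bands; snapping it to the nearest bin boundary avoids this.
# The 44.1 kHz / 1024-point values are illustrative assumptions.

def bin_width(sr, fft_size):
    return sr / fft_size

def snap_to_bin(freq, sr, fft_size):
    width = bin_width(sr, fft_size)
    return round(freq / width) * width  # nearest bin boundary

print(bin_width(44100, 1024))          # about 43.07 Hz per bin
print(snap_to_bin(1000, 44100, 1024))  # 1000 Hz snaps to the 23rd boundary, ~990.5 Hz
```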

zip at

Haines, Christian. 2008. 9. Processing - FFT - CC2 - Music and Signal Processing.pdf

Cycling'74. 2006. MSP Tutorials and Topics.

aa2 - week 10 - game audio design - ambience

This week's assignment relates to creating in game ambience.

Again sequencing in Ableton Live, I set up the Analog synthesiser with some slight automation to create a spooky windy castle setting, and a deserted computer room with a plaintive alarm signal.

This week's challenge was to make sure they were loopable, which meant going back into Live and adjusting one of the LFOs on the computer room.

The pieces have both been normalised to -1 dB, but both would play back significantly lower than other sounds. They may also require some equalisation/compression, but I feel they need proper contextualising before finalising that.

This final mixdown features one loop of each piece: the computer room at about 12 seconds and the windiness at about 20 seconds.

link to files


Haines, Christian. 2008. AA2 - Week 10 - Game Audio Design - Ambience - Presentation.pdf

Bernstein, Daniel. 1997, Creating an Interactive Audio Environment, Gamasutra, 2006,
(Available Online)

Sonnenschein, David. 2001, Sound design: the expressive power
of music, voice, and sound effects in cinema. pp173-174, 182-189. Michael Wiese Productions. Seattle, Washington.


Friday, 17 October 2008

forum - week 10 - honours presentation

once again the images are tenuously related to textual content.

this week's award for most exciting presentation goes to Martin Victory for his sonification of network packets !!!

whilst the presentation itself lacked any excitement and razzle dazzle (where were the dancing bears ?), it was the choice of material to sonify that won the day.

i am not particularly au fait with network information, but am aware that computers love chatting to each other and do so within a regular framework. The variety of networking possibilities offers a myriad of data, and hence a diversity of sonic outcomes (insert comparison of University ethernet vs. LAN party gaming mayhem !!).

I would quite like to read the section on sonification, which will apparently take up half of his thesis. It is the implementation of sonification that really separates the wheat from the chaff, the meat from the poison, the tea from the biscuits.

All music comes from the arbitrary delineation that is choice (shall I modulate to Eb minor here... or... G# Major ?????); hence the particular application of process (subconscious or conscious) proves the artistic merit. From the limited display of Martin's application, I thought it most definitely supplied the necessary cheese for my biscuit !!

Victory, Martin. "Honours Presentation". Workshop presented at EMU space, level 5 Schultz building, University of Adelaide, 16th of October 2008.

Sunday, 12 October 2008

forum - week ? (i'm confused with all my subjects being skew-whiff)

Student Presentations = 3rd year goodness..

favourite this week was Mr Ben Probert's Max patch Fantastical Metal. I enjoyed it immensely for its potential !! The only bad thing is that the patch doesn't work on my computer :(

i think in particular i liked the arpeggiator function, which gave a slightly jerky but consistent rhythm.
i just don't know what else to say ; it was just nice !!!

That said, everyone else was boring and tedious and i can't be bothered even remembering what went on !!!

OK, that last bit was a joke :) Luke made nice timbres, and the sequencing element showed potential if he gets it finalised. Will's game sound made me really want to play Half-Life 2, as I was a big fan of Half-Life and Counter-Strike !!! David weirded me out with his sampled General MIDI sounds, although the timbre of the melodic instrument was very pleasant - and he seemed to have done far too much programming on his patch !!!

This takes me to wax briefly on the imbalance that is Music Technology.
Why is it that 1/3 of a core subject can take more time than 2 other subjects in their entirety ???

Hmmmm, I'm looking forward to our consumer response surveys this year.

Music Technology Forum - Semester 2, Week 9 - Third Year presentations. University of Adelaide, South Australia, 18/09/2008.

AA2 - week 09 - game audio design assets

I created these four sounds in Ableton Live using the in-program synthesisers, shaping the internal synth envelopes and mixer envelopes, and using equalisation and bit/sample-rate reduction.
Die features a sequence of notes, whereas the others are single notes.

I went in with the idea of making reasonably high fidelity sounding assets, but found myself with fairly dirty(ish) sounds. Perhaps more in tune with my subconscious aesthetic of the game.

The sounds are : acid_drip (acid dripping), die (player dying), nut_break (breaking a nut), nut_push (pushing a nut).

zip of sound file and asset list :


Haines, Christian. 2008. AA2 - Week 9 - Game Audio Design - Assets - Planner.pdf

"Chapter 5 - Sound Design: Basic Tools and Techniques" and "Chapter 6 - Advanced
Tools and Techniques". Childs, G. W. 2006, Creating Music and Sound for Games,
Thomson Course Technology.

Kelleghan, Fiona. 1996, Sound Effects in SF and Horror Films, 2006,

cc2 - week 08 - MIDI and MSP

This week's assignment was a bit of a mental challenge. I had to overcome the resistance I had to putting 128 connections, objects etc in an object and all the associated joins.

My first attempt was not appropriate but very simple and had only one output. It was wrong.

My last attempt had 128 outputs, Hurrah !!!

Prior to utilising a poly~ in which to put the ctlin object (thanks to Christian's Hint of the Day !!),
I was enjoying my Xenakis-looking patches. So much so that I'd like to make a patch with so many lines in it you can see zoo animals !!

One other thing I enjoyed from this week was editing the patch's .txt file. I did this to name the objects and outputs; it was much easier than naming each object in Max.


Haines, Christian. 2008. 8. MIDI and MSP - CC2 - Music and Signal Processing.pdf

Sunday, 5 October 2008

cc2 - week 07 - sampling (2) Music and Signal Processing

After reviewing the tutorials and beginning my own patches, I quickly dismissed sfrecord~ for something a bit more flexible, and as keen as I was to use poke~, I instead utilised record~. My main wish was to be able to record to a buffer so as to review recordings before optionally saving.

A recording light was necessary, as were pre-faders and post-monitoring. The self-refreshing buffer~ view is just plain nifty !!

For the playback I used the play~ object; I can't see much difference between the various objects (wave~, play~, peek~....) but used play~ as the opposite of record~.

Fun times were had implementing playback of selected areas in the waveform~ object at recorded speed. Again Max proved itself an interesting tool when it continued to play at double speed, so I put a fix in... then upon a reload it spontaneously played at half speed, so I took the fix out.

There was also some Max weirdness when routing signals through several graphic switches; I resolved this issue by having simultaneous rails which were then multiplied by 0/1 as a switch.

Possibly it might have been easier using the groove~ object but I stuck with play~.

zip file at _______

Haines, Christian. 2008. 7. Sampling (2) - CC2 - Music and Signal Processing.pdf

Cycling'74 2006, MSP Tutorials and Topics,

aa2 - week 08 - Rocks'n'Diamonds game audio

Rocks'n'Diamonds is a game stylistically based on the 1984 personal computer game Boulder Dash.

To complete this project I will have to create about 36 sounds, including background music, level-completed (Hall of Fame) music, and play action sounds.

Given the era of Boulder Dash, the sound FX for Rocks'n'Diamonds are based on the aesthetic of 1980s computing. The original music shipped with the game was excerpts of music from the period (Tangerine Dream, Propaganda, Alan Parsons Project) as very low quality samples (8-bit, 8 kHz mono). The current version has these excerpts at a much higher 22 kHz.

The game action sound features player-driven sounds (digging, collecting treasure...) and feature sounds (creatures, dripping...). These sounds are all generally short (less than 200 ms). With the repeated sounds (eg monsters), I would like to experiment with longer loops, which will give more variety rather than rapidly repeated clicks.

With the obvious visual styling of the game I will keep to simple sounds reminiscent of 1980s computing. As such, I imagine I will use analogue-style synthesis, a bit of foley, and low quality samples. In light of the advance in computing power since the game's inception, I will not be so concerned with keeping the sounds small, but will be able to focus more on higher quality settings with a retro sound.

The game engine does feature the ability to play .mod files. Unfortunately these are not accurately represented on my Windows system so I will not be using this format.

Pre-production form is at:

Haines, Christian. 2008. AA2 - Sound Design Project - Game Audio.pdf

2008. Rocks'n'Diamonds news. (viewed 6 October 2008)

Saturday, 4 October 2008

aa2 - week 07 - game audio aesthetics

I did the voice-overs for this week's project.

Basically a wee tiny bit of script writing, recording the voices, processing (major EQ, compression), running it with the video, editing the audio to fit as I imagined it (time compression and cutting), then shipping it off to Freddy.

As I was only responsible for one small section, the finished product was a reasonable surprise. With the final placement of the voices I could have left my original audio as it was. I feel that there is a major gap in plot development after the voice-overs, but this was not my area :)
I guess I could have shipped either the longer version, or both versions, but this did not occur to me.

I imagine that if this were not done during holidays, and were done with cash flow, then there would have been better communication (I spent a lot of time away), and more interaction with the final flow of events.

Here is the finished product as it appears on Freddy's blog.

Haines, Christian. 2008. AA2 - Week 7 - Game Audio Aesthetics - Planner.pdf

Saturday, 20 September 2008

mtf - week 08 - my favourite eraserhead

my favourite part of the film was the few minutes directly after, when there was general silence. i think i was not the only one slightly bemused by the experience.

one of my favourite movie watching techniques is to miss the opening section. by missing out on the character development and even major plot developments, you can then get a sense of wonder as nothing makes sense.

without missing the opening section, eraserhead gave me a similar situation. the movie, whilst never being clear, at least began to take recognisable form over time. but still there were incessantly weird bits where i had no idea what was happening or what their relevance was..... eg it was not until after reading the blurb at wikipedia that i realised the funny-cheeked woman was "the woman in the radiator", along with other symbolic imagery such as the cloud of eraser dust surrounding the hero as featured in the cover image...

i found myself walking outside soon after, and i felt most appreciative of the beautiful day that was revealed to my then current mind space.

Eraserhead. Dir. David Lynch. DVD. Libra Films.

David Harris. Music Technology Forum - Semester 2, Week 8 - My Favourite Things Lecture. University of Adelaide, South Australia, 18/09/2008.

Eraserhead. (viewed 20/09/2008)


Sunday, 14 September 2008

forum - week 7 - presentations and sine tones

I didn't mean to spend so much time playing with the graphics... honest !!

This week featured a number of student presentations, and I presented my work from 1st semester with the amazing and descriptive title, Parabolic function as iterated through colour and melodic transforms. This featured as a sound component, 12 simultaneous melodic lines using sine tones generated from an iterative function describing the shape of a parabola.

Here follows a somewhat lengthy discourse (for a blog) on sine tones and their use in this particular piece. The quintessential element has been paraphrased in the linked comment which you could peruse instead and save yourself possible minutes of life. Otherwise, you could continue reading ....

After the playing of my piece there was a comment from David Dowling, something along the lines of "Did you think of putting some sort of modulation on ? Sine waves get a bit old after a couple of minutes !". This has got me thinking ....

why do i like sine tones ? This has two aspects. (1) From a melodic aspect, I like their timbre (or lack thereof :), the way they actually sound when used in a melodic context. (2) From a harmonic aspect, they can be used in such a variety of ways with little concern for clashes of timbral colour - their lack of overtones gives them such ease of combination.

More widely speaking, when dealing with wide frequency/pitch choices, the choice of timbre becomes very important. Generally speaking, a timbre that sounds good at one pitch will not sound good 3 octaves above or 3 octaves below; this then requires a pitch-based modulation upon the timbre to shape the sound as appropriate. Or, one could treat frequency along a similar line to that which John Delaney took in his piece BinHex_25-06. There, there was distinct frequency-based timbral delineation - not unlike an ensemble of actual instruments, where their own unique characteristics create natural frequency/pitch bands. Hence one could create a number of timbres with attached frequency limits, and divide a broad frequency output into its appropriate voices.

Or, use a sine tone.

The harmonic aspect has particular importance when considered in the light of "computer music" (I use the term here to denote the style of music associated with both computer-aided composition and creation - an obvious example is using a mathematical equation to describe the entire melodic and harmonic output). I feel that in this respect, my use of formulaically derived methodology to replace overt human aesthetics (eg pitch/rhythmic choices) requires the use of simpler, purer tonality to represent the audio output. The inclusion of richer timbres creates denser levels of complexity through the interaction of their extended harmonic components - this is what I consider undesirable (although not necessarily strictly so - indeed these denser complexities have their own desirability). In my particular work, I desired the purity of sine tones to help reveal the inner workings of the mathematical process; their interactions were desired as a particular mental abstraction overlaid onto what I conceived as a continually morphing spatial panorama where overtones would only create muddy patterns.

That said, with the retrospective awareness of my piece and the cogitation/analysis engendered by its presentation, I feel I could extend it further -

At the present, the patch generates 12 independent tones. These tones are given a duration, constant amplitude and pitch (the pitch is given a start and end value with a constant rate of change).
In the simplest way, the shape of these generated tones needs to be modified along the following lines:
duration (the minimum possible could be shortened), amplitude (an envelope over time, rather than constant amplitude), pitch (an envelope of changing pitch inclinations, rather than constant change).
The tones are also given a stereo value, where one of the stereo channels is delayed by an amount. This could be modified with the delay also being based on phase position, hence putting more importance on the frequency value as related to stereo.
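One of the tones described above can be sketched in Python (a conceptual sketch, not the actual patch; the sample rate, amplitude and delay values are assumptions):

```python
import math

# One tone as described: fixed duration, constant amplitude, and a pitch
# that glides linearly from start_hz to end_hz. The right channel is a
# delayed copy of the left. All parameter values are illustrative.

def tone(start_hz, end_hz, duration, amp=0.5, sr=44100, delay_samples=20):
    n = int(duration * sr)
    phase = 0.0
    left = []
    for i in range(n):
        hz = start_hz + (end_hz - start_hz) * i / n  # constant rate of pitch change
        phase += 2 * math.pi * hz / sr               # integrate frequency into phase
        left.append(amp * math.sin(phase))
    right = [0.0] * delay_samples + left[:-delay_samples]  # delayed stereo channel
    return left, right

left, right = tone(220.0, 440.0, 0.5)
print(len(left), len(right))  # both channels are 22050 samples long
```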

hmmmm, apart from that SINE TONES ROCK !!!!

or something :P

Various students. Student presentations. Level 5 Schulz building, Adelaide University. 11 September 2008

Tuesday, 9 September 2008

cc2 - week 06 - music and signal processing

i thought it would be fun to create a melodic synth using variable length buffer playback.

so using the groove~ object, and converting MIDI pitch to milliseconds to control the loop end point, bob's your uncle.
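The conversion itself is just the standard MIDI-to-frequency formula, with one waveform period taken as the loop length in milliseconds; a minimal sketch:

```python
# MIDI note number -> frequency (A4 = note 69 = 440 Hz), then one
# waveform period in milliseconds as the loop end point for groove~.

def midi_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)

def loop_ms(note):
    return 1000.0 / midi_to_hz(note)  # one period, in milliseconds

print(midi_to_hz(69))  # 440.0
print(loop_ms(69))     # ~2.27 ms loop length for A440
```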

issues this time were: needing to set up the groove~ to loop continuously, and setting up the modulation via the receive~ #2, which took a bit of magic to get Max to recognise. I actually had to create a second receive~ with the real name before the argumentative one would work... the first time I've actually caught Max out with such a blatant suddenly-it-works manoeuvre !!

i would have liked to use the note-on to trigger the loop from the start; this would have made longer loops potentially more interesting, however it did not eventuate.

zip file at:

Haines, Christian. 2008. 6. Sampling (1) - CC2 - Music and Signal Processing.pdf

Cycling'74. 2006, MSP Tutorials and Topics.


Tuesday, 2 September 2008

mtf - week 05 - negativland sez favourite things

rationale of satire.

sometimes people are so wrapped up in their own truths there is no point in mentioning your perspective on the inadequacies of their argument (should you be of the mind to do so).

one way to respond is to take the piss.

is this offensive ? possibly.
is this funny ? possibly.
is this worthwhile ? possibly.
is this dangerous ? possibly.

usefulness of satire.

we live in a world of abstractions. these abstractions are defined by ourselves. we make these definitions based on our current state of ego. our ego is attached to the world and is influenced by the world. the world may influence us through many ways.

education is the process of deliberate enlightenment. the external use of education requires that you have control over the times of others. school is one example, television is another.

satire must be a form of education, otherwise it is acting superior and laughing at others - this is potentially opening yourself to claims of ethical inappropriateness.


according to the wikipedia article "negativland", the band themselves created the link between "religion is stupid" and the axe murder. They apparently did this to get out of touring.

"negativland". viewed 30 August 2008.

Whittington, Stephen. Forum presenting Negativland's Our Favourite Things. University of Adelaide 28 August 2008.

Saturday, 30 August 2008

aa2 - week 05 - fmod designer

It is a tad disconcerting at first using something that doesn't quite behave as it looks (i.e. like an audio editor such as Audacity). Auditioning, where the file playback position has nothing to do with where the cursor bar is, takes a slight paradigm shift.

Apart from that, taking several hours to create a multi-channel audio file was the only annoyance.

The basic ideas behind FMOD are nothing new; they seem a new setting for automated envelopes really. I look forward to attempting some serious use of this, as other manifestations of automated envelopes can be a bit annoying, and this has nice curve shaping :)

Oh, and there is a typo on page 126, which says to put the reverb setting at 0 (for point 4 on the SimDistance) while the picture shows value 1. I used value 1.

zip with FMOD file and multichannel audio file is...

Haines, Christian. 2008. AA2 - Week 5 - Audio Engine Overview - Planner.pdf

Creative. 2007, "Chapter 5 - Tutorial Projects", FMOD DESIGNER: USER MANUAL, Creative Labs. pp 113 - 132.

cc2 - week 05 - synthesis (02)

I am slowly getting the hang of this poly~ object. This week, I found the need to send all the note information in a single moment more relevant than previously. Once I put the line~ object (for envelopes) inside the poly~, things went smoothly.

I also put in a slight line~ as a quick ramp-up for each note; this gets rid of last week's annoyance of note clicks.

The application of a real-time modulator was slightly problematic, but the solution came in the form of the send/receive objects. This does make it an overall modulator rather than one for individual voices.

As my MIDI controller does not send aftertouch, I made use of the data slider (CC 07) as an alternative.

zip at:


Haines, Christian. 2008. 5. Synthesis (2) - CC2 - Music and Signal Processing.pdf

Cycling'74 2006, MSP Tutorials and Topics,

Wednesday, 27 August 2008

forum - week 04 - h3llo

what happened,
why do i have a list of the 9 basic emotions, and the 4 temperaments ???

OMG what have i missed :?

Tuesday, 26 August 2008

cc2 - week 04 - synthesis (1)

This week's excitement was to create an additive synth.

The basics of what I created are :
a 12-voice polyphonic additive synth with 6 independent sine wave partials.
Each partial has a multiplier to give its relationship to the incoming frequency, an amplitude envelope with adjustable time setting, pan, phase, and output level.

This is given a GUI, and utilised in a bpatcher for the demo patch.

This version utilises 6 separate poly.partial~ objects (1 for each partial), but I am in the process of exploring the potential of a single poly~ object with 72 voices (6 partials * 12 voices). This currently involves what looks like a convoluted mangle of triggers (having to retrigger each parameter for each voice), but the exciting idea of placing poly~ objects inside poly~ objects may make it easier ... I'll wait until I get it working before I get too excited.

This additive synth as it is still gives clicks on voice playback ??? The only thing I can think of is the phase position of the cycle~ when playback starts, but I would assume the function~ acting as an envelope would fix this...
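The phase hunch can be illustrated numerically: a voice that starts while its oscillator sits at a non-zero phase jumps instantly to a non-zero value (a click), whereas a short attack ramp forces the onset towards zero regardless of phase. A sketch with made-up values (440 Hz, 44.1 kHz, 64-sample ramp):

```python
import math

# A voice starting at an arbitrary oscillator phase jumps to a non-zero
# value on its first sample (an audible click). Multiplying by a short
# linear attack ramp forces the onset to start from zero whatever the
# phase. The 440 Hz / 44.1 kHz / 64-sample values are illustrative.

def voice(start_phase, n=256, hz=440.0, sr=44100, ramp=64):
    out = []
    for i in range(n):
        s = math.sin(start_phase + 2 * math.pi * hz * i / sr)
        env = min(1.0, i / ramp)  # short linear attack ramp, then sustain
        out.append(s * env)
    return out

# Worst case: the oscillator starts at the top of its cycle (phase pi/2).
clicky = [math.sin(math.pi / 2 + 2 * math.pi * 440 * i / 44100) for i in range(4)]
smooth = voice(math.pi / 2)
print(clicky[0])  # 1.0 -- instant jump to full amplitude: a click
print(smooth[0])  # 0.0 -- the ramp suppresses the onset click
```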

Zip folder =

EDIT: did something funny with the original upload. now i can't even upload with them :?
uploaded somewhere else, fixed link... now it should work = ideal !!

Cycling'74 2006, MSP Tutorials and Topics,

Monday, 25 August 2008

aa2 - week 04 - game engine overview

Grand Theft Auto III: Vice City (GTA III) uses the Renderware game engine.

Renderware is highly customisable, with easy transitions between middleware options, hence it is difficult to determine exactly what middleware may have been used in any given manifestation.

GTA III does use the Miles Sound System for audio, with extensibility through Creative EAX HD.

Whilst the engine itself is compatible with numerous machines/OSes (Microsoft Windows, GameCube, Wii, Xbox, Xbox 360, PlayStation 2, PlayStation 3, PlayStation Portable), GTA III appears on the PlayStation, Xbox, and Windows PCs.

The audio of the game is spatially realistic, with noises happening in appropriate locations and movement portrayed with Doppler effect and volume transitions. Pedestrian voice-overs are abnormally loud and constant considering the speed you drive past, but this is obviously a programming choice, and there are occasional glitches where sounds drop out whilst moving spatially.


RenderWare. 2008. (viewed 22/08/08)

The Miles Sound System,
(viewed 22/08/08)

Tuesday, 19 August 2008

cc2 - week 03 - polyphony and instancing.

The poly~ object had some mystery. Or perhaps again it was Max/MSP exhibiting its inscrutable behaviour. It took several attempted patches, and suddenly it worked !! Another of those unrepeatable moments when one repeats a similar process and now it works.

More inscrutable behaviour: it read an earlier-generation object from a portable hard drive with no relation to the patch, nor mentioned in the preferences... highly inscrutable. I would like to know how to find the source path of objects inside patcher windows; perhaps the filepath command would be useful, but I have yet to master it. The ability to find the source path would make debugging so much easier, as Max has a tendency to put things in one folder and take from another without even a wink.

Again I have been forced to take a step back and be aware of the software's simplicity. "Target" exists to enhance usability and flexibility, and the ability for user-defined voice stealing (polyphony) is a fine and glorious thing.

zip file is at :


Haines, Christian. 2008. 3. Polyphony and Instancing - CC2 - Music and Signal Processing.pdf

Cycling'74 2006, MSP Tutorials and Topics,

Grosse, Darwin. 2006, The Poly Papers (part 1), 2007,

note... blogger seems to be treating anything within <> style as not for common observance, hence I've used ( ) instead

Monday, 18 August 2008

aa2 - week 03 - process and planning

Grand Theft Auto III - Vice City.

Please excuse the poor quality of the video; I haven't quite worked out how to capture good quality gaming video, but it does feature the sounds as listed below.

Perusing the "audio" folder in the GTA install folder gives mainly voice samples from the game - storylines and general asides from people. Also ambience in 9-11 minute blocks (eg. beach, city).
The files vary in quality from 112 kbps 32 kHz MP3 to 128 kbps 32 kHz MP3, with occasional WAV format for short sounds.

background noises -
quiet city hubbub
quiet background ocean
police radio - eg "suspect on foot"
environment being interacted with eg. cars hitting poles

bullets hitting car
bullets whizzing by
bullets hitting person
shells hitting ground

cars -
engine starting
engine noises - stationary and passing
car doors - open and close
police sirens
cars interacting with environment - eg trying to drive through poles

helicopter -
flying above

game noises -
mission started sound

people -
player footsteps
pedestrian voices - including scream

Haines, Christian. 2008. AA2 - Week 3 - Planning and Process - Planner.

Lampert, Mark. 2006, IT’S QUIET … ALMOST TOO QUIET, Bethesda Softworks, 2006,

Childs, G. W. 2006, "Chapter 2 - Sound Database", Creating Music and Sound for Games, Thomson Course Technology.

Brandon, Alexander. 2005, "Chapter 1 - A Development Process Map of Game-Audio Implementation", Audio for games: planning, process, and production, New Riders Games, Berkeley, Calif. pp 2 – 24.

Thursday, 14 August 2008

forum - week 03 - presentations la noobs and old concrete guys.

It is interesting getting a perspective on the aesthetics of the (relatively) new faces that have been haunting forum for several months now.

It occurs to me to wonder what might be made of listening to multiple (think 20 or so) years' worth of first year student work in one sitting. What sort of overall impressions or observations could one make ? Of course this is unlikely given the current rate of software development and hence the possible techniques one may encounter. Material from 20 years ago would possibly be tape manipulations, or MIDI-based sequences of samples and synthesised voices - possibly presented on tape, or floppy disk. Where would one get access to that sort of hardware now ?

I was hoping to hear the current versions of paper-ripping masterpieces, but was disappointed in this. I was also curious to see such a breadth of content when our similar assignment seemed much more focused, i.e. on using our edited paper sounds.
Perhaps this is down to Christian's ability to evolve class material dependent on the actual class, or perhaps it is just down to steady evolution of class material...

First-year students of all sorts presenting their work. "Forum: 1st Year Presentations." Seminar presented at the University of Adelaide, 14th August 2008.

Sunday, 10 August 2008

mf - week 02 - some favourite things

David Harris presented a work of his for string quartet, Terra Rapta, as recorded live by the Grainger Quartet.
It had some very lovely timbres. I would quite like to "remix" the piece (i.e. process the f**k out of it) and play with some of those interesting timbres. Oh, if only I had time to do what I wanted.
EDIT: I just noticed the amusing contrast of timbre and chainsaw :P

Also a Schubert string quartet (from the same performance).
When listening to "classical" pieces, I often note the one bar of prime minimalist material that flashes by - if only I could quickly rip, edit and capture that bar, I would have an amazingly beautiful and repetitive moment to repeat - and perhaps intersperse with another bar...

Mmmm, I recall someone telling me of a composer who took all his favourite bars from (potentially) a Beethoven piece and reworked them into a new order. The only thing is, he then scored it all and got someone else to play it !! OMG where is the technology ?!

Some of my current favourite things :
sunshowers - I have seen more of these during this winter than I have for possibly the last 15 years... delicate/thin drops of rain vectoring along the current gust direction catching the sun making the air glow.
synthesisers - been reading a history of the Moog. Synthesis is just the bomb !!!
my daughter - Raven is now 1 and a little, talks with a little interpreting and wanders around with her mouth wide open laughing maniacally.
max/msp - related to synthesisers really. I've been daydreaming about writing extended equations that will synthesise weirdly morphing square waves with no use of its own oscillator style objects.
coffee - I also have to bear its scar as an addiction.
chainsaws - bloody good fun, but a bit scary sometimes.
rocket and silverbeet from the garden - wet with raindrops and green !!

David Harris, "Music Technology Forum: Semester 2 - Week 2 ". Lecture presented at the University of Adelaide, South Australia, 07th July 2008

Stephen Whittington, "Music Technology Forum: Semester 2 - Week 2 ". Lecture presented at the University of Adelaide, South Australia, 07th July 2008

Saturday, 9 August 2008

cc2 - week 02 - signal switching and routing

In implementing the mute function, I found the mute~ object not yet appropriate, as last week's objects don't feature any sort of cycle~ style object. So I integrated a toggle switch to trigger the line~ up/down. It works very well.

The ek.sinegen~ difficulty was deciding on the GUI, mmm, knobs/sliders or number boxes..

The ek.sawgen~ features an extra step of extending the basic phasor~ object to ramp from -1 to +1, rather than 0 to 1. I just thought it was fun.
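The rescaling is just a linear map - in Max terms something like a [*~ 2] into a [-~ 1] after the phasor~. A plain-Python sketch of the same idea (illustrative only, not the patch itself):

```python
def bipolar_saw(phase):
    """Map a 0-to-1 ramp (like phasor~ output) onto a -1-to-+1 sawtooth."""
    return 2.0 * phase - 1.0

# One cycle sampled at five points:
ramp = [0.0, 0.25, 0.5, 0.75, 1.0]
saw = [bipolar_saw(p) for p in ramp]  # [-1.0, -0.5, 0.0, 0.5, 1.0]
```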

The GUI draws my attention back to integrating a default state, rather than everything resetting to 0 on loading - either a loadbang/loadmess, or integrating a preset object to save multiple settings. The inputs in the GUI versions come in handy for a loadmess.

The GUI versions would be more flexible with a signal input as well as a non-signal one. At this stage implementing this involves having separate internal channel paths, or perhaps separate objects.

And after all this time and cross platform irritation it makes me wonder - how much is this blog worth ?

Zip of everything:


Haines, Christian. 2008. 2. Signal Switching and Routing - CC2 - Music and Signal Processing.pdf

Cycling '74. 2006. MSP Tutorials and Topics.

AA2 - week 02 - game audio analysis

Quake 3. Quake 3 Arena.

First person shooter with single and multiplayer modes, released in 1999.

Features a startup logo for id Software.

Also cinematics for single player mode;
intro movie - stylised action with tune from soundtrack and appropriate play noises.
level movies - voice over introducing bots from each level, hyperreal sounds and ominous pads.

Soundtrack from two bands, Front Line Assembly and Sonic Mayhem (game musician Sascha Dikiciyan), featuring particular tracks for each level of play, plus tracks for battles won/lost.

There are family sounds for characters (e.g. different footsteps shared between some character models), and also unique sounds for some characters (e.g. orb jump).

Weapons have different sounds, but all seem to have similar hit sound.

There are two sorts of sound in gameplay.
Diegetic sounds - characters moving/shooting/being shot, and map features, e.g. teleporters and jump pads, and scenery, e.g. fires.
Non-diegetic sounds can be equated with narration/refereeing, which gives the game the feel of performing in an arena - e.g. voice-overs giving current stats, and sounds associated with gameplay events, e.g. dropping the flag.

The game engine features stereo spatialisation and doppler effect, and a possible 96 channels of audio (22kHz).
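The doppler effect mentioned here is, at heart, just a pitch ratio applied per sound source. As a hedged illustration (this is the standard physics formula, not anything taken from the Quake 3 engine):

```python
def doppler_ratio(c, v_observer, v_source):
    """Classic Doppler pitch ratio f'/f. Velocities are measured along the
    line between the two: v_observer > 0 means the observer moves toward
    the source, v_source > 0 means the source moves toward the observer."""
    return (c + v_observer) / (c - v_source)

# A rocket passing a stationary player at 34 m/s (c = 340 m/s)
# approaches at ratio 340/306, i.e. pitched up by about 11%:
ratio = doppler_ratio(340.0, 0.0, 34.0)
```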


McGuire, Thomas. 2001. "Creative Labs 3D Blaster GeForce 3 Ti200 review". (09 August 2008)

2007. "Sascha Dikiciyan & Cris Velasco". (09 August 2008)

Huiberts, Sander and van Tol, Richard. "IEZA: A Framework For Game Audio." (09 August 2008)

Grimshaw, Mark and Schott, Gareth. 2007. "Situating Gaming as a Sonic Experience: The Acoustic Ecology of First-Person Shooters." (09 August 2008)

Friday, 1 August 2008

CC2_w33k 01_soft on/off for max/msp

I have created a rampdac~ object to soften the impact of turning the dac~ on/off, and a softvol~ object to smooth the effect of abrupt volume changes on an audio signal.

I settled on 12 ms as the default ramp time for both objects. I enjoyed the seeming softness it gave to softvol~, and then gave it to rampdac~ as well. I assume that one could abuse any setting to give roughness to a signal, but I think 12 ms makes a good all-round basic setting.

The major drama (other than remembering how to make a custom argument) was trying to put the dac~ into the rampdac~ object. It took me a few minutes to realise it needed to be in the same window as the cycle~ object (the whole point of the startwindow command).

The minor drama was creating a custom argument with a default setting. The way I have implemented it means the time for both objects can never equal 0, which, considering the whole exercise was to prevent exactly that, is OK with me.

Oh, and there's sort of a bug in rampdac~: if you press off, then on again before it finishes turning off, it'll turn off when it should be on... it behaves OK if you don't click madly, or if you keep the time at a low setting.
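The idea behind both objects can be sketched in plain Python (an illustration of the concept, not the actual patch): ramp the gain linearly over 12 ms instead of jumping, and ramp from the current gain rather than the previous target, which is one way to avoid the mad-clicking bug above.

```python
def ramp_gain(current, target, ramp_ms=12.0, sample_rate=44100):
    """Per-sample gain values moving linearly from the current gain to
    the target over ramp_ms milliseconds (the job line~ does in MSP)."""
    steps = max(1, int(sample_rate * ramp_ms / 1000.0))
    delta = (target - current) / steps
    return [current + delta * i for i in range(1, steps + 1)]

# Turning the output on: gain ramps 0 -> 1 over 529 samples (12 ms at 44.1 kHz)
gains = ramp_gain(0.0, 1.0)
```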

From this basic intro it seems MSP treats audio in a similar manner to Plogue, and as such the work we did last year will stand us in good stead !!!! Hurrah !!!!

Zip of two objects plus help plus demonstration

Haines, Christian. 2008. 1. Introduction to MSP - CC2 - Music and Signal Processing.pdf

Cycling '74. 2006. MSP Tutorials and Topics.

AA2 _ week 01_ancient game sound

Levels 1 - 5 on the Apple II

For a pre-1987 game I have chosen Lode Runner, a 1983 platform game which I originally played on the Apple II.

For this review I have sourced one video of gameplay from the Apple II, one video on C64, the windows DOS game modified for slower game speed, and the MAME version (arcade version emulation).

The general plot of the game is to collect boxes of gold, while avoiding enemies (by running and digging holes) and climbing along ladders and platforms. Upon collection of all the boxes the player then escapes by gaining the top of the screen.

The sounds are all synthesised, and of varying quality. The DOS version is the worst quality, then the Apple; the C64 and MAME versions are similar in quality.

All FX are triggered by the current action of the player (e.g. falling, digging) or by its consequences (an enemy falls into a dug hole). This implies limiting simultaneous sounds due to processing limitations - in the moments of simultaneous sound, e.g. when an enemy falls into a hole during other player actions, the sounds are not clear (some sort of battle for supremacy happens).

There is general concurrence between the versions in the sound types for various actions; however, they are all very different. Notable differences include the C64 having a short melodic progression upon collection of all the boxes (vs. a short tone for the Apple) but lacking a sound for enemies falling into a pit. The MAME version has a timer on its levels, and a warning tone begins sounding near the end of the time.

As far as music goes -
MAME features non-stop background music, with at least 2 music voices plus FX sounds plus the timer. The C64 video features a brief monophonic musical fanfare with the title screen. There is no musical content in the MS-DOS version nor on the Apple video.

Haines, Christian. 2008. 1. Introduction To Game Audio - AA2 - Game Audio - Planner.pdf

McDonald, Glenn. 2002. A Brief Timeline of Video Game Music. (2007).
"Video Game Music." (2007).

Summers, Jason. "Welcome to Jason's Lode Runner Archive." (1 August 2008).

"Loder Runner." (1 August 2008).

dsby. "Lode Runner on the Apple II (Levels 1-5)." (1 August 2008).

DerSchmu. "C64 Gamevideo - Lode Runner Part 1/3." (1 August 2008).

"MAME roms, L." ROMNation.NET. (1 August 2008).



ode to bus travel.
bus olng 22

Earlier this year I was considering the options for portable generation/manipulation of sound/music (e.g. using a laptop on the train), and apart from battery life, the main issue was being able to hear what I was doing and the subtleties being generated. In the public arena ambient sound is a problem: one cannot hear the subtleties unless they are louder than the ambient sound, and even then one cannot be sure the subtleties are not created by the ambience.

Even earlier than that, when I purchased my first MP3 player and its associated lifestyle/freedom, I quickly gave up listening to certain musical styles/genres (e.g. almost anything with drums) because of my inability to enjoy them against the background of public transport. This led me to adapt my public listening to sounds of a more ambient nature that could easily meld with the general ambience without needing to be cranked (e.g. Acreil).

Here lies one of the key issues regarding the noise levels of shared portable listening; to actually listen to some music in a public space it must be louder than the ambient noise....

This implies to me that public listeners whose earbuds are audible from a distance are attempting to overcome background noise so as to fully appreciate the subtleties of their musical choice. And possibly that others, whose listening is at more discreet levels, are perhaps not listening as hard, but rather using the music as an enhancement to their surrounds rather than to separate themselves from their surrounds.

Of course this does not take into account the use of mobile phones as tinny boom boxes, where sound quality is not even worth mentioning. Perhaps here the essence of popular music is revealed as being embedded not in sound quality but rather in melodic/rhythmic/lyrical content. Perhaps even its associative memories account for more than sound junkies would like to admit...

[1] Stephen Whittington, "Music Technology Forum: Semester 2 - Week 1 ". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 31st July 2008

Thursday, 26 June 2008

cc2 - sound project - iterative parabola

Sound Project: Program Note
Parabolic function as iterated through colour and melodic transforms
Edward Kelly

This piece is a composition based on the manipulation of data from the generation of a parabola (and associated transforms) via the iterative function, y = R(x+1), where R is a constant and each output y re-enters the equation as the next value of x.

The function output is transformed and mapped onto an LCD screen. There are a number of transforms operating:
manipulation of the constant R, manipulation of the number of iterations used, inverting the X/Y planes as the parabola is mapped, a variety of colour options for the pixels as mapped, and a variety of shape/drawing options.

There are two melodic variations based on the function output and LCD state.
1. The parabolic melody uses successive iterations of the left-hand parabola output for its melodic content. In this case note duration is responsible for triggering iterations of the parabola.
2. The colour melody divides the screen into a 16 x 16 grid and samples the centre pixel of each grid square. The parabolas operate independently of this, and note duration is responsible for triggering data sampling.

Both melodic forms translate the pixel data (location, colour) into melodic data (pitch, panning, amplitude, duration/rhythm).

The parabola dictates the melody in two ways:
1. successive iterations as melodic form. Also, when using the incremental colour scheme (this pre-samples the pixel colour data and increments it upon drawing), prior events can shape the outcome.
2. the LCD state (as sampled) is completely dependent upon the parabolic output.

Note: the different drawing options affect the melodic outcome only in that other areas of the screen are affected by them. This is very obvious whilst using the incremental colour scheme with larger ovals.
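As printed above, y = R(x+1) is a straight line rather than a parabola, so I assume a transcription slip; the classic iterated parabola is the logistic map, y = Rx(1 - x). A hedged sketch of a parabolic melody in that spirit (plain Python, not the Max patch - the pitch range and the value of R here are my own choices):

```python
def parabola_melody(r, x0, notes, low=36, high=96):
    """Iterate the logistic map x -> r*x*(1-x) and map each 0..1 value
    onto a MIDI pitch range - one iteration per note, as in the piece."""
    x, pitches = x0, []
    for _ in range(notes):
        x = r * x * (1.0 - x)
        pitches.append(low + round(x * (high - low)))
    return pitches

# r = 3.7 sits in the chaotic region, giving a wandering melody:
melody = parabola_melody(3.7, 0.5, 8)
```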

MP3 of performance and score at

Zip of MaxMSP files at

aa2 - recording project - ben gillard quartet

Recording project _ Ben Gillard Quartet You Don't Know What Love Is

Four piece jazz ensemble recorded live.

aa2 - recording project - reverb

Recording project _ Reverb - Helter Skelter.

Four piece rock style cover band recorded live.

Documentation and mp3 at :

Tuesday, 17 June 2008

mtf - week 12 - dj Skilz

Well, scratching is all well and good... however, more than 30 seconds continuous is just tedious. It reminds me of the Theremin - at first it's a really interesting use of time/space, then "OMG is that it ? A warbly pure tone ? Where's my polyphony ? Let alone distortion/delay/chorus/etc...?"

Apart from that minor quibble, the art of DJing is a very fine art. There are practitioners of all skill levels and it is rare that I experience the finer end.

On the blunter end, my experience of CutDemUp, a digital hardcore freak (whose approach to scratching was perhaps improved by a blunt needle), was at the time a veritable laugh and a half. Techniques included dragging the needle sideways, and dropping it repetitively. And given the material he was mixing (digital hardcore), it blended relatively seamlessly.

On a personal level, my finer moments of DJing have included using the infinite loop at the end of a record for extra "authentic record crackle", and leaving the needle on an unrotating platter - and cranking the volume for quality record feedback (a lovely warm sound).

In regards to the quality DVD we were exposed to in the name of education :P I am a bit curious as to the lack of emphasis on EQ as a tool. The EQ on a decent mixer is incredibly intense, and when used with something as simple as having two tracks on at once (beatmixing?) it can enhance the layering. Not to mention the use of EQ killswitches (switch-based band on/off), which, when used with quality psy-trance, can raise the level of your generic middle-eight peak to a level of glowstick abandon !!!

[1] Stephen Whittington, "Music Technology Forum: Semester 1 - Week 12 – Itching and Scratching". Lecture presented at the Electronic Music Unit, University of Adelaide, South Australia, 5th June 2008

Tuesday, 3 June 2008

mtf - week 11 - 3f1x

Here's a beautiful quote from an un-named author in a vitamin catalogue.
"Laundry powder - you know you'll use it up over time, so why buy a small bottle that'll only end up in the bin ?"
go green. Glow magazine. Winter 2008 edition.

This is just common sense, the like of which often falls by the wayside. A small bottle costs 1/3rd the price (or whatever) so obviously it's cheaper to buy the small one...... or is it ? Let's try a little math.
I do recall at some point in my dim distant past, supermarket trolleys with inbuilt calculators !!
Although, from personal research into toilet paper, as far as the Safe brand recycled paper is concerned, it is often cheaper (by the roll) to buy 9 rolls rather than the larger packet of 12 - I figure it is a supermarket conspiracy to shape purchasing.
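The little math in question is just unit pricing - price divided by quantity. With made-up numbers (these are illustrative only, not the actual Safe shelf prices):

```python
def unit_price(price, quantity):
    """Price per unit - the number the supermarket hopes you won't compute."""
    return price / quantity

# Hypothetical shelf prices, for illustration only:
nine_pack = unit_price(4.50, 9)     # $0.50 per roll
twelve_pack = unit_price(6.60, 12)  # $0.55 per roll - bigger is dearer here
```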

My conclusion is that if life is happiness, ethics is maintaining yourself in a state of conscious happiness rather than as an unconscious zombie - i.e. buy bulk, use energy-efficient light globes and give your mum a kiss XX

hurrah for being !!


Whittington, Steven. Harris, David. Forum presentation on Ethics and Technology at EMU space, University of Adelaide. 3rd June 2008.

Monday, 2 June 2008

aa2 - week 10 - mastering

I had a go at 'mastering' 3 different tracks. I used similar chains on all 3 - EQ, stereo imaging and compression - although I experimented with both multi-band and single-band compression.

Examples are first half pre-mastering, second half post-mastering.

This track, metal style: I think I lost a bit of overall level but gained clarity.

This track, weird pop style (early 90's): similar result, loss of level but gain of clarity.

Third track, electronic: gained clarity and level. This one has a fairly dirty synth pad that becomes more noticeable.

Sitting back and listening to them makes me keen to try again :)

I'm keen to try using more EQ for pre-compression control, narrowing in on particular frequencies, and to experiment more with compression. I've always found compression amusingly difficult on an overall mix - individual instruments are OK, but combining sounds makes it another step more difficult.
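The core of what a single-band compressor does to a mix bus is small enough to sketch. A hedged, hard-knee illustration in plain Python (the threshold and ratio are arbitrary choices of mine, and real plug-ins add attack/release smoothing and make-up gain on top of this):

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Gain change (in dB, always <= 0) for a hard-knee downward
    compressor: level above the threshold is scaled back by the ratio."""
    if level_db <= threshold_db:
        return 0.0
    compressed = threshold_db + (level_db - threshold_db) / ratio
    return compressed - level_db

# A -6 dB peak against a -18 dB threshold at 4:1 is pulled down by 9 dB:
reduction = compressor_gain_db(-6.0)
```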

Grice, David. 2008. “AA2 – Mastering.” Seminar presented at the University of Adelaide 27th May.

Thursday, 29 May 2008

CC2 - Week 10 - Interface Design

two offset windows for the multislider bpatcher, and the ramp value bpatcher..
note the blue slider...

With the variable-size multisliders it can be hard to tell where in the cycle it is, so now the slider being called will change colour. This limits the variable size to 8, as that is the number of definable colours for candycane sliders. I could use multiple multisliders to get more than 8 sliders - but that is a programming challenge for the future.
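That future programming challenge is mostly index arithmetic - a sketch of splitting a global step index across several 8-slider multisliders (illustrative Python, not the patch):

```python
def bank_and_slider(index, sliders_per_bank=8):
    """Split a global step index across banks of 8 sliders:
    returns (which multislider, which slider within it)."""
    return divmod(index, sliders_per_bank)

# Step 11 lands on the second multislider, slider 3:
position = bank_and_slider(11)  # (1, 3)
```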

I put a multislider in a bpatcher, using offset to display additional settings - very cool, and space saving !!

However, I discovered that I was unable to save presets in the bpatcher window. I can save the presets in the original patch, which is quite limiting but good for pre-loading. This caused me to leave the duration multislider as it was, as it uses a range of 0 - 5 for the sliders.

I also bpatched the ramp controllers, making things neat, but again harder to use re. presets.

I dig the bpatcher - I really like what I did with the multislider's "other settings" button.

link to zip..

have worked out the preset dilemma !! so have added more bpatchers..

Haines, Christian. 2008. CC2 - Week 10 - Interface Design - Planner.pdf

Cycling '74. 2006. Max Tutorial. (11/1/2007).

Wednesday, 28 May 2008

aa2 - week 10 - mixing 03

For mixing down this week I've chosen an electronic piece I sort of finished several months ago, and dug out for a new perspective.

It features an interesting array of drum sounds, with most of the track having a "click" rather than a kick. Pretty much all of the bass spectrum lives in the bass line, with other sounds rarely touching the low end.

It turns out however that the first sample is from a section featuring another kick. It is in other respects quite a sparse section.
tiny tiny bits 01.mp3

The second sample is the first with a HPF and compression on the stereo mix.
tiny tiny bits 02 hpf+cmp.mp3

Third sample is a different section and features the HPF and compression on the stereo mix.
tiny tiny bits 03 hpf+cmp.mp3

All files are at

White, Paul. 2006. Mixing Essentials. Sound On Sound magazine.

Stavrou, Paul. More Salvage Mix Techniques. Audio Arts 2 Readings.

Wednesday, 21 May 2008

forum - week 09 - of electronics and harmonics

.. these different musical styles ... affected the evaporation rate of distilled water...
From full beakers, 14 to 17 ml evaporated.. in the silent chambers, 20 to 25 ml.. under the influence of Bach, Shankar and jazz; but, with rock... 55 to 59 ml.
[142: Tompkins and Bird]

George E. Smith.... planted maize and soya beans ... in two identical greenhouses, both kept precisely at the same level of temperature and humidity. In one... Gershwin's 'Rhapsody in Blue' [played] 24 hours a day... that greenhouse sprouted earlier than those given the silent treatment.... their stems were thicker, tougher and greener.
[137: Tompkins and Bird]

In the early 1950’s, Dr T. C. N. Singh of the Department of Botany at Annamalai University, Madras, India, discovered under microscope that plant protoplasm was moving faster in the cell as a result of the sound produced by an electric tuning fork. This discovery led to his conclusion that sound must have some effect on the metabolic activities of the plant cell.

Tompkins, Peter and Bird, Christopher. 1974. Secret Life of Plants. Blackburn, Victoria: Penguin Press.

Singh, T. C. N. 1962-1963. "On the Effect of Music and Dance on Plants." Bihar Agricultural College Magazine 13(1).

Tuesday, 20 May 2008

CC2 - Week 9 - Data Management and Application Control

This week's problems were:

Upon the patch loading, the probability sequencer looked like it was loaded with notes, but it wasn't.

Attempting to get the note recorder to record the time between notes (as well as the notes themselves), and play back the sequence with this info. The major problem being that metro won't change the time between bangs, and I couldn't get the coll object to successfully spit out appropriate addresses, e.g. 12 13 10 11 8 9 6 7.

So the recorder will play notes back in the same order as played, with the option of either using the time recorded (for the previous note) or being triggered by the main sequencer or modifiers.
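What the recorder is trying to store is essentially (note, gap-before-next-note) pairs - the per-note gaps being exactly what a fixed-rate metro can't reproduce. A plain-Python sketch of that data shape (illustrative only, not the actual coll layout):

```python
def record(events):
    """Turn (time_ms, pitch) note-on events into (pitch, gap_ms) pairs,
    where gap_ms is the wait before the next note."""
    pairs = []
    for i, (t, pitch) in enumerate(events):
        gap = events[i + 1][0] - t if i + 1 < len(events) else 0
        pairs.append((pitch, gap))
    return pairs

played = record([(0, 60), (250, 64), (400, 67)])
# -> [(60, 250), (64, 150), (67, 0)]
```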

Other problems were with the layout - my patches just seem messy, but I don't know what to do about it... there may also be a better way of laying out the sections and relating them to each other.
I think I need to look at other people's well-developed patches..

IMPORTANT note: the record section can be sent to any channel (Prob Seq = 1), giving more variety.
Getting Reason to work with two incoming MIDI channels is bloody annoying !!!

Haines, Christian. 2008. CC2 - Week 9 - Data Management and Application Control - Planner.pdf

Cycling '74. 2006. Max Tutorial. (11/1/2007).

Sunday, 18 May 2008

aa2 - week 09 - mixing 02

This week I've mixed down sections of a recording I did last year of Tristan Louth-Robins.

It features numerous guitar tracks, vocals, backing vocals and melodica.

I've pretty much maintained eq settings on instruments, just varied levels and effects between mixes.

I'm quite enjoying the fairly dry vocal sounds, and the varying levels of backing vocals and melodica.
That said, I think mix01 works the least well, with the vocals being too forward and dry.
And mix03 is my favourite, with a much wetter mix than the other two.

This week's lesson to be learnt was what happens when you use plug-ins on one computer that aren't available on another computer :P

Grice, David. 2008. “AA2 – Mixing.” Seminar presented at the University of Adelaide 13th May.

Tuesday, 13 May 2008

cc2 - Week 8 - MIDI and Virtual Instrumentation

Since last week I have modified the phrase slider/tables to be more flexible (theoretically up to 128 sliders, and including a random selector - non-weighted, just random selection).
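For contrast with that non-weighted selector, here is a sketch of both flavours in plain Python (the patch itself uses Max objects; this is just the concept):

```python
import random

def pick_uniform(n):
    """Non-weighted selection: every slider index 0..n-1 equally likely."""
    return random.randrange(n)

def pick_weighted(weights):
    """Weighted variant: index i is chosen in proportion to weights[i]."""
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

step = pick_uniform(128)  # any of the 128 sliders, equal chance
```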

I added one of these to be a controller modifier, and three line objects for triggered changing values (the bottom one is half speed).

I also found the preset object as a way to load the patch with number boxes with a useful value (with loadmess).

Bugs -
the ramp objects are only triggered by a change in the max value OR by the sequencer running.

zip of files


EDIT: have neatened the patch a little and updated the preset so it loads better. 160508.

Haines, Christian. 2008. CC2 - Week 8 - MIDI and Virtual Instrumentation - Planner.pdf

Cycling '74. 2006. Max Tutorial. (11/1/2007).

mtf - week 08 - industry nerd jabbering on

In this case, not the parish priest, Peter Dowdall...

Peter Dowdall gave a rather interesting talk on both recording a large ensemble in the EMU space, and working for such multi-nationals as PEPSI. (mmmm, it makes me wonder what pepsi tastes like ?)

His use of organisational skills in the preparatory process gives a good sense of the necessities when approaching a recording with such a big set-up, in a space one is not totally familiar with. Hence spending an amount of time prior to the actual session running through the process proves invaluable. For example, I myself am curious as to exactly how many mic leads there are at EMU, and to hear Peter say he had to outsource comes as no surprise.

His over-the-top photographic journal of the experience was also amazing !! Just recently I have had cause to bemoan my own lack of note-taking when attempting to document some of my past works (from 6 or 7 years ago), and my own experience in the mixing world also brings home how bloody good it is to know what's going on now, and what was going on yesterday.

I was also very amused by his description of editing to a grid - cutting off the ends of phrases because they fall outside the appropriate length. I am looking forward to trying it at the appropriate juncture :P It's a bit like slow-motion glitch.


Dowdall, Peter. Forum presentation at EMU space, University of Adelaide. 8th of May 2008.

Sunday, 11 May 2008

aa2 - week 8 - mixing basics

I went into uni twice to mix down a recording session I did last year. The first time the DVD case was labelled, but the DVD was blank :? The second time I had found the correct DVD, but left it at home :(

So perusing the EMU storage I found 20 .wav files which made up a track by Brian Eno and David Byrne.

I put them into Pro Tools and, to continue my clown action, forgot to get a snapshot of anything..

Anyway, here are three different mixes.

Some of the tracks are very noisy, with lots of hiss - enter the LPF.
And the vocal tracks are very nicely badly recorded, sounds like tape distortion and just dodgy tape action in general.
But given that they worked together quite a lot in the late '70s and early '80s, it's quite possible that these tracks are from those times :0

Generally used EQ to get rid of hiss and fit some sounds together, a bit of reverb, and some compression on the overall mix.

Grice, David. 2008. Mixing Basics Lecture, 6 May. Adelaide University.

Wednesday, 7 May 2008

mtf - week 07 - tristram cary

I was absent for this forum on the life of Tristram Cary.

His name brings back thoughts of Dr Who - Jon Pertwee (also Worzel Gummidge) and Tom Baker, who were my favourites :)

Watching recent series makes me yearn for the simple (yet rich) audio accompaniment that was the then Dr Who.

Also the "impossible piano" which features some very fine pieces of midi piano mayhem.

A shame I missed finding out more about him :(