Monday, October 29, 2007

BACKING TRACK

Nicola and I discussed ideas for the music behind our track and video. It would be good to use the format of the LRRH story, so we have the crackling of twigs at the start, representing LRRH walking through the woods, etc., and birds cheeping at the end to symbolise a happy ending, that of enlightenment, with sounds that are bright and sunny. We planned on getting these sounds from the original spoken samples, using various programs such as Spear, Soundhack and Logic. I want to use some of the words onomatopoeically, to bring even more of a sense of unification to the project. Words like 'conspired' can be whispered to mimic the action. This made us think of witches, which then brought me back to the Ganguro girls....another type is a yamanba (mountain hag), so we could look up Japanese music, film, folklore, etc. for extra input.
The song should be about 3 mins, a decent length, and the structure could maybe include: a 1 min intro with sounds collecting the ideas of the story together; a 1 min vocal line (we could split this into 2 verses, 30 secs each); 30 secs for the chorus, which will be made from the original samples or possibly mixed around the vocal line; and 30 secs for the ending. We agreed that the vocal line will get re-recorded.
We went to the iMac suite at the Conservatoire to use the music programs. We worked independently for a change, to see what each of us would come up with for different parts of the tune.

Firstly I worked with Soundhack, taking each sample individually and playing with the effects. Once I had saved a few that I liked, I put them into Logic to play them one after the other. One of the samples came out really well, like a deep long growl (the wolf!). Even though my story has a man instead of a wolf, I see the two as the same thing. I layered some other sounds on top of the growl and saved the line. I then got Spear up, which is fantastic for film music in my opinion. This is one program I will definitely use when making films. You load up your sample, which appears as a mass of small chopped-up lines. You can then chop bits or chunks out of it, time-stretch it, change the frequencies, overlap them and all sorts of brilliant things. I really enjoyed working with the different audio programs today and will continue tomorrow. Nicola and I listened to what each other had come up with and were very pleased. To see the video of our completed dance triggering the samples look HERE.
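For the technically curious, here's a rough Python sketch of the kind of processing we were doing by ear in Soundhack and Spear: stretching a sample out, dropping it an octave to get that growl, then layering the original back on top. This is just an illustration, not our actual session, and 'conspired.wav' is a made-up filename.

```python
# Sketch only: roughly the kind of processing done by ear in Soundhack/Spear.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("conspired.wav", sr=None)        # load a spoken sample

slowed = librosa.effects.time_stretch(y, rate=0.25)   # stretch to ~4x length
growl = librosa.effects.pitch_shift(slowed, sr=sr, n_steps=-12)  # drop an octave

# layer the untouched original back over the top of the longer growl
layered = growl.copy()
layered[: len(y)] += 0.5 * y

sf.write("growl_layered.wav", layered, sr)
```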

Thursday, October 25, 2007

BACK TO LOGIC

We had planned to record the vocal line today and needed to get our samples back into Logic to do so. As usual it never ran smoothly and we had to turn to Jonathan Green for help to find out why we couldn't record. We had to disconnect the wire (in Logic) going to the desk and put it into the sequencer instead. Once it was working we recorded the dance three times. All of the triggers were activated with each sequence, even if they weren't in the same order every time. I think the slight differences each time a move is performed set off a different sample first; it all depends which sample gets in there first. The elbows and shoulders are the most common, because if you move your elbow it moves the shoulder, so it's just chance as to which is activated first. To do the same effectively in real time, with accuracy, we would probably have to get rid of half the samples again to ensure that only one sample is played with a particular movement. We could do that for the video project too, but it doesn't really matter if the samples are slightly different. We are going to choose one of the takes tomorrow and perhaps re-record it with new vocals.

WEDNESDAY

I spoke too soon in my last post..so today I'm writing in grey to reflect my mood!! Just when we thought the suit was becoming our friend...it turned on us. It was Nic's turn to wear it, and as she was going through the dance we were getting a different outcome. Sometimes the same samples were being triggered but sometimes not, unlike yesterday when it was pretty repetitive. So I thought it might be because I was wearing the suit yesterday and was therefore performing the moves slightly differently. I put the suit on and we calibrated it, but the same was happening for me too. We also adjusted the suit with the key to make sure it hadn't become loose over the last few weeks. Perhaps each time we calibrate we are going to get a slightly different result. The battery then died on us so we had to go back to the lab.
There were a couple of videos on the net of the MIDI suit and what people have done with it. Their projects appear to be a lot less complex than ours. They have very few triggers and repeat the movement a lot, whilst a backing track gives most of the effect. Of course, we are not trying to just 'punch' a few effects out of the suit but are working to create a song, which is connected with a 'non-simple' dance.

Tuesday, October 23, 2007

CONDENSING

The first thing to do today was to change the values on the 16 channels of MIDI in the Max patch. Max is so annoying though: when we opened it up and moved the suit, it decided to play the samples with loads of delay, which we hadn't programmed!!! We shut it down and re-booted and it worked fine. This has happened plenty of times, where it just freezes and has to be shut down. The channels previously had 8 samples each, but we halved that, so there are now 64 in total as opposed to 128. Also, after checking the patch that tells us what actions the MIDI suit is picking up, I thought we may as well delete channels 7, 8, 15 and 16. We now have 12 channels. Channel 16 wasn't actually picking up anything and the other three were not needed, so by narrowing them down we gain more control over our output. The wrist actions, flex and twist, get a bit muddled, so we have not put any samples on the twist action. We chose 12 of our favourite samples and placed them: 2 on the flex of each wrist (down and up), 2 on each elbow for x and y movement, and 2 on each shoulder, also for x and y.
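To make the layout concrete, here it is as a little Python lookup table. The joint names and filenames are hypothetical stand-ins; the real mapping lives in the Max patch and Kontakt.

```python
# Minimal sketch of the 12-sample layout described above.
SAMPLE_MAP = {
    ("wrist_L", "flex_down"): "samples/once_upon.wav",
    ("wrist_L", "flex_up"):   "samples/unsatisfied.wav",
    ("wrist_R", "flex_down"): "samples/seek_knowledge.wav",
    ("wrist_R", "flex_up"):   "samples/rightful_path.wav",
    ("elbow_L", "x"):         "samples/distracted.wav",
    ("elbow_L", "y"):         "samples/tempted.wav",
    ("elbow_R", "x"):         "samples/stranger.wav",
    ("elbow_R", "y"):         "samples/conspired.wav",
    ("shoulder_L", "x"):      "samples/castrated.wav",
    ("shoulder_L", "y"):      "samples/pebbles.wav",
    ("shoulder_R", "x"):      "samples/cackling.wav",
    ("shoulder_R", "y"):      "samples/awoke.wav",
}

def sample_for(joint, axis):
    """Return the file a given movement should trigger, or None."""
    return SAMPLE_MAP.get((joint, axis))
```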
These were then tested wearing the suit, with the dance we made last week. As we had expected, the same samples were activated (most times) with each full sequence, so as I was dancing I could sing what was coming next. We were extremely pleased with this, and the thought of having 128 samples for the vocal line now seems ridiculous!! The samples that we had mapped but that were not picked up by the dance will be taken into consideration for the second part of the dance. For now we have just repeated the dance on the other side of the body, and hopefully this will activate the other samples. I wanted to keep the sequence simple and re-use the material where possible, so repeating it with a slight change fits with what we have done with our audio. If the samples are not activated then we can change the movement so that they are. The first part of the dance determined the audio (within limits, as we had narrowed down the options) and the second part will be choreographed to trigger certain samples. The dance has created the vocal, and the layout of the samples will now create the movement.
The samples we are not using will be utilised in other ways. Nicola and I discussed the programs Spear and Soundhack, which are great for messing around with our samples and creating weird stuff for our backing track. We spoke about taking sections and working on them individually to see what we come up with.
So, to summarise for today: we have narrowed the samples down to our favourites, which has also eliminated wastage. Previously we had 128, and as we were playing them in mono (one voice only), any sample that would have been triggered while another was still sounding was lost, so it may as well not have been there.
More samples are only needed when working in poly, with more than one voice, where they can overlap. We may use those in part of the song, but for the actual vocal line our 12 samples are plenty. The dance and the suit are now starting to work really well for us and doing almost exactly what we had anticipated. It's only through using Kontakt, I feel, that we have really started to get somewhere. We have come a long way to get here, but having our samples in Kontakt and being able to manipulate them easily has really brought the project on. I'm feeling really positive about today.
Tomorrow we are testing the second half of the dance with the suit. When this is complete and we are happy with our control, the next step is to record the vocal line. We may re-record the vocals again once we have the structure, to make sure the vocal is one take as opposed to 12 separate samples, and to get them all in the same key, tone, etc. We can also double the track up by using both of our voices.

Monday, October 22, 2007

Our samples have been put into Kontakt now; Jonathan suggested that we would get more use out of this program now that Logic has done its job. The viewing and mapping of the samples are now a lot clearer. We listened to the samples individually and deleted the ones we thought were boring, because we don't need 128 for the movement range we have with the suit. Maybe if the suit were full body we could utilise them all. At the moment there are just too many per action: when the voices overlap it sounds confusing, and when they play in mono, one at a time, the extra samples never get triggered anyway, so they may as well not be there.
Next we stretched our favourite samples across wider key zones so they would be triggered by more keys, although thinking about it, if the samples are going to overlap slightly we could leave some blanks so the whole keyboard is not full. This way of mapping is so much easier than having to re-map the whole suit, but if we get the samples down to just the best ones, we can then map the suit so we have complete control over each sample. We could have just 4 samples per action; there are 16 channels on the suit, so that would give us 64 samples, which I think will be plenty for the vocal line. After stretching the samples we had to delete them and just duplicate them instead, because the pitch changed across the stretched zone.
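The pitch problem makes sense once you think about how a sampler zone tracks the keyboard: every key away from the root transposes the sample. A tiny sketch of the maths (illustrative only, not Kontakt's actual internals):

```python
# A zone stretched across keys plays the sample at 2**(n/12) speed/pitch for
# a key n semitones above the root. Duplicating the sample per key, with
# root == key, keeps every copy at its original pitch.
def playback_ratio(key: int, root: int) -> float:
    return 2.0 ** ((key - root) / 12.0)

print(playback_ratio(key=60, root=60))  # 1.0   -> original pitch
print(playback_ratio(key=67, root=60))  # ~1.498 -> a fifth higher
```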
There is so much we can do with our project; we have to continuously make judgements as to what will work best. We can only do this through experimentation, which at the moment is leading us to some good ideas. When we were in the Max class on Friday we heard the samples through the speakers, and when they were played to overlap we got some nice harmonies. We could use this to our advantage when choosing which samples to keep and where to place them on the suit. Some of the samples clash harmonically, so we must bear that in mind too. There are also a whole load of effects available in Max, as we discovered in the class. I had some samples triggering off my left arm while my right arm altered the way they were played. This is great for live performance because the audience can see and hear what you are doing. As we won't be wearing the suit in our video this may not be as effective, but we could still use the effects in other ways.
We need to choreograph the rest of the dance and see how it works for creating a tune. We can manipulate the sequence once we have a basic structure, to get the best result. We are using full-body movement even though the suit is upper body. By using both we will have direct and indirect instruction: the upper body is primary and the lower secondary. For example, a leg movement may finish with a hip movement, which swings to incorporate the shoulders, thus sending the signal.
Tomorrow we need to test the dance we have so far to see what order we have the samples playing.


Wednesday, October 17, 2007

CHOREOGRAPHING

Well, we are finally at the choreography stage....yippee!! I did think we might still have to re-map the suit, but when we listened to the samples through the MIDI suit, after changing the patch to allow the samples to play without interruption, we realised that we may not have to. By the time one movement is complete, the first sample that was activated is the only one to have been played. This may, however, cause us problems with the activation of the other samples, but we will have to see how many are triggered by the movement we do.
We began to choreograph the para para sequence. I don't want it to be like a conventional pop-style dance or as cheesy as the Ganguro girls' dance, so I am basing the movement on aspects of the Little Red Riding Hood story and creating some contemporary/quirky sequences. We need to keep the dance fairly simple for a few reasons. Firstly, Nicola has never danced, so it would be ridiculous to choreograph a hard sequence and expect her to learn it well. Secondly, we need to perform it in unison, like the para para, so keeping it mostly simple will allow us to perform well. Finally, the movement is predicting the order of the samples, so complex movement is not needed and would affect our control. I recorded the dance sequence we have so far, which can be viewed below.
Nicola and I discussed the actual video a little. We thought about having some text projected onto us: the LRRH story that we abstracted to create the vocal lines. I want the dance, song, music, video and costume all to be connected and interdependent.


Tuesday, October 16, 2007

MY FRIEND MAX

We had some problems getting the MIDI suit working today but found out it was due to a button being pressed in on the FireWire!! Nicola wore the suit today. The samples sound so much better now they have been re-done...very harmonic. The problem, though, was that the samples were cutting each other out. Even though we have the Max patch set so each one plays for 4000 ms, this only lets a sample play in full if it's the only one being activated. Because the suit is mapped to pick up, say, 7 samples per twist of the wrist, each sample is stopped when the next one is switched on. Re-mapping would help to some extent, which we had planned on doing for more freedom of movement, but not enough for each sample to play fully. So it was back to Max to create another patch, in which we had to send a message telling poly to behave as mono. Patch will be put here....
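In plain terms, we want a new trigger to be swallowed while a sample is still sounding. A rough Python sketch of that mono behaviour (the real version is a Max patch, and the `play` callback here is hypothetical):

```python
# Force poly behaviour to act mono: ignore triggers while a sample plays.
import time

SAMPLE_LENGTH_MS = 4000
_playing_until = 0.0

def trigger(sample, play):
    """Play `sample` only if nothing else is sounding (mono behaviour)."""
    global _playing_until
    now = time.monotonic()
    if now < _playing_until:
        return            # another sample is still playing: swallow the trigger
    _playing_until = now + SAMPLE_LENGTH_MS / 1000.0
    play(sample)

if __name__ == "__main__":
    trigger("growl.wav", play=print)      # plays (prints)
    trigger("conspired.wav", play=print)  # swallowed: first is still sounding
```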
Tomorrow we have to find the best way to re-map the suit. We can switch off some of the small-movement samples, like those on the wrist, but we still have to place them somewhere else; the shoulder, where we have the least control, may be the best place.

Friday, October 12, 2007

RE-RECORD

We booked the recording studio and live room again to re-do the vocals in Pro Tools. Firstly, the existing samples were re-written on paper as their new sounds, so the ones we reversed and chopped, etc. were written down as best we could understand them. We came up with some interesting phrases! The full set of 128 new samples can be seen in my googledocs here. (....to come...) You can compare them to the original text we used, written in colour below, to see how they have altered.
As the original recordings were spoken, we decided to sing the new ones using notes within the major scale, over a couple of octaves for variation. I used a keyboard in the studio for pitch. We bounced the tracks and saved them to the external hard drive. When we put these into Logic and re-map the MIDI suit, we should be able to get the result we require: to manipulate the samples and get a tuneful result!
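For reference, the pool of pitches we were singing from is easy to write down. A quick sketch, assuming C major from middle C (MIDI note 60), which is just an illustrative choice:

```python
# A major scale over two octaves, as MIDI note numbers.
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]

def major_scale(root=60, octaves=2):
    return [root + 12 * o + s for o in range(octaves) for s in MAJOR_STEPS]

print(major_scale())  # [60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 77, 79, 81, 83]
```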

Wednesday, October 10, 2007

DECISIONS


The suit is working fine with the samples in Logic Pro, as pictured above with Nicola and myself. Decisions now have to be made about how to get the samples to sound good when constructed. I thought it would be a case of using effects and autotune to do this. We thought that if we sectioned the 128 samples into 8 octave groups and pitched them to a major scale using the pitch-correction plug-in, it would solve our problem; we could then have the scales ranging over 2 octaves (splitting the keyboard into 2 sections), which would give us enough variation to create a song and sound melodically correct. The problem was that the result sounded too computerised, which we didn't want, as autotune usually does when it changes notes drastically.
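What we tried amounts to hard-snapping every detected pitch to the nearest note of a major scale, and that hard snap is exactly where the robotic sound comes from. A minimal sketch of the concept (I don't know the plug-in's internals; this is just the idea):

```python
# Snap a detected (possibly fractional) MIDI pitch to the nearest note of a
# major scale. Hard snapping like this produces the robotic autotune sound.
MAJOR = {0, 2, 4, 5, 7, 9, 11}

def snap_to_major(midi_pitch: float, root: int = 60) -> int:
    candidates = [n for n in range(root - 12, root + 25)
                  if (n - root) % 12 in MAJOR]
    return min(candidates, key=lambda n: abs(n - midi_pitch))

print(snap_to_major(61.3))  # 62 (D), the nearest in-scale note
```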

We started to think that it might be useful to re-record the samples. We could do 128 replicas (one for each sample) but within a major scale. They would then all fit into the same key, and by singing them the result would be more tuneful. Also, some of the existing samples are quite short and quick, and although we do need contrast, I think the new samples should be more sustained to give the ear a chance to hear them clearly before jumping to the next. Another alternative, suggested by Lamberto, is to time-stretch some of the samples. When this is done over about half a second it's unnoticeable; obviously if we stretched for longer the result would, again, sound very computerised.
Even though we are creating new samples, the existing ones have not been useless. When we heard them through the MIDI suit, we got a nice sound when the setting in Logic was on poly. This gives an overlap of each sample, rather than the separation heard in mono. We could use these samples for an instrumental part of the song, or maybe the middle eight. If we pitch-corrected them to the same key as the new samples, we could even use them as a backing track. The computerised sound wouldn't matter then, because we would be hearing a conglomeration instead of an individual sound. The movement from the MIDI suit would be quicker and less contrived.
The MIDI suit needs to be re-mapped so that we get a larger scope for activating the samples between joints. At the moment there are, for example, seven samples playing just from twisting one wrist. Although possible, it's extremely hard and time-consuming, not to mention restricting for choreography, to master what each tiny movement activates. By re-mapping, we can reduce those seven wrist triggers to maybe just three sounds. So, along with making the samples longer and the MIDI suit less 'sensitive', we should have more room to choreograph our tune!!
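One way to picture the re-mapping: the twist arrives as a continuous controller value, and we just want fewer, wider trigger zones. A small sketch, assuming the suit sends standard 0-127 MIDI values (my assumption):

```python
# Quantise a 0-127 controller reading into a few wide trigger zones, so each
# zone is easy to hit deliberately.
def zone(cc_value: int, zones: int = 3) -> int:
    return min(cc_value * zones // 128, zones - 1)

print(zone(10), zone(64), zone(120))  # 0 1 2
```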

Tuesday, October 9, 2007

ALL VERY TECHNICAL

On Monday the audio files were copied and saved into Logic. We thought there might be a problem, as we couldn't copy wav files before, but we just dragged them over and it worked! Each file was then placed onto the keyboard in the program. Some of the samples had to be set into the correct key range, because when we copied them over, starting at the note C1, some went off the keyboard. When they were copied back on, the notes were really low, so that had to be fixed. We had a play around to see what effects we could use, but Nicola thought it would be best if we got the samples into a sequence first, using the MIDI suit, and just put effects on the final tune, which I thought made sense. We can then use the granulation effect in Max/MSP, which sounds great.
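The 'off the keyboard' problem is just MIDI arithmetic: note numbers only run from 0 to 127. Assuming Logic's C1 is MIDI note 36 (octave-naming conventions differ between programs), 128 consecutive samples overshoot the range:

```python
# Why samples fell off the keyboard when mapped one per key from C1.
C1 = 36               # assumption: Logic's C1 as a MIDI note number
N_SAMPLES = 128

top = C1 + N_SAMPLES - 1
print(top)            # 163, but the highest MIDI note is 127
print(top - 127)      # 36 samples have nowhere to go
```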
On Tuesday we had to build a patch in Max/MSP so that our audio files could be understood and the MIDI suit could manipulate them. With loads of help from Jonathan, we achieved this! First we adjusted the suit to fit me correctly. It had to be calibrated to make sure the movement was picked up correctly and that the readings didn't go off the graph. We then created the patch:
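The patch itself is graphical, but roughly speaking it turns the suit's controller readings into note triggers for the sampler. Here's a loose Python rendering of that idea; the port names and threshold are hypothetical, not what our patch actually uses:

```python
# Loose sketch of the patch's logic: fire a note-on when a joint's controller
# value crosses a threshold, then re-arm once the joint relaxes.
import mido

THRESHOLD = 64
armed = {}   # channel -> ready to fire again?

with mido.open_input("MIDI Suit") as port, mido.open_output("To Logic") as out:
    for msg in port:
        if msg.type != "control_change":
            continue
        if msg.value >= THRESHOLD and armed.get(msg.channel, True):
            out.send(mido.Message("note_on", channel=msg.channel,
                                  note=60 + msg.channel, velocity=100))
            armed[msg.channel] = False
        elif msg.value < THRESHOLD:
            armed[msg.channel] = True
```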



We loaded the audio into Logic from the external hard drive, as we had used a different copy of Logic yesterday, so it wasn't saved on this computer. There was a problem with the suit operating not only the keyboard but also the volume and panning: two MIDI inputs were being read and Logic Express couldn't distinguish between the two. Logic Pro can tell the difference between the MIDI input ports, so tomorrow shouldn't be a problem.

Friday, October 5, 2007

CHOPPING

We had the laborious yet fun task of chopping the audio up into 128 different sections ready to use in Logic. We did try to use Logic to cut the file, but it didn't seem to want to import the wav file we had bounced yesterday and saved on the external hard drive. Logic is better for MIDI than audio, so perhaps that's why. Failing that, we tried Spark XL, but it wouldn't display the edits on screen, so we decided to use Pro Tools again, as we did yesterday. This worked very well. It was just time-consuming to choose 128 sections that sounded useful as our sound scale, which will be re-arranged using the MIDI suit. We took random sections and listened to them just to make sure they were useful, but we didn't want to be too precious about it because they're going to sound completely different with the various effects we will put on them. Also, if we had been too picky, the time taken would have doubled.
Some of the sections were reversed, and it was interesting how this affected the diction. Depending on the words reversed, we got German-, French- and Spanish/Italian-sounding language, as well as some that just sounded like English played backwards. The new files were saved on the hard drive and will have to be bounced individually.
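The chop-and-reverse step looks something like this in code. A sketch only: our real edits were chosen by ear in Pro Tools rather than cut at fixed boundaries, and 'vocals.wav' stands in for our bounced file.

```python
# Chop a recording into 128 sections and reverse one of them.
import soundfile as sf

audio, sr = sf.read("vocals.wav")
n = 128
chunk = len(audio) // n
sections = [audio[i * chunk:(i + 1) * chunk] for i in range(n)]

# reverse a section to hear the diction flip
sf.write("section_007_reversed.wav", sections[7][::-1], sr)
```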
So, the next plan of action is to make the most of the audio files by changing them with effects such as reverb, autotune, time-stretch, etc. Once this is done we can wire the MIDI suit up to trigger each of the 128 files by moving different parts of our bodies. When we get a good sequence, then with great skill and memory....we can learn the movement pattern, which will become our para para dance to perform in the video! The dance will reflect the sound and vice versa, and should bring an unconventional element to the pop-genre ideal.
I attended 2 lectures with Jonathan on Max/MSP, a little hard for me to grasp after the first lecture, as my brain doesn't tend to process technical info quickly! The practical class was more appealing to me; we got to mess around with the mixing desk and learn about the acoustics of the recital room, etc. The reverb and effects we applied to the sound input (from the piano) were really exciting and I'm looking forward to using them in my own work.

Thursday, October 4, 2007

RECORDING

I got a good introduction to Pro Tools today, as Nicola had used it before and we decided to record with it. Taking the Little Red Riding Hood idea, I copied out the story I had previously written (just on good old-fashioned paper), ready to take into the recording studio. Nicola and I chose to extract words or phrases from the text that we liked and that summarised the ideas behind the story. We ended up with a shorter story that didn't make much sense as a whole, which was fine because we plan to chop it up anyway. It looks like this:

Once upon a time unsatisfied with previous lovers seek knowledge rightful path she became distracted tempted by a tall dark handsome green-eyed stranger life long passion to re-invent men his wicked way the horny male arrived feel to force gagged gigolo conversing when we're done instantaneously after ejaculation dirty deed little red riding hood had anticipated females conspired then castrated testicles pebbles afoot sinister cackling awoke brain removed hermit practice tantric sex wise old spinster
We set the desk, speakers and microphone up. We only needed 2 speakers rather than the 5 that were set up, so we changed the I/O settings to 'stereo mix'. We checked the levels to make sure we got a good sound by going next door into the live room, where the mic was. The sample rate was set to 48 kHz and the bit depth to 24-bit; this is DVD quality, so would be great for what we needed. We did 4 recordings, 2 for each of us. We spoke the first one just in plain story form, then made the second more dynamic; I wanted to get as much variation in the vocals as we could so we would have a good basis to work from. After recording, the first vocal was slightly louder than the other 3, so we normalised it to take out any extra-loud bits and brought the gain down to lower the level. The other takes were fine, so we chopped the beginnings and endings off (where we were speaking to each other before and after takes) and pieced the 4 together to make just 1 audio file. We then bounced the file and saved it on Nicola's hard drive. Tomorrow we are going to use Logic to chop up the file and re-arrange it ready to use with the MIDI suit. I haven't used Logic yet, so I'll know how much we can actually do with it once I've had a play.
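Levelling the loud first take boils down to peak-normalising and then pulling the gain back. A tiny sketch, with a hypothetical filename and a guessed 3 dB trim:

```python
# Peak-normalise a take, then trim the gain to sit with the other takes.
import numpy as np
import soundfile as sf

audio, sr = sf.read("take1.wav")
audio = audio / np.max(np.abs(audio))   # peak-normalise to 0 dBFS
audio *= 10 ** (-3 / 20)                # trim 3 dB to match the other takes
sf.write("take1_levelled.wav", audio, sr)
```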



Tuesday, October 2, 2007

GENERATION OF IDEAS


Today turned out to be very productive. Greg, Nicola and I had our 2nd meeting, which produced a large number of ideas to work from for our collaboration project. I had initially been thinking about using the Nexus to record data from live movement and then produce a secondary visual of some sort, to run side by side. My other idea was to make a pop video, but I hadn't really thought about how, and I also had the theme of Little Red Riding Hood playing around in my head for possible use. I re-wrote the old fairytale in my own style, which can be read on Google docs here.




To get random ideas to work with, Greg Googled some words to see what came up. Nicola thought of 'cats eyes', which brought up a lot of different subjects, including Anime. We got onto the topic of Anime porn and Japanese culture, and how some of the girls are constantly re-inventing themselves, such as Ganguro girls (as pictured above left). These girls seem to dress to rebel but do it to fit in, which contradicts the whole idea of rebelling. They wear dark foundation, highlight their eyes, mouth and nose, and colour their hair. They perform dances in unison, in large groups, called para para. All of these ideas led us to the pop video idea, but maybe done in Japanese. Nicola and I went off to discuss this.
We finally came up with the idea of creating our own unique pop song, which we could present within a video. We decided not to do it in Japanese, but in a sort of computer language generated from data received from movement. The movement would come from the overall theme of the project, which comes from the Ganguro idea along with Red Riding Hood and Anime characters. The R. R. Hood story fits in with the idea of the Ganguro girls wanting to re-invent themselves. It's a story about change, and we could maybe base our characters on her in an Anime style and come up with our own image. We are unsure of what programs we would need, but we want to collect data from the movement, which can be put through some sort of generator to create random words and audio, so everything comes from the same source and each element is related: the same thing performed in different ways. We thought the Nexus would be able to pick up enough movement for us, but after reading the handbook we realised the sensors only monitor skin temperature, blood flow, etc., as it's a medical piece of equipment. After speaking to Greg and Jonathan about this, they suggested we use the MIDI suit. This will record movement and produce the data we require for our project.
So, the collaboration idea has started to come together: we will attempt to devise a 'dialogue' to use in the form of lyrics and music, created from a dance sequence and presented as a pop video. Music programs such as Cubase may be useful for auto-tuning to get a more conventional sound. For example, if we put the data through a program to create sound and turn it into a pop tune, we will need to follow certain rules, such as having it all in the same key. We will film our rehearsals, which will be shown as short clips on here and YouTube. We now need to play around with the MIDI suit tomorrow and find the appropriate programs to utilise that data.