Saturday, November 20, 2010

CCRMA Musicircus

Last night CCRMA was taken over by the Musicircus, an annual, John Cage-inspired experimental concert featuring overlapping spaces filled with music, interactive performances, and installations by CCRMA students and faculty.  I had the privilege of performing some of the material I've been developing over the past couple of years in the student lounge on the third floor.  It was a wonderful gift to be able to share my music with so many other like-minded individuals, and I was really pleased with everybody's reactions to my set.  I was especially happy to hear that multiple people drew connections between the video and the audio, assuming that one was controlling the other, when, in fact, the two pieces were totally independent.  Any correlation between them was completely random.  John Cage would (hopefully) be proud...


Michael Wilson (one of my fellow MST students) was incredibly generous to offer some of his time and a few gigabytes of SD card memory to film my entire set (see below).  Thanks Michael!  Some high quality video of one or two excerpts from the evening should be making its way onto the internet soon as well.

cloud veins @ the ccrma musicircus (november 2010) from Christopher Carlson on Vimeo.
Musicircus (1967) simply invites the performers to assemble and play together. The first Musicircus featured multiple performers and groups in a large space who were all to commence and stop playing at two particular time periods, with instructions on when to play individually or in groups within these two periods. The result was a mass superimposition of many different musics on top of one another as determined by chance distribution, producing an event with a specifically theatric feel.


Setting up earlier in the day...



Since I was projecting on a bed sheet in the window, it was possible to
see my video projections from outside the building.  People seemed to
notice this as they approached on foot.





Thanks to Bruno, Sasha, Carr, Jay, and everybody else involved in making this concert happen!  I can't wait for the next one!

Tuesday, November 9, 2010

The Feedbox

This prototype noise machine is my latest creation for Music 250a - Physical Interaction Design for Music. The task was to construct a mini-instrument that is "off the breadboard."



My general goal in this class and in my personal work is to make expressive devices that allow performers to explore new or unconventional sounds. Since I have always loved feedback, I took this lab as an opportunity to test out some ideas for generating and controlling it.

The materials in the instrument include:
  • 4 piezo discs connected to rings of aluminum foil
  • a mini amplifier (radioshack.com/product/index.jsp?productId=2062620), hacked to enable toggle switch control for power
  • an aluminum foil pad connected to the hot lead of the audio input to the speaker
  • the aluminum enclosure, with holes drilled for the wiring from each piezo and to the user's hands

UPDATE:  The Feedbox is featured on the Make Magazine blog today.  Thanks guys!

Thursday, November 4, 2010

Sonification of Tree Ring Widths

This week in Music 220a - Fundamentals of Computer Generated Sound - we had a homework assignment involving the use of time series data sets to manipulate control parameters for synthesized sounds.  The general term for this process is sonification, which essentially boils down to developing an auditory depiction of a bunch of data.

Sonification is a growing research area, particularly because our auditory system is highly capable of processing multiple streams of information.  By developing audio representations of data, it is often possible to obtain additional insights into the complex interactions between measured quantities - insights that would be difficult or impossible to obtain from visual inspection of streams of numbers or graphs.

For the class assignment, I chose to sonify multiple sets of tree ring width data, measured (by others) for 6 trees in California. The size of each data set varies depending on the number of years recorded, with the oldest tree in the group having data from 6000 BC to 1979 AD.  


The sound of each tree is represented by a slowed-down sample of Max Mathews speaking a single word from the corollary to his famous theorem from the 1950s (any perceivable sound can be created using a computer).  The corollary states that "Most of the sounds that one can make randomly with the computer are either uninteresting, horrible, or downright dangerous to one's hearing."  Words used in this sonification exercise (in order of appearance) include "sound," "randomly," "horrible," "make," "downright," and "uninteresting."

The successive tree ring width data points modify the playback rate of the corresponding sample, slightly speeding it up or slowing it down.

This recording starts with the oldest tree and jumps ahead 200 years with each iteration, skipping 200 data points.  The data is compressed in this manner in order to ensure that the final file is a reasonable length (and not 30 or 40 minutes...)  When the starting year of a new tree is passed, the tree joins the chorus. The data sampling interval is decreased at the end of the recording to allow the last several trees to "perform."  These trees have much shorter timelines and would not be heard for more than a few seconds at the initial sampling rate.
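For readers curious what this kind of mapping looks like in code, here is a rough Python sketch of the idea - not the actual patch from the assignment.  The file names, scaling factor, and skip interval are placeholders, and it assumes a single mono sample; each retained data point plays one repetition of the spoken word, nudged faster or slower by the ring width.

import numpy as np
import soundfile as sf  # assumed available for reading/writing WAV files

def sonify(ring_widths, sample, rate_scale=0.2, skip=200):
    """Map tree-ring widths to playback-rate changes of a mono sample.

    Every `skip`-th data point plays one repetition of the spoken word,
    slightly sped up or slowed down according to how wide that ring is
    relative to the average width.
    """
    widths = np.asarray(ring_widths, dtype=float)
    rates = 1.0 + rate_scale * (widths - widths.mean())   # wider ring -> faster
    out = []
    for rate in rates[::skip]:
        n_out = int(len(sample) / rate)                   # length of this repetition
        # crude linear-interpolation resampling, enough to show the idea
        idx = np.linspace(0, len(sample) - 1, n_out)
        out.append(np.interp(idx, np.arange(len(sample)), sample))
    return np.concatenate(out)

# hypothetical usage - these file names are placeholders, not the real data files
widths = np.loadtxt("ring_widths.txt")        # one tree's ring-width series
word, sr = sf.read("spoken_word.wav")         # one slowed-down Mathews word (mono)
sf.write("tree_voice.wav", sonify(widths, word), sr)

In the actual piece the six trees play simultaneously and the sampling interval shrinks near the end so the younger trees can be heard, but the core idea - ring width nudging playback rate - is the same.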

One note - this file sounds best when listened to on headphones.  It is a binaural recording, which allows the simulation of directional sounds (left and right, front and back).  It should sound as if there are four sound sources - two in the front, two in the back.  It is possible to listen on speakers, but various cancellations will occur and much of the directionality of each sound will be lost.  The recording also sounds best on headphones with a decent bass response since the samples consist mostly of low frequencies.

  Concentric - (Binaural. Listen on headphones) by cloudveins

 
Normalized Tree Ring Width Data Sets (in order of appearance) from the Time Series Database:
  • CA535.dat: Methuseleh Walk - GT Basin. -6000 to 1979
  • CA506.dat: White Mountains - Bristlecone Pine. -5431 to 1963
  • CA051.dat: Limber Pine. -42 to 1970
  • CA507.dat: White Mountains - Bristlecone Pine. 800 to 1954
  • CA087.dat: Kaiser Pass - Western Juniper. 1140 to 1981
  • CA531.dat: Onion Valley - Foxtail Pine. 1027 to 1987

Sunday, October 24, 2010

CCRMA Whale Watch/Field Recording

As promised, here are some photos from our field recording trip in Monterey a few weeks ago.  I would also like to post some details on my recent projects and exciting happenings at CCRMA, but unfortunately that post will have to wait until midterms are over in a few days...





My hydrophone (used to record sounds underwater) 
CCRMA provided the raw materials, and each student
  soldered and assembled their own.

 
Testing out the hydrophones.  Unfortunately, the epoxy sealing
my hydrophone was a bit leaky...  Amazingly, however, the
electronics did not fail and I was still able to get a few
recordings (albeit at a very low signal level)


On the way out of Fisherman's Wharf in Monterey



Still dry...


Cannery Row.  The small strip of beach in this photo
is the location where I first touched the Pacific Ocean
a few years ago on a business trip for Metron.


Sasha (left center) brought along ginger candies to
help settle people's stomachs.  Seemed to work
very well for most people!


Recording



"Oh there's an albatross!  Hunter, get your crossbow!"


We didn't see any whales, but we did encounter
a pod of Risso's dolphins.

 






Fishing for sounds...


Classmates hard at work.


Heading home wet, cold, and tired...but not seasick, and that's all
that matters.  Thanks to the faculty at CCRMA for organizing
the trip.  It was a great experience!

Monday, October 11, 2010

101010

I thought I would share this recording that I made this evening to commemorate 101010. Have a listen!  Be prepared for a long, gradually changing noise-scape.  Technical details are below.   


The recording was produced by running a single guitar sample at several playback rates through a custom feedback delay system that I built this weekend using Max/MSP.  This was my first time working with the "delay~" object in MSP, so I went a bit crazy...

The system includes 10 interconnected delays with constantly varying time offsets (generally between 5 and 500,000 ms).  Feedback loops and random pathways are created and destroyed as the sound bounces through the rapidly shifting time terrain.  During the "performance" of the piece, all incoming audio was turned off at approximately 10 minutes.  The second half of the recording consists entirely of the decaying feedback loops until they become inaudible (don't be fooled by the waveform shown above - the audio does in fact reach a lower level than what is depicted by SoundCloud).  No gain, attenuation, or fades were applied to the resulting waveform.

So...
10 delay lines  + 10 minutes of processing + 10 minutes of decay =  101010.
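For anyone curious what one of these stages boils down to, here is a rough offline sketch in Python of a single sinusoidally modulated feedback delay - not the Max/MSP patch itself, and with small, made-up constants (the real delay times reach 500,000 ms, which would be unwieldy here).

import numpy as np

def modulated_feedback_delay(x, sr, min_ms=5.0, max_ms=250.0,
                             mod_hz=0.1, feedback=0.6, tail_s=10.0):
    """One delay stage, rendered offline: the delay time sweeps between
    min_ms and max_ms following a slow sinusoid, and the delayed signal
    is fed back into the line so it keeps decaying after the input stops.
    (Pure-Python sample loop: slow, but easy to read.)
    """
    n_out = len(x) + int(tail_s * sr)            # leave room for the decaying tail
    max_delay = int(max_ms / 1000.0 * sr) + 1
    buf = np.zeros(max_delay)                    # circular delay buffer
    y = np.zeros(n_out)
    write = 0
    for n in range(n_out):
        # delay time (in samples) swept by a scaled sinusoid
        sweep = 0.5 * (np.sin(2 * np.pi * mod_hz * n / sr) + 1.0)
        d = int((min_ms + sweep * (max_ms - min_ms)) / 1000.0 * sr)
        read = (write - d) % max_delay
        delayed = buf[read]
        dry = x[n] if n < len(x) else 0.0        # input is cut off; only the tail remains
        buf[write] = dry + feedback * delayed    # feedback path
        y[n] = delayed
        write = (write + 1) % max_delay
    return y

A real-time version would interpolate the read position so the sweeping delay doesn't click, and the actual patch chains ten such stages and cross-feeds their non-attenuated outputs, which is where all the interesting chaos comes from.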


The first draft of the patch really embodies the experimental spirit of Max/MSP...



Version 2:  more delay stages, encapsulated functionality, improved access to controls, and a preset selector



The guts of a single delay stage.  Delay time is varied by a
scaled sinusoid. Min/max time and oscillation rate are controlled
by the user.  Two signals are output - an attenuated copy is sent
to the audio card. A non-attenuated version may be fed to other
delay stages.



This is the audio routing subpatch.  The connections and objects on the
left route the attenuated signals to the Left and Right audio outputs. 
The lines on the right connect the non-attenuated outputs of the various
delay stages to each other.  This routing configuration and the time/
oscillation settings for each delay stage are responsible for the overall
sound of the patch.


Check back soon for some photos from the CCRMA whale recording trip and more details about my recent projects!

Sunday, October 3, 2010

Weeks 1 & 2

It is hard to believe that nearly two weeks have passed since my first day at CCRMA!  Quite a lot has happened, and I will try to cover the highlights in this post.  In short, I am incredibly happy to be part of this amazing program and to be surrounded by so many like-minded individuals.  I've been extremely busy, but in all the right ways. 

This quarter, I am enrolled in five classes:
  • Music 250a:  Physical Interaction Design for Music
  • Music 220a:  Fundamentals of Computer Generated Sound
  • Music 192a:  Foundations of Sound Recording Technology
  • Music 320:  Introduction to Digital Audio Signal Processing
  • Music 201:  CCRMA Colloquium
    Music 250a - Physical Interaction Design for Music is my first class every Monday morning.  If you're curious about the meaning of "physical interaction design," the course website has a great overview: 
    In recent years, technologies for synthesizing, processing and controlling sound, as well as those for embedded computing, sensing and inter-device communication have become independently mature. This course explores how we can physically interact with electronic sounds in real time. A series of exercises introduces sensors, circuits, microcontrollers, communication and sound synthesis. We discuss critically what the merging of these technologies means for music and art. Along with new technologies, what new music practices or art forms may emerge?
    In the broader sense, this course deals with interaction design: What happens when human behaviours meet those of machines? How do the devices we use determine the style of interaction? How do we design for the limitations of human performance and the affordances of machines?

    The course initially consists of labs designed to expose students to electronics/sound programming and homework assignments that encourage exploration of higher-level design questions like "what is an expressive action?" and "what modes of feedback are important for effective interaction?"  Halfway through the quarter, we will draw upon our new insights and technical skills to propose and execute a final design project.  Specifically, the goal is to develop and build a new, sensor-enabled musical instrument that opens up interesting and unexplored means of expression.

    Our first assignment was to think about the distinction between handles (analog input) and buttons (discrete input) and sketch five examples of each.  A few of my favorites from my set are shown below. 

    Play button on my Akai reel-to-reel.  The
    interesting thing about this button is that it
    can only be controlled in the "on" direction.
    Only the stop button can release it...


    Knob on the Line 6 POD guitar amp simulator (handle).
    Provides both tactile and visual feedback.


    Power button on Mackie HR824 studio monitors.
    Beautifully designed.  I love these buttons. 


    Pitch wheel on a synthesizer (handle).  Provides some
    force feedback via the hidden spring mechanism (abstracted
    as a single spring in this drawing). 


    We learned some great sketching techniques in class,
    which were very helpful in completing the assignment -
    especially since the last time I did any detailed sketching
    on paper was probably 5-6 years ago as an undergrad.


    Guitar pick as a handle














    Our first lab exercise involved getting familiar with Pure Data (Pd), a free, open-source counterpart to Max/MSP.  Both tools are "graphical" programming languages, so-called because programs are constructed by linking or "patching" together objects on the screen.  Interesting side note - the namesake of the Max language is Max Mathews, "father of computer music" and Professor Emeritus at CCRMA.  I have spent a decent amount of time with Max/MSP over the past year and a half, so it was both fun and frustrating to make the switch to Pd.  Once I figured out some of the equivalent objects and tools, I had a lot of fun building my first Pd instrument (pictured below).


    My Pd "patch" for 250a Lab 1.  It is a simple sound
    sequencer/glitcher with reverb size and width control
    using mouse x/y coordinates.


    My second class, Music 220a - Fundamentals of Computer Generated Sound, is taught by the director of CCRMA - Dr. Chris Chafe.  This class is more of an overview of the history and practice of computer music with a technical component involving the ChucK programming language.  ChucK was developed by one of our faculty (Ge Wang) when he was a PhD student at Princeton and is a very neat tool for "on-the-fly audio programming."  Some musicians use it for "live coding," in which the performer literally codes up a piece on stage, projecting his/her laptop screen so the audience can see the commands as they are entered.  ChucK is especially good at handling concurrent processes, which is extremely important for real-time applications like live coding.  I'm sure I'll have more to say on this later once I have more experience "ChucKing."

    This week for 220a we are building hydrophones (underwater microphones) and taking a field trip off the coast of Monterey to find and record whales.   How cool is that?!?!?    I will definitely update the blog with pictures and any recordings I obtain once we take that trip!

    The rest of my classes - 320 (Intro to Digital Audio Signal Processing), 192a (Foundations of Sound Recording Tech), and 201 (Colloquium) - have been equally interesting.  320 has been mostly review up until now, but will soon be getting much more complicated.  By the end of the quarter I should know the basics of filter design and spectrum analysis.  In 192a we've covered the principles of sound, basic audio electronics, and psychoacoustics (how we perceive sound).  This week we should be getting into various microphone types and will be spending some time in one of the CCRMA studios.

    At last week's colloquium,  each of the Master's students (myself included) spoke for 5 minutes about their previous work and current interests.  It was fantastic to get a better idea of the interests of my classmates and to see the diversity of work that they have produced.   I spoke very briefly about my time at Metron and shared two recordings - one excerpt of a controlled feedback experiment and the first few minutes of "Pollen Chair,"  a piece I composed for a modern dance in NYC back in 2006  (see my portfolio for the full recordings).  I was very happy to receive some positive comments from Dr. Chafe regarding the controlled feedback piece.  It seems like we have very similar interests, so I am hoping to learn more about his research in the coming weeks!

    This Tuesday, amid the chaos of the first round of assignments, I managed to free up enough time to make a trip up to San Francisco to see one of my musical heroes perform a set at the Swedish American Hall.  Fennesz is an Austrian guitarist/composer who generates enormous/overpowering soundscapes using his guitar, Max/MSP, and a suite of other software.  I have been following him for about 5 years but had never seen him live.  This tour is the first time he has been in the States in quite some time.  Two of my classmates came along for the show, and it was an incredible evening.  One of the best parts of the night was simply having the opportunity to discuss the music I love with people who are as crazy about it as I am.  I have missed that type of interaction for a long time, so it is wonderful to finally experience it again!


    Fennesz melting the room in San Francisco.

    If you are interested in Fennesz but don't know where to start, this video provides a nice intro.  The albums Endless Summer, Venice, and Black Sea are great entry points as well.






    Finally, this Friday night the music department had its annual start-of-the-year BBQ on the CCRMA lawn.  Here are a few pictures from the event.
     
    This is the amazing view out the back of the CCRMA building.


    Free food is always a plus for poor grad students!!!!
    Thanks for reading!   Please check back soon for more updates!

    Sunday, September 19, 2010

    A New Beginning

    Welcome to the first post of Modulation Index!  This blog will serve as an archive of musical ideas/creative projects/works in progress as well as a document of my experiences with music, tech, and life as a grad student at Stanford.

    Tomorrow morning (Sept 20), I will begin a Master's program in Music, Science, and Technology at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA - pronounced "Karma").  Over the course of the next nine months, I will be learning from experts in the fields of digital audio signal processing, human-computer interface theory and design, and audio software development.  I will be working both individually and with others to build new interfaces for musical expression, learn the theory and practice of digital signal processing, compose and record exciting new music, and write new audio software.  Needless to say, this is going to be a busy, exciting, and, most importantly, fulfilling year!  I can't wait to get started!


    Last Thursday night, the department held its annual "Transitions" concert, at which many of the faculty and existing students performed their compositions and showcased interactive sound/art installations. Highlights of the evening included a performance by Max Mathews (pictured below), a piece by Luke Dahl, Jorge Herrera, and Carr Wilkerson that turned Twitter feed data with #CCRMA hashtags into sound objects within the composition, and several other pieces that made full use of the 8-channel/360-degree sound system.  The evening provided a great chance to meet some of my new classmates and get a taste of the type of work people have produced at CCRMA.

    Max Mathews, "Father of Computer Music," performing at
    the CCRMA "Transitions" Concert Thursday evening.


    The Knoll (CCRMA building) at night


    As classes progress over the coming weeks, I will have plenty of new things to share on this blog, so please check back soon!   For now - here are a few recent sketches that I made this summer in between packing and unpacking boxes!


      Rink Abrasion by cloudveins


      Radial Sheath by cloudveins


      Stuck Call Button by cloudveins