[back to classes]
here i track my
electronic music performance / NIME
this is the end. i'm posting a bunch of things that should've been posted a little while ago and should document where i am.
NIME multi-touch photoset on flickr
multi-touch block diagram (hardware and software)
ending thoughts on my manifesto
i like my manifesto. i feel like it's better articulated than anything i've argued in class. after seeing people's projects in class, especially will's (the glorious soundlamp), i'd like to add a clause that i implied but wasn't specific about:
performances in this "genre" are made up of many discrete pieces: sound/music, video, physical spectacle, overarching concept, etc. to make the most impact, connect these elements as much as possible - make the sound that should be there when the lights go wild.
we said this, for the most part, in class, but it's so important that i couldn't let it go.
future of performance?
we were asked, "What is the future of performance?" there is no one future. at the moment, we have more performance power than we know what to do with. the future will be molding this into a multitude of things - audience-involving performances, installations, traditional concerts, etc. this isn't a radical change, either; the history of performance has been a cycle of technological development and refinement into art, in the various spaces of performance. we're lucky to be at the beginning of a new technological cycle.
my project, concisely
i am doing research in multi-touch interaction. my first step is to build a multi-touch screen based on FTIR (frustrated total internal reflection), something that has been done by many people already, and has already made appearances at NIME. this will be finished soon, and i have already written one musical application with it in mind (a ball-bouncing 3D environment, created for installation in the NY Hall of Science museum, intended to (a) be fun and make great sound, and (b) teach kids a little bit about physics). because of the physical nature of the screen, it lends itself more to an installation scenario than a performance one. i am embracing that, and want to make applications that are simple and immediately accessible to non-musicians.
i also plan to take new directions with the technology. some possibilities:
combined FTIR and IR-bounce technology, allowing for extremely precise touching (sub-pixel accuracy) and hovering hand detection (imagine drawing something onscreen, and then waving your hand over it to erase your mistakes)
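to make the touch-vs-hover idea concrete, here's a minimal sketch (entirely hypothetical - the thresholds and the toy frame are made up, not from my actual tracking code) of how a single IR camera frame could be split into FTIR touch blobs and IR-bounce hover regions, assuming contact points show up much brighter than a hovering hand:

```python
import numpy as np

# Hypothetical sketch: classifying pixels from a single IR camera frame.
# Assumption: FTIR contact points appear much brighter than the diffuse
# IR-bounce reflection of a hovering hand. Threshold values are made up.
TOUCH_THRESHOLD = 200   # bright blobs: finger pressing the acrylic (FTIR)
HOVER_THRESHOLD = 80    # dimmer glow: hand hovering above (IR bounce)

def classify_frame(frame):
    """Return boolean masks for touching and hovering regions."""
    touch = frame >= TOUCH_THRESHOLD
    hover = (frame >= HOVER_THRESHOLD) & ~touch
    return touch, hover

# toy 8-bit "frame": one bright touch pixel, one dim hover pixel
frame = np.array([[10, 90], [220, 30]], dtype=np.uint8)
touch, hover = classify_frame(frame)
print(touch.sum(), hover.sum())  # → 1 1 (one touch pixel, one hover pixel)
```

in a real pipeline you'd run blob detection and tracking on each mask rather than counting raw pixels, but the two-threshold split is the core of the combined-mode idea.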
yesterday, a co-conspirator and i attempted to build a frame for the acrylic. long story short: the frame (made of wood) cracked as we were building it, because we used screws that were too big; the wood we carefully mounted 17 IR LEDs in is a bit warped (which wouldn't be an issue had the frame not cracked); and we scratched the nice $100 piece of acrylic. dammit.
on the positive side, the wiring for the LEDs went well, and we figure we can cut the scratched part of the acrylic off and have a smaller screen (and buy another $100 damn piece of plexi).
we have a renewed interest in using aluminum to build the frame. anybody have a spare drafting table we can hack apart?
the ever-popular jeff han video:
more to come...
we went over what the class is going to be. since it's so short (only five classes left now!) we're not going to be doing much technical stuff in class, it seems; more discussion and examining other people's work.
my least-favorite performance
this isn't my favorite performance, but it sticks out in my mind as a particularly bad one. i saw luke dubois perform a few years ago (probably winter '05?) at the tank's space on 27th st. (still there? they're always moving around). i had little introduction to what he was doing, so i'll present my experience as i understood it at the time.
i walked in, and people were mostly seated. it was quiet... not as quiet as, say, the stone, but still quiet - definitely gallery-style rather than club-style. my friend and i took a seat behind a pillar, from which we could (somehow) still see where luke was going to be sitting, and where the projector was idling on the wall. he simply had a laptop and projector (perhaps a wacom tablet too..? it wasn't memorable if he did use it). we timed our arrival well, because he walked up onto stage (sans fanfare) a few minutes after we sat down.
in his inimitable style, luke awkwardly nodded at the crowd a bit, and started playing. the projector that had been black before now showed a purple blobby shape, and the room filled with a very buzzy, glitchy sound. it didn't change much in pitch, as far as i remember - there was little use of melody. it was very harmonically rich, and changing all the time. after a minute it became clear that the sound and video were closely coupled (luke didn't say anything about how it was done). every ~8 minutes, a "song change" (ha) would happen - the shape would drastically change color and shape, the sound would drop out for a second, and come back sounding "the same but different," as in a different fundamental, and perhaps some other small changes, none of which were memorable.
this went on for about 50 minutes, until it seemed like another song change was about to happen - but luke closed the lid of his laptop instead. people clapped, he walked off, and an MC said something to the effect of "yay luke! somebody else is next." i was thoroughly bored, and had something else to attend to, so i left.
notice how distant and uncertain this recollection sounds? that is exactly how it feels in my memory. i was bored, uncertain of what was going on, and even though there was technically plenty of stimulation (video and audio) there wasn't much on which to focus my attention.
i suppose it's time for a disclaimer: i love laptop music. i do it myself (my duo freedom and responsibility plays fairly often, just 2 guys with laptops making sound, and projection once in a while. no alternative controllers, no fireworks, no gimmicks... our max patches are SUPER-ugly too!). i also love simple droney music and often set up my synths to do a complex drone that i listen to for hours without changing.
that said, luke's performance was lacking in so many ways. i've since learned that he is creating NURBS with direct control and controlled randomness, and then he scans across the surface of these shapes and uses them to directly make audio. in fact, his reasoning (i learned these things from a guest lecture by him and talking randomly a few times) for this concept is that he's sick of the disparity between the physical shape of the laptop and the sound being produced. cellos, on the other hand, have their shapes dictated by necessity, and their shape has a direct effect on the sound. this video work is a response to that.
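as far as i can tell (and this is only my loose guess at the idea, not luke's actual patch), "scanning a surface to make audio" can be as simple as treating the surface as a stack of single-cycle waveforms and reading one row per cycle, so that the shape of the surface directly becomes the timbre:

```python
import numpy as np

# Loose sketch of surface-scanning synthesis (my reconstruction, not
# Luke's code): treat a 2D surface (heights in roughly [-1, 1]) as a
# stack of single-cycle waveforms, and scan across rows over time so
# the timbre morphs as the scan moves over the shape.
rows, cols = 64, 512               # surface resolution (arbitrary)
x = np.linspace(0, 2 * np.pi, cols, endpoint=False)
# a surface whose rows shift from a pure sine toward something buzzier
surface = np.array([np.sin(x) + 0.3 * (r / rows) * np.sin(5 * x)
                    for r in range(rows)])

def scan(surface, n_cycles):
    """Read one row per cycle, wrapping around the surface."""
    out = [surface[i % surface.shape[0]] for i in range(n_cycles)]
    return np.concatenate(out)

audio = scan(surface, 128)          # 128 cycles of slowly morphing tone
print(audio.shape)                  # → (65536,)
```

deforming the surface (by hand, or by controlled randomness) would then change the sound directly, which fits his stated goal of tying the visible shape to the audio the way a cello's body is tied to its tone.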
the problem, though, is that he didn't tell the audience that! i was wondering what he was doing the whole time - was he changing numbers on a synthesis algorithm, and running a jitter patch that visualized the sound? or the other way around? or a combination? or were both the video and audio controlled by him?
because i didn't know these things, and the sound offered so little to focus on, i just got angrier as the performance went on. it's something i often worry about when thinking about my own performances with freedom and responsibility: do we make sense? are we boring the hell out of people? should we talk about what we do (we don't)? in the end, though, i don't worry much, because we don't base our music on a concept; we just improvise and do whatever sounds great. i think a lot of artists in this vein could stand to lose some BS baggage and just make great sounds.
(this may reiterate some points i made above; sorry, i think they're very important)
the Most Important Point; the Glorious Truth: when playing music, make things sound good.
concepts are great and have their place. please use them, but make the music enjoyable beyond "wow, they're mapping rain patterns in south america to the speed of a random number generator." this applies to people playing gigs and acting like a "normal" musical act, which leads into the next point...
when playing music, don't do "avant-" things just for the sake of spectacle.
let your music and art stand on its own. do as much as you need to to make the best music you can. if you need wii remotes, fireworks, and dancers - great! just make sure you need them and aren't doing it to make up for a fundamental deficiency.
when playing music, offer as much depth as you can.
each member of the audience knows how much attention he feels like giving to your performance. let them have a conversation and enjoy your melodies, or let them sit there stoned and zoned out of the world, hanging on every bleep and bloop, hearing micromelodies where they may or may not exist.
that's it. very simple. these (honestly) are the three things i keep in mind while programming max patches, recording samples, rehearsing, and playing on stage. i don't worry about presentation (although i "might could"); i let the sound stand on its own.
(and boy, do i sound conservative. weird)
Synthesis and Control on Large Scale Multi-Touch Sensing Displays
by phil davidson and jeff han
the most important thing this paper touches upon is haptics. touch screens are amazingly versatile input devices, but they sorely lack physical feedback. "tables" approach a solution by requiring interaction with pucks and blocks on top of the touch screen surface, but they lack tension - imagine an interface imitating a set of knobs: the pucks offer none of the physical resistance a real knob would.
the paper, though, seems to decide that virtual pucks can compare to physical ones on a table. while, of course, on-screen pucks can be infinitely more versatile than a physical puck (they can zoom, slide around algorithmically, reshape, etc.), they don't serve the same purpose as a physical item.
one thing I found interesting is their performance claims. they say they get 50hz with a 2mm accuracy on a 36x27 inch screen. this really means they can capture and process about 50 frames per second at 640x480. my software has almost the same performance, in the alpha stage (read: little optimization). i feel good.
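a quick sanity check on those numbers (my arithmetic, not the paper's): 640 pixels across a 36-inch screen works out to about 1.4mm per pixel, so their 2mm accuracy claim is roughly pixel-level tracking.

```python
# Sanity check on the paper's numbers (my arithmetic, not theirs):
# a 36 x 27 inch screen captured at 640 x 480 pixels.
MM_PER_INCH = 25.4
width_mm = 36 * MM_PER_INCH    # 914.4 mm across
mm_per_pixel = width_mm / 640  # ≈ 1.43 mm per pixel
print(round(mm_per_pixel, 2))  # → 1.43 (mm per pixel)
```

so 2mm of accuracy is about 1.4 pixels - i.e. they're resolving touches to within a pixel or so of the raw camera image, which matches what i'm seeing in my own software.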
Mary had a little scoreTable* or the reacTable* goes melodic
by sergi jorda and marcos alonso
this paper's most interesting discussion isn't really about touch screens or tables, but about the use of pitch in new electronic instruments. the year before, at nime, perry cook "stated that when building a new controller, one of the first things he would try to do is play the simplest song such as 'Mary had a little lamb.'" it makes one wonder - why should an instrument need to be able to do this?
i'm building my second prototype multi-touch screen. the first used the IR-bounce method, which is quite low in resolution. this version uses FTIR (frustrated total internal reflection), which is absolutely amazing technology. (version 3 will be this FTIR display built into a nice case, hopefully mobile, and multi-mode. more later)
what i've been working on
i've gotten a lot done over the weekend, hardware- and software-wise.
purchased acrylic and sandpaper, got IR LEDs from digi-key
sanded one side of acrylic. i wonder if i did a good enough job - some of the pictures i've seen online look a lot clearer.
made a test strip of 7 LEDs after planning out a grid of LEDs. did a decent amount of voltage pondering - eventually deciding on (for now) 35 LEDs, spaced 1 inch apart, in 5 strips of 7 LEDs wired in parallel, each strip with a 10 ohm resistor, and the strips wired together in series. this lets me use a 12v wall-wart (ha, try and find one that actually puts out 12v; the one i'm using is ~13) with no LM78xx power conditioner.
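for the curious, the arithmetic behind that decision looks roughly like this (assuming a ~1.5v forward drop per IR LED - my guess for these parts - and the ~13v the wall-wart actually puts out):

```python
# Back-of-envelope check on the LED wiring. Assumptions (mine, not
# measured): ~1.5 V forward drop per IR LED; wall-wart really puts
# out ~13 V despite its 12 V label.
V_SUPPLY = 13.0     # measured wall-wart voltage
V_FORWARD = 1.5     # assumed IR LED forward voltage
N_STRIPS = 5        # strips wired in series
R_PER_STRIP = 10.0  # ohms, one resistor per strip
LEDS_PER_STRIP = 7  # LEDs in parallel within each strip

# each strip drops V_FORWARD (the parallel LEDs share it) plus its
# resistor; what's left of the supply falls across the 5 resistors
v_left = V_SUPPLY - N_STRIPS * V_FORWARD        # 5.5 V across resistors
i_total = v_left / (N_STRIPS * R_PER_STRIP)     # total chain current
i_per_led = i_total / LEDS_PER_STRIP            # current per LED
print(round(i_total * 1000), round(i_per_led * 1000, 1))  # → 110 15.7
```

~16mA per LED is a comfortable operating point for most IR LEDs, which is why the bare wall-wart works without a regulator.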
i'm frustrated by my complete inability to remember (a) which pin is which on a 7805 and (b) whether positive is the longer or shorter... pin... on an LED. (i think i've got it now... but we'll see in a few months)
put the whole thing up (using advanced components like keyboard stands, broken wood, socks, duct/electrical/masking/scotch/etc tape, spit, and soul) and pointed the camera at it
played around with my older windoze-based code (windows is used because of the Point Grey Research camera's nice windows SDK)
did a LOT of work getting software working on OS X. i'm kind-of wishing I had a superfast linux box... all my linux stuff is too old to do vision work seriously. a lot of the libraries and example code i'm using are linux-based (some of it unfortunately tied to very non-osx stuff like gnome), but i'm in love with OS X... so I've been googling, porting, watching giant 'make' compile trees scroll by. i now have
compiled and running (at least sort-of) under os x, and i've been messing around with a slew of other things (coriander, libraw1394, IOXperts drivers, quicktime, cocoa, various OpenCV input wrappers, oh my).
times like these make me wonder - why os x? (or rather, why not linux? windows is out: it crashes WAY too much and way too hard (like - sudden console text on the screen and REBOOT!).) the real reason is that i have macs at my disposal, they're fast machines, and i'm very comfortable developing on them (it's my day job). once we start thinking about commercializing these screens with applications, we'll almost certainly use a really tight install of linux (all the code i'm working on now is very cross-platform-able) with carefully chosen hardware. till then, macbook pros will have to do.