Most Swiss people are vampires. You can see it by the teeth marks they leave in their cheese. Sinking his fangs deep into the visualisation of music and the live performance of interrelated audio and video is the krazy kat: Jasch, who relayed these insights from his northern-hemisphere rooftop.
Dyad is my duo collaboration with video artist Johnny Dekam. The central theme is improvisation with visual and sonic material. We both work within the same paradigm/software: max/msp/nato. This shared syntax for sound and visuals makes communication possible on several levels. Much of the collaboration deals with finding bridges between hearing and seeing. We’ve developed a set of rules describing events and processes, and the communication about them happens over a network link. We’ve been focusing on live shows, where we roughly know beforehand where we want to go with the material but improvise all of the treatments and mixing in real time. The big challenge is that there’s no guarantee of hitting the right combination of materials at the right moment. But that keeps the thrill in the performances and lets us develop new things from show to show. In a way Dyad is like an improv combo, except that the instruments are silicon-based and the expression spans visual as well as acoustic phenomena.
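Dyad’s actual rule set lives inside Max/MSP/nato patches, but the basic idea — two performers’ machines exchanging plain event descriptions over a network link — can be sketched in a few lines of Python. The message format, the `/event/` prefix and the function names below are invented for illustration; they are not Dyad’s real protocol.

```python
import socket

# Hypothetical sketch: Dyad's real event protocol lives in Max/MSP/nato.
# The "/event/name value" text format here is invented for illustration.

def send_event(sock, peer, name, value):
    """Describe an event (e.g. a treatment change) as a plain UDP message."""
    msg = f"/event/{name} {value}".encode("utf-8")
    sock.sendto(msg, peer)

def parse_event(data):
    """Decode an incoming event description back into (name, value)."""
    text = data.decode("utf-8")
    path, _, value = text.partition(" ")
    return path.removeprefix("/event/"), float(value)

if __name__ == "__main__":
    # Both sockets run on one machine here; in performance the receiver
    # would sit on the other performer's laptop.
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    recv.settimeout(1.0)
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_event(send, recv.getsockname(), "filter-cutoff", 0.75)
    data, _ = recv.recvfrom(1024)
    print(parse_event(data))  # prints ('filter-cutoff', 0.75)
```

Because both performers speak the same message vocabulary, a sound event on one machine can trigger a matching visual treatment on the other without either side knowing the other’s internal patch structure.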
How do you approach creating for audiovisual performance?
Along two axes: structural and emotional. The structural level deals with form and the shaping of the flow of sounds and images in real time. The development of personal software tools is an important part of that; another is using the appropriate hardware to interface with the processes. The emotional level is about atmosphere. All sonic material can project a context and evoke feelings. It’s essential to have a large palette of such material and treatments at my disposal and to know it intimately. To me that’s the key to improvising with electronic media: basically feeling the material and being able to apply it expressively. It’s still very much like an instrument that wants to be practised and demands a considerable level of dedication. The preparation effort oscillates between creating and refining the (soft and hard) tools and collecting and organizing new materials and processes to work with. It sounds paradoxical that the time spent programming and researching tools and materials is directly linked to the level of intuitive control, but it’s the only way to reach a state where I’m not thinking about the processes and control. The real test can only come in performance, when all elements collide and hopefully merge.
What interesting issues come up with AV performance and composition?
The crucial issue is time. Composition is thinking about structure and sound by organizing it in time, working in a detached sphere outside of time where decisions can be examined and revoked. AV performances happen in actual, ‘forward-running’ time. The experience of composition does help in perceiving structure and temporal evolution in the moment, but it doesn’t tell you how to act and react to the situation. Intuition is the key — the non-linear access, so to speak, to past experiences without the need for analytical thought. Improvisation is tapping into the experience base formed through perceiving, performing and composing. AV performances are unique in that the shaping of visual and sonic structures happens simultaneously, with a high degree of flexibility. Composition for visual and sonic media has a lot in common across the two, but there are also rules of perception based on disparate physiological phenomena: the ear, for example, reacts differently to dense layering than the eye does. Improvising in an open and undefined visual and sonic context demands a high degree of awareness from both audience and performer.
How do you try to transcend the limitations of laptop-based performance?
By moving away from it. By finding ways to give meaning back to gesture and physical presence. Using sensors and controllers away from the typewriter interface helps play the machine like an instrument. Projecting presence with physical action rather than thought and click.
What was your role in the development of the AV software, VDMX?
Small, actually marginal, since jdk did all the coding himself. My visible contribution to VDMX2 was a small interface hack. Conceptually, of course, a lot of the LFO/VFO ideas and routing architecture were developed in parallel in my sound tools and VDMX. Many ideas developed in common between me and jdk, stemming from performance experiences, were filtered into and implemented in VDMX.
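The LFO-and-routing idea mentioned here — a slow periodic function of time whose output can be patched onto any sound or video parameter — can be sketched in a few lines. This is an illustrative toy, not VDMX’s actual architecture; the shapes, ranges and function names are assumptions.

```python
import math

# Hypothetical sketch of the LFO idea: a low-frequency oscillator is a slow
# periodic function of time, normalized to [0, 1] so its output can be
# routed onto any parameter range. Shapes and names are invented here.

def lfo(t, freq=0.25, shape="sine"):
    """Return a control value in [0, 1] for time t (seconds)."""
    phase = (t * freq) % 1.0
    if shape == "sine":
        return 0.5 + 0.5 * math.sin(2 * math.pi * phase)
    if shape == "saw":
        return phase
    if shape == "square":
        return 1.0 if phase < 0.5 else 0.0
    raise ValueError(f"unknown shape: {shape}")

def route(value, lo, hi):
    """Scale a normalized [0, 1] control value onto a parameter range."""
    return lo + value * (hi - lo)

# e.g. sweep a video layer's opacity between 0.2 and 0.9 over time:
opacity = route(lfo(t=1.0, freq=0.25), 0.2, 0.9)
```

The point of the shared architecture is the `route` step: the same oscillator can drive a filter cutoff in a sound tool or an opacity fader in a video tool, which is what makes the parallel development across the two domains natural.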
What software interests you at the moment & why?
Max/msp with its growing set of visual extensions: nato, jitter, softVNS and my own. I’m also following with interest the development of Pure Data, a cousin of max/msp that runs on Linux, Windows and OS X.
What software would you like to develop?
More advanced and powerful 3D-sound and 3D-visual improv-tools, auto-generative and autonomous sound and visual architectures.
Current/future projects?
Working on ‘codespace’ – a system for abstract 3D graphics and sound in real time. I’ve been doing shows and developing software pieces with abstract 3D drawing and sound control, integrating ideas from generative arts and minimalist electronic music with the real-time aspects of improvised performance (www.kat.ch/jasch/codespace.html). I’m also laying down tracks for a CD release of my material.
Future projects include a feature-length DVD with Dyad, the development of a full surround-projection system and an immersive sound/visual installation, and a lot of hardware development in the field of wearable interfaces for gestural expression. http://www.kat.ch/jasch