Experimental musicians, motion-graphic designers and game-heads will love the possibilities suggested by Fijuu, an impressive 3D audiovisual performance engine and interface put together by Pix & Julian Oliver, Aussie new-media ex-pats slumming it in Europe. Fijuu allows dynamic manipulation of 3D instruments using PlayStation 2-style gamepads to make improvised music – which, combined with the tightly synchronised motion graphics and spatial explorations, forms a highly compelling and unusual audiovisual experience. At the bleeping heart of Fijuu? The open-source game engine ‘Nebula’, running on Linux, and a desire for live audiovisual improvisation.
“During performance, you quickly start to think in terms of the sound and the image and not the interface in between,” explains Julian, who also helps maintain the game portal selectparks.net. “Game interfaces are designed to be somewhat transparent, and when you are very involved in a game, you don’t think of yourself as moving a stick or pressing a button on a gamepad, but in terms of the game you are playing – running, jumping, etc.” Building a live audiovisual composition and musical tool has brought unusual challenges, but also delivers unusual rewards.
“There is little in the way of ‘visual meaning’ in fijuu, more hedonistic splashes of happy coincidences that produce sensorially indiscrete experiences. We rely quite heavily on the graphics to motivate improvisation. When attention is focussed on one aspect, you will often have your attention snared by an unintentional discovery in the other.”
Fijuu will eventually include a non-linear beat pattern sequencer, granular synthesis tools and a graphical filterbank, and be released as a live CD Linux project, meaning players will be able to boot up a PC and play Fijuu with a PS2-style gamepad (without installing anything). Until then, a very impressive 50MB movie at http://fijuu.com demonstrates some of Fijuu’s wild potential.
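The article doesn’t show Fijuu’s internals, but as a rough illustration of what “playing with a PS2-style gamepad on Linux” involves at the lowest level, here is a minimal sketch using the Linux kernel’s classic joystick interface. The device path and the parsing helper are assumptions for illustration, not Fijuu’s actual code:

```python
import struct

# Linux joystick events arrive from /dev/input/js0 as 8-byte records:
# u32 timestamp (ms), s16 value, u8 event type, u8 axis/button number.
JS_EVENT_FORMAT = "IhBB"
JS_EVENT_BUTTON = 0x01  # button press/release
JS_EVENT_AXIS = 0x02    # stick movement

def parse_js_event(data: bytes) -> dict:
    """Decode one raw 8-byte joystick event into a small dict."""
    time_ms, value, ev_type, number = struct.unpack(JS_EVENT_FORMAT, data)
    kind = "button" if ev_type & JS_EVENT_BUTTON else "axis"
    return {"time_ms": time_ms, "kind": kind, "number": number, "value": value}

# In a live setting one would read from the device node, e.g.:
# with open("/dev/input/js0", "rb") as dev:
#     while True:
#         event = parse_js_event(dev.read(struct.calcsize(JS_EVENT_FORMAT)))
#         # ...map the event onto an instrument parameter...
```

An engine like Fijuu would presumably map each decoded axis or button onto a parameter of a 3D instrument; how that mapping is designed is exactly the “which buttons should do what” question Julian raises later in the interview.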
>Am curious about how you both might play together live?
We’ve only ever played together twice, as one half is in Berlin, the other in Spain/Denmark. Other appearances of fijuu have been in an installation format for two players. We haven’t thought about the duet aspect of it enough, which is why playing as a duet is a total improvisation (and sometimes a lot of work).
Playing solo, on the other hand, makes it very hard to use the sequencer, just because it needs to be set up. Using the other ‘instruments’ in the scene means one player can perform while the other works on rigging the sequencer.
>When performing – how do you think the audiences and your own concentration is divided – between audio / video and the software?
At first there is definitely a split of attention, but after some exposure attention seems to individualise across both core elements. We’ve seen this more in an installation setting than during performance (of course).
>Where are you visually – on a sliding scale from computer game to video art / cinema?
The scale itself slides so often that we don’t consider Fijuu to occupy a position on it.
>What can traditional film-makers learn from game culture and game engines?
That regardless of what is told, and who tells it, the audience will always produce a third work. That the audience of a game is an active agent in the event continuum of a work empowers this secondary production.
>And from open-source?
It’s faster, easier, cheaper and you make more friends.
>Some unexpected challenges / rewards with *live_creation* of visuals / motion graphics / visual meaning?
A composition can be described within sets of mathematically intervallic relationships, but only when considered from the perspective of time. Time is, however, not the only continuum in which sound events can be described. As fijuu is currently framed within three dimensions, the question of which dimensions should be delegated to aspects of signal production, and which to time, becomes tricky. Pretty much the same scale of problem applies to the gamepad: which buttons should do what, and when?
Rewards include surprising synchronicities, when a signal looks/sounds like it is the result of a visualised action of play but in fact isn’t. How much of this unpredictability and symptomatic effect should be built into the project, at the expense of reliable control paradigms, is a tricky design question that, for the present, is answered intuitively if at all.
>A likely release date as ‘a live CD Linux project’?
We’re working on it 😉 No promises at this stage, as there are a few questions surrounding performance and drivers.
>What are the technical challenges that need to be overcome before this happens?
The main technical challenge is correctly detecting and configuring graphics hardware capable of running the software. This is a tricky task to perform manually, and even harder to automate. Live CDs are all about automatically detecting and configuring the hardware of the computer they are running on, and only a handful of relatively immature live CD projects are in the initial stages of addressing this issue.
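As a toy illustration of the first step of that detection problem – identifying which graphics card is present – here is a minimal sketch that scans `lspci` output for VGA controllers. This is an assumed, simplified approach for illustration only, not the mechanism any particular live CD project uses:

```python
import re

# lspci lists each PCI device on one line; graphics cards appear as
# "VGA compatible controller: <vendor and model>".
VGA_RE = re.compile(r"VGA compatible controller: (.+)")

def find_gpus(lspci_output: str) -> list:
    """Return the VGA controller descriptions found in lspci output."""
    return VGA_RE.findall(lspci_output)

# On a running system one would feed it real output, e.g.:
# import subprocess
# out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
# print(find_gpus(out))
```

Identifying the card is only the start; a live CD must then pick a matching driver and a working display configuration, which is where, as the interview notes, the automation gets genuinely hard.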