Above – a motion graphic medley made from clips I created recently for each song of Audego‘s latest album, Beneath the Static and the Low. It’s pretty gorgeous music, listen for yourself - bandcamp / soundcloud / itunes. The Audego brief: ‘retro-abstract motion graphics we can project behind us while we play’.
VDMX – for real-time clip triggering, compositing and effects
TV Paint – for generating some animation textures
After Effects + Premiere – for compositing, effects and editing
Madmapper – for arranging projection of motion graphics onto surfaces
Canon 7D – for filming the above projections.
Over the past three semesters I’ve had the pleasure of co-steering a studio elective with Caroline Vains, for Interior Design students within the school of Architecture and Design at RMIT. Loosely – we’ve been exploring the intersection of video projection, built objects and interior design.
Most interior design students are already highly visually literate, great at quickly visualising their ideas in many ways, and unlike most video oriented people I know – are fantastic at working with materials and constructing models. As well as adapting very quickly to the world of projected video, they also bring along considerable materials testing, research and construction skills.
This semester though, we added interactivity as a requirement. In addition to learning video editing, video composition, animation, and projection mapping – they needed to think about how they would include some simple lo-fi interactivity in their projects. It’s quite satisfying to report that they responded wonderfully to that challenge, and I’m happy to share some of those efforts below.
Pictured above – the crowd-pleasing bicycle powered installation by Jimmy Liu, David Dai and Nick Hsu. (‘Team Brothers!’ ). A series of reflective gears were connected to pedals and a bicycle seat, causing the carefully mapped projections to reflect around the space. This was a nice evolution from their earlier experiments which included tight geometric mapping sequences, and three dimensional arrangements of laser-cut paint splashes.
Part of the pleasure of this studio is seeing how different skills and ideas merge, and evolve, over time. For example, Hexin Bi used his experience from scuba diving as the inspiration for his audiovisual installation, rigorously analysing the rhythm of his underwater breath…
.. while Jacinta Birchmore explored repetitive forms and texture, beginning with the intricate model below, before iterating through other shapes and surfaces.
Together for their final piece – they continued to explore scuba diving, but took it in a new direction, creating a breath-activated installation. Their structure featured a layer of styrofoam balls which blocked the projector light from shining through, until someone blew through the mouthpiece – scattering the balls and letting the animation bounce around inside their mirrored space (a process accentuated by a breath-powered spinning reflector). Guest blower: Ramesh Ayyar.
Tisha Sara Dewi, Jing Yang and Ranqi Liu created a beautifully made cone structure, to be viewed from underneath:
Michael Kuo and Ting Jiang played with precision modelling, mapping and hand activated rotations:
Another blending of approaches: Stacy Rich’s topographies and Danielle Bird’s organic textural work..
.. later combined to produce a quite beautiful structure (which unfortunately had a few interactive / mechanical problems).
Tahlia Landrigan produced an intricate response to the music of Nicholas Jaar…
and Fenella McGowan’s structure built from cotton buds responded to projection beautifully, as did Nikita Demetriou’s tissue-paper hangings…
Together that trio diligently combined their efforts to produce a wooden bicycle work – which featured a pair of revolving wooden cylinders, each with precision cut holes, and a bicycle wheel on top for spinning the cylinders, altering the light patterns being emitted from within.
Aside from their prowess with construction, another trait interior design students seem to share is a flair for dynamic documentation – very comfortably playing with formats and materials to best express their projects. Below, a couple of examples by Stacy + Jimmy.
Below, the projects grouped as part of the final exhibition … (note the bicycle pedals for activating the ‘Team Brothers’ piece at the top of the page).
And finally, studio co-ordinator Caroline, wondering where the semester has gone to…
Thanks to Caroline and the entire studio for a great semester!
Winter in Tasmania isn’t an obvious time and place for a festival, but MONA isn’t your average museum / gallery. And so in 2013 began MONA’s DARK MOFO (Jun 13-23), an annual festival that riffs on the idea of the winter solstice with pagan celebrations of light (+ art, fires, lasers, feasting, etc..). This included: the Red Queen exhibition @ MONA, performative chefs, Skywhale (a sculptural / sky-breasted hot air balloon by Patricia Piccinini), Robin Fox laser performances, and a whole host of other light and projection related artworks…
Oh, You Mean *LIGHT*..
… All of which were made irrelevant by Ryoji Ikeda‘s 5KM HIGH BEAM OF LIGHT INTO THE SKY ( aka ‘Spectra‘).
Simultaneously over at the gallery, Ryoji exhibited his Datamatics work. Really enjoyed this more than expected. It’s a very well documented and promoted work, but none of that captures the oddly calming oceanic presence it has.
The Beam in Thine Own Eye exhibition gathered together a range of works exploring the limits of perception. Zee by Kurt Hentschlager was the most spectacular of these, an intensely stroboscopic smoke filled room, that came with pages of warnings, had medical staff on standby, and completely blurred the capacity to distinguish between what was happening in front of or behind your eyes. There’s a great interview with Kurt here. Other standouts:
Ivana Franke – “We Close Our Eyes and See A Flock of Birds” – A cylinder-shaped room, with central seating, facing out towards LED-covered curved walls, which proceed to strobe and flash their way through a range of sequences.
Anish Kapoor – “Imagined Monochrome” – An artwork experienced one at a time – because it involved lying down and having your *eyeballs* massaged by a professional eyeball masseuse. I missed getting an appointment for this, but apparently it was fantastic.
.. And Dark (Faux Mo).
DARK FAUX MO, the festival club - is what I was there for - projection mapping a disused double-storey theatre space each night. Performers included Miles Brown, Super Wild Horses, ZOND, My Disco, Zanzibar Chanel, Mixmasters ( who cooked soup + dumplings on stage while they DJ-ed some tracks), Andee Frost, Rainbow Connection DJs and more. It was a wild space, delightfully decorated, with lots of roving performers – so it came up great in photos. (Eg collage at the top of this post – or see flickr photo set)
“There’s a lot of wonderful possibilities for real-time visual compositing with Quartz Composer. Most existing QC learning resources though, tend to emphasise the generative graphics capabilities of QC. For those with a post-production, animation, motion graphics or VJ background – QC’s composition potential can be difficult to unleash.”
Hoping to make the transition to Quartz Composer a bit easier for the above kinda folk - I’ve gone and made a page which documents how various animation + post production techniques and processes can be recreated inside QC…
(eg Composite multiple video sources, Image Masking, Pre-framing video into compositions, Working With Layers from Photoshop, Nesting + Pre-composing in AE, Inverse Kinematics, camera paths.. etc).
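To give a flavour of what those compositing techniques actually do under the hood, here’s a toy sketch (in Python, not actual Quartz Composer) of the per-pixel “over” blend that After Effects layers and QC’s blend patches both perform in real time. The function name and pixel format are mine, purely for illustration.

```python
# Minimal sketch of Porter-Duff "over" compositing, the operation behind
# layering one video source over another. Pixels are (r, g, b, a) tuples
# of floats in the 0..1 range. Illustrative only, not QC code.

def over(fg, bg):
    """Composite a foreground pixel onto a background pixel."""
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    out_a = fa + ba * (1.0 - fa)          # combined coverage
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)       # fully transparent result
    def blend(f, b):
        return (f * fa + b * ba * (1.0 - fa)) / out_a
    return (blend(fr, br), blend(fg_g, bg_g), blend(fb, bb), out_a)

# A 50%-opaque white pixel over an opaque black one gives mid grey:
result = over((1.0, 1.0, 1.0, 0.5), (0.0, 0.0, 0.0, 1.0))
print(result)  # (0.5, 0.5, 0.5, 1.0)
```

A fully opaque foreground simply replaces the background – which is why masking (multiplying a layer’s alpha before the blend) is such a central technique.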
Learning Quartz Composer: A Hands-on Guide to Creating Motion Graphics with Quartz Composer
“Whether you dream of live visuals, interactive installations, Cocoa apps, dashboard widgets, or extra awesomeness for your film and motion graphics projects, Quartz Composer will enable you to develop beautiful solutions in amazingly short periods of time…”
“….To make up for all the gaps in video tutorials and forum posts scattered around the interwebs we wrote a book…”
A Quartz Composer book has been long desired by the real-time video community, given the combination of its unique capabilities and severely undercooked documentation online. Hats off to Graham and Surya for rising to that challenge, and helping expose QC’s potential for visual artists of many flavours.
These days a book inevitably also means an accompanying DVD of video tutorials (which can also be accessed online by those who buy the PDF, book code needed), and an extended support website (ILoveQC).
Who Should Read This Book?
According to the authors - Maker types / motion graphics designers, film makers, VJs, artists, interactive programmers, and Cocoa developers. If that’s you – this book will help – “…even the unsophisticated user into creating art projects, visuals for a band or party, wild screensavers, and RSS-powered trade-show kiosks. For anyone with a programming background, the material quickly opens up a new world of visual potential”.
Who shouldn’t? “Advanced Quartz Composer users looking for detailed knowledge about using GLSL and OpenCL, or creating your own plugins in Objective-C..”
“Coming from a non-programming background, I’ve found some of the concepts and structural logic of Quartz hard to grasp, and the engineerish manual doesn’t help much. Kineme.net and the QC mailing list – seem helpful, but also populated by mostly advanced discussions – which tends to stifle introductory questions and beginner problems. So I found myself trying to learn QC by forcing myself to explain what I was learning about it as I explored it.”
This scattered learning approach led me to writing up these QC tutorials…
What I found myself really craving was a learning resource that broke down the structural logic of QC, and which explained some of the principles in ways that related to how I wanted to use it as a compositing tool. And this, the ILQC book mostly delivered – using deliberately plain and simple language, and making no presumptions about animation or programming knowledge. A quick glance over their contents page gives an idea of the book’s scope:
What is Quartz Composer and Why Should I Learn it?
The Interface and Playing a Movie
Adding Visual Effects
Using LFOs, Interpolation and Trackballs to Move Stuff
MIDI Interfacing (Getting Sliders and Knobs Involved)
- The examples are well chosen, and build up on skill levels as the book progresses
- The book examples and video tutorials correspond really nicely to each other
- There’s a good emphasis on concrete examples, while explaining the principles that make it possible
- That said, I occasionally found myself wanting more explanation of underlying concepts
- Gaps? Would’ve liked some more advanced exploration of:
how ‘timelines’ and ‘queues’ can be utilised within patches
‘structure’ and ‘multiplex’ related patches
‘render in image’ and ‘rendering’
the composition process in QC, explained relative to composition software such as After Effects… giving a bit more of an explanation of how the overall 2D / 3D possibilities work, and how they could be utilised / explored in many directions..
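On that first gap – one way to think about QC’s ‘Queue’ patch is as a ring buffer of recent values or frames, which downstream renderers can read at an offset to get frame-echo and trail effects. Here’s a conceptual sketch in Python (class and method names are mine, not QC’s), using integers to stand in for frames:

```python
from collections import deque

# Conceptual sketch of what a QC Queue patch does: retain the last N
# inputs so later patches can read delayed copies of the signal.

class FrameQueue:
    def __init__(self, size):
        self.buf = deque(maxlen=size)   # oldest entries drop off automatically

    def push(self, frame):
        self.buf.append(frame)

    def get(self, index):
        """index 0 = newest entry; higher indices reach further back in time."""
        return self.buf[-1 - index]

q = FrameQueue(size=4)
for frame in range(10):   # feed frames 0..9, one per render pass
    q.push(frame)

print(q.get(0))  # 9 (the current frame)
print(q.get(3))  # 6 (three frames ago - what a trail effect would draw)
```

Drawing several of those delayed reads at decreasing opacity is essentially a video-feedback trail, done without any actual feedback loop.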
Ok, ENESS - you had me at ‘projection mapped kinetic sculpture’. The Creation Cinema – seen above, is now installed at the Melbourne Museum as part of First Peoples, an exhibition celebrating ’the history, culture, achievements and survival of Victoria’s Aboriginal people.’ It’s a gorgeous installation, located inside a circular room, which is in turn enclosed by intricate layers of wood. Once inside – the sublime smoothness and grace of motion immediately captivates. It’s something that animators strive for with onscreen movements, but is so much more satisfying to witness with moving physical parts. Within that darkened egg of a room, the sounds, video and slow relentless movements of the wing fragments all add up to quite sublime effect. Fantastic installation, and viewable for the next 10 years!
The slow fade out of Melbourne’s summer = an opportune time to re-spark the skynoise engines. It’s also likely I just miss writing things longer than 140 characters. Especially since I’m just about to finish reading a 3000 page novel ( The Baroque Cycle, Yo!). Regardless of the roots….. expect some fruits.. scattered across the next few months – riffs about visual culture, and likely some weirder tangents too. That’s what ma bones are saying. And sooner than that – some long overdue reviews:
Millumin (interesting timeline based visual performance software)
Below – the summer that kept skynoise drooling on its pillow:
Dec 2012: Stereosonic – Touchscreen video booth installations for an energy drink tent. (photos got animated, projected and sent online.) Sampology at Falls Festival Dec 30 ( Marion Bay ) + Dec 31 ( Lorne ) ( Mixing camera sources with Sampology’s live video).
At the Adelaide Festival – did live video for The Cumbia Cosmonauts at the fun pop-up venue, Barrio. The theme for the night was ‘Animal House’ – which meant there was a camel, piglets and geese nearby, as well as dog masseur doing live demonstrations on a table.
And Then It Was Now:
- Developing an audiovisual performance for Wide Open Spaces, a wonderful desert festival held out near Alice Springs in May. Longtime collaborator Suckafish P Jonez is back from Barcelona, and we’re excited to be exploring AV again. Weekly rehearsals!
- Am co-hosting a studio elective at RMIT within the design faculty, looking at video production, projection and installation – from an interior design perspective (which tends to include a lot more materials and building related research / development). It’s a fun studio, which uses mapping processes, and comic / graphic novel storytelling techniques to help inform video installations.
- Am slowly rolling out a series of updates to the skynoise.net/projects page, finally uploading documentation from a range of projects… including the snippets below, developed for 360 last year.
Elsewhere: I PREFER VIMEO: ((Better quality encoding/resolution/interface/community comments etc)) OTHERS PREFER YOUTUBE: ((More eyeballs. And clients sometimes want it here.))
FACE IS THE PLACE : ((Finally succumbed – click facebook.com/JeanP00LE - for all your Zuckerborgian messaging / subscribing / liking needs)).
Above : more proof that Space Is The Place…. at least when it comes to Mexi-Australian tropical bass genres.
That’s the fruits of a few quick projection and filming sessions with the Cumbia Cosmonauts, featuring custom graphics made by the CC VJ – Martin Hadley (I especially liked his spaceship control deck!). I’d like to think if there’s ever a Mexi-Australian space program, that it looks something like this… ie has that Ed Wood in space vibe about it, maybe with styling by Lee Scratch Perry & Sun Ra.
The Cumbia Cosmonauts are a Melbourne band who are celebrated around the world with their take on Mexico’s cumbia music, and so fittingly, they release their new album, Tropical Bass Station, on the Berlin label, Chusma records, on Nov 23, 2012. The track ‘Our Journey To The Moon (And Back)’ comes from that album.
Developed and performed for the Graphic Festival – it was an audacious project – inside a tiny time frame, create 18 songs and animations to reinterpret or remix the books of Dr.Seuss for the stage. It never felt like enough time – and yet, the amazing zoo / crew at Elefant Traks pulled it together and nailed a dynamic audiovisual smorgasbord (that apparently had some of the Seuss publishing folk moved to tears!).
My role was to develop and live trigger the animations for the show, which was akin to developing a feature film in 6 or so weeks.. while liaising with around 20 different musicians… “hey man, I’ve got this new idea for a beat / I’ll get you those lyrics soon.. etc etc” – so I wasn’t surprised to find myself still rendering out clips on stage, right up to the last minute.
I’m going to put up some more animation info later, over at skynoise.net/projects, but for now, while still floating, I wanted to put out a huge thank you to:
- Jono ‘Dropbear‘ Chong + Darin Bendall, who did an amazing job, animating half of the tracks between them.
- Urthboy - who oversaw the crazy production, as well as performed throughout the show
- Unkle Ho, who helped tie together the visual production, and developed his own flash-based interactive visuals for the show, AV jamming on a wii-board to Green Eggs & Ham, with Jim from Sietta + Angus from Hermitude.
- Luke Snarl Dearnley, who did a stellar job as technical producer, keeping the whole show smooth as butter.
- Owen Field, who covered all the logistics with grace and calm…
And that list could go on and on – there were endless Elefants who were such a pleasure to collaborate with…
Some Elefant clips:
X-Continental, a clip I did for the Herd back in 2001. Urthboy, Ozi Batla, Solo, The Tongue and L-FRESH: Cipher at the Opera House
and below, Dropbear’s fantastic animation for ‘And To Think That I Saw It on Mulberry st’, which was performed as the first track of the show, by Urthboy, Jane Tyrrell + Angus from Hermitude. Ozi Batla had just given his show-intro in an aviator costume, and hooded Urthboy came on to do a quick rap about Dr Seuss, before pulling back the hood as the lights came up, the decks started up, and MCs roamed the stage with this as backdrop:
I’d first learned of his cancer diagnosis a few months ago, after wandering once again to his youtube page, and noticing a short and simple message underneath his most recent short film:
Down With The Dawn, is Run Wrake’s usual virtuosic animation, but knowing that this 8 minute short film was his response to being diagnosed with cancer, made it quite confrontational viewing. I was shocked then, but somehow presumed he was turning things around, he was on the slow path to recovery, that although tragic, everything would be okay.
“It is with incredible sadness that I have to let you know that our darling Run passed away very suddenly at 5am on Sunday morning as an end result of his cancer. He had spent a beautiful Saturday with his two children Florence and Joe, his sister Fiona and myself. We left him at 7pm doing what he loved best- drawing and animating with peg bar and paper.
I was with him for his last moments. We love you Run.
Above, hard-drive snapshot of some of my favourite RunWrake animations.
I first learned of Run Wrake around 10 years ago, through his compilation Gas DVD, “Dinnertime”. Somehow it had lain unwatched in a pile of media for a few months, until late one evening I spied it again and lazily inserted it, then pressed play. What followed was dizzying and overwhelming – that mix of exhilaration and exhaustion when discovering an artist so consistently good, so relentlessly inventive, and so utterly prolific that you’re left wondering if they exist under different laws of time and space.
Where did ‘Run Wrake’ come from?
Actually a nickname earned whilst keeping wicket particularly badly during a game of cricket aged 11. A friend was sent in for sarcastically shouting ”Run”, as the ball went thru’ my legs for four.
With so much animation under your belt, what has it taught you?
It’s taught me that I’m very lucky to have the desire and ability to scrape a living doing what I enjoy, and that you will never make a piece of work with which you are entirely satisfied.
To what extent do you storyboard your clips? Or how do you approach narrative?
”Rabbit” is the first film that I have rigorously boarded, with a view to telling a story, and I thoroughly enjoyed the discipline.
Any desire for feature films, or longer works?
Absolutely, watch this space*.
(*As of 2012: Wrake was developing an animated feature, The Way to a Whole New You, with writer Neil Jaworski for BBC Films.)
One of my questions was whether Run Wrake had ever animated a skateboarder, and Run Wrake was kind enough to add a note at the end saying that he’d done an ad featuring a skater, and that he’d attached a little quicktime movie of it for me. One of those wow moments – a favourite artist sending me something they’d made?? Below, a screenshot sequence from it, which demonstrates one of his trademark ‘perpetual zoom outs’…
A glimpse at his biography (have you seen a more delightful online CV?), showed some of how this was all possible. Run Wrake had gone through the Chelsea College of Art and Design, and the Royal College of Art, before achieving a breakthrough with his 1990 student film Anyway on MTV’s Liquid Television. With Anyway, several strengths were already evident – an eagerness to playfully deconstruct form, an ability to adapt and incorporate many kinds of media and animation styles, and an incredible capacity for fluid transitions – smoothly morphing into wildly different scenarios or character transformations.
The DVD documents the development of all those strengths, as well as introducing others - a highly attuned sense of animation rhythm and pacing, and a flair for visualising sound and loops. That kinship with music was partially nurtured over time by his job as an illustrator for NME magazine (the DVD includes a virtual gallery of these illustrations, narrated by a flying turtle-armed boy), but is most evident across his trajectory of music videos, most notably those with long-time collaborator, Howie B.
“my first job, commissioned by an Elvis suited Jonathan Ross to make a title sequence…making Jukebox, my first animate! commission, a two year slog…meeting and working with Howie B, initially on a short film to accompany the release of his album Music For Babies, and subsequently on a series of freeform promos…presenting storyboards to Roy Lichtenstein in his New York studio for U2′s Popmart Tour visuals…and the critical acclaim for Rabbit, a short film completed in 2005.”
Less easy to understand is why Run Wrake wasn’t better known, even amongst animators. Even though he worked on U2 tours, and Rabbit won plenty of awards, it still felt that there was an animation giant walking amongst us, and not enough recognition of how much terrain his work covered. That was at least partially remedied, earlier this year, with a Run Wrake Retrospective at the Ottawa International Animation Festival, with the title referencing one of his favourite characters:
(Above, a messy example VDMX interface of mine. Click screenshot to see full version)
Here’s a review brewed since I got my review copy back in 2005 (when VDMX first turned 5, says the Vidvox software museum*). Now that it’s 2012 and we’re many beta builds down the track, it seems as good a time as any to declare VDMX 5 ripe and ready. Let’s do this.
What is VDMX 5?
VDMX 5 = A ‘modular, highly flexible realtime performance video application‘ developed by vidvox.net.
What does that even mean? The six word executive summary by @Protostarrr :
‘A hipster’s version of After Effects‘ is cute, but misses a crucial difference – VDMX is software built for real-time usage – ie no waiting around for rendering, it means live adjusting, manipulating and sequencing of video clips and video parameters – during a theatre performance, while musicians play on stage, within an installation, or to create some hybrid of what might be called live cinema. Just as hiphop and electronic music producers have long been playing live with audio samples, we now have the ability to shift from a studio production mentality towards using video samples in a live setting. This means VDMX must be capable of letting its users adapt and respond to any unfolding events – and the importance of having that flexibility is reflected in how Vidvox define their software:
“VDMX5 is a program that lets you assemble custom realtime video processing applications. This is an important distinction- instead of being stuck with a fixed processing engine and a static interface, it gives you the freedom to assemble not only whatever custom processing backend you desire, but it allows you a great deal of creative control over how you wish to interact with your backend.”
(Example search for ‘VDMX interface’ )
So what can VDMX 5 do?
- Trigger separate clips for playback across different projectors ( a desktop with multiple outputs, or an external graphics card for laptop is also needed)
- Mix several clips together to create layered collages and compositions (multi-blend mode options / compositing options / cross-fade options / customisable quartz transition modes)
- Map separate video layers onto physical objects (VDMX5 has basic perspective mapping functions, or can send video layers via syphon to other mapping software)
- Organise video layers into groups (which allows composition or FX parameters to be adjusted per layer or per group)
- Re-route any video layers into other layers / compositions (enables easy creation of visual feedback loops, or addition of more organic complexity with FX)
- Adjust or control any video parameter or Fx parameter easily with an onscreen slider or button – and in turn, control these by various data sources (eg mouse / midi / audio analysis from built-in laptop microphone / LFO oscillators and wave values / midi + OSC controllers / wii controller / iOS or android controller etc ), and these values can be flexibly refined by using a range of in-built math behaviours ( eg invert values, smooth values, multiply values etc).
- Build Control Surface Plug-ins – which are ways to consolidate various controls into a customised interface ( eg have 4 meta sliders, each of which may control any number of other parameters, when activated )
- Capture camera inputs, apply effects to these. Can also record and playback camera samples in real-time.
- Capture the visual output from a window of any other application running, and re-route this through the VDMX signal chain (eg mix in a live webcast from a browser, bring in a photoshop sketching window, bring in a skype window etc )
- Record your clip-triggering and visual FX experiments to disk (Fast and reliable, records directly into a VDMX media bin for immediate re-triggering / remixing / recording and etc etc )
- Use a built in step sequencer for arranging clip-triggering or FX over time.
- Save and trigger presets in extensive ways (global, per layer, per FX chain, and per slider. And more recently, we can cut and paste parameter settings between sliders. Very useful for quickly copying refined parameter and interactivity settings from one effect to another.)
- Tightly integrate customised quartz composer patches and FX, including customised interface elements – where each of these can be controlled by the various methods described above. (It’s hard to overemphasise how useful and powerful this is).
- Use flash, text and HTML files, as well as Freeframe FX.
- New : send DMX (Artnet) data – to control / interact with lights / lighting desks… (I’m yet to play with this, but it’s a great addition. Requires a computer to DMX box such as the Enttec ODE. )
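To make the ‘data sources + math behaviours’ idea above more concrete, here’s a numerical sketch (in Python, with names and defaults of my own invention – this is not the VDMX API) of an LFO driving a slider value, and of the kind of smoothing behaviour that stops a noisy input from making a parameter jump:

```python
import math

# Illustrative sketch of two "math behaviour" ideas: an LFO data source,
# and one-pole smoothing of incoming values. All names/values are mine.

def lfo_sine(t, rate_hz=0.5):
    """Sine LFO normalised to the 0..1 range that UI sliders typically use."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t)

def smooth(values, factor=0.8):
    """One-pole low-pass: each output leans `factor` toward the previous output."""
    out, prev = [], values[0]
    for v in values:
        prev = factor * prev + (1 - factor) * v
        out.append(prev)
    return out

# A jumpy step input gets eased toward its target instead of snapping:
jumpy = [0.0, 0.0, 1.0, 1.0, 1.0]
print([round(v, 3) for v in smooth(jumpy)])  # [0.0, 0.0, 0.2, 0.36, 0.488]
```

Chaining behaviours like these (invert, multiply, smooth) onto any slider is what lets, say, raw audio analysis from a laptop microphone drive an effect parameter without flickering wildly.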
There’s much more, but you get the idea – it’s flexible, and can be adapted to suit your project-by-project needs. These open ended possibilities are both a strength and a weakness of VDMX – it’s fantastic being able to make your own customised interface to suit a particular workflow or project, but first-time users can find it daunting to approach.
Below, an example of 3 layers being mapped to suit particular shapes. (The canvas controls can be enlarged for easier mapping / alignment, with pixel increment adjustments on corners, available by pressing arrow keys )
Understanding the VDMX Workflow
With the above multitude of options, getting to know the ropes is pretty important. Here’s a few learning pathways:
1. Plug N Play… aka ‘explore’ : Even within the downloadable demo software, VDMX5 comes with built-in template projects that can be accessed through the topscreen menu. These can be easily modified and used as a foundation for your own projects. Playing with each template will show some of the features and variety on offer.
2. Vidvox Wiki : Extensive, detailed listing and explanation of the program’s various parameters. Read over, then go back to step 1 and play some more.
3. tutorials.vidvox.net : In-depth video tutorials from the pixelated horse’s mouth.
4. VDMX forums : Over time, I’ve probably learnt more about the program here than anywhere else – as with any software of depth, the possible solutions to any particular problem are multiple and varied, and I’m regularly learning new ways to use VDMX through the discussions here. The developers also contribute frequently, debugging problems, clarifying how various aspects work, and helping point beginners in the right direction.
Some Example VDMX Projects
Aka – here’s some links to material I’ve used VDMX for.
$349US - Refreshingly, this licenses the user to run VDMX on up to three different computers for personal use. On one level it’s a very generous licence – but on the other, it’s merely acknowledging the likely practices of most digital artists (across many workplaces, home, venues, installations, multi-screen set-ups etc). At any rate, very handy. Educational pricing = $199
There’s also a ‘Starving Artist Discount’ – ‘Put your skills to work helping out the VDMX community and you can get a license of VDMX5 for only $199 USD.’
While VDMX 5 is overkill for some people, and others might prefer the complexities of, say, Max/MSP or coding their own software, for me it strikes a great balance of depth and accessibility. Complex results and interfaces are possible, with relatively little mental investment. Once that initial learning has happened, it’s a very versatile tool, easily refined to suit each project (eg for this gig, let’s make the playback timeline fill the whole screen, so we can fine tune tiny little loops more easily – or let’s create 3 media bins so it’s very clear which samples to trigger for each of 3 stage characters – or let’s emphasise the FX palette here.. etc etc). VDMX 5 has evolved over many years, taking on board much user feedback, as well as introducing users to better ways of approaching video signals and all manner of nuanced interface elements and processes. There is a lot of significant functionality in the program, but it’s in the nuanced details of those features that the merits of VDMX 5 really come into play. Take it for a test drive….
For the TZU ’Beautiful’ music video, I recently found myself out near Hanging Rock, with plastic-wrapped laptop, projector, camera, lights, and a mini-crew – filming ghost projections in the night winter rain. Despite the weather drastically mismatching the supposed forecast, slowing everything to a snail’s pace, we salvaged the situation as best we could, reworking the storyboard around some of the less exposed areas, and soldiered on until about 5am. Not the end result we’d aimed for, but am happy with what we managed in the circumstances. So it goes. Full credits/links, and a series of behind the scenes photos over at the project page.
“The *spark d-fuser lets you crossfade between laptops. Whether switching between presenters or pushing avant-garde pixels, hands-on control for mixing DVI and VGA signals is now available in a compact and affordable package.
If you want to know more or see it in action, jump straight to the demo video below. If you’ve been following the project, the message is simple: pay and yours will be produced. Orders are being taken on September 5th, the manufacturing run will then take six weeks from there. Price: £710 ex. VAT, £852 inc. VAT.”
We have no jetpacks, but soon it seems, we will have affordable mixing of digital video signals, thanks to the herculean efforts of 1 x Toby Harris aka *spark aka ‘card carrying Timelord amongst VJs’.
Rattling along in the tube, in between bankers reading 50 Shades of Kindles… Toby envisioned a better world, a world where VGA and DVI signals could be mixed without repercussions, and a world where smooth crossfading could happen with a device carried in your backpack. It was also a world that he would have to build himself, and a couple of years down the track, here we are. In between priming conveyor belts and supervising factory elves, Toby was kind enough to answer these questions:
What have you enjoyed about using your prototypes during performances?
The mixer for me is in support of the laptop, and damn have I enjoyed pushing crazy pixels with my laptop. Using it two-up in a D-Fuse show with Mike, I’m freed from the need for it always to be my mix on screen, so I can rip down, prepare and experiment with the mix. Makes me push things much further! That, and I’m freed from the fear of my bleeding-edge software taking down the whole show.
The surprise for me was the tap buttons – I love them. The original prototype didn’t have them; I envisaged a crossfade from one to the other and not much else. But in the expression-of-interest form, lots of people asked, so on they went… and wow, tapping in a slight variation of the main laptop’s mix is a really powerful thing.
What sorts of firmware additions would you like to see / develop? (you mentioned multiply mode as an option once?)
Mix modes are in the realm of possibility. The processor has the power to compute a soft-edged key for every pixel, so there’s some per-pixel computing power to play with. Additive is the bangs-for-your-buck upgrade here, and I think would really creatively transform what is possible with the mixer as you get the ability to truly composite the two sources together. I talk about this at the end of the demo video, and I’m really trying to make it happen.
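For readers new to mix modes, the difference between a crossfade and an additive composite is easy to see per pixel. A minimal sketch, assuming 8-bit channel values – this illustrates the general technique, not the d-fuser’s actual firmware:

```python
def crossfade(a, b, t):
    # A/B dissolve: weighted average of two channel values (0-255),
    # t = 0.0 shows only source A, t = 1.0 only source B
    return round((1 - t) * a + t * b)

def additive(a, b):
    # additive composite: sum the two channel values, clipping at white,
    # so bright areas of both sources survive in the mix
    return min(a + b, 255)
```

A crossfade dims both sources towards the midpoint, which is why additive mode is such a creative upgrade: both images stay at full brightness where they don’t overlap.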
I’d love to see the processor lose its line limit of 2048 pixels; there’s the naive observation that TripleHead 800×600 should be possible, given that it is actually fewer pixels to process than the 1920×1080 it definitely can handle. TV One have in a way already answered this in the 1T-C2-750‘s sibling, the 760. It can do 2880×900, but at the price of being able to fade both sources.
You have to realise, however, that it’s TV One’s processor, and the firmware that runs it is very much their core product, their IP. There’s no possibility of them giving it to us to work on, and them doing anything for us is a decision intertwined with their wider business plans. I wish it weren’t so, but the sheer fact they designed the 750 and produced it at an affordable price is something to wonder at.
Why release the firmware as Open Source?
The frustrations above should go some way towards answering that! If you need to tweak, extend or optimise, it’s in your own hands, and in the best case that gets shared back to all. Simply put, it’s what I would want if I were in the community buying one. There is more to it than that, and there certainly are risks, so let’s call it an experiment and see how it plays out.
Why has the video hardware world been so slow in releasing affordable digital mixers?
Well, one thing I can say is that this project has been one of the most ridiculous things I’ve ever done in terms of effort and reward – if I had an eye on the bottom line I’d have stuck to bespoke development and on-site fees! There’s obviously a quantity sold at which point that changes, but I’m not sure that quantity is comfortably within the VJ market, and I’m doubly not sure of that if you have the overheads of a worldwide corporation.
I’m surprised, however, that VJs haven’t been able to co-opt generic presentation kit; the 750 is as close as I’ve seen.
In what kinds of ways have you played (live) with the OSC / DMX and ethernet capacities?
The simple answer is I haven’t – the ability to have that is everybody’s gift back to me for doing this project, along — hopefully — with additive mixing. Come the first D-Fuse gig with the new controller, we’ll be rocking the OSC out. Finally we can cut between visual laptops and have the audio follow!
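As background for anyone new to OSC: it’s a simple, UDP-friendly wire format, and building a message by hand is straightforward. A minimal sketch in Python – the `/dfuser/xfade` address here is hypothetical, purely for illustration, not the controller’s actual namespace:

```python
import struct

def osc_message(address, *args):
    # OSC strings are NUL-terminated and padded to a 4-byte boundary
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)

    msg = pad(address.encode("ascii"))
    # type-tag string: "," plus one tag per argument (f = float32, i = int32)
    tags = "," + "".join("f" if isinstance(a, float) else "i" for a in args)
    msg += pad(tags.encode("ascii"))
    # argument payloads are packed big-endian
    for a in args:
        msg += struct.pack(">f" if isinstance(a, float) else ">i", a)
    return msg

# e.g. send a crossfade position over UDP (address and port are made up):
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#     osc_message("/dfuser/xfade", 0.5), ("192.168.0.10", 9000))
```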
1. Re-created Middle-Earth on a kitchen tabletop… (hello – every backyard plant we have, hello – fallen moss covered branches from the park across the road, hello – turntable, hello – flashing bicycle lights, hello – wonky lampshades and plastic toys.)
2. Made some custom animations (Quartz Composer, After Effects), and projected these onto Middle-Earth, using software to manipulate the projections (VDMX, Madmapper, Quartz Composer).
3. Recorded the results (Canon 7D, various lenses), and edited together (Premiere).
The Quartz patch didn’t seem to continually update from the Twitter RSS feed – after a while it started cycling back through older tweets in a loop (every 10–15 tweets?). I also couldn’t figure out how to display the author alongside the text. I tried looking through the RSS info to find author parameters, then see where these might be adjusted within the Quartz patch – no dice.
I posted a description of the problem to pastebin and asked on Twitter… and @lumabeamerz kindly wrote back *and* adjusted the Quartz patch, noting…
“If you put your mouse pointer for a moment to a structure’s output, you will see what is “flowing”, like this:
So, 0-4 are indexes, “…” are keys. Basically, we need the member with the key “authors”, which will give us another structure. The index 0 member of that structure is the one we want, and gives us another structure. From that last structure, we can extract the name with the key “name”. It is simple if you are a programmer, since the method is the same in Obj-C land for accessing structures. For the updating, I connected a Signal patch to the RSS patch’s update signal input, so it actually refreshes on a 60-second period.”
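In plain code terms, that key/index walk looks like this – a Python sketch over a hypothetical parsed feed entry (the field names mirror the structure described above; they’re illustrative, not the exact Quartz Composer output):

```python
# hypothetical parsed feed entry, shaped like the structure the RSS patch exposes
entry = {
    "title": "example tweet text",
    "authors": [                  # key "authors" -> another structure
        {"name": "lumabeamerz"},  # index 0 -> another structure
    ],
}

# key "authors" -> index 0 -> key "name"
author = entry["authors"][0]["name"]
```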
Here’s the final quartz patch, edited by @lumabeamerz - which continuously updates any tweets from a particular hashtag, and displays author name alongside. Maybe it’s a useful template for you to modify however you wish?