Software plugins are often forecasters of future fatigue – as off-the-shelf solutions, the ease with which they can produce satisfying results can simultaneously dictate how quickly their techniques become widespread and our eyeballs immune to them. Zoetroepe Software's recent batch of plugins are attempts at 'generative design instruments' (for Final Cut Pro X, Adobe Premiere, Adobe After Effects & Apple Motion) which encourage and reward experimentation – an unsurprising approach, given they spring from Johnny De Kam, the creator of VDMX (real-time video software). (See interview below the reviews.)
“My underlying design philosophy is akin to synthesizers in music, that is, visual instruments with oscillators and various visual ‘forms’ as source compositions. I tend to design these instruments to be open, allowing you the greatest flexibility to influence the output, a good dose of ‘meta’ design that can get you results quickly, and finally I strive to build systems with unique aleatoric progression, randomness and style capable of producing unexpected results… I take great joy in crafting shape-specific UV texture maps that preserve the best aspect ratio for maximum video impact.”
– Johnny De Kam
Overall, the Zoetroepe collection of plugins focuses on colour controls, pattern generation and geometric transformations – but it's the way they've been built that distinguishes them from a lot of the more directly functional plugins out there. Let's start with the juiciest:
“An organic, generative design plugin … which lets you explore and iterate simple or complex curving and flowing forms in 3D space, using any video source in your timeline.”
This was one of my favourite Zoetroepe plugins to use – some pretty wild shape distortions are possible, and the automated mode is fun, letting some of the oscillators define and keep animating certain behaviours, while other parameters can be tuned to fit or juxtapose with those. An adjustable streaked motion blur option is a nice touch too, sometimes useful for muting video textures.
– lacks continuous rotation on some parameters, e.g. being able to keyframe multiple turns of 360 degrees, rather than 0–360…
– could use some presets as interesting starting points
Capable of some organic and distinctive results, with plenty of surprises rewarding exploration.
“A perpetually folding generative design instrument, manifested as a video effect plugin. With FOLD, you can explore and iterate simple or complex geometric forms in 3D space, using any video source in your timeline.”
Again, there's a pleasure in exploring this tool – happy accidents and unexpectedly pleasing shapes pop up regularly, and there's an origami-flavoured fun in morphing between keyframed shapes. It's super easy to generate unusual 3D animated shapes for texturing with any desired video.
“A 3D video mapping & geometric alpha transition engine, based on meticulously crafted 3D mesh models – you can easily map a video source and animate it in space.”
There's a lot of fun to be had adding video textures to objects as easily as this. With the technical details resolved, it's straight into compositing and animating movement, scale and rotation over time. Duplicate layers to create shadows, composite against colours, textures, backgrounds (muted? coloured? blurred?), scale large for abstraction – everything happens fast with OpenGL acceleration. Enabling the vertex distortion algorithm animates individual vertex points in the model, offering up even more contortions on your video.
– could use mirror (and other modes) for tiling…
Using this same 3D model-based approach, playful transitions are possible by using the alpha channel to define the transparency between A and B video over time. Load one of thirteen models, then define / animate the light source position, and adjust softness to vary the transition from hard edged wipe to gentle fade. In the screenshot above, a geometric transition is revealing the orange cityscape against the wet window clip. The combinations possible through object rotation, changing the light source, and softening the object edges really help tune the dynamics of the transition to the clips.
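The general recipe behind this kind of alpha transition can be sketched in a few lines. The code below is an illustration of the standard luma-wipe technique, not Zoetroepe's actual implementation: a greyscale matte value (here, derived from the lit 3D model) gates between clips A and B, and a softness parameter widens the band around the transition threshold, turning a hard-edged wipe into a gentle fade. All function and parameter names are mine.

```python
def smoothstep(edge0, edge1, x):
    """Hermite interpolation of x between edge0 and edge1, clamped to [0, 1]."""
    if edge0 == edge1:
        return 0.0 if x < edge0 else 1.0
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def luma_wipe(a_pixel, b_pixel, matte, progress, softness):
    """Blend two pixel values using a greyscale matte as the wipe shape.

    matte, progress and softness are floats in [0, 1].
    softness = 0 gives a hard-edged wipe; larger values fade gently.
    """
    # Stretch the threshold range so the wipe still fully completes
    # at progress 0 and 1 even when the edges are soft.
    threshold = progress * (1.0 + 2.0 * softness) - softness
    # Pixels whose matte value is below the threshold show clip B.
    alpha = 1.0 - smoothstep(threshold - softness, threshold + softness, matte)
    return a_pixel * (1.0 - alpha) + b_pixel * alpha
```

Animating `progress` from 0 to 1 (per frame, per pixel) produces the A-to-B transition; rotating the model or moving the light source changes the matte, and with it the shape of the wipe.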
“Our exclusive collection of 120+ patterns are tightly integrated within a trio of plugins designed for different parts of your workflow: PATTR BKG is a pattern background generator, PATTR TRANS is a transition engine, and PATTR MASK is a pattern mask composite effect. Within each plugin you can seamlessly scale, rotate, translate and animate each pattern.”
These work nicely – the scale, rotation and movement options facilitate a surprising amount of variety with each seamless pattern, and all of the patterns are stored in greyscale, which enables an in-built tri-tone colour mixer to mix or hue-morph colours over time.
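Storing patterns in greyscale is what makes the recolouring cheap: each grey value just indexes into a three-colour gradient. A minimal sketch of how such a tri-tone pass might work (my assumption about the general technique, not Zoetroepe's code; colours are RGB tuples in [0, 1]):

```python
def lerp(c0, c1, t):
    """Linear interpolation between two colour tuples."""
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

def tritone(grey, shadow, mid, highlight):
    """Map a grey value in [0, 1] onto a shadow -> mid -> highlight gradient."""
    if grey <= 0.5:
        return lerp(shadow, mid, grey * 2.0)          # dark half: shadow..mid
    return lerp(mid, highlight, (grey - 0.5) * 2.0)   # light half: mid..highlight
```

Hue-morphing over time then amounts to keyframing the three colour inputs while the greyscale pattern stays fixed.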
– being able to use user-defined textures to auto-generate the seamless patterns would be nice
PATTR TRANS – Use seamless patterns as alpha channel transition between clips
PATTR MASK – Use seamless patterns as alpha mask between clips
Above – using PATTR MASK shapes to composite Mexican landscape video above a test pattern.
As a collection of keyframe adjustable color washes, leaks, gradients and vignettes – these offer a surprising flexibility for easily generating sophisticated colour looks for existing footage. Offering more than just a set of vintage insta-filters, they’re highly customisable and even the presets generate a range of uncommon and interesting colouring options.
The 4 generator instruments: Z Breath, Z CMYK, Z Bars and Z Color Fields offer easy and adjustable options for quick generation of title backgrounds, compositing elements and lighting effects. Minimal, efficient, convenient.
Other plugins available at Zoetroepe: ZSuite (FCP X only) – an intriguing-looking temporal suite, montage suite and colour suite; Painterly – 4K-resolution paintstroke transitions.
After Effects, Premiere, or Final Cut Pro X on OS X.
Each plugin available at FXFactory, priced between $39 and $49 US.
The Zoetroepe plugins bring some of the fun of real-time visual software to the demands of the keyframable production timeline. Well worth a spin by anyone looking for timeline-convenient ways to explore and experiment with compositing, visual styles and effects.
What have you been doing with video since creating VDMX? I decided to pass the VIDVOX torch to David Lublin when it became clear there were opportunities in large-scale project work: concert tours, installations, media festivals, etc. My first big tour was with Sasha & John Digweed, and my client list grew steadily over time. Touring as video director or visual content director is something I've continued to do until very recently. I've also kept very engaged with the art world (which is where my formal training began). Through it all, I've continued to create custom software, systems and experiments. I've been lucky in that I've managed to keep a very interdisciplinary practice over the years.
Was there a fulldome phase in there somewhere? Indeed there was. My first project when transitioning away from VIDVOX, circa 2004, was to create a digital planetarium for a children's museum. That's where I met the Elumenati, who had just started marketing their patented fulldome projection lenses. Within a year I started working for them directly, seeing what kind of real-time software we could create to drive immersive fulldome experiences. It was fascinating work, years ahead of its time in terms of VR. We mounted several live dome projects as well as a few permanent installations. Alas, the work was very niche and difficult to sustain, and concert tours kept calling me. I subsequently left to develop a full show for synth pop icon / renaissance man, Thomas Dolby.
What has interested you about the evolution of live video during that time? It was a very new idea for concert tours to incorporate realtime visual content. So-called media servers were in their infancy. The electronic music scene was very hip to it all, but outside of this, only a handful of VJs had managed to break out into the larger concert production industry. It was a very interesting place to be. I had spent years of my life pioneering my tools with VIDVOX, and now I was able to see the fruits of my labor at work in front of massive audiences around the world. The pursuit then became more about the content itself, and advancing the ways in which we could create and manipulate it live. How it could seamlessly integrate with the lighting and set design. It has been fascinating to watch the technology continue to evolve.
I am so proud that VDMX continues to be such a powerful tool, used by so many people, on so many productions.
What led you into exploring abstract plugins for editing and post-production software? With touring, working for Grammy-winning acts, television, the whole thing, I found I was traveling most of the year. Then my wife and I had our daughter, and it quickly became important for me to focus on my family and get off the road.
I struggled a bit to figure out how I could pivot my career. We moved to Boston and I started doing 'traditional' video editing and production for the many universities here, such as Harvard and MIT. It dawned on me that I had one true calling that I've always loved, and that was video software. FxFactory is based here in Boston, and – if you didn't know – it can use Quartz Composer under the hood to create plugins. I love QC, so it just made a lot of sense.
I saw a unique opportunity to explore some perennial creative ideas I've worked with, but in the plugin space. Most plugins out there are purpose-built for utility and saving time. I am more interested in the artistic and generative design possibilities. The downside perhaps is that I've curtailed my market by doing so, as they may seem a bit idiosyncratic and weird at first glance, but hopefully there are some folks who realize their true potential.
Which of your plugins do you enjoy using the most? Each one has its own unique appeal. One of my favorite strategies is to use COLR as a source generator, and then apply FLOW or FOLD to play with shape. It is endlessly fun! I’m also proud of GEODE, my equivalent to ‘basic research’ — It is deceptively simple, but ultimately powerful when you explore its potential.
What opportunities exist for 3D software today? I think there are a lot of interesting things one can do with 3D, but for years I stayed away from it because I found the mechanics so tedious. People who model and animate 3D really have a special temperament; it is not how I like to work, but I often find I must. For me, I am most interested in live generation and manipulation of the 3D form… to bring the immediacy of raster-based instruments like VDMX to the 3D space. I would like to build visual instruments that use 3D vectors as their source.
You've mentioned the plugins work better in FCP X than in the Adobe suite – because they were built using Quartz Composer, which I'll presume FCP X integrates with better. Is QC the reason behind the large number of plugins that are FCP X only? It's not because of QC; it's because Apple allowed Motion to be a plugin and template engine for FCPX. Anyone with mograph experience can get their feet wet making plugins, even sell them if they like. FxFactory facilitates a lot of this. PixelFilmStudios is another player that works almost exclusively with Apple Motion as their dev tool.
As for Adobe CC, part of the magic with the FxFactory ecosystem is that they ‘wrap’ plugins built natively with Quartz into a package that Premiere and After Effects can use directly, which has allowed people like me to make products without needing to work in Xcode, and be able to address both Apple’s and Adobe’s ecosystem with the same source. I think my plugins run ‘better’ in FCPX simply because FCPX is quite simply faster than Adobe. Mind you, this is a MacOSX discussion we are having here.
What are your thoughts about the role and future of QC today – within the plugin developer community, and for coder artists in general? People have been saying for years that QC is going to die. Yet it hasn't happened, nor has Apple ever indicated they plan to kill it. I see it a bit like QuickTime. It is so fundamental, and so many products rely on it, that Apple would be shooting itself in the foot to actively kill it. That said, what they are doing is building more tech around Metal and making it easier to code with Swift, while at the same time training the young ones in Swift from the start. It's a powerful long-term strategy when you think about it. Eventually there won't really be a need for Quartz (in their mind).
There will always be a place for node-based graphical programming, so even if Apple doesn't release 'a new Quartz Composer', I am absolutely certain other tools will be filling the space (as they already are): Vuo, VVVV, FxCore, Max/Jitter, TouchDesigner, etc.
Do you have future Zoetroepe plugins in mind? I'm working on something completely different from plugins right now. I mentioned earlier an interest in realtime 3D instruments… and this is where I'm heading – a return to application software rather than plugins. I hope it will have a broader appeal. I'm particularly interested in the design community, who I think have much to gain by exploring instruments rather than timelines and canvases. I also have a keen eye on blockchain tech, but I really can't say anything else about that right now.
Software Adventure-Time! The newest* video kid on the block = Mitti, a “modern, feature-packed but easy-to-use pro video cue playback solution for events, theatre, audiovisual shows, performances and exhibitions” – coming from the same stable that brought us Vezér (the timeline-based MIDI/OSC/DMX sequencer) and CoGe (the versatile VJ software). (By *newest, I mean it's been around since late 2016, but since then Mitti has enjoyed a steady rate of notable additions and updates.)
After all the work that can go into video material for an event, playback control can sometimes be left as an afterthought – it's not unknown to see videos being played back from video editing software and 'live-scrubbed', or to watch users flipping between their desktop and PowerPoint / Keynote / QuickTime etc. VJ software of course brings flexibility and reliability to playback control – taking care of the basics such as fading to black, looping upon finish of a clip, cross-fading, or simply avoiding the desktop suddenly appearing on the main projected screen. But the flexibility of most VJ software is also one of its limitations – the strength of real-time effects and mixing tends to make interfaces more obtuse than they need to be for users seeking simple playback. And when getting simple playback exactly right becomes important – for events, theatre, installations etc – this is where Mitti seems to be aiming: in the ballpark of apps like Playback Pro, QLab or perhaps Millumin, where cues and critical timing are given more priority than visual effects.
So What’s Mitti Like?
Running at its most minimal, with a playlist of clips, Mitti appears deceptively simple – but it comes dense with custom controls at every level.
The Interface – a strength would seem to be the effort spent in making a very clear and intuitive interface for Mitti – it comes across as clean and easy to navigate, with extended options available where they might be expected. (And for further depth, go to the control menu, choose Mitti, then Preferences, to see a very well organised array of options.)
Timing – exact timing control can be critical, and Mitti boasts low latency, a GPU playback engine, and can run from an SMPTE-based internal clock, or slave to external hardware or software sources. If you know what these Mitti capabilities mean, you're possibly the intended audience: external MTC (MIDI Timecode), LTC (Linear Timecode), SMPTE offsetting, Jam-Sync.
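For readers outside that audience: SMPTE offsetting essentially means converting a timecode to an absolute frame count, shifting it, and converting back. The sketch below illustrates that arithmetic for non-drop-frame timecode only, to keep it simple – Mitti's own implementation isn't public and handles more formats (drop-frame, MTC, LTC) than this.

```python
def tc_to_frames(tc, fps):
    """'HH:MM:SS:FF' -> absolute frame number (non-drop-frame only)."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps):
    """Absolute frame number -> 'HH:MM:SS:FF' string."""
    f = frames % fps
    s = (frames // fps) % 60
    m = (frames // (fps * 60)) % 60
    h = frames // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def offset_tc(tc, offset_frames, fps=25):
    """Apply a frame offset to an incoming timecode value."""
    return frames_to_tc(tc_to_frames(tc, fps) + offset_frames, fps)
```

So an offset of +1 frame applied to `00:59:59:24` at 25 fps rolls cleanly over to `01:00:00:00` – the kind of boundary arithmetic that has to be exact when cues are slaved to an external clock.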
Cues – you can create and trigger cues for videos, images, cameras (with native Blackmagic support), Syphon and NDI sources – each with nuanced options – and easily add or adjust in/out points per clip. Cues can be set to loop, and the playlist can be paused between cues until you're ready to start the next one.
Nuanced control of fades and transitions – e.g. individual control per cue over fade-ins and fade-outs, and over 30 ISF-based video transition options
Output finessing – includes colour controls, video effects and audio channel routing; for multiple video displays, Mitti provides individual 4-corner warping for each, and edge blending between overlapping projectors. Mitti can also output Syphon, Blackmagic and NDI.
Below, showing the various cue preference options:
Video output options:
Project Preferences, which include detailed options for each item on the left:
This can be important when dealing with unusual projects or computer quirks… and Imimot boast great support – “Our average first response time was only 3 hours 5 minutes in the past 7 days!”, as well as extensive FAQ / tips and support documentation:
A Mac computer running OS X 10.10–10.12.
$299US for perpetual licence (for 2 computers) (30% edu discount available)
$79US rental licence for 30 days from first activation.
Solid, reliable software that’ll be of interest to anyone involved with running video for time-critical events, theatre and installations. Double thumbs up!
Interview with Tamas Nagy, Creator of Mitti
Tamas, below in Imimot HQ, was nice enough to answer some questions about why he made Mitti….
With the ecosystem of video software that exists – what inspired you to add Mitti to it? The idea of creating Mitti came from Vezér feature requests – the funny thing is that Vezér itself was also born from a couple of CoGe feature requests 🙂 A lot of Vezér users were searching for an easy-to-use but remote-controllable video playback solution which plays nice with Vezér, or even requested video playback functionality in Vezér itself. Adding video playback functions to Vezér didn't sound reasonable to me – the app was not designed that way, and I wanted to leave it as a signal-processing / show-control tool instead of redesigning the whole app. After doing some research, I found there was no app on the market at the time that I could offer to Vezér users which was easy to set up and use, lightweight, and controllable by the various protocols Vezér supports. So I started to create one. The original plan was to make something really basic, but once I started to speak about the project to my pals and acquaintances, I realised there is a need on the market for an easy-to-use app with pro features – and by pro features I mean timecode sync, multi-output handling, capture card support.
What are some contexts you imagine Mitti being used in? Mitti is targeting the presentation, theatre, broadcast and exhibitions market – usually wherever reliable cue-based media playback is needed – and this is where Mitti's current user base comes from: event producer companies, theatre, visual techs of touring artists, composers working with DAWs, etc.
What interests you about NDI? I believe NDI is the next big thing after Syphon. Now you can share frames between computers, even ones running different operating systems, with no or minimal latency, using cheap network gear or already-existing network infrastructure. And there is even hardware coming with native NDI support!
Another big thing is OSC Query. This is not strictly Mitti and Vezér related. OSC Query is a protocol – still in draft – proposed by mrRay from Vidvox for discovering an OSC-enabled app's OSC address space. As far as I know, only Mitti and Vezér support this protocol on the market, but hopefully others will join pretty soon, since this is going to be a game changer in my opinion.
Why is Mitti priced higher than CoGe? This is a rather complex topic, but basically Mitti has been designed for a fairly different market than CoGe. Also, CoGe is highly underpriced in my opinion – well, pricing things is far more complex than I imagined when CoGe hit prime time – but that is a whole different topic.
I spent a few nights in a hospital basement last year, projecting video and controlling lights for The General Assembly – onto a room filled with paper strips, while audiences roamed between rooms for mini-sets. It was part of Melbourne Music Week and super fun – the video below shows it off nicely.
Will be doing projections for TGA again this Saturday at The Toff In Town:
Melbourne, as the most Nathan Barley of Australian cities – so easily lampooned for its population of bushranger-bearded baristas with half-baked app ideas – makes a strong argument for being Australia's Portland. Perfectly placed, then, for reviewing Lumen – new real-time visual software coded by Jason Grlicky in downtown Portland, which tries to add some contemporary twists to the quirky history of video synthesis.
What is Lumen?
A Mac-based app (needing OS X 10.8 or later) for 'creating engaging visuals in real-time'… with a 'semi-modular design that is both playable and deep… the perfect way to get into video synthesis.' In other words, it's a software-based video synthesiser, with all the noodling, head-scratching experiments and moments of delightful serendipity this implies. A visual synthesiser that can build up images from scratch, then rhythmically modify and refine them over time. It has been thoughtfully put together though, so despite the range of possibilities it's also very quickly 'playable' – while always suggesting there's plennnnttttyyyy of room to explore.
The Lumen Interface
While the underlying principles of hardware video synthesisers are being milked here to good effect, a lot of Lumen's merits lie in the ways these principles have been made easily accessible through well-considered interface design. It is divided into three sections: a preset browser (which also features a lovely X/Y pad for interpolating between various presets), a knob panel interface, and a patch panel interface. It's a very skeuomorphic design, but it also cleverly takes the software to places where hardware couldn't go (more on that later).
What should be evident in those screengrabs is that experimentation is easy – and there's a lot of depth to explore. The extensive reference material helps a lot with the latter. And as you can see, they can't help but organise that beautifully on their site:
Lumen comes pre-loaded with 150+ presets, so it's immediately satisfying upon launch to be able to jump between patches and see what kind of scope and visual flavours are possible.
… and it's easy to copy and remix presets, or export and swap them – e.g. on the Lumen Slack channel.
MIDI, OSC + Audioreactivity
Although all three are planned, only MIDI exists in Lumen so far – but it's beautifully integrated. With a MIDI controller (or a phone/tablet app sending OSC to a MIDI-translating app on your computer), Lumen really comes into its own, and the real-time responsiveness can be admired. Once various parameters are connected via MIDI control, those can effectively be made audioreactive by sending signals from audioreactively controlled parameters in other software. Native integration will be nice when it arrives, though.
Video Feedback, Software Style
Decent Syphon integration of course opens a whole range of possibilities… Lumen's output can be easily piped into software like VDMX or CoGe for use as a graphic source or texture, or into mapping software like MadMapper. At the moment there are some limitations with aspect ratios and output sizes, but that's apparently being resolved in a near-future update.
With the ability to import video via Syphon, though, Lumen can reasonably be considered an external visual effects unit. Lumen can also take in camera feeds for processing, but it's the ability to take in a custom video feed that makes it versatile – e.g. video clips created for certain visual ideas, or the output of a composition in a mapping program.
The screengrab below shows the signal going into Lumen from VDMX, and also out of Lumen back into VDMX. Obviously, at some point this inevitably means feedback, and all the associated fun/horror.
macOS 10.8 or newer (Each license activates two computers)
There’s an army of lovers of abstracted visuals that are going to auto-love Lumen, but it has scope too for others looking for interesting ways to add visual textures, and play with real-time visual effects on video feeds. It could feasibly have an interesting place in a non-real-time video production pipeline too. Hopefully in a few years, we’ll be awash in a variety of real-time visual synthesis apps, but for now Lumen is a delightfully designed addition to the real-time video ecosystem.
Interview with Lumen creator, Jason Grlicky
– What inspired you to develop Lumen?
I’ve always loved synthesizers, but for most of my life that was limited to audio synths. As soon as I’d heard about video synthesis, I knew I had to try it for myself! The concept of performing with a true video instrument – one that encourages real-time improvisation and exploration – really appeals to me.
Unfortunately, video synths can be really expensive, so I couldn't get my hands on one. Despite not being able to dive in (or probably because of it), my mind wouldn't let it go. After a couple of failed prototypes, one morning I woke up with a technical idea for how I could emulate the analog video synthesis process in software. At that point, I knew that my path was set…
– When replicating analogue processes within software – what have been some limitations / happy surprises?
There have been so many happy accidents along the way. Each week during Lumen’s development, I discovered new techniques that I didn’t think would be possible with the instrument. There are several presets that I included which involve a slit-scan effect that only works because of the specific way I implemented feedback, for instance! My jaw dropped when I accidentally stumbled on that. I can’t wait to see what people discover next.
My favorite part about the process is that the laws of physics are just suggestions. Software gives me the freedom to deviate from the hardware way of doing things in order to make it as easy as possible for users. The way that Lumen handles oscillator sync is a great example of this.
Can you describe a bit more about that freedom to deviate from hardware – in how Lumen handles oscillator sync?
In a traditional video synth oscillator, you’ll see the option to sync either to the line rate or to the vertical refresh rate, which allows you to create vertical or horizontal non-moving lines. When making Lumen, I wanted to keep the feeling of control as smooth as possible, so I made oscillator sync a knob instead of a switch. As you turn it clockwise, the scrolling lines created by the oscillator slow down, then stop, then rotate to create static vertical lines. It’s a little thing, but ultimately allows for more versatile output and more seamless live performance than has ever been possible using hardware video synths.
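The behaviour described above can be sketched as a single phase function whose knob first drains the scroll rate to zero, then rotates the static lines toward vertical. This is my own rough model of the idea, not Lumen's source code; every name and constant here is an assumption for illustration.

```python
import math

def oscillator_phase(x, y, t, knob, spatial_freq=10.0, scroll_rate=2.0):
    """Value of a line oscillator at pixel (x, y) and time t, knob in [0, 1].

    knob = 0.0 -> scrolling horizontal lines
    knob = 0.5 -> static horizontal lines (scroll rate reaches zero)
    knob = 1.0 -> static vertical lines (orientation fully rotated)
    """
    if knob <= 0.5:
        # First half of the knob's travel: lines keep their orientation,
        # but the scroll rate eases down to a standstill.
        rate = scroll_rate * (1.0 - knob * 2.0)
        angle = 0.0
    else:
        # Second half: lines are frozen and rotate from horizontal
        # to vertical as the knob continues clockwise.
        rate = 0.0
        angle = (knob - 0.5) * 2.0 * (math.pi / 2.0)
    # angle = 0 varies along y (horizontal lines); pi/2 varies along x.
    axis = x * math.sin(angle) + y * math.cos(angle)
    return math.sin(2.0 * math.pi * (spatial_freq * axis + rate * t))
```

The point of the design is continuity: a hardware sync switch jumps between these states, whereas a knob sweeps through every intermediate speed and orientation without a visual pop.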
Were there any other hardware limitations that you were eager to exploit the absence of within software?
At every turn I was looking for ways to push beyond what hardware allows without losing the spirit of the workflow. The built-in patch browser is probably the number-one example. Being able to instantly recall any synth settings allows you to experiment faster than with a hardware synth, and having a preset library makes it easier to use advanced patching techniques.
The Snapshots XY-Pad, Undo & Redo, and the Transform/K-Scope effects are all further examples of where we took Lumen beyond what hardware can do today. Honestly, I think we're just scratching the surface of what a software video instrument can be.
How has Syphon influenced software development for you?
I had an epiphany a couple years back where I took a much more holistic view of audio equipment. After using modular synths for long enough, I realized that on a certain level, the separation between individual pieces of studio equipment is totally artificial. Each different sound source, running through effects, processed in the mixer – all of that is just part of a larger system that works together to create a part of a song. This thinking led me to create my first app, Polymer, which is all about combining multiple synths in order to play them as a single instrument.
For me, Syphon and Spout represent the exact same modular philosophy – the freedom to blend the lines between individual video tools and to treat them as part of a larger system. Being able to tap into that larger system allowed me to create a really focused video instrument instead of having to make it do everything under the sun. Thanks to technologies like Syphon, the future of video tools is a very bright place!
What are some fun Lumen + Syphon workflows you enjoy – or enjoy seeing users play with?
My favorite workflow involves setting up Syphon feedback loops. You just send Lumen’s output to another VJ app like CoGe or VDMX, put some effects on it, then use that app’s output as a camera input in Lumen. It makes for some really unpredictable and delightful results, and that’s just from the simplest possible feedback loop!
What are some things you’re excited about on the Lumen roadmap ahead?
We have so many plans for things to add and refine. I'm particularly excited about improving the ways that Lumen connects with the outside world – be that via new video input types, control protocols, or interactions with other programs. We're working on adding audio-reactivity right now, which is going to be really fun when it ships. Just based on what we've seen in development so far, I expect it to add a whole new dimension to Lumen while keeping the workflow intuitive. It's a difficult balance to strike, but that's our mission – never to lose sight of the immediacy of control while adding new features.
I recently animated some vintage botanical illustrations for an interactive exhibition installation at The Royal Botanic Gardens, Sydney. It was fun to collaborate with Robert Jarvis ( zeal.co ) on this – who programmed the interactivity (incorporating childrens’ webcam photos into the various creatures and plant-life storylines), as well as with D.A. Calf ( dacalf.com ) who brought the world to life so well. And a special shout-out to Luke Dearnley and Sophie Daniel who produced it.
One of the video-art greats passed away recently – RIP Bill Etra, who leaves behind a huge legacy for his work at the intersections of art and technology. Below, Bill Etra demonstrates the functions of the Rutt/Etra Video Synthesizer. (1974)
“Bill Etra, an artist and inventor who, with a partner, created a video animation system in the early 1970s that helped make videotape a more protean and accessible medium for many avant-garde artists, died on Aug. 26 near his home in the Bronx. He was 69.
The cause was heart failure, said his wife, Rozalyn Rouse Etra. Mr. Etra had spinal stenosis for many years and was mostly bedridden when he died.
Mr. Etra and Steve Rutt created the Rutt/Etra video synthesizer, an analog device studded with knobs and dials that let a user mold video footage in real time and helped make video a more expressive art form. Among the artists who used it were Nam June Paik, regarded by many as the father of video art, and Woody and Steina Vasulka, who founded the Kitchen performance space in downtown Manhattan in 1971.”
“The dream was to create a compositional tool that would allow you to prepare visuals like a composer composes music,” Mr. Etra wrote. “I called it then and I call it now the ‘visual piano,’ because with the piano the composer can compose an entire symphony and be sure of what it will sound like. It was my belief then, and it is my belief now after 40 years of working towards this, that this will bring about a great change and great upwelling of creative work once it is accomplished.”
“Developed in 1972, the RUTT/ETRA Video Synthesizer was one of the first commercially available computerized video animation systems. It employed proprietary analog computer technology to perform real time three dimensional processing of the video image. In the first use of computer animation in a major Hollywood picture, Steve Rutt, working directly with Sidney Lumet, used the Rutt/Etra to create the animated graphic for the film’s “UBS” Television Network.”
Rutt-Etra-Izer is a WebGL emulation of the classic Rutt-Etra video synthesizer, by Felix Turner, which ‘replicates the Z-displacement, scanned-line look of the original, but does not attempt to replicate its full feature set’. The demo lets you drag and drop your own images, manipulate them, and save the output. Images are generated by scanning the pixels of the input image from top to bottom, with scan lines separated by the ‘Line Separation’ amount. For each line generated, the z-position of its vertices depends on the brightness of the underlying pixels.
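That scan-line logic is simple enough to sketch in a few lines. The following is a minimal illustration of the idea, not Felix Turner’s actual implementation – the function name, the 0–255 brightness range and the `depth_scale` parameter are all assumptions:

```python
# Sketch of the Rutt/Etra-style scan-line displacement:
# walk the image top to bottom, emitting one polyline per scan line
# (skipping rows by `line_separation`), and push each vertex along z
# in proportion to the pixel's brightness.

def rutt_etra_lines(image, line_separation=8, depth_scale=0.1):
    """image: 2D list of 0-255 brightness values (rows of pixels).

    Returns a list of polylines; each polyline is a list of
    (x, y, z) vertices, with z driven by brightness.
    """
    lines = []
    for y in range(0, len(image), line_separation):
        row = image[y]
        # Brighter pixels are displaced further along z.
        polyline = [(x, y, brightness * depth_scale)
                    for x, brightness in enumerate(row)]
        lines.append(polyline)
    return lines
```

Rendering those polylines (in WebGL, with perspective and rotation) is what produces the floating, contour-map look of the original synthesizer.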
Am glad to finally upload that edit-medley – because creating a set of concert visuals for Hermitude was one of my favourite projects last year, seeing it from drawing board and sketch paper through to the stage screen. Hermitude had approached me (having worked together on Dr. Seuss Meets Elefant Traks at Sydney’s Graphic Festival in 2012) about developing video for their tour promoting Dark Night, Sweet Light – they wanted a visual set that suited their music, would work well within a hectic stage-lighting environment, and was diverse but felt like a coherent, consistent show.
To suit Hermitude’s fun, festive sound and their dynamic live performances, I developed an overall visual style palette and mapped out a visual choreography for the show. And though I was excited about making some Hermitude clips of my own, it was also an exciting opportunity to collaborate with some talented animators, coders and cinematographers. It was fantastic to work with these artists to craft the Hermitude set:
Neil Sanders – a Melbourne hand-drawn illustrator and animator extraordinaire, famous for his signature organic tumblr loops.
Ori Toor – another hand-drawn abstraction loop specialist, beaming pixels to us from the Middle East.
Colin E. White – moodily stylised New York animator.
Brad Hammond – a Melbourne 3D Unity animation ninja + coder. (And a shout-out to Keijiro Takahashi from Japan, for his ongoing publishing of Unity software addons… )
Stu Gibson – a Tasmanian surf + aerial cinematographer, who was very generous with his wild coastline footage (which I used to make the Bermuda Bay clip below).
It was also a pleasure to develop this visual set over time, because Luke ‘Dubs’ + Angus ‘El Gusto’ (aka Hermitude) are so down to earth and friendly despite their relentless touring and acclaim – as are the whole Elefant Traks crew, especially their tireless manager (and collaborator) Urthboy, and Luke Dearnley (Sub Bass Snarl), their wizardly tour manager, who designed a clever + efficient video rig featuring live cams, for routing and controlling their stage video feeds.
A lot of pixel-sweat across quite a few months, but …
.. so satisfying to see it all come together in the end.
I was lucky enough recently to catch a film-talk panel between director Joshua Oppenheimer and John Safran at the Melbourne International Film Festival. Having just seen The Look of Silence earlier that day, and already in awe of the brave and audacious film-making of its companion film (The Act of Killing), it was humbling and a privilege to hear about some of what went into the making of the film – and what some of its impacts have been since.
Given that Indonesia has not officially or publicly discussed the mass killings that happened in 1965–66 (carried out supposedly to get rid of a communist threat), and that many of the perpetrators are entrenched in power today, it’s quite remarkable that these two films got made – prompted national discussions – and that the second film was given official recognition:
“On November 10, 2014, 2,000 people came to the official and public premiere of the film in Jakarta, and on December 10, 2014 – International Human Rights Day – there were 480 public screenings of the film across Indonesia. The screenings of the film in Indonesia have been sponsored by the National Human Rights Commission of Indonesia and the Jakarta Arts Council.” (via Wikipedia)
Incredibly, after the first film – which featured the surreal, defensive(?) boasting of one of the mass-killers – an Indonesian journalist saw it and persuaded their magazine to send out investigative journalists to document similar perpetrators in 60 different locations across Indonesia, then published all of these reports in one go, alongside an in-depth reaction to Oppenheimer’s film. This broke the silence, and allowed Indonesian media to move past the taboo of discussing these events.
Regardless of your awareness of this Indonesian mass killing, these are powerful films on many levels – well worth hunting down.
The film focuses, in the present day, on the perpetrators of the Indonesian killings of 1965–66 – ostensibly directed at the communist community – in which almost a million people were killed.
Invited by Oppenheimer, Anwar recounts his experiences of killing for the cameras, and stages scenes depicting his and his friends’ memories and feelings about the killings. The scenes are produced in the style of their favorite films: gangster, western, and musical.
The name “Anonymous” appears 49 times under 27 different crew positions in the credits. These crew members still fear revenge from the death-squad killers.
When the government of Indonesia was overthrown by the military in 1965, Anwar and his friends were promoted from small-time gangsters who sold movie theatre tickets on the black market to death squad leaders. They helped the army kill more than one million alleged communists, ethnic Chinese, and intellectuals in less than a year. As the executioner for the most notorious death squad in his city, Anwar himself killed hundreds of people with his own hands. Today, Anwar is revered as a founding father of a right-wing paramilitary organization that grew out of the death squads. The organization is so powerful that its leaders include government ministers, and they are happy to boast about everything from corruption and election rigging to acts of genocide.
The Act of Killing is about killers who have won, and the sort of society they have built.
In The Act of Killing, Anwar and his friends agree to tell us the story of the killings. But their idea of being in a movie is not to provide testimony for a documentary: they want to star in the kind of films they most love from their days scalping tickets at the cinemas. We seize this opportunity to expose how a regime that was founded on crimes against humanity, yet has never been held accountable, would project itself into history.
And so we challenge Anwar and his friends to develop fiction scenes about their experience of the killings, adapted to their favorite film genres – gangster, western, musical. They write the scripts. They play themselves. And they play their victims.
“Through Oppenheimer’s footage of perpetrators of the 1965 Indonesian genocide, a family of survivors discovers how their son was murdered, as well as the identities of the killers. The documentary focuses on the youngest son, an optometrist named Adi, who decides to break the suffocating spell of submission and terror by doing something unimaginable in a society where the murderers remain in power: he confronts the men who killed his brother and, while testing their eyesight, asks them to accept responsibility for their actions. This unprecedented film initiates and bears witness to the collapse of fifty years of silence.”