© 2019 Stefan Nussbaumer.
Creative Commons License

Let the worlds collide

It’s been quite a while since I last posted about my work on VideOSC, the OSC controller for Android that lets you control digitally generated sound or other media through the color information retrieved from the device’s built-in camera. I haven’t been lazy or given up on the project. In fact, there is a new version in the pipeline, built with the native Android API (the first version was built with Processing, a Java-based programming environment for artists). But more on that a bit later…

The video above was taken on December 14 last year at smallforms, a concert series for experimental music in Vienna, Austria. When I started working on VideOSC it was this strange idea that I wanted to hear what a view would sound like. Or even more: to control sound through color. I must say, the result was quite far from what I expected. Video and sound are two phenomena too different for the ear to identify what the original image might have been. Nevertheless, that’s also where the fun starts.

VideOSC is not really a “musical” instrument. If you’d like to play chords or control rhythms you’re probably better off buying a regular instrument or a different kind of software. What VideOSC produces will, to most ears, simply be noise, even though it’s perfectly deterministic. However, what I found was an instrument that bridges the gap between my original profession, painting, and my current field of exploration, sound. Not only do I love playing with noise; VideOSC also enables me to create a kind of ephemeral painting: paintings that, by nature, cannot be painted over and over, quickly set, quickly gone again. I must admit I haven’t got much control over what I’m drawing. But that is pretty much what I always wanted: not to control things, but rather to let them flow. That’s a pretty difficult exercise in painting, as well as in life in general…

What can be seen in the video is still a rough sketch. Everything, sound and video, is done in SuperCollider, which is fine for audio but not ideal for video. Unfortunately, SuperCollider doesn’t support OpenGL, which would allow rendering on the computer’s graphics processing unit (GPU). Hence, I’m planning to move to Processing or (maybe) openFrameworks for generative video. On the other hand, this would mean sending data from VideOSC to two different applications, i.e. two different ports. Either I forward OSC data from SuperCollider to the video-generating app, or, and I think this is my preferred solution, I enable VideOSC to send data to more than one IP address/port at a time. This is likely the next feature I will add (just for those interested…).
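The multi-destination idea is simple in principle: encode each control value as an OSC message once, then fire the same UDP packet at every registered address/port pair. Here is a minimal sketch in Python using only the standard library; note that the address pattern /vosc/red1 and the two destinations are made-up placeholders for illustration, not VideOSC’s actual output format (port 57120 is merely sclang’s default).

```python
import socket
import struct

def _pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message carrying float32 arguments (big-endian)."""
    msg = _pad(address.encode("ascii"))                     # address pattern
    msg += _pad(("," + "f" * len(floats)).encode("ascii"))  # type tag string
    for f in floats:
        msg += struct.pack(">f", f)                         # float32 args
    return msg

def broadcast(address: str, value: float, destinations) -> None:
    """Send one OSC message to several (host, port) pairs over UDP."""
    packet = osc_message(address, value)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for host, port in destinations:
            sock.sendto(packet, (host, port))

# Hypothetical example: one red-channel value goes to both SuperCollider
# (port 57120) and a Processing sketch (port 12000) on the same machine.
broadcast("/vosc/red1", 0.5, [("127.0.0.1", 57120), ("127.0.0.1", 12000)])
```

Since UDP is connectionless, sending the same packet twice costs almost nothing, which is why doing it on the phone seems preferable to forwarding through SuperCollider.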

Meanwhile I have put the current VideOSC development version on Google Play for free download. I must admit I made an embarrassing mistake when publishing the app: it should really have been a closed alpha release, available only to members of a Google Group (Google Play doesn’t allow alpha releases to be freely available). Unfortunately, the release slipped from alpha into the regular release… sorry. Anyway, even though it’s still in alpha state, I believe the new version is preferable to the old one. It’s probably more stable and works on the latest Android version. Missing features will follow…

Apart from technical details on the new VideOSC version, here is some more news about my work.

in the noise of the night and the still of the day - an installation during Klangmanifeste 2019 at Echoraum, Vienna

Stay tuned for new updates on VideOSC or other stuff that I do…
