© 2016 Stefan Nussbaumer.
Creative Commons license

VideOSC – an experimental OSC controller

Seeing as a Construction Process of Reality

When we talk about “seeing” we will hardly understand the term as a process of creation. We look at some object and, if someone asks us what we are seeing, we will probably answer “we see some object”. In other words, we identify an image of the object with the object itself. Our personal visual experience seems so similar to other people’s visual experience that we can say without any doubt that it is pretty much identical with theirs. Nevertheless, this is an illusion. It is our brain that creates an individual symmetry between our common reality and our personal perception.

In his article “Operation, Operator – Sehen, was das Photon sieht” (“Operation, Operator – Seeing what the Photon sees”)[1], the German artist, programmer and theorist Julian Rohrhuber says the following about the term ‘symmetry’ in scientific research:

“Symmetry and abstraction are two elements of scientific representation that are as central as they are disputed. What they have in common is a peculiar, deliberate indifference toward differences, an indifference which is less a sign of inexactness than something that shapes the credibility, the elegance or the economy of scientific solutions in general.”[2]

I believe the mechanism described above can, to a certain degree, be applied to our perception as well, if we understand visual perception not merely as the fact of seeing but, beyond that, as part of a complex process of comprehension. Our brain quickly forgets the less important details and concentrates on those that identify reality in its relevance for our existence. Whatever the important details may be, it is probably those that give our lives their implicitness.

In his paper Rohrhuber distinguishes between the constitutive or formal components and the epistemic ones. The epistemic components form the operational correspondence with a measurement procedure, while the formal, constitutive ones concern all other facts. Applied to our perception, this means that seeing can be understood as the first stage of an operation rather than mere observation: seeing is already part of an action.

Within VideOSC the correlation between (machine-based) visual perception and the operation (or operational chain) becomes even stricter, as what the machine “sees” will inevitably be tied to the operation and its product, be it acoustic, visual or otherwise. However random or stochastic VideOSC’s output may seem, it will always form a strictly logical symmetry with the visual environment it is observing.


1 Julian Rohrhuber, “Operator, Operation – Seeing what the Photon sees”, in Strukturentstehung durch Verflechtung. Akteur-Netzwerk-Theorie(n) und Automatismen, Fink Verlag, Munich 2011, retrieved from http://www.wertlos.org/~rohrhuber/articles/Rohrhuber_Operation_Operator_Sehen_was_das_Photon_sieht.pdf
2 ”Symmetrie und Abstraktheit sind zwei ebenso zentrale wie umstrittene Elemente wissenschaftlicher Darstellung. Ihnen gemeinsam ist eine eigentümliche, gezielte Gleichgültigkeit gegenüber Unterschieden, eine Gleichgültigkeit, die weniger ein Zeichen von Ungenauigkeit darstellt, sondern vielmehr die Glaubwürdigkeit, Eleganz oder Ökonomie wissenschaftlicher Lösungen ganz allgemein prägt.”


  1. Posted 25 Dec ’16 at 10:20 am | Permalink

    Hi, Stefan.

    I got the “Visual Controller with Sliders” worked out.

    It’s available at:

    Note: You also need to download the abstraction “isRGB” at https://drive.google.com/open?id=0B_fz4TqQFu-UbHk2QjN4ZC1qV28 and include that in the same folder as the main patch to have it work.

    This was a LOT of fun to work on and will be something I use as soon as I get the chance. I’ll share a link with you to the music (as a YouTube, audio-only video) once I get that done.

    Want to let you know I am going to share this patch on the Pure Data forum (at: http://forum.pdpatchrepo.info/) and will drop a link here for it once I do. Hope that’s all cool and well with you. Let me know if that’s not cool with you and I’ll rein it in.

    Peace and again thanks for this great work.

    It really is amazing and can put your head in a really cool place once you begin to think about what it’s capable of.

    Peace and love flow-ering through us all,

    p.s. hope you’re having a great holiday (if such is your thing of course :-)

    footnote: I tried converting the RGBs to hex codes and then filtering for hex codes but that did not turn out so well. Think the range is really too big to make that feasible. Figured you were using the 216 web-safe colors but it looks like, and I think I am right, the range is really much, much larger than that.

    • Posted 26 Dec ’16 at 12:41 am | Permalink

      Hi Scott,

      Thanks so much for your contribution! I tried your new version of the patch and it seems to work. I must confess I couldn’t figure out what the “isRGB” abstraction is supposed to do. And, no, I’m not using the web-safe color palette. The values you’re receiving come from the full palette of colors that are possible with 8 bits per color channel. If I’m not mistaken, the web-safe palette is a small subset of the 8 bits per channel palette, using only 6 distinct values per channel, or 216 colors in total (it’s really quite some time ago that I had to deal with that…). 8 bits per channel correspond to 32 bits in total (considering R, G, B, A) or 24 bits (without alpha), meaning every color is defined by a red, a green and a blue channel, each of which holds an integer value between 0 and 255 (the highest possible value that can be described in 8 bits: 11111111 in binary). The combination of the three values (resp. four values, if alpha is given too) gives you millions of possible colors, just what e.g. Photoshop was using until not so long ago (I think Ps now supports 16 bits per channel…).

      However, here’s a simple suggestion on how to reduce the bit depth, considering an arbitrary integer value between 0 and 255, e.g. 174. Think of that number as a value in an 8-bit range (0-255). We would like to reduce the bit depth from 8 to 4 (15, or 1111 in binary, being the highest possible value):

      174 expressed as binary value: 10101110

      We now shift this number right by 4 bits; some precision gets lost, but that is to be expected, as an integer range from 0-15 cannot have the same precision as a range from 0-255:

      174 >> 4 // right shift by 4 bits, result: 1010
      1010 expressed as a decimal value: 10
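      The same integer case, sketched as runnable Python purely for illustration (the function name is made up; VideOSC itself doesn’t provide this):

```python
def reduce_bit_depth(value, from_bits=8, to_bits=4):
    """Drop the lowest bits of an integer value, reducing its bit depth."""
    return value >> (from_bits - to_bits)

print(reduce_bit_depth(174))       # -> 10
print(bin(reduce_bit_depth(174)))  # -> 0b1010
```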

      That method will always work as long as we have integer values, which is the case in VideOSC as long as the calculation period is 1. If you set it to something else (2, 3, 4, …) you will get floats due to interpolation. At least in SuperCollider, floats cannot easily be expressed as binary values. You’ll have to make an assumption about the length of the fractional part (the number of digits after the dot). If you set the calculation period to 4, the fractional part of the output will be either .25, .5 or .75. If you set it to 3 it might result in a periodic fractional part: .333333333 (theoretically endless, but the length is limited by float precision, I think). But let’s say a precision of 4 digits after the dot is sufficient:

      // multiply by 10000 and truncate the result to an integer
      // (sorry, SuperCollider pseudo code - this keeps 4 digits of the fractional part)
      num = (<a float with 4 digits after the dot> * 10000).asInteger
      // shift right by 4 bits
      num = num >> 4
      // divide the result by 10000 again
      result = num / 10000

      This way you can easily convert to a lower bit depth. You’ll only have to consider that the output range will be 0 to the highest possible value of the respective bit depth. If you’d like to keep the original output range and only reduce the resolution, you simply apply another left shift to the result:

      174 >> 4 // 10, resp. 1010
      10 << 4 // 160, resp. 10100000

      … sorry for this lengthy reply. The conclusion is simply: if you want to use VideOSC’s output as a simple switch (without thresholds), just apply a 7-bit right shift; the output will be either 0 or 1 (no floats in between). Just keep in mind that floats (if the calculation period is something other than 1) need special treatment. All this could of course be done within VideOSC but would bloat the code unnecessarily due to the changes I would have to make to the user interface. Believe me, it’s much easier to handle within Pd or SuperCollider.
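      The switch case from the conclusion above, as a Python sketch (again just an illustration, not VideOSC code):

```python
def to_switch(value):
    """Reduce an 8-bit pixel value (0-255) to a 1-bit on/off switch."""
    return value >> 7

print(to_switch(20))   # -> 0 (dark pixel: off)
print(to_switch(200))  # -> 1 (bright pixel: on)
```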

      Hope you’re having great holidays as well (indeed a special thing these holidays but they should be great for anyone regardless of what she or he believes in or not).

      Peace, Stefan

  2. Posted 16 Dec ’16 at 9:10 am | Permalink

    Getting back with you, Stefan. This is what I came up with tonight (see the Google Drive pd file link below). The Pure Data patch reads VideOSC on a 2×2 grid so that you can point red, green or blue stickers/items at it, and it tracks the movement of that color on a 2×2 grid of 4 toggles. I included more details in the patch. Since VideOSC captures motion it can then be used to capture “expression”. I’ve never liked the idea of pedals and have been working a lot to get away from that concept. Your tool, in Magic: The Gathering terms, is a Major Path to Victory. Ciao.


    • Posted 16 Dec ’16 at 1:27 pm | Permalink

      Great! Thanks so much for this example and your detailed description!
      I think this could make an excellent starting point for Pd users interested in using VideOSC. I’ve just tried your patch and even though I have minimal experience in using Pd it worked immediately for me.
      Would you mind if I added your example to the VideOSC repository on GitHub?

      • Posted 16 Dec ’16 at 1:29 pm | Permalink

        Would you mind if I added your example to the VideOSC repository on GitHub?

        Of course I would credit you adequately.

        • Posted 18 Dec ’16 at 1:27 am | Permalink

          You are more than welcome to, Stefan. It would be an honor. Do touch base with me once it’s up, ok?

      • Posted 18 Dec ’16 at 1:34 am | Permalink

        FYI: I am also going to do a 3×3 version, so when the pointer is in an outer box it slides a Pure Data 2-D slider. That way a user can “wave” a change to his controls by pointing at the app. When I’m done I’ll share that with you too.

        P.s. I’ve been thinking of using a shinier surface (electrical tape over paper stickers). The hope being it will be captured more precisely, but this hope is probably also founded on my limited knowledge of RGB.


  3. Posted 12 Dec ’16 at 7:29 pm | Permalink

    This is Real Genius!!! I just saw this the other day and am pairing it in my mind with both Pure Data’s Gem and Pure Data audio. Am thrilled to have found it. The last route I took to digitize my music was glcapture and projectM/Milkdrop files. But they did not allow enough real control. Thrilled to have found this and many, many happy returns to you both for having the strength of will to make it and the love to have shared it. peace. -svanya

    • Posted 13 Dec ’16 at 1:34 pm | Permalink

      Hi Scott,
      really, really happy to hear you like VideOSC. Note that this is only the first release, basically covering the idea of using visual information to control sound or other media. I am aware that this doesn’t make for a full-fledged, easy-to-use OSC controller like TouchOSC and others. And I don’t really want to copy those concepts either…
      I am myself still “learning” how to use VideOSC in audio performances. I am curious to see other people’s usage concepts. I’d be willing to promote your work with VideOSC here on this site as well (if you like).
      Thanks very much once again!

      • Posted 16 Dec ’16 at 6:48 am | Permalink

        Hi, Stefan. Thanks for getting back.

        Currently I am envisioning and preparing (bought the colored flags/dots I needed today) to use the tool to gesturally change settings on my guitar (pd) patch/rack (with colored “flags” either on the points of my fingers or the back of my hand).

        Question: What is the meaning of the “Set Calculation” Options setting?

        Pd receives it well, which is cool. Now it’s just a matter of finding out how to map the motion of a point across the 3D surface and resolving the issues with so many signals (possibly shades of red, for instance, as I am pretty sure I can safely assume one spectrum is “faster” and easier to manage).

        Hopes are high here. And thanks, I will definitely share with you what I come up with.


        • Posted 16 Dec ’16 at 1:12 pm | Permalink

          Question: What is the meaning of the “Set Calculation” Options setting?

          Normally, OSC messages are created from the color values of the pixels at every new frame. When “Set calculation period” is set to some value higher than 1, the pixel values are only measured at every nth frame (where n is the value that has been set) and the values in between are linearly interpolated from the last measured to the currently measured ones. Usually this raises the actual framerate quite a bit and possibly prevents audible steps.
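          The principle of that interpolation can be sketched in Python (an illustration only; names are made up and this is not VideOSC’s actual implementation):

```python
def interpolated_values(prev, curr, period):
    """Linearly interpolate between two pixel measurements taken
    'period' frames apart; one value per frame, ending at the
    current measurement."""
    return [prev + (curr - prev) * step / period
            for step in range(1, period + 1)]

# A pixel moving from 100 to 174 with a calculation period of 4:
print(interpolated_values(100, 174, 4))  # -> [118.5, 137.0, 155.5, 174.0]
```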

          possibly shades of red, for instance, as I am pretty sure I can safely assume one spectrum is “faster” and easier to manage

          In general, fewer pixels mean a faster update rate, of course. However, just to make sure this is understood right: selecting e. g. the red channel through the color-mode selector does not mean that only the red channel is now active. To make one or more particular colors inactive you’d have to explicitly deactivate all pixels in the respective channel.

          • Posted 17 Dec ’16 at 4:56 am | Permalink

            I am going to have to get back about the first reply later. (But it is all good and I am very honored.)
            Question, or possibly a feature request for a scenario such as the one I present:
            Would it be possible to somehow set the bit-rate for the colors? Meaning so it’s only 8 colors, etc.? It would come in very handy in the scenario I am presenting.

            Perhaps there is already a way to do that?

            • Posted 17 Dec ’16 at 2:44 pm | Permalink

              Would it be possible to somehow set the bit-rate for the colors? Meaning so it’s only 8 colors, etc.? It would come in very handy in the scenario I am presenting.

              That’s kind of an unexpected feature request. I’d rather have expected a request for a higher bit rate, as VideOSC is currently using 8 bits per channel (so far I haven’t been able to figure out how to use 16 bits).

              I guess it wouldn’t be too difficult to implement a (user-settable) lower bit depth though. I never thought about it as I usually handle these cases on the client side. E. g. if I need the incoming OSC to pick from a list of 8 distinct values I’d define a dedicated ControlSpec in SuperCollider and let that map the incoming values:

              // define a range from 0-7, stepsize 1
              ControlSpec(0, 7, \lin, 1.0, 0);

              … don’t know if something like that is possible in Pd?
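              For readers who don’t use SuperCollider, roughly the same mapping as a Python sketch (an illustration only; the real ControlSpec does more, and the function name is made up):

```python
def map_stepped(x, minval=0, maxval=7, step=1):
    """Map a normalized input (0.0-1.0) linearly onto [minval, maxval],
    quantized to multiples of 'step'."""
    value = minval + (maxval - minval) * max(0.0, min(1.0, x))
    return round(value / step) * step

print(map_stepped(0.0), map_stepped(0.5), map_stepped(1.0))
```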

              • Posted 18 Dec ’16 at 1:29 am | Permalink

                Seems then the issue lies in my limited knowledge of RGB.
                My feeling was that when I pointed at the app there was some blurring between the regions of my boxes, mainly because I don’t know how to trap the color range that is in the sticker. So I’ll do more research to determine how to implement what you suggested.

                • Posted 18 Dec ’16 at 2:46 pm | Permalink

                  Well, it’s basically quite simple: a 24-bit image (no alpha channel) shares its information among three 8-bit channels (RGB); color is expressed as an integer value between 0 and 255 (or 11111111 as a binary value) in each color channel. That’s what VideOSC sends within each OSC message (you can, however, “normalize” that range to 0.0-1.0).
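                  That channel layout can be illustrated with a few bit operations in Python (hypothetical color value; not VideOSC code):

```python
def unpack_rgb(color):
    """Split a packed 24-bit color into its three 8-bit channels."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

def normalize(channel):
    """Map an 8-bit channel value onto the 'normalized' range 0.0-1.0."""
    return channel / 255

r, g, b = unpack_rgb(0xFFAE00)  # some orange tone
print(r, g, b)       # -> 255 174 0
print(normalize(r))  # -> 1.0
```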

                  It would basically be possible to define other ranges or to reduce the bit depth. However, I don’t think that’s particularly feasible. Say I’d like one pixel to control the frequency of some oscillator with a range of 20-5000 Hz, while another one should control the phase argument of a sine oscillator, ranging from 0 to 2pi. I could adapt VideOSC’s output to fit one range, but how should the other one be handled? Or should the output range of every single pixel be settable? I think that would really clutter the interface and make using VideOSC a lot more difficult. That’s why I think it should be handled in the client (SuperCollider, Pd, whatever).
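                  On the client side each such mapping is a one-liner anyway; a Python sketch, assuming the pixel arrives as an 8-bit integer (the function name is made up):

```python
import math

def map_range(value, out_min, out_max, in_max=255):
    """Linearly map an 8-bit pixel value onto an arbitrary output range."""
    return out_min + (out_max - out_min) * value / in_max

freq = map_range(174, 20, 5000)         # one pixel -> oscillator frequency in Hz
phase = map_range(174, 0, 2 * math.pi)  # the same pixel -> phase in radians
print(freq, phase)
```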

                  I’ve already looked up the Pd documentation for some object that could do this but couldn’t find anything. Maybe you know better.
