VideOSC handles the task by using the video stream coming from the phone’s built-in rear camera. Each frame of the video stream is scaled down to a small image of user-defined size (e.g. 6 × 4 pixels), and the RGB values of every pixel in this image are sent to a receiving client on the network via OSC. In principle, any software (or hardware) that understands the OSC protocol may serve as a client. Nevertheless, a client should be able to use the incoming information in a meaningful way: a setup of 6 × 4 pixels, for example, will produce a stream of 6 × 4 × 3 values at the update rate of the phone’s display (which may be as high as 60 frames per second). This means not only a potentially high CPU load on the receiving machine or device; it also demands a special approach to the design of sound- or video-generating structures. Ideally one would use algorithmic sound or video software such as Pure Data, ChucK, Max/MSP, VVVV and others. Personally, I prefer SuperCollider in combination with my CVCenter library.
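To get a feel for the numbers involved, here is a minimal sketch in Python that encodes a single-float OSC message by hand and works out the data rate for the 6 × 4 example above. The address pattern `/vosc/red1` is purely hypothetical – VideOSC’s actual OSC namespace may look different.

```python
import struct

def osc_message(address, value):
    """Encode a minimal OSC message carrying one float argument.

    Per the OSC 1.0 spec, strings are null-terminated and padded to
    4-byte boundaries, and the type tag string for one float is ",f".
    """
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",f") + struct.pack(">f", value)

# The 6 x 4 grid from the text: one value per RGB channel, per pixel.
WIDTH, HEIGHT, FPS = 6, 4, 60
values_per_frame = WIDTH * HEIGHT * 3       # 72 values per frame
values_per_second = values_per_frame * FPS  # 4320 values/s at 60 fps

# Hypothetical address for the red channel of pixel 1.
msg = osc_message("/vosc/red1", 0.5)
print(values_per_frame, values_per_second, len(msg))  # 72 4320 20
```

Even at this tiny resolution the receiver has to absorb more than four thousand values per second, which is why the client-side patch design matters.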
VideOSC’s interface is simple and intuitive
Here are a few screenshots with some explanations:
As already mentioned, VideOSC uses the color information of the video stream coming from the phone’s (or tablet’s) built-in rear camera. Here are two screenshots demonstrating the translation from the high-resolution image (which isn’t actually that high: by default VideOSC will use the smallest available preview size) to the user-defined display size.
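The reduction from camera preview to the small grid amounts to downsampling. The post doesn’t spell out which resampling method VideOSC uses, but block averaging is one plausible approach; the sketch below illustrates the idea on a synthetic frame.

```python
def downsample(frame, out_w, out_h):
    """Reduce a frame (rows of (r, g, b) tuples) to out_w x out_h by
    averaging each source block -- one plausible way to map the camera
    preview onto the small user-defined grid (VideOSC's actual
    resampling method may differ)."""
    in_h, in_w = len(frame), len(frame[0])
    bh, bw = in_h // out_h, in_w // out_w  # source block per output pixel
    result = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            block = [frame[y * bh + j][x * bw + i]
                     for j in range(bh) for i in range(bw)]
            n = len(block)
            # Average each channel over the block.
            row.append(tuple(sum(px[k] for px in block) // n
                             for k in range(3)))
        result.append(row)
    return result

# A 12 x 8 synthetic gradient frame reduced to the 6 x 4 example grid.
frame = [[(x, y, 0) for x in range(12)] for y in range(8)]
small = downsample(frame, 6, 4)
print(len(small), len(small[0]))  # 4 6
```

Each of the resulting 24 pixels would then yield three OSC values per frame, matching the 6 × 4 × 3 stream described earlier.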
References
1. Open Sound Control, a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology – opensoundcontrol.org