- Allow users to play the instrument itself, hum a tune (using pitch detection), and use gestures to control arrangement and volume.
- Users are the music: their movements control the sounds directly. This would use algorithmic composition.
At the highest level there will be the Kinect/controller system and the audio system. The Kinect system will take the user's movements and voice and translate them either into MIDI notes or into changes to the system. The audio system will have to load virtual instruments and play the MIDI notes.
So far I have been primarily concerned with the audio system, to see how difficult it is to implement. The two technologies I have been looking at are:
- Steinberg's Virtual Studio Technology (VST) is an interface for connecting virtual instruments to host applications.
- Audio Stream Input/Output (ASIO) is a sound card driver protocol for low-latency digital audio.
Both SDKs are for C++ applications, though I have been able to find .NET ports for each: VST.NET and ASIO.NET. ASIO.NET seems to be an older port that is no longer maintained, but the functionality is there. Its demo application was not working, and there seemed to be a problem with the DLL on my system; I have since managed to recompile it, and both libraries now work with C#.
So far I have managed to load an instrument and send it a MIDI note, then use the input and output buffers to play the sound. I am not sure my implementation is the best approach, though: neither port is well documented, and there are few examples of how to use them. It may be better to study the original C++ libraries they wrap to understand them properly.
I will continue with this implementation and try to optimise how the VST plugin and the sound card interact.