Dylan's 2011 Thesis Blog

Saturday, December 24, 2011

Over the past few weeks I have been slowly refactoring some of my Kinect controller code. At the start of each frame it checks the user's posture, and if the posture matches one of the programmed gestures it triggers a gesture start event. These gestures are also useful during other Kinect work; for example, we do not want to play an instrument while both hands are down below the hips.

After this, the skeleton is passed through the IKinectWork interface for processing, which lets me extend the system to many other applications. For now I have DrumWork and GuitarWork, and the two use the Kinect in quite different ways.

I have now completed the DrumWork. The user hits zones defined by the program to play the drums (I want to show these zones on the screen somehow), and I am working on using hand speed to determine the volume of the drum sounds. I am also trying to find the best way to reuse code across instruments.
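Roughly, the shape of the code is as follows. This is a simplified sketch rather than the real implementation: the joint dictionary, thresholds and stand-in Vector3 type are illustrative only.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the SDK's joint position type, to keep the sketch
// self-contained.
public struct Vector3 { public float X, Y, Z; }

// Each instrument implements this; the skeleton is only handed over
// once the start-of-frame posture/gesture checks have passed.
public interface IKinectWork
{
    void ProcessSkeleton(IDictionary<string, Vector3> joints);
}

public class DrumWork : IKinectWork
{
    private Vector3 lastRightHand;

    public void ProcessSkeleton(IDictionary<string, Vector3> joints)
    {
        Vector3 hand = joints["HandRight"];

        // Downward speed of the hand between frames drives the volume.
        float downSpeed = lastRightHand.Y - hand.Y;
        lastRightHand = hand;

        if (InDrumZone(hand) && downSpeed > 0.02f)   // threshold is a guess
        {
            // Clamp the speed into the MIDI velocity range 1-127.
            int velocity = Math.Max(1, Math.Min(127, (int)(downSpeed * 2000f)));
            PlayDrum(velocity);
        }
    }

    private bool InDrumZone(Vector3 hand)
    {
        // One fixed zone for the sketch; the real program defines several.
        return hand.Y < 0.1f && Math.Abs(hand.X) < 0.5f;
    }

    private void PlayDrum(int velocity)
    {
        Console.WriteLine("Drum hit, velocity " + velocity);
    }
}
```

GuitarWork implements the same interface but interprets the skeleton differently, which is where the code reuse question comes from.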
Thursday, November 24, 2011
Kinect
This week I have been working with the Kinect. I have created a posture detector to change the instrument: each skeleton is checked for a posture, and if that posture is held for 3 seconds a posture-detected event is fired. The backend can then use this event to change the instrument. Since the user will interact with each instrument differently, changing instrument also has to change how the Kinect handles skeletons. I am now working out which instruments to use and how the user will interact with them. The drums are obviously easy, but something like a guitar is harder...
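The hold check itself is simple; something like this sketch (the class and event names here are illustrative, only the 3-second hold is real):

```csharp
using System;

public class PostureDetector
{
    private static readonly TimeSpan HoldTime = TimeSpan.FromSeconds(3);

    private string currentPosture;
    private DateTime postureStart;
    private bool fired;

    public event Action<string> PostureDetected;

    // Called once per skeleton frame with the name of the matched
    // posture, or null if no programmed posture matched this frame.
    public void Update(string posture)
    {
        if (posture != currentPosture)
        {
            // Posture changed (or was lost): restart the timer.
            currentPosture = posture;
            postureStart = DateTime.UtcNow;
            fired = false;
            return;
        }

        if (posture != null && !fired &&
            DateTime.UtcNow - postureStart >= HoldTime)
        {
            fired = true;   // fire only once per continuous hold
            var handler = PostureDetected;
            if (handler != null) handler(posture);
        }
    }
}
```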
Wednesday, September 21, 2011
Interface
I have decided on a cube interface, where the user will be able to rotate from within the cube (each face representing an instrument). They can then push notes in or remove them. I am going to change the back end a little to read its settings in from XML, and then integrate it with the interface.
http://www.youtube.com/watch?v=r7GB9IPSIew
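For the XML settings I am picturing something along these lines; this is only an illustrative mock-up of a possible format, with made-up grid sizes:

```xml
<!-- Illustrative mock-up only: one face of the cube per instrument. -->
<settings>
  <instrument name="Drums" face="front">
    <grid rows="4" columns="16" />   <!-- 16-step pattern, 4 drum sounds -->
  </instrument>
  <instrument name="Bass" face="right">
    <grid rows="12" columns="16" />  <!-- one row per semitone -->
  </instrument>
</settings>
```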
Sunday, August 21, 2011
Ideas
I have been considering other topics to see what the Kinect could be used for. To do this I have been listing what the Kinect would be good and bad at. For gesture-based interfaces I don't believe the Kinect is that suitable, because there are so many other ways to interface with a computer. The only real advantage the Kinect has is that you don't have to be near the computer you are controlling. That makes it suitable for gesture-based presentations, but even then a simple remote would be easier. A lot of the interfaces the Kinect could drive would be better implemented with a touch screen or keyboard...
So basically, controlling an interface is not a good fit for the Kinect; what the Kinect is good at is measuring the movement of an actual body. From this I came up with some ideas to help me figure out what the Kinect is good for. Some of the ideas were:
- Flexibility/stretching help, which could be used for rehabilitation from injuries. It could attempt to help the person perform each exercise correctly and also compile progress reports
- Sign language to text, to help people communicate
- Interactive learning for younger kids, an attempt to make learning more fun and involving, and hopefully to help them learn their material better
- Interactive art, using the Kinect to show interactive art on a screen responding to the person's behaviour
- Developmental Coordination Disorder treatment, to help children with the condition
- Determining how drunk someone is from their body movement, which could be used for pub security
After exploring these ideas I decided to go back to my original idea of music, but with some differences: it will revolve more around fun and movement. Continuing this idea, I have made a small program that takes the user's voice and determines its pitch using autocorrelation; I also use the RMS of the signal to determine whether the user is actually making a sound. I have also been developing further with the Kinect, starting a small program to detect whether a hand is open or closed. I have decided that the final implementation of my program will not have a traditional user interface, just some graphical representation of the movements or the music.
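The core of the pitch detector is only a few lines. A simplified version of the approach (not the exact program; the RMS threshold and frequency range here are illustrative):

```csharp
using System;

public static class PitchDetector
{
    // Returns the detected pitch in Hz, or 0 if the signal is too
    // quiet to count as the user making a sound.
    public static double Detect(float[] samples, int sampleRate)
    {
        // RMS gate: ignore frames that are effectively silence.
        double sumSquares = 0;
        foreach (float s in samples) sumSquares += s * s;
        double rms = Math.Sqrt(sumSquares / samples.Length);
        if (rms < 0.01) return 0;   // threshold chosen by ear

        // Autocorrelation: find the lag with the strongest
        // self-similarity inside the singing range (~80-1000 Hz).
        int minLag = sampleRate / 1000;
        int maxLag = sampleRate / 80;
        int bestLag = 0;
        double best = 0;
        for (int lag = minLag; lag <= maxLag; lag++)
        {
            double corr = 0;
            for (int i = 0; i + lag < samples.Length; i++)
                corr += samples[i] * samples[i + lag];
            if (corr > best) { best = corr; bestLag = lag; }
        }
        return bestLag == 0 ? 0 : (double)sampleRate / bestLag;
    }
}
```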
A good video showing something similar to what I have thought about doing is: http://www.youtube.com/watch?v=xPcoM7BIDZ4&feature=player_embedded. It shows the user controlling filters and other aspects of pre-made music. I am considering having the music fully composed by the user instead, as this would use the back end I have already completed.
Monday, June 20, 2011
Update!
I watched the recent Kinect SDK launch, which gave me some ideas about how to use the Kinect. One interesting point was that Kinect applications should be designed from scratch; don't just take a previous application and bolt a Kinect interface onto it. This led me to the idea of using the Kinect more as a live virtual instrument, whereas before I was going to have an application similar to a digital audio workstation controlled with the Kinect (as the posted image shows).
Now I will concentrate on implementing different ways to interface with a live virtual instrument. I am going to set it up as a server/client application, so that potentially many clients could play an instrument together. Right now I have most of the server done. The server is designed to accept MIDI events from the clients, so it can pass these into the virtual instruments and play back the sound.
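The core of the server loop looks roughly like this. The sketch handles a single client at a time and just prints note-style events, where the real server hands them to the virtual instruments (a threaded version would accept each client on its own thread):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Stripped-down sketch: accept clients and read raw 3-byte MIDI
// messages (status, data1, data2) off each socket in turn.
public class MidiServer
{
    public static void Run(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();
            NetworkStream stream = client.GetStream();
            var message = new byte[3];
            while (ReadExactly(stream, message))
            {
                // The real code passes this into the VST instrument.
                Console.WriteLine("MIDI: {0:X2} {1} {2}",
                    message[0], message[1], message[2]);
            }
        }
    }

    private static bool ReadExactly(NetworkStream stream, byte[] buffer)
    {
        int read = 0;
        while (read < buffer.Length)
        {
            int n = stream.Read(buffer, read, buffer.Length - read);
            if (n == 0) return false;   // client disconnected
            read += n;
        }
        return true;
    }
}
```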
Each VST instrument will have an XML document associated with it, which is read when the instrument is loaded. It will contain the information needed to manipulate the instrument while it is being used; for example, moving the right hand up might sweep the filter cut-off frequency between two given values. I am planning on having different ways to interact with the instruments, so the XML file will also describe which interactions can be used with which instruments and how they work.
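So the XML for one instrument might end up looking something like this (an illustrative mock-up; the plugin, gesture and parameter names are placeholders):

```xml
<!-- Illustrative mock-up: mapping gestures to one VST's parameters. -->
<instrument name="Lead Synth" plugin="synth1.dll">
  <mapping gesture="RightHandHeight"
           parameter="FilterCutoff"
           min="200" max="8000" />   <!-- Hz range swept by the hand -->
  <mapping gesture="LeftHandHeight"
           parameter="Resonance"
           min="0.0" max="1.0" />
</instrument>
```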
Sunday, May 8, 2011
More GUI
I have been working on loading .obj files into my program, so that I can load 3D objects to use in my interface. I can use Vector Magic to trace an outline and then use that outline in my interface.
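Since .obj is a plain-text format, a minimal loader is short. This sketch reads vertex positions and triangular faces and ignores everything else (normals, textures, quads):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

public class ObjModel
{
    public List<double[]> Vertices = new List<double[]>();
    public List<int[]> Faces = new List<int[]>();   // zero-based indices

    public static ObjModel Load(string path)
    {
        var model = new ObjModel();
        foreach (string line in File.ReadAllLines(path))
        {
            string[] parts = line.Split(
                new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
            if (parts.Length == 0) continue;

            if (parts[0] == "v")
            {
                // "v x y z" -> one vertex position.
                model.Vertices.Add(new[]
                {
                    double.Parse(parts[1], CultureInfo.InvariantCulture),
                    double.Parse(parts[2], CultureInfo.InvariantCulture),
                    double.Parse(parts[3], CultureInfo.InvariantCulture)
                });
            }
            else if (parts[0] == "f" && parts.Length >= 4)
            {
                // "f 1/1/1 2/2/2 3/3/3" -> keep the vertex index only.
                var face = new int[3];
                for (int i = 0; i < 3; i++)
                    face[i] = int.Parse(parts[i + 1].Split('/')[0]) - 1;
                model.Faces.Add(face);
            }
        }
        return model;
    }
}
```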
I am also designing my overall program and how it will all fit together, and I am considering which architectural pattern to use.
Monday, May 2, 2011
GUI
Recently I have been looking into the interface design and implementation. I have created a storyboard of how the interface might work using PowerPoint. At the moment I am going for an augmented reality design: the interface will be projected over a simplified camera image of the user.
I am going to use Windows Presentation Foundation (WPF) as my graphical subsystem and create the interface in a 3D environment. I have been programming simple interfaces that use the same concepts as the real interface, just to learn how it is done.
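These practice interfaces are little more than a Viewport3D with some lit geometry; a minimal example of the kind of XAML involved (illustrative only, not from the real interface):

```xml
<!-- One lit triangle in a WPF Viewport3D. -->
<Viewport3D>
  <Viewport3D.Camera>
    <PerspectiveCamera Position="0,0,4" LookDirection="0,0,-1" />
  </Viewport3D.Camera>
  <ModelVisual3D>
    <ModelVisual3D.Content>
      <Model3DGroup>
        <DirectionalLight Color="White" Direction="0,0,-1" />
        <GeometryModel3D>
          <GeometryModel3D.Geometry>
            <MeshGeometry3D Positions="-1,-1,0  1,-1,0  0,1,0"
                            TriangleIndices="0 1 2" />
          </GeometryModel3D.Geometry>
          <GeometryModel3D.Material>
            <DiffuseMaterial Brush="SteelBlue" />
          </GeometryModel3D.Material>
        </GeometryModel3D>
      </Model3DGroup>
    </ModelVisual3D.Content>
  </ModelVisual3D>
</Viewport3D>
```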
I have been playing with the Kinect libraries to see how they work. I found that Microsoft is releasing an official Kinect SDK very soon, so I plan to wait until it is released to see whether it is worth using.