I am currently working on two major projects involving mixed-reality telecommunication. We use motion capture technology and image tracking to generate personal avatars, which people can then use to interact in a shared mixed reality environment. The avatars are built based on an individual's body and face, and respond live to facial expressions and eye gaze.
I can't talk about these projects much at this early stage, but they involve multi-user mixed reality experiences on both desktop and mobile devices. To support this, I have set up servers that host these interactions over the internet so users can interact in a virtual space. Since mixed reality is heavily impacted by perceived lag, I have implemented lag compensation, including prediction and interpolation of networked data.
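To give a rough idea of what that lag compensation looks like, here is a minimal sketch of snapshot interpolation with simple linear prediction for a remote avatar. All names here are hypothetical and the real implementation differs; this just illustrates the technique of rendering slightly in the past and blending between timestamped updates, extrapolating when no newer data has arrived.

```python
import bisect
from dataclasses import dataclass

@dataclass
class Snapshot:
    t: float  # server timestamp (seconds)
    x: float  # avatar position along one axis (simplified to 1D)

class RemoteAvatar:
    def __init__(self, render_delay: float = 0.1):
        # Render slightly in the past so there is usually a newer
        # snapshot to interpolate toward, hiding network jitter.
        self.render_delay = render_delay
        self.snapshots: list[Snapshot] = []

    def receive(self, snap: Snapshot) -> None:
        # Assumes snapshots arrive in timestamp order.
        self.snapshots.append(snap)

    def position_at(self, now: float) -> float:
        t = now - self.render_delay
        snaps = self.snapshots
        if not snaps:
            return 0.0
        if t <= snaps[0].t:
            return snaps[0].x
        if t >= snaps[-1].t:
            # Prediction: extrapolate from the last two snapshots.
            if len(snaps) < 2:
                return snaps[-1].x
            a, b = snaps[-2], snaps[-1]
            v = (b.x - a.x) / (b.t - a.t)
            return b.x + v * (t - b.t)
        # Interpolation: blend between the two surrounding snapshots.
        i = bisect.bisect_left([s.t for s in snaps], t)
        a, b = snaps[i - 1], snaps[i]
        alpha = (t - a.t) / (b.t - a.t)
        return a.x + alpha * (b.x - a.x)
```

In practice the same idea applies per joint and per blendshape weight, and the render delay is tuned against round-trip time so prediction is only needed during packet loss.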
Here's my avatar testing out some of the real-time facial expression tracking in an AR application.