Social VR

Current Work

Interdependence of Gaze and Exploration:

Normal face-to-face communication is an extremely rich, multimodal form of expression. Aside from the verbal channel, the non-verbal channels available during face-to-face communication include gaze (from head posture and eye direction), arm gestures, body posture, and facial expressions.[2] With the expansion of virtual reality platforms, hand tracking and head posture are now visible to other users, but gaze remains a very open subject to explore, since developers have so much freedom in how they create their experiences. There has been some work on the use of gaze in collaborative virtual reality environments.[2][3][4] This work focuses on how gaze affects social behavior and how controlling gaze changes the persuasiveness of a speaker.

In Ba-Ke-Neko we take this one step further, using this non-verbal channel in a playful way: an activity in which gaze is the tool for exploring a dark environment. As noted in Steve Benford’s paper, one of the main challenges in Collaborative Virtual Environments (CVEs) is interest management. Human perceptual and cognitive limitations provide a significant guide for developing responses to the problems of scale.[5] By arranging the virtual environment so that each participant is not overloaded and sees and hears “enough” of the world but no more, the problems of scale can be diminished. This was particularly useful to us while designing this activity. In the current activity, users have to follow each other’s gaze and act as a light source for one another in order to explore a hidden island in the CVE.[6]

In this feature there is an island in the CVE, connected to the main island by small stepping stones. The whole island is invisible until the activity is turned on, and even then users have to work together to find it. The stone bridge only becomes visible when you stay close to other users, because other users act as your light source and guide you to the next stepping stone; the same holds for everyone. You can also see only one stone at a time, so you have to stay in close proximity to each other to find the next one. Finally, if a user’s partner falls off a stepping stone, they are reset to the origin point of the main island; the user then loses their guiding light and can no longer spot the next stone, so they have to jump down and start the exploration over from the beginning. All of this pushes users to interact with each other in order to find the island. A minimal code sketch of the proximity-based reveal is shown below.
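To make the mechanic concrete, here is a minimal Unity/C# sketch of the proximity-based stepping-stone reveal. The component name, field names, and threshold values are illustrative assumptions, not the actual Ba-Ke-Neko code.

```csharp
// Minimal sketch of the proximity-based stepping-stone reveal.
// Names and values are illustrative, not the actual Ba-Ke-Neko implementation.
using System.Collections.Generic;
using UnityEngine;

public class SteppingStoneReveal : MonoBehaviour
{
    public Transform localPlayer;          // the local user's avatar
    public List<Transform> otherPlayers;   // networked avatars of the other users
    public List<Renderer> stones;          // stepping stones, ordered from main island to hidden island
    public float lightRadius = 3f;         // how close a partner must be to act as a "light source"

    void Update()
    {
        // A partner within range acts as the local user's light source.
        bool partnerNearby = false;
        foreach (Transform other in otherPlayers)
        {
            if (Vector3.Distance(localPlayer.position, other.position) <= lightRadius)
            {
                partnerNearby = true;
                break;
            }
        }

        // Hide every stone, then reveal only the single nearest one,
        // and only while a partner is close enough to "light" it.
        Renderer nearest = null;
        float bestDistance = float.MaxValue;
        foreach (Renderer stone in stones)
        {
            stone.enabled = false;
            float d = Vector3.Distance(localPlayer.position, stone.transform.position);
            if (d < bestDistance)
            {
                bestDistance = d;
                nearest = stone;
            }
        }
        if (partnerNearby && nearest != null)
        {
            nearest.enabled = true;
        }
    }
}
```

The reset-on-fall behavior sits on top of this: once a partner respawns at the main island, the proximity check fails and the next stone stays hidden, which is exactly what forces the remaining user to jump down and regroup.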

Social Traces:

There has been quite a lot of work on tracking people in the real world and representing that data. Tracking a person’s real-world movement and representing it digitally is a big challenge in computer vision.[7][8] The tracking data is then used to gather information about the user, often feeding AI-based prediction algorithms that try to predict user behavior. These traces are a useful metric for understanding strong groups and group activities in an environment: they can help us figure out which activities users find interesting, as well as the hotspots in the environment where people hang out the most.

Tracing in the Ba-Ke-Neko is slightly different. The goal is not to gather any information about the user but to show the user about where all the other users have been in the CVE. This is more about having the interactive tracing of the user activities. Every user when they move lefts a small sphere that shows where the user has been. Each user has their own color to uniquely identify them. User has also a power where they can replay the whole trace where they can see all of the spheres lighting up with particle effects which will let them know the sequence of action that other users had done. The drawing inside the virtual world is also another way to convey the users interaction in the world.
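Below is a minimal sketch of the trace mechanic in Unity/C#: drop a colored sphere at a fixed interval while the user moves, then replay the spheres in order with a particle burst. The class, prefab, and field names are hypothetical placeholders, not the project’s actual code.

```csharp
// Minimal sketch of the social-trace mechanic (illustrative names only).
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SocialTrace : MonoBehaviour
{
    public GameObject traceSpherePrefab;   // small sphere dropped along the user's path
    public ParticleSystem replayEffect;    // particle effect played on each sphere during replay
    public Color userColor = Color.cyan;   // each user gets their own color
    public float dropInterval = 0.5f;      // seconds between dropped spheres
    public float replayStep = 0.2f;        // delay between spheres when replaying

    private readonly List<GameObject> trace = new List<GameObject>();
    private float nextDropTime;

    void Update()
    {
        // Leave a colored sphere behind as the user moves around the world.
        if (Time.time >= nextDropTime)
        {
            GameObject sphere = Instantiate(traceSpherePrefab, transform.position, Quaternion.identity);
            sphere.GetComponent<Renderer>().material.color = userColor;
            trace.Add(sphere);
            nextDropTime = Time.time + dropInterval;
        }
    }

    // Lights up the spheres in order with a particle burst so other users
    // can read the sequence of actions; start with StartCoroutine(Replay()).
    public IEnumerator Replay()
    {
        foreach (GameObject sphere in trace)
        {
            Instantiate(replayEffect, sphere.transform.position, Quaternion.identity).Play();
            yield return new WaitForSeconds(replayStep);
        }
    }
}
```

In a networked version the trace list would be synchronized per user, but the local logic stays the same.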

References:

[1] J.-Y. Lee, J.-H. Kwon, S.-H. Nam, J.-J. Lee, and B.-J. You, “Coexistent Space: Collaborative Interaction in Shared 3D Space,” 2016, pp. 175–175.
[2] C. Peters, C. Pelachaud, E. Bevacqua, M. Mancini, and I. Poggi, “A Model of Attention and Interest Using Gaze Behavior,” in Intelligent Virtual Agents, Berlin, Heidelberg, 2005, pp. 229–240.
[3] J. N. Bailenson, A. C. Beall, and J. Blascovich, “Gaze and task performance in shared virtual environments,” J. Vis. Comput. Animat., vol. 13, no. 5, pp. 313–320, Dec. 2002.
[4] J. N. Bailenson, A. C. Beall, J. Loomis, J. Blascovich, and M. Turk, “Transformed Social Interaction, Augmented Gaze, and Social Influence in Immersive Virtual Environments,” p. 27.
[5] S. Benford, C. Greenhalgh, T. Rodden, and J. Pycock, “Collaborative virtual environments,” Commun. ACM, vol. 44, no. 7, pp. 79–85, Jul. 2001.
[6] D. A. Bowman, D. Koller, and L. F. Hodges, “Travel in immersive virtual environments: an evaluation of viewpoint motion control techniques,” 1997, pp. 45–52.
[7] T. Darrell, A. P. Pentland, A. Azarbayejani, and C. R. Wren, “Pfinder: Real-Time Tracking of the Human Body,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 780–785, 1997.
[8] C. Stauffer and W. E. L. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 747–757, Aug. 2000.

Inspiration

Social VR is a fast-expanding space, but it’s still fairly new. As we’ve seen with social media on the web, design choices made early on can have a big impact later. Since it’s still new, there are competing ideas about what it means to be social in VR, and so the design guidelines to support these values are underdeveloped.

Given this openness, there is a timely opportunity to figure out HOW we WANT to be social in VR, and that means making some informed choices about UX design. In particular, since we are working with the Mozilla Foundation, we want to support particular values:

  • Support pro-social interaction on the open web
  • Prevent harassment
  • Protect the privacy of users

The domain of Social VR is extremely large, so as a constraint we are focusing specifically on the design of lightweight gathering spaces for people who already know each other, or who are already connected to one another from an existing context (like an internet forum). Think of a VR version of Skype, or a VR meeting link that you can drop into a Reddit thread.

We are exploring different social VR platforms, and we identified three main contexts that we want to target for social VR meeting spaces:

  1. Public Gatherings
  2. Work Meetings
  3. Get-togethers with dear friends and family

We can’t build an entire platform, so instead we’re designing smaller vignettes in VR to illustrate particular affordances and make design recommendations to designers and developers at Mozilla.

Current Progress: I’ve spent a lot of time focused on getting the networking working for multiuser VR, so we’re still in the early stages of the design process, but we’ve already arrived at some interesting design insights.

Let’s say you want to reconnect with a friend or family member that you haven’t talked to in a long time. You want to hear their stories, but it’s also a little bit awkward because you haven’t seen each other in so long and now you’re interacting in VR. During lulls in the conversation, we want to give people attractions to focus on that move from the background into the foreground and back again (like a safari, or a pet that might visit you); a rough sketch of that behavior is below. If you want to learn more about this pet behaviour, check out my project for Game AI here.
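Here is a rough Unity/C# sketch of how a conversation-lull trigger could drive the pet’s approach. The voice-activity hook, the class name, and all thresholds are assumptions for illustration; the real behavior tree lives in the Game AI project linked above.

```csharp
// Rough sketch: an ambient "pet" drifts toward the group during conversation lulls
// and retreats when talk resumes. All names and values are hypothetical.
using UnityEngine;

public class LullAttraction : MonoBehaviour
{
    public Transform pet;                 // ambient creature wandering in the background
    public Transform gatheringPoint;      // where the users are talking
    public float lullThreshold = 8f;      // seconds of silence before the pet approaches
    public float approachSpeed = 0.5f;    // meters per second
    public float retreatDistance = 6f;    // how far away the pet idles while people talk

    private float lastSpeechTime;

    // Called by the voice-chat layer whenever any user is detected speaking;
    // the exact hook depends on whichever audio/networking stack is used.
    public void OnVoiceActivity()
    {
        lastSpeechTime = Time.time;
    }

    void Update()
    {
        bool lull = Time.time - lastSpeechTime > lullThreshold;

        // During a lull the pet moves toward the group (foreground);
        // once conversation resumes it wanders back out (background).
        Vector3 target = lull
            ? gatheringPoint.position
            : gatheringPoint.position + Vector3.forward * retreatDistance;

        pet.position = Vector3.MoveTowards(pet.position, target, approachSpeed * Time.deltaTime);
    }
}
```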

Collaborators

Joshua Mcveigh-Schultz, Anya Koleshnichenko

Technology used

Unity, VR development, C#

Achievements

My own Thesis

Funded by Mozilla

P.S. Can't post pictures till it finishes up!!