Introduction
This text describes the process behind the series of experiments entitled OCEAN DANCE, which presents dance initiatives developed within the SPACE-XR project. These experiments demonstrate the capture of dance movements inserted into the metaverse and invite the audience to interact with the spatial scales proposed by Laban, which open up a wide range of practical possibilities for experimenting with the body in space through different paths and dynamics. The investigation transposes the performative movements of dance into a metaverse, in this case a three-dimensional, multi-user virtual environment.
The development of these experiments began with motion capture sessions, followed by visual development, implementation in virtual reality (VR), and analysis of the experimentation.
The following researchers took part in the conception and development: Bernardo Alevato, Fábio Suim, Gabriel Cardoso, Jorge Lopes, Luiz Velho, Orlando Grillo, Sérgio Azevedo, Thaisa Martins and Carolina Navarro, an interdisciplinary group with different backgrounds and perspectives on dance, technology and immersive environments.
The experiment explores what it might be like to observe dance performances in a virtual space: a navigable three-dimensional environment that transposes captured movement from the physical to the virtual, stimulating reflections on the perception of presence, the expression of movement, and the relationship between the virtual and the real. Dance performances express a strong human element, even in computer-generated images. The experiment featured the artists Thaisa Martins, Helena Bevilaqua and Fabiano Nunes, whose improvised performances formed the basis of the study. The soundtrack for two of the performances was created by Yuri Amorim.
Previous works that inspired this experiment include Motion Creation from Mocap Data by Louise Roy at Visgraf [1], Nota-Anna by Analívia Cordeiro [2], and Virtual Crossings by the Gilles Jobin company [3]. Rudolf Laban's work is also a seminal reference: it appears in the studies of part of the group, in the capture sessions and, of course, in the conception, in the dancers' performances and in the set, where regular polyhedra provoke real-time interaction within the environment.
The project development flow involved motion capture using OptiTrack [4], Blender [5] for modeling and animation, and Unity 3D [6] for implementation through the Spatial.io platform [7].
MoCap (Motion Capture)
The experiment began in an exploratory way, with motion capture sessions in IMPA's Visgraf laboratory. Motion capture (MoCap) is a technique for recording and analyzing the movements of a human body or object, with the aim of digitizing those movements and applying them to three-dimensional models in animations or games. The captured coordinates are structured hierarchically based on predefined skeletons. To this end, the OptiTrack system was used: its cameras track reflective markers as points in space, converting physical movements into digital data. This data was transmitted in real time to the Motive software, where parameters and the correspondence between the marker points and the virtual model were adjusted. The process generates BVH files, which are recognized by most 3D animation programs.
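To make the data format concrete, the sketch below shows one way to inspect the hierarchical skeleton stored in a BVH file. It is an illustration only, assuming a standard BVH header such as Motive exports; the file name capture.bvh is hypothetical.

```python
# Minimal sketch: print the joint hierarchy of a BVH file.
def print_bvh_hierarchy(path):
    depth = 0
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] == "MOTION":        # skeleton section ends here
                break
            if tokens[0] in ("ROOT", "JOINT"):
                print("  " * depth + tokens[1])
            elif tokens[0] == "End":         # end sites have no joint name
                print("  " * depth + "End Site")
            elif tokens[0] == "{":
                depth += 1
            elif tokens[0] == "}":
                depth -= 1

print_bvh_hierarchy("capture.bvh")  # hypothetical file name
```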

After the first recordings with Thaisa Martins, possibilities for implementation in virtual reality began to be devised. Dancers Helena Bevilaqua and Fabiano Nunes were then invited to experiment, and their performances were recorded in subsequent sessions. Improvisations were interspersed with movements from Laban's scales, such as those of the cube and the icosahedron.
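As an illustration of the spatial targets involved, the sketch below computes the twelve vertices of a regular icosahedron, the points a dancer traverses in Laban's icosahedron scale. The function name and radius parameter are ours, for illustration, not part of the project's tooling.

```python
import itertools
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def icosahedron_vertices(radius=1.0):
    """Return the 12 vertices of a regular icosahedron centered at the origin.

    The vertices are the cyclic permutations of (0, +/-1, +/-PHI),
    scaled so each lies at the given radius from the center.
    """
    verts = []
    for a, b in itertools.product((-1, 1), (-PHI, PHI)):
        verts += [(0, a, b), (a, b, 0), (b, 0, a)]
    scale = radius / math.sqrt(1 + PHI ** 2)
    return [tuple(scale * c for c in v) for v in verts]

for v in icosahedron_vertices(radius=2.0):
    print("%6.3f %6.3f %6.3f" % v)
```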
Visual Design
Avatars
To create the virtual models of the dancers, MB-Lab was used, a Blender add-on for generating humanoids. To avoid issues related to the "uncanny valley" [8], a minimalist visual style was chosen, without realistic features but respecting human proportions, so that movement could be represented faithfully. This style was influenced by Virtual Crossings in its pursuit of a non-realistic, sculptural representation and of the expressiveness of the naked, non-sexualized human body, as defined in the group's discussions.
Movement transfer and corrections
The movement contained in the BVH files was transferred to the avatars in Blender. After the model was created, its skeleton was structured to match the hierarchy of the captured data, using the Rokoko add-on for Blender. Some adjustments were then made to the capture, especially through motion layers in the NLA (Non-Linear Animation) editor, which allows corrections to be layered on top without altering the recorded keyframes.
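A minimal sketch of this layered setup in Blender's Python API follows. It assumes an armature named "DancerRig" that already has animation data, plus two existing actions, "mocap_take_01" (the retargeted capture) and "corrections" (hand-keyed offsets); all three names are hypothetical.

```python
import bpy

# Sketch, not the project's actual script: stack a correction layer on
# top of the recorded capture in the NLA, so the mocap keyframes
# themselves are never modified.
rig = bpy.data.objects["DancerRig"]          # hypothetical armature name
ad = rig.animation_data                      # assumed to already exist

base = ad.nla_tracks.new()
base.name = "mocap_base"
base.strips.new("capture", 1, bpy.data.actions["mocap_take_01"])

fixes = ad.nla_tracks.new()
fixes.name = "corrections"
strip = fixes.strips.new("fixes", 1, bpy.data.actions["corrections"])
strip.blend_type = 'ADD'  # offsets add to the capture instead of replacing it
```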

Environment
A prototype scenario was created in Blender to facilitate discussion among the team. This prototype made it possible to explore the possibilities of navigation in virtual space, taking into account the characteristics of the Spatial platform and the objectives of the experiment. Initially represented as a miniature, the scenario was gradually developed into an interactive environment, allowing participants to navigate around the moving performance. In addition, regular polyhedra were placed in the set, suggesting the relationship with Rudolf Laban's theories. Ramps were added to make it easier for participants to move around and to provide more fluid navigation, offering new points of view on the performance.
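The polyhedra themselves are straightforward to generate procedurally. The hedged sketch below places a cube and an icosahedron in a Blender scene (an ico sphere with subdivisions=1 is exactly a regular icosahedron in Blender); the object names, sizes and positions are illustrative, not the set's actual layout.

```python
import bpy

# Place the two Laban polyhedra used as interactive set pieces.
bpy.ops.mesh.primitive_cube_add(size=2.0, location=(-4.0, 0.0, 1.0))
cube = bpy.context.active_object
cube.name = "LabanCube"

bpy.ops.mesh.primitive_ico_sphere_add(subdivisions=1, radius=1.5,
                                      location=(4.0, 0.0, 1.5))
ico = bpy.context.active_object
ico.name = "LabanIcosahedron"

# Wireframe display keeps the vertices readable as spatial targets.
for obj in (cube, ico):
    obj.display_type = 'WIRE'
```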
Shading
The textures and appearance of the models and sets were initially designed in Blender. However, the final appearance of the reflective materials could only be evaluated after implementation in the virtual environment, when interaction with the environment map and the backdrops came into play. The visual concept was inspired by dance halls with mirrors and by the integration between the model/performer and the environment. For this reason, materials were created that use reflection as their main element, generating a distinctive aesthetic. The rendering differences between the animation program and the game engine became a point of attention here, since materials authored in Blender had to be adjusted again during implementation.
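A minimal sketch of such a mirror-like material in Blender's Python API is given below: a Principled BSDF pushed to full metallic with near-zero roughness, so the surface mostly reflects the environment map. The material name and values are illustrative, and, as noted above, the look still needs re-tuning once inside the game engine.

```python
import bpy

# Mirror-like material: fully metallic, almost no roughness, so the
# environment map dominates the surface appearance.
mat = bpy.data.materials.new("DanceHallMirror")  # illustrative name
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Metallic"].default_value = 1.0
bsdf.inputs["Roughness"].default_value = 0.05
bsdf.inputs["Base Color"].default_value = (0.9, 0.9, 0.95, 1.0)
```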

Implementation
The material was implemented on the Spatial.io platform, a three-dimensional environment that allows interaction through browsers, applications and virtual reality, using devices such as the Meta Quest 2, computers and cell phones. Built with the Unity 3D engine, the platform facilitates content creation, allowing development files to be published as immersive experiences. As a connected, multi-user environment, Spatial favors collective interaction and dialogue between participants. During implementation, we opted for an enlarged scale for the models, with proportions much larger than human, creating an effect of strangeness. Duplication of the models was also used to emphasize the graphic and symmetrical aspects of the performance.
Unity 3D supports importing skeletons and animations, especially humanoid ones, which made it possible to program interactions and add sound effects and environment maps to the experiment.
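On the Blender side, this handoff typically happens through an FBX export that Unity can then map onto its Humanoid rig. A hedged sketch with illustrative options follows; the file name and settings are assumptions, not the project's recorded configuration.

```python
import bpy

# Export the selected rig and mesh as FBX for Unity's Humanoid import.
bpy.ops.export_scene.fbx(
    filepath="//ocean_dance_avatar.fbx",  # hypothetical output path
    use_selection=True,            # export only the selected rig + mesh
    add_leaf_bones=False,          # avoids extra "_end" bones in Unity
    bake_anim=True,                # bake the animation into the file
    bake_anim_use_nla_strips=True, # include the layered NLA result
)
```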
The implementation of the experiment focused on simplicity and usability, targeting an audience with no previous experience of virtual reality, ensuring intuitive interaction without the need for complex instructions. The experience was developed for three-dimensional visualization, in line with previous experiences of virtual environments created by LAPID, focused on historical and cultural heritage.
Experimentation
The material produced was shown during the DANCE & TECH seminar in two editions of the event, in Rio de Janeiro and Porto Alegre. During the experiment, different behaviors were observed among the participants. Some, from the field of dance, tried to imitate the movements of the performance and interacted with the polyhedrons, while others simply observed the presentation, moving around the virtual model in motion. Observations were also made about the transposition of the environment and the visual experience of movement.
Applications in Visual Communication
The motion captures were also used to develop promotional material for the DANCE & TECH event. The event’s visual identity was created collectively by the team, and a three-dimensional model was generated from the motion capture data, using typography with the event’s name as a texture map.

Considerations
The use of MoCap opens up new possibilities for bodily expression in extended reality environments. From the experience, reflections emerged on synchronicity and asynchronicity in the use of motion capture, which can alter the perception of presence in the virtual environment. In addition, the development of new video-based motion capture tools, such as MOVE.AI and QuickMagic, is making this resource more accessible.
Movement in virtual space, seen through the headset, offers an experience different from other audiovisual media, as nuances and emphases can be perceived during observation, possibly heightened by three-dimensional perception. Synchronicity is perhaps more important than realism in conveying presence, even though realism tends to receive more attention.
This experiment also opens up possibilities for creating dance films in virtual environments, such as machinima, since videos of the performances were generated in real time, with variations in viewpoints and framing.
References
[1] L. Roy, Motion Creation from Mocap Data. Visgraf/IMPA. https://www.visgraf.impa.br/Projects/motioncreation/
[2] A. Cordeiro, Nota-Anna. https://www.analivia.com.br/nota-anna/
[3] Gilles Jobin company, Virtual Crossings. https://www.gillesjobin.com/creation/virtual-crossings/
[4] OptiTrack. https://www.optitrack.com
[5] Blender. https://www.blender.org
[6] Unity. https://www.unity.com
[7] Spatial. https://www.spatial.io
[8] The Uncanny Valley. IEEE Spectrum. https://spectrum.ieee.org/the-uncanny-valley