Our development efforts in building virtual reality (VR) modules have been ongoing for the past year. Below, we summarize some of our experiments, featuring two of our VR modules:


Operating room VR module

The first module is our operating room VR module, which was one of our first experiments in creating a VR environment. We’ve repurposed a number of models from our past projects to construct the room, including assets from our Until Then animation short released last July, as well as a rough liver model used in a liver transplant case.


The goal of the experiment was to observe how users interact with various objects within the VR environment. These interactions include grabbing and holding objects, manipulating lighting, and dividing the liver using a literal ‘surgical plane’ tool.
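For the curious, the core of a ‘surgical plane’ division is a simple geometric test: which side of the plane does each piece of geometry fall on? The sketch below is a minimal, hypothetical illustration in Python/NumPy; the function and parameter names are ours for illustration only, and a real mesh cut involves considerably more bookkeeping than this point-classification step.

```python
import numpy as np

def split_by_plane(vertices, plane_point, plane_normal):
    """Classify mesh vertices by which side of a cutting plane they fall on.

    vertices:     (N, 3) array of liver mesh vertex positions
    plane_point:  a point on the 'surgical plane' (e.g. the tool's position)
    plane_normal: normal of the plane (the tool's facing direction)
    """
    normal = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of each vertex from the plane
    signed = (vertices - plane_point) @ normal
    left = vertices[signed < 0.0]
    right = vertices[signed >= 0.0]
    return left, right

# Toy example: a random "liver" point cloud divided by a vertical plane
liver = np.random.rand(1000, 3)
left_lobe, right_lobe = split_by_plane(
    liver,
    plane_point=np.array([0.5, 0.0, 0.0]),
    plane_normal=np.array([1.0, 0.0, 0.0]),
)
```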


One interesting observation we’ve made concerns the placement of various objects. The lights, as well as the surgical tools, are placed out of the direct view of the user. While this was done mostly to mirror the actual environment in the OR, we found that it encouraged users to look around their immediate surroundings, which enhances the engagement between the user and the reality we’ve placed them in.

The module currently acts as our introductory ‘tutorial’ experience for users who are unfamiliar with VR. Numerous additions are planned for the future, including clear visual feedback cues during user-object interactions, as well as additional surgical tools that perform different functions on the anatomy.


Medical imaging VR module

Previous black/white iteration of medical imaging module

Colored imaging module, with plain/venous/arterial phases

The second module is an extension of an earlier experiment, the Medical Imaging VR Module. The idea behind the module was to import CT/MRI data directly into VR in the form of volumetric data (i.e. 3D pixels, or voxels). The earlier experiment allowed greyscale voxels to be displayed, and we’ve since started to incorporate color in an attempt to differentiate between anatomical structures.
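As a rough sketch of how color can be layered onto greyscale voxel data, the snippet below maps intensity windows to color tints. The windows, palette, and function names here are illustrative assumptions, not the values or code used in the module itself.

```python
import numpy as np

# Hypothetical intensity windows (in Hounsfield units) mapped to RGB tints.
WINDOWS = [
    ((-1000, -200), (0.0, 0.0, 0.0)),   # air / background
    ((-200,    50), (0.8, 0.6, 0.4)),   # soft tissue
    ((  50,   300), (0.9, 0.2, 0.2)),   # contrast-enhanced vessels
    (( 300,  3000), (1.0, 1.0, 0.9)),   # bone
]

def colour_volume(hu_volume):
    """Turn a greyscale HU volume (Z, Y, X) into an RGB voxel volume (Z, Y, X, 3)."""
    rgb = np.zeros(hu_volume.shape + (3,), dtype=np.float32)
    for (lo, hi), tint in WINDOWS:
        mask = (hu_volume >= lo) & (hu_volume < hi)
        # Scale the tint by normalised intensity so detail within a window is preserved
        intensity = np.clip((hu_volume - lo) / (hi - lo), 0.0, 1.0)
        rgb[mask] = intensity[mask, None] * np.asarray(tint, dtype=np.float32)
    return rgb
```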


Polished geometric models can be created in a few days' time, with clearly differentiated structures

In optimal conditions, volumetric models can be generated within hours, but with fewer options to isolate structures

The rationale behind utilizing voxels and volumetric data is speed: a set of CTs with slice thickness ~3cm can yield a viewable set of voxels in a matter of hours. This can be especially beneficial for visualizing patient anatomy under a tight schedule. The drawback, however, is the limited ability to isolate or highlight specific organs or vessel branches.
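To give a sense of why the voxel route is fast, the sketch below stacks a folder of axial DICOM slices into a single volume array that is ready to render as voxels. It assumes the open-source pydicom package and a hypothetical directory of slices; the module’s own importer differs in detail.

```python
import glob
import numpy as np
import pydicom  # assumes the open-source pydicom package is installed

def load_ct_volume(dicom_dir):
    """Stack axial DICOM slices into a single (Z, Y, X) volume in Hounsfield units."""
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{dicom_dir}/*.dcm")]
    # Order slices along the patient axis so the stack is spatially consistent
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Convert stored pixel values to Hounsfield units using the DICOM rescale tags
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept
```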


Another aim of this module was to bridge the gap between the 2D representation of patient anatomy (flat CT slices) and the 3D object it depicts. The concept of transforming CT/MRI scans into 3D volumetric models is not novel; in fact, this is already done in various software packages such as OsiriX, Horos, and Myrian, which we utilize in our workflow. However, these models are still presented on 2D screens, with interactions limited to the keyboard and mouse. We wanted to explore the experience of interacting with 3D volumetric models while being in a 3D space.

Interacting with the 3D volumetric model

To this end, we designed the module so that the user can grab and rotate the model in 3D, as well as ‘slice’ through it along three different axes (coronal, axial, and sagittal). The model can also be scaled up for closer inspection of anatomical features.
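Under the hood, slicing a voxel volume along the three anatomical axes amounts to simple array indexing. The snippet below is a minimal sketch using made-up names, not the module’s actual code; in VR, the index would be driven by how far the user pushes the slicing plane into the model.

```python
import numpy as np

def extract_slice(volume, axis, index):
    """Pull a single 2D slice from a (Z, Y, X) voxel volume.

    axis: 'axial' (Z), 'coronal' (Y) or 'sagittal' (X); index is the slice position.
    """
    if axis == "axial":
        return volume[index, :, :]
    if axis == "coronal":
        return volume[:, index, :]
    if axis == "sagittal":
        return volume[:, :, index]
    raise ValueError(f"unknown axis: {axis}")
```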


We’ve also implemented an optional ‘pin’ system in an attempt to bridge the 2D and 3D imaging modalities. Users can place ‘pins’ on any given slice of the CT and then observe the same pin in the 3D volumetric model, providing spatial context for the anatomy that surrounds the placed pin.
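Conceptually, a pin is just a 2D point plus the slice it was placed on, which is enough to recover a unique 3D voxel position. The sketch below illustrates that mapping; the axis conventions and names follow the slicing sketch above and are our own assumptions, not the module’s implementation.

```python
def pin_to_voxel(axis, slice_index, pin_row, pin_col):
    """Convert a pin placed on a 2D slice into a (z, y, x) voxel coordinate.

    The pin's in-slice position (row, col) plus the slice's axis and index
    locate it uniquely in the 3D volume, which is what lets the same marker
    appear in both the 2D viewer and the volumetric model.
    """
    if axis == "axial":      # slice lies in the Y-X plane
        return (slice_index, pin_row, pin_col)
    if axis == "coronal":    # slice lies in the Z-X plane
        return (pin_row, slice_index, pin_col)
    if axis == "sagittal":   # slice lies in the Z-Y plane
        return (pin_row, pin_col, slice_index)
    raise ValueError(f"unknown axis: {axis}")
```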


The medical imaging VR module is currently in the testing phase; one of our surgical fellows, Dr. Lawrence Lau, gave it a test run earlier. Development efforts continue on this module, and one of the many features we’d like to add is a ‘multiplayer’ interface, where users from different centres could examine and interact with the same anatomical model while voice-chatting with each other to discuss findings.

As always, stay tuned for more updates on our VR development efforts, and subscribe if you’d like to be notified via our monthly newsletter.

Cheers,

The TVASurg team
