We hope everyone is doing well. In the current atmosphere of virtual meetings and conferences, we have started experimenting with the virtual reality (VR) architecture we've developed, to explore whether collaboration and education can be taken to a new dimension.
To that end, we've tested two use cases for VR in discussions of surgical and anatomical considerations for a patient case, and have documented them in a video:
We'd like to share some of the hardware and software highlights from this iteration of the VR module.
Oculus Quest wireless headsets
The original Oculus Rift headsets, while comparatively lightweight and robust in performance, still required a physical 'tether' to a workstation to run VR applications.
No tethering - moving freely with the wireless Oculus Quest
On the other hand, the new generation Oculus Quest headsets (2nd gen available now) are completely self-contained units, which allow users to move and interact freely. We see this as a huge step towards adopting VR headsets in an OR setting, as the physical space required for pre-op discussions is greatly reduced, not to mention the time needed to set up the headset.
Networking and VR avatars
On the backend, we've started adopting a more intuitive networking architecture developed by Normcore. Apart from an easier setup, the architecture also includes a new VR avatar.
Seeing the VR avatars 'talk' in real time
One amazing feature is that the avatar responds to the user's speech in real time: when the user talks, the avatar's mouth deforms to match the speech patterns, adding another layer of immersion to the experience.
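For readers curious how this kind of speech-driven mouth animation typically works, a common approach is to measure the loudness (RMS amplitude) of each incoming audio buffer and map it to a mouth-open blend-shape weight. The sketch below is illustrative only and is not Normcore's implementation; the `noise_floor` and `full_open` thresholds are hypothetical tuning values we've chosen for the example.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a buffer of audio samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_openness(samples, noise_floor=0.02, full_open=0.3):
    """Map a buffer's RMS loudness to a mouth blend-shape weight in [0, 1].

    Levels at or below noise_floor keep the mouth closed; levels at or
    above full_open drive it fully open. Both thresholds are illustrative.
    """
    t = (rms(samples) - noise_floor) / (full_open - noise_floor)
    return max(0.0, min(1.0, t))

# Silence keeps the mouth closed; a loud sine buffer opens it fully.
silence = [0.0] * 256
loud = [0.5 * math.sin(2 * math.pi * 220 * i / 16000) for i in range(256)]
print(mouth_openness(silence))  # 0.0
print(mouth_openness(loud))     # 1.0
```

In a real VR client, this weight would be smoothed over a few frames and applied to the avatar head mesh each update, so the mouth opens and closes in step with the speaker's voice.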
Special thanks to Dr. David Cavallucci, Dr. Eyad Issa, Dr. Fabiola Oquendo, Dr. Chaya Shwaartz, as well as the folks over at Normcore for making these experiments possible.
Stay tuned for more updates on the VR end, and as always, stay safe.
-The TVASurg team