OVS+TUMOR | an OVRAS project

OVRAS | Open Visualization Space + TUMOR ANNOTATION
OVS+Tumor: A Tool for Enhanced Lung Tumor Annotation in VR for Machine Learning Training and Analysis
santiago LOMBEYDA (Caltech/ArtCenter), ashish MAHABAL (Caltech), daniel CRICHTON (JPL), heather KINCAID (JPL), george DJORGOVSKI (Caltech), christos PATRIOTIS (NCI), sudhir SRIVASTAVA (NCI)
OVS+Tumor creates a seamless VR environment designed for intuitive interaction, aiding in the complex task of parsing 3D CT scans and annotating candidate tumors. Through interactive subsetting and on-the-fly iso-cloud generation, a wider range of users beyond domain experts (radiologists/surgeons) can generate a viable machine-learning training dataset.
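The iso-cloud idea above can be illustrated with a minimal sketch; this is not the tool's actual implementation (which runs in Unity), and the function name and parameters are hypothetical. Assuming a CT scan loaded as a 3D NumPy array of normalized intensities, an iso-cloud is simply the point cloud of voxels whose intensity falls within a band around a chosen iso-value:

```python
import numpy as np

def iso_cloud(volume, iso, tol=0.05):
    """Return voxel coordinates whose intensity lies within tol of iso.

    volume : 3D ndarray of normalized CT intensities
    iso    : target iso-value (e.g. a tissue-density threshold)
    tol    : half-width of the intensity band kept in the cloud
    """
    mask = np.abs(volume - iso) < tol
    return np.argwhere(mask)  # (N, 3) array of (z, y, x) voxel indices
```

Because this is a single vectorized pass over the volume, the cloud can be regenerated on the fly as the user sweeps the iso-value or subsets the volume interactively.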

Our main goal in creating OVS has been to build a viable workspace in virtual reality that allows scientists to exploit the benefits of immersion while maintaining a strong sense of presence, and to interact seamlessly with 3D data representations.

We have accomplished this by:

_1 mimicking the actual workspace of a researcher inside virtual reality, so that furniture (desk) and work areas can be felt even while wearing a head-mounted display
_2 creating direct interaction with the 3D data, where a user can easily pick up the 3D model and manipulate it with full six degrees of freedom, and just as easily annotate it by drawing/gesturing directly in the 3D space
_3 constraining the model to operate at desktop size (30cm diameter), floor size (3m diameter), or monument size (30m diameter), with intuitive affordances to navigate between them
_4 limiting the controller to a single-button interface by creating tool 'states', which allow the same controller to operate as the annotation tool, direct-interaction tool, navigation tool, etc.
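The single-button tool-state idea and the three working scales can be sketched as follows. This is an illustrative sketch only: the actual tool is built in Unity, and all names here are hypothetical.

```python
from enum import Enum

class Tool(Enum):
    INTERACT = "direct interaction"   # pick up / manipulate the model
    ANNOTATE = "annotation"           # draw / gesture in 3D space
    NAVIGATE = "navigation"           # move between working scales

# The three working scales from point _3, as model diameters in meters.
SCALES = {"desktop": 0.3, "floor": 3.0, "monument": 30.0}

class OneButtonController:
    """Single-button controller: cycling the tool state lets one physical
    button serve as every tool (point _4)."""

    ORDER = [Tool.INTERACT, Tool.ANNOTATE, Tool.NAVIGATE]

    def __init__(self):
        self._i = 0  # index of the currently active tool state

    @property
    def tool(self):
        return self.ORDER[self._i]

    def cycle(self):
        """Advance to the next tool state and return it."""
        self._i = (self._i + 1) % len(self.ORDER)
        return self.tool
```

Keeping the interface to one button plus a small, cyclic set of states means a new user only ever has to learn a single gesture, regardless of which tool is active.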

OVS+Tumor builds on our OVS space, in partnership with JPL and the National Cancer Institute, to aid in the task of marking candidate tumors from actual 3D CT scans, so that they may be used as a training dataset in the larger endeavor of creating a fast and reliable classification tool for masses found through radiology.

We have found that pinpointing these candidate tumors through OVS reduces the level of expertise currently needed to interpret individual 2D scans, allowing a wider set of personnel to collaborate in creating a viable training dataset, from which we can then apply machine-learning techniques to better classify, recognize, and ultimately treat tumors.

| development team
_ design & development santiago LOMBEYDA
_ voice work ashwini R NAYAK
_ meshplease_ library loïs PAULIN
| project members
santiago LOMBEYDA
ashish MAHABAL
daniel CRICHTON
heather KINCAID
george DJORGOVSKI
christos PATRIOTIS
sudhir SRIVASTAVA
| with the support
_ mathieu DESBRUN
_ jim BARRY
_ thomson lab | CALTECH
_ with support from NSF
_ with support from CZI
_ with support from CALTECH STUDENT AFFAIRS
_ with donations from HTC
_ with donations from MICROSOFT
_ with donations from NVIDIA
_ with donations from LOGITECH
PROJECT BORN from CALTECH's OVRAS LAB
| tech specs
_ developed in UNITY
_ utilizing STEAMVR
_ utilizing ZEN FULCRUM BROWSER