A key issue is the limitations of the computer systems used to render the graphs. We managed to display up to 150,000 polygons at once without major slowdown. This corresponds to roughly 15,000 octahedrons, plus labels and axes. With each octahedron representing a single point, only 15,000 points could be displayed at once, a serious limitation for large data sets. By using tetrahedrons instead of octahedrons, we doubled this number, but this did little to lessen the system requirements, especially when analyzing a single data set in numerous ways. While using only a subset of the data was somewhat effective, it also limited the validity of the results. The lack of robust animation features
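The glyph-budget arithmetic can be sketched as a quick calculation. The per-glyph face counts (8 for an octahedron, 4 for a tetrahedron) and the 10% reserve for labels and axes are illustrative assumptions, not measured figures from our system:

```python
# Back-of-the-envelope glyph budget under a fixed polygon cap.
# Face counts and the overhead reserve below are assumptions for
# illustration, not measurements from the rendering system.

POLYGON_BUDGET = 150_000     # max polygons renderable without slowdown
OVERHEAD_FRACTION = 0.10     # assumed share reserved for labels and axes

FACES_PER_GLYPH = {
    "octahedron": 8,         # 8 triangular faces per point glyph
    "tetrahedron": 4,        # 4 faces, so twice as many points fit
}

def max_points(glyph: str) -> int:
    """Points displayable at once for a given glyph shape."""
    usable = POLYGON_BUDGET * (1 - OVERHEAD_FRACTION)
    return int(usable // FACES_PER_GLYPH[glyph])

for glyph in FACES_PER_GLYPH:
    print(glyph, max_points(glyph))
```

Under these assumptions the octahedron budget lands near the 15,000-point figure, and switching to tetrahedrons doubles it exactly, mirroring the trade-off described above.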
is another limitation for dynamic data sets. Were animation features adopted, time-based data could be displayed far more easily, and user interaction would become more powerful and meaningful, allowing the user to move graphs on the fly within the virtual world. The inherent separation of the data visualizer from the experimenter in our research also limited the results we could develop. A clear example was the liver data set, where we understood little of our findings except in the most abstract sense. Ideally, in practical scientific application, the researcher and the visualizer would work together: the visualizer developing less traditional theories and results from the data, and the researcher grounding them in previous research.