Additionally, the following 'final' rankings were determined by a combination of three main variables, because for some questions there was a clear consensus on how each narrative visualization ranked, while for others there was not. The three variables used for the final rankings were: the mode ranking, the average ranking, and an assessment of the proportion of participants that gave a visualization a specific rank versus other ranks. For a full, detailed analysis of the rankings, see Appendix G: In-Depth Analysis of Satisfaction Results.
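As an illustration of how these three variables can be derived from the raw rankings, the sketch below computes them for one visualization on one question. The example data, variable names, and the choice to report all tied modes are assumptions made for the illustration, not details taken from the study.

    from collections import Counter
    from statistics import mean, multimode

    # Hypothetical ranks (1 = best) given to one narrative visualization
    # by each participant for a single question.
    ranks = [1, 2, 1, 3, 1, 2, 4, 1, 2, 1]

    # Variable 1: mode ranking (the rank chosen most often; ties are all kept,
    # which would match entries such as "Mode: 1 or 2" in the tables).
    mode_ranking = multimode(ranks)

    # Variable 2: average ranking.
    average_ranking = mean(ranks)

    # Variable 3: proportion of participants giving each specific rank,
    # used to judge how strongly one rank dominates the others.
    counts = Counter(ranks)
    proportions = {rank: count / len(ranks) for rank, count in counts.items()}

    print(mode_ranking, average_ranking, proportions)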
The first sub-test dealt with a user visiting a museum and constructing a narrative visualization, so the aspects participants were asked to rank by satisfaction related to the core system goals of construction and reflection.
Table 7. Sub-Test 1 Satisfaction Results
Question: Reflection
  1st: Dramatic (Mode: 1 or 2, Avg: 1.9, +/- 0.97 SD, +/- 0.22 S.E.)
  2nd: Sequential (Mode: 1 or 4, Avg: 2.45, +/- 1.23 SD, +/- 0.23 S.E.)
  3rd: Categorical (Mode: 3, Avg: 2.6, +/- 0.88 SD, +/- 0.2 S.E.)
  4th: Slideshow (Mode: 4, Avg: 3.05, +/- 1.15 SD, +/- 0.26 S.E.)
Question: Uniqueness
  1st: Sequential (Mode: 1, Avg: 2, +/- 1.21 SD, +/- 0.28 S.E.)
  2nd: Dramatic (Mode: 2, Avg: 2.15, +/- 0.93 SD, +/- 0.20 S.E.)
  3rd: Categorical (Mode: 3, Avg: 2.7, +/- 0.98 SD, +/- 0.21 S.E.)
  4th: Slideshow (Mode: 4, Avg: 3.35, +/- 0.93 SD, +/- 0.2 S.E.)
Question: Satisfaction
  1st: Dramatic (Mode: 1, Avg: 1.75, +/- 0.97 SD, +/- 0.22 S.E.)
  2nd: Categorical (Mode: 3, Avg: 2.3, +/- 1.03 SD, +/- 0.23 S.E.)
  3rd: Sequential (Mode: 3, Avg: 2.6, +/- 1.09 SD, +/- 0.24 S.E.)
  4th: Slideshow (Mode: 4, Avg: 3.35, +/- 0.81 SD, +/- 0.18 S.E.)
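The Avg, SD, and S.E. figures in Tables 7 and 8 appear to be the mean rank, its standard deviation, and the standard error of the mean. A minimal sketch of how such values are computed, assuming the sample standard deviation and taking n as the number of participants who ranked each visualization (neither detail is stated in this section):

    from math import sqrt
    from statistics import mean, stdev

    def rank_summary(ranks):
        """Return (average, sample standard deviation, standard error) for a list of ranks."""
        n = len(ranks)
        avg = mean(ranks)
        sd = stdev(ranks)      # sample standard deviation
        se = sd / sqrt(n)      # standard error of the mean
        return avg, sd, se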
The first question dealt with the user goal of reflection. A major component of the visual narratives was to let users reflect upon what they experienced in the museum. If users did not feel that a specific narrative visualization prompted much reflection or served as a record of reflection, they would be unlikely to find it satisfactory.
The second question addressed the uniqueness of the narrative visualizations. It asked how far users perceived the visual narratives as unique ways of presenting their experiences.
The third question was more direct: it simply asked users to rank the narrative visualizations by satisfaction. Unlike the other two questions, it offered no specific criteria and instead let users rank the visualizations by holistic satisfaction.
Table 8. Sub-Test 2 Satisfaction Results
Question: Engagement
  1st: Dramatic (Mode: 1, Avg: 1.95, +/- 0.99 SD, +/- 0.22 S.E.)
  2nd: Categorical (Mode: 2, Avg: 2.2, +/- 0.95 SD, +/- 0.21 S.E.)
  3rd: Sequential (Mode: 4, Avg: 3.05, +/- 1.4 SD, +/- 0.33 S.E.)
  4th: Slideshow (Mode: 4, Avg: 3.2, +/- 1.1 SD, +/- 0.25 S.E.)
  5th: Traditional (Mode: 5, Avg: 4.6, +/- 0.82 SD, +/- 0.18 S.E.)
Question: Learning
  1st: Categorical (Mode: 1, Avg: 1.35, +/- 0.58 SD, +/- 0.13 S.E.)
  2nd: Dramatic (Mode: 2, Avg: 1.9, +/- 0.71 SD, +/- 0.16 S.E.)
  3rd: Sequential (Mode: 3, Avg: 3.35, +/- 0.99 SD, +/- 0.22 S.E.)
  4th: Slideshow (Mode: 4, Avg: 3.7, +/- 0.66 SD, +/- 0.14 S.E.)
  5th: Traditional (Mode: 5, Avg: 4.7, +/- 0.66 SD, +/- 0.14 S.E.)
Question: Clarity
  1st: Dramatic (Mode: 1, Avg: 1.95, +/- 1.05 SD, +/- 0.23 S.E.)
  2nd: Categorical (Mode: 2, Avg: 2.2, +/- 0.95 SD, +/- 0.21 S.E.)
  3rd: Sequential (Mode: 3, Avg: 3.1, +/- 1.55 SD, +/- 0.34 S.E.)
  4th: Slideshow (Mode: 4, Avg: 3.15, +/- 0.93 SD, +/- 0.2 S.E.)
  5th: Traditional (Mode: 5, Avg: 4.7, +/- 0.73 SD, +/- 0.16 S.E.)
The second sub-test included the traditional method in its rankings. The traditional method was a collection of pictures from the trip contained within a folder. Participants ranked each method from 1 to 5, with 1 being the highest. The second sub-test dealt with the second user scenario, in which a user views narrative visualizations created by another person, so the aspects participants were asked to rank by satisfaction related to the core system goals of communication and sharing.
The first question dealt with engagement. In the second scenario the user has a narrative visualization shared with them; if they do not find it engaging, they are unlikely to pay attention or to gain a real sense of what the other person experienced, and so would be unlikely to find it satisfactory.
The second question dealt with learning. Learning is a general goal of museums, and if users felt they were not learning much from the narrative visualizations, or were not satisfied with what they were learning or with how the material was presented, they would be unlikely to find the visualizations satisfactory.
The third question dealt with clarity. It simply asked users to rank the narrative visualizations based upon how clear they found them. Because a core system goal is communication, this question sought user feedback on how clearly the narrative visualizations communicated their content.
Chapter 7: Discussion
7.1 Summary of Narrative Visualizations
A summary of how each visualization performed in the tests