In this paper we presented an application framework in which semantic technologies, linked data and natural language processing techniques are exploited to investigate the emotional aspects of cultural heritage artifacts, based on user sentiment detected in social art platforms. The aim is twofold: to recognize and visualize the emotional impact of artworks on people, and to enable an emotion-driven organization, access and retrieval of artworks and related data in online collections.
We have described the OWL ontology of emotions developed for the ArsEmotica prototype. The ontology was conceived for categorizing emotion-denoting words and has been semi-automatically populated with Italian terms from WN-Affect. In order to make the framework applicable to other languages, such as English, and eventually to multilingual descriptions of artworks, we are currently working on an integration of the ArsEmotica ontology with the LExicon Model for ONtologies (LEMON) (McCrae et al., 2011). Such an integration will allow us to classify the Princeton WordNet English synsets representing affective concepts under our emotional categories and then to exploit the available multilingual lexical databases aligned with WordNet to express the link to affective lexical entries (emotion-denoting words) in the languages we are interested in dealing with (e.g. MultiWordNet for Italian). The use of a multilingual encyclopedic dictionary such as BabelNet will also be useful for dealing with the multilinguality issue.
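As an illustration, the following is a minimal sketch, assuming NLTK with the WordNet and Open Multilingual WordNet corpora available, of how English affective synsets classified under our emotional categories could be expanded with aligned Italian lemmas. The category-to-synset mapping shown is an invented placeholder, not the actual ArsEmotica ontology content.

```python
# Sketch only: expanding emotion categories with aligned lemmas in another
# language via the multilingual WordNet alignment (e.g. MultiWordNet for
# Italian). Requires the NLTK 'wordnet' and 'omw-1.4' corpora.
from nltk.corpus import wordnet as wn

# Hypothetical mapping from emotion categories to English WordNet synsets.
AFFECTIVE_SYNSETS = {
    "Anger": ["anger.n.01", "fury.n.01"],
    "Sadness": ["sadness.n.01", "sorrow.n.01"],
}

def multilingual_emotion_lexicon(lang="ita"):
    """Return, for each emotion category, the lemmas of its affective
    synsets in the target language, using the WordNet alignment."""
    lexicon = {}
    for category, synset_ids in AFFECTIVE_SYNSETS.items():
        lemmas = set()
        for sid in synset_ids:
            lemmas.update(wn.synset(sid).lemma_names(lang))
        lexicon[category] = sorted(lemmas)
    return lexicon

if __name__ == "__main__":
    print(multilingual_emotion_lexicon("ita"))
```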
Moreover, we plan to further develop the ontology and extend the coverage of our emotion lexicon, in particular to cope with multi-word expressions that may not explicitly convey emotions but are related to concepts that do, as was also confirmed by the results of our user study. To deal with this issue, it will be convenient to rely on resources like EmoSenticNet (Poria et al., 2014a) and to adopt a concept-based approach like the one described in Poria et al. (2014b); a toy sketch of such a lookup is given below.
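The sketch assumes an EmoSenticNet-style resource of concept/emotion pairs has already been loaded into a plain dictionary; the entries and the loading step are hypothetical and do not reproduce the actual resource.

```python
# Minimal sketch of a concept-based lookup that also covers multi-word
# expressions. The concept/emotion pairs below are invented placeholders.
from typing import Dict, List

CONCEPT_EMOTIONS: Dict[str, str] = {
    "broken heart": "sadness",
    "birthday party": "joy",
    "war": "fear",
}

def detect_concept_emotions(tags: List[str]) -> Dict[str, str]:
    """Match tags (including multi-word ones) against the concept lexicon."""
    normalized = [t.strip().lower() for t in tags]
    return {t: CONCEPT_EMOTIONS[t] for t in normalized if t in CONCEPT_EMOTIONS}

print(detect_concept_emotions(["Broken heart", "sky", "war"]))
```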
The affective model is based on a state-of-the-art cognitive model of emotions and inspired an interactive user interface for visualizing and summarizing the results of the emotion detection algorithm. The current ArsEmotica interface allows users to access the outcomes of the emotional analysis. Along this line, the next step is to study innovative strategies for browsing the artworks by relying on their semantic organization in the ArsEmotica emotional space. The aim is to let users explore the resources along the various dimensions suggested by the ontological model. Possible user preferences to deal with could be: “show me sadder artworks” (intensity relation); “show me something emotionally completely different” (polar opposites); “show me artworks conveying similar emotions” (similarity relation). Another interesting direction we plan to investigate is the use of the ArsEmotica framework to define innovative recommendation strategies based on affective information about contents (Tkalcic et al., 2013). Notice that the current interface aims to offer a summary of the emotions evoked by artworks, detecting their mere presence by analysing tags, without introducing a measure of their relative strength (e.g. tags related to emotion A are more frequent than tags related to emotion B). However, in other contexts it would be interesting to use the frequencies of emotion labels to give users a measure of the different strengths of the emotions. This information could be exploited to rank evoked emotions in the case of multiple classification and to recommend artworks that are emotionally similar with respect to the prevalent emotional category, as sketched below.
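The following minimal sketch illustrates the frequency-based ranking, under the assumption that the pipeline has already mapped each emotion-bearing tag of an artwork onto an emotional category; the data and function names are illustrative.

```python
# Sketch: rank the emotions evoked by an artwork by the frequency of the
# tags mapped onto each emotional category, and pick the prevalent one.
from collections import Counter
from typing import List, Tuple

def rank_evoked_emotions(tag_emotions: List[str]) -> List[Tuple[str, float]]:
    """Return (emotion, relative strength) pairs, strongest first, where the
    relative strength is the fraction of emotion-bearing tags in that category."""
    counts = Counter(tag_emotions)
    total = sum(counts.values())
    return [(emotion, n / total) for emotion, n in counts.most_common()]

def prevalent_emotion(tag_emotions: List[str]) -> str:
    ranking = rank_evoked_emotions(tag_emotions)
    return ranking[0][0] if ranking else "none"

# e.g. an artwork whose tags were mapped onto these categories
print(rank_evoked_emotions(["sadness", "sadness", "fear", "sadness", "joy"]))
print(prevalent_emotion(["sadness", "sadness", "fear", "sadness", "joy"]))
```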
Another interesting issue to address is how the emotional user experience in relation to a given artwork can vary over time. The positioning of an artwork in the emotional space created by the emotional ontology is not static but dynamic: users of a tagging platform can insert new tags (from which new emotions can be inferred), new users can provide emotional feedback about tags, and the same user can respond differently to the same artwork at different points in time. Accordingly, the emotion-driven browsing experience can differ from one visit to the next. We envisage an enhanced application framework where, initially, the position of the artworks in the emotional space is mainly determined by the interaction of artists and curators: they will be the first to add meanings to the artworks and thus to feed the “emotional engine”. Later, when the application starts to collect the new meanings expressed by visitors, artworks will start to float in the emotional space. New artworks and meanings can be added at any time, and the emotional relations will continuously change, reflecting the evolution of the community and its latent perception of a sort of emotional “zeitgeist”. Dealing with these dynamic aspects is out of the scope of the current proposal, but it could be an interesting line of research in future work; a small sketch of a time-dependent emotional profile is shown below.
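The sketch below shows, under the envisaged extension, how an artwork's emotional profile could be recomputed as new, timestamped tag-to-emotion mappings arrive (first from artists and curators, later from visitors). The data structures, timestamps and field names are purely illustrative.

```python
# Sketch: a time-dependent emotional profile built from timestamped
# (tag -> emotion) events contributed for a single artwork.
from collections import Counter
from datetime import datetime
from typing import List, Tuple

EmotionEvent = Tuple[datetime, str]  # (timestamp, emotion inferred from a tag)

def emotional_profile(events: List[EmotionEvent], until: datetime) -> Counter:
    """Profile of the artwork as seen at time `until`: counts of the
    emotion-bearing contributions received so far."""
    return Counter(emotion for ts, emotion in events if ts <= until)

events = [
    (datetime(2014, 1, 10), "joy"),      # curator's initial tagging
    (datetime(2014, 3, 2), "sadness"),   # later visitor contributions
    (datetime(2014, 3, 5), "sadness"),
]
print(emotional_profile(events, datetime(2014, 2, 1)))  # Counter({'joy': 1})
print(emotional_profile(events, datetime(2014, 4, 1)))  # Counter({'sadness': 2, 'joy': 1})
```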
ArsEmotica has been tested on social data and artworks from the real-world online collection arsmeteo.org. The ArsMeteo dataset, enriched with metadata about the emotions detected by ArsEmotica, is available in RDF format and can be accessed via a demo SPARQL endpoint. A novel unified semantic data model has been defined, in which the artworks belonging to the collection are semantically described by referring to the emotional categories of an ontology of emotions. The framework allows us to model relations among artworks, persons and emotions by combining the ArsEmotica ontology of emotions with existing ontologies, such as FOAF and OMR. Moreover, where possible and relevant, our data were linked to external repositories of the LOD cloud, such as DBpedia.
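For illustration, the sketch below shows how such a dataset could be queried with the SPARQLWrapper library. The endpoint URL, namespace and property names are placeholders and do not reproduce the actual ArsEmotica vocabulary; only the SPARQLWrapper usage pattern and the FOAF/Dublin Core prefixes are standard.

```python
# Sketch: retrieving artworks linked to an emotion category from an
# ArsEmotica-style RDF dataset exposed through a SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/arsemotica/sparql"  # placeholder for the demo endpoint

QUERY = """
PREFIX arsem:   <http://example.org/arsemotica#>   # hypothetical namespace
PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
PREFIX dcterms: <http://purl.org/dc/terms/>

SELECT ?artwork ?title ?author WHERE {
  ?artwork arsem:evokesEmotion arsem:Sadness ;     # hypothetical property/category
           dcterms:title ?title ;
           foaf:maker ?author .
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["title"]["value"], "-", row["author"]["value"])
```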
A user study has been carried out with the main aim of testing the effectiveness of the framework for an emotional tagging task, with a special focus, on the one hand, on the underlying affective model and, on the other hand, on the visualization interface inspired by Plutchik’s wheel of emotions. Results were very positive on both aspects, and the comments of subjects during the thinking-aloud session will be very helpful for designing future developments of the framework. In particular, some subjects suggested developing the social level of the application, since they are interested in comparing their emotional responses with the possibly different emotional responses of other users (friends or simply visitors) interested in the same artwork. Moreover, it emerged that users want more freedom in choosing a tag for the feedback, as they expressed the desire to evaluate tags that were not selected as having a significant sentiment load according to SentiWordNet. As observed in Strapparava et al. (2006), in some cases the affective power of a word is part of the collective imagination (think for instance of words like ‘war’), but some words can be emotional for someone because of her individual story. Therefore, it may be better to widen the filter of words offered for user feedback, since the current one was perceived as restrictive.
As for the prospective applications of the ArsEmotica framework, it could be exploited, along the direction traced in Simon (2010), as a co-creation instrument for museums, virtual galleries, and other activities falling under the general umbrella of creative industries. In this sector the demand is growing for new user experiences, and therefore for applications, including smartphone apps, whose key aim is to stimulate user-community interaction and encourage visitors to share their experiences. Such applications can have a cultural flavor but can also be more intrinsically related to leisure, and they should help transform classical art-fruition experiences into innovative, more immersive experiences with a greater impact on visitors.
Art and emotions are naturally related (Silvia, 2005) and, as our user study also confirmed, the possibility to speculate about the artistic arousal of emotions and to reason on artworks, authors and evoked emotions within a social dimension, where personal feelings about an artwork can be compared with what is felt by others, seems attractive from a visitor’s perspective. Moreover, the possibility to collect emotional responses to artworks and collections can be important for artists and curators, in order to get feedback about their creations, but also for policymakers in the cultural heritage sector, who need advanced e-participation tools to support their work, both at the decision-making stage and in the ex-post evaluation of the impact of their policies (e.g. what is the sentiment of citizens about a publicly funded exhibition?). Another interesting application field is given by the growing number of virtual galleries, such as http://www.saatchiart.com, that have arisen in recent years with a business perspective, their main aim being to sell artworks online (visual arts, especially). In this case too, detecting emotions in keywords and other information associated with the artworks could be useful for offering new emotion-driven search functionalities to website customers, possibly combined with traditional search criteria based on genres (e.g. paintings, photography or sculpture) that encompass stylistic characteristics. Recently, a similar approach has already been successfully applied to digital music, with the proposal of music streaming services that, instead of finding music by genre or musical relation, propose a selection tailored to the user’s mood and feelings, e.g. Stereomood, http://www.stereomood.com.
Finally, the possibility to exploit the geographic information about the place where artists work, together with