Sharing personal experiences in a speech can help your audience identify and connect with you, but you need to organize those details so that they illustrate an argument. Like any other kind of speech, one drawing on personal experience should not ramble; its conversational style still needs structure. Even if your speech only needs to be introductory, it should still present a precise, pointed version of you or your views, one best demonstrated by a particular experience you had.
We present a robust method for mapping detected facial Action Units (AUs) to six basic emotions. Automatic AU recognition is prone to errors due to illumination, tracking failures, and occlusions; hence, traditional rule-based methods for mapping AUs to emotions are very sensitive to false positives and misses among the AUs. In our method, a set of chosen AUs is mapped to the six basic emotions using a learned statistical relationship and a suitable matching technique. Relationships between the AUs and emotions are captured as template strings comprising the most discriminative AUs for each emotion; the template strings are computed using a concept called discriminative power. The Longest Common Subsequence (LCS) distance, an approximate string-matching measure, is applied to calculate the closeness of a test string of AUs to the template strings, and hence to infer the underlying emotions. LCS proves efficient at handling practical issues such as erroneous AU detection and helps reduce false predictions. The proposed method is tested on various databases, including CK+, ISL, FACS, JAFFE, and MindReading, as well as many real-world video frames. We compare our performance with rule-based techniques and show clear improvement on both benchmark databases and real-world datasets.
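To make the matching step concrete, here is a minimal Python sketch of LCS-distance matching between a detected AU string and per-emotion template strings. The template tuples, the distance formula, and the `infer_emotion` helper are illustrative assumptions (the templates loosely follow standard FACS-style AU combinations), not the learned templates or exact scoring used in the paper.

```python
# Minimal sketch of LCS-based AU-to-emotion matching, assuming AUs are
# encoded as ordered tuples of AU numbers. Templates below are placeholders,
# not the paper's learned, discriminative-power-derived templates.

def lcs_length(a, b):
    """Length of the Longest Common Subsequence of two sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_distance(a, b):
    """A common LCS-derived distance: len(a) + len(b) - 2 * LCS(a, b)."""
    return len(a) + len(b) - 2 * lcs_length(a, b)

# Hypothetical per-emotion templates (ordered AU numbers).
TEMPLATES = {
    "happiness": (6, 12),
    "surprise": (1, 2, 5, 26),
    "sadness": (1, 4, 15),
}

def infer_emotion(detected_aus):
    """Return the emotion whose template string is closest to the test string."""
    return min(TEMPLATES, key=lambda e: lcs_distance(detected_aus, TEMPLATES[e]))

# A noisy detection (AU 5 missed, spurious AU 7) still matches "surprise",
# illustrating LCS's tolerance to erroneous AU detection.
print(infer_emotion((1, 2, 7, 26)))  # -> surprise
```

Because LCS only rewards AUs that appear in both strings in order, a single missed or spurious AU shifts the distance by one rather than breaking the match outright, which is the robustness property the abstract attributes to this matching step.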
