Analyzing Multimodal Public Sentiment Based on Hierarchical Semantic Attentional Network
Xu, Nan1,2
2017-07
Conference Date | Jul 22-24, 2017
Conference Venue | Beijing, China
Abstract | Public sentiment is regarded as an important measure for event detection, information security, policy making, etc. Analyzing public sentiment increasingly relies on large amounts of multimodal content, in contrast to traditional text-based and image-based sentiment analysis. However, most previous works directly extract features from the image as additional information for the text modality and then merge these features for multimodal sentiment analysis. More detailed semantic information in the image, such as the image caption, which contains useful semantic components for sentiment analysis, has been ignored. In this paper, we propose a Hierarchical Semantic Attentional Network based on image captions, HSAN, for multimodal sentiment analysis. It has a hierarchical structure that mirrors the hierarchical structure of a tweet and uses the image caption to extract visual semantic features as additional information for the text in the multimodal sentiment analysis task. We also introduce an attention-with-context mechanism, which learns to take context information into account during encoding. Experiments on two publicly available datasets show the effectiveness of our model. |
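The attention-with-context mechanism mentioned in the abstract resembles the context-vector attention used in hierarchical attention networks: hidden states are projected, scored against a learned context vector, and pooled by the resulting softmax weights. Below is a minimal numpy sketch of that general idea; all names, shapes, and parameterizations are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_with_context(H, W, b, u):
    """Attention pooling with a learned context vector (illustrative sketch).

    H: (T, d) hidden states for T time steps
    W: (d, d) projection matrix, b: (d,) bias
    u: (d,) learned context vector
    Returns the attention-pooled vector (d,) and the weights (T,).
    """
    U = np.tanh(H @ W + b)   # (T, d) projected hidden states
    scores = U @ u           # (T,) relevance of each step to the context
    alpha = softmax(scores)  # (T,) attention weights, sum to 1
    return alpha @ H, alpha  # weighted sum of hidden states

# Toy usage with random values (hypothetical dimensions).
rng = np.random.default_rng(0)
T, d = 5, 8
H = rng.standard_normal((T, d))
W = rng.standard_normal((d, d)) * 0.1
b = np.zeros(d)
u = rng.standard_normal(d)
vec, alpha = attention_with_context(H, W, b, u)
```

In a hierarchical model, pooling of this kind would typically be applied twice: once over word-level states to form sentence vectors, and again over sentence-level states to form the document (tweet) representation.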
Language | English |
Content Type | Conference Paper |
Source URL | [http://ir.ia.ac.cn/handle/173211/39146] |
Collection | Institute of Automation_State Key Laboratory of Management and Control for Complex Systems_Research Center for Internet Big Data and Security Informatics |
Affiliations | 1.Institute of Automation, Chinese Academy of Sciences 2.University of Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Xu, Nan. Analyzing Multimodal Public Sentiment Based on Hierarchical Semantic Attentional Network[C]. In: . Beijing, China. Jul 22-24, 2017. |