
Item

{"_buckets": {"deposit": "3afb1e81-fac5-4889-a0fe-c0230ac69efc"}, "_deposit": {"id": "2008835", "owners": [1], "pid": {"revision_id": 0, "type": "depid", "value": "2008835"}, "status": "published"}, "_oai": {"id": "oai:u-ryukyu.repo.nii.ac.jp:02008835", "sets": ["1642838338003", "1642838406845"]}, "author_link": [], "item_1617186331708": {"attribute_name": "Title", "attribute_value_mlt": [{"subitem_1551255647225": "\u7573\u307f\u8fbc\u307f\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3092\u7528\u3044\u305f\u8868\u60c5\u8868\u73fe\u306e\u7372\u5f97\u3068\u9854\u7279\u5fb4\u91cf\u306e\u5206\u6790", "subitem_1551255648112": "ja"}, {"subitem_1551255647225": "Feature Acquisition and Analysis for Facial Expression Recognition Using Convolutional Neural Networks", "subitem_1551255648112": "en"}]}, "item_1617186419668": {"attribute_name": "Creator", "attribute_type": "creator", "attribute_value_mlt": [{"creatorNames": [{"creatorName": "\u897f\u9298, \u5927\u559c", "creatorNameLang": "ja"}]}, {"creatorNames": [{"creatorName": "\u9060\u85e4, \u8061\u5fd7", "creatorNameLang": "ja"}]}, {"creatorNames": [{"creatorName": "\u7576\u9593, \u611b\u6643", "creatorNameLang": "ja"}]}, {"creatorNames": [{"creatorName": "\u5c71\u7530, \u8003\u6cbb", "creatorNameLang": "ja"}]}, {"creatorNames": [{"creatorName": "\u8d64\u5dba, \u6709\u5e73", "creatorNameLang": "ja"}]}, {"creatorNames": [{"creatorName": "Nishime, Taiki", "creatorNameLang": "en"}]}, {"creatorNames": [{"creatorName": "Endo, Satoshi", "creatorNameLang": "en"}]}, {"creatorNames": [{"creatorName": "Toma, Naruaki", "creatorNameLang": "en"}]}, {"creatorNames": [{"creatorName": "Yamada, Koji", "creatorNameLang": "en"}]}, {"creatorNames": [{"creatorName": "Akamine, Yuhei", "creatorNameLang": "en"}]}]}, "item_1617186476635": {"attribute_name": "Access Rights", "attribute_value_mlt": [{"subitem_1522299639480": "open access", "subitem_1600958577026": "http://purl.org/coar/access_right/c_abf2"}]}, "item_1617186609386": {"attribute_name": "Subject", "attribute_value_mlt": [{"subitem_1522299896455": "en", "subitem_1522300014469": "Other", "subitem_1523261968819": "facial expression"}, {"subitem_1522299896455": "en", "subitem_1522300014469": "Other", "subitem_1523261968819": "convolutional neural networks"}]}, "item_1617186626617": {"attribute_name": "Description", "attribute_value_mlt": [{"subitem_description": "Facial expressions play an important role in communication as much as words. In facial expression recognition by human, it is difficult to uniquely judge, because facial expression has the sway of recognition by individual difference and subjective recognition. Therefore, it is difficult to evaluate the reliability of the result from recognition accuracy alone, and the analysis for explaining the result and feature learned by Convolutional Neural Networks (CNN) will be considered important. In this study, we carried out the facial expression recognition from facial expression images using CNN. In addition, we analysed CNN for understanding learned features and prediction results. Emotions we focused on are \"happiness\", \"sadness\", \"surprise\", \"anger\", \"disgust\", \"fear\" and \"neutral\". As a result, using 32286 facial expression images, have obtained an emotion recognition score of about 57%; for two emotions\\n(Happiness, Surprise) the recognition score exceeded 70%, but Anger and Fear was less than 50%. In the analysis of CNN, we focused on the learning process, input and intermediate layer. 
Analysis of the learning progress confirmed that increased data can be recognized in the following order \"happiness\", \"surprise\", \"neutral\", \"anger\", \"disgust\", \"sadness\" and \"fear\". From the analysis result of the input and intermediate layer, we confirmed that the feature of the eyes and mouth strongly influence the facial expression recognition, and intermediate layer neurons had active patterns corresponding to facial expressions, and also these activate patterns do not respond to partial features of facial expressions. From these results, we concluded that CNN has learned the partial features of eyes and mouth from input, and recognize the facial expression using hidden layer units having the area corresponding to each facial expression.", "subitem_description_type": "Other"}, {"subitem_description": "\u8ad6\u6587", "subitem_description_type": "Other"}]}, "item_1617186643794": {"attribute_name": "Publisher", "attribute_value_mlt": [{"subitem_1522300295150": "ja", "subitem_1522300316516": "\u793e\u56e3\u6cd5\u4eba \u4eba\u5de5\u77e5\u80fd\u5b66\u4f1a"}, {"subitem_1522300295150": "en", "subitem_1522300316516": "THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE"}]}, "item_1617186702042": {"attribute_name": "Language", "attribute_value_mlt": [{"subitem_1551255818386": "jpn"}]}, "item_1617186783814": {"attribute_name": "Identifier", "attribute_value_mlt": [{"subitem_identifier_type": "HDL", "subitem_identifier_uri": "http://hdl.handle.net/20.500.12000/37607"}]}, "item_1617186920753": {"attribute_name": "Source Identifier", "attribute_value_mlt": [{"subitem_1522646500366": "ISSN", "subitem_1522646572813": "1346-0714"}]}, "item_1617186941041": {"attribute_name": "Source Title", "attribute_value_mlt": [{"subitem_1522650068558": "ja", "subitem_1522650091861": "\u4eba\u5de5\u77e5\u80fd\u5b66\u4f1a\u8ad6\u6587\u8a8c"}]}, "item_1617187056579": {"attribute_name": "Bibliographic Information", "attribute_value_mlt": [{"bibliographicIssueNumber": "5", "bibliographicVolumeNumber": "32"}]}, "item_1617258105262": {"attribute_name": "Resource Type", "attribute_value_mlt": [{"resourcetype": "journal article", "resourceuri": "http://purl.org/coar/resource_type/c_6501"}]}, "item_1617265215918": {"attribute_name": "Version Type", "attribute_value_mlt": [{"subitem_1522305645492": "VoR", "subitem_1600292170262": "http://purl.org/coar/version/c_970fb48d4fbd8a85"}]}, "item_1617353299429": {"attribute_name": "Relation", "attribute_value_mlt": [{"subitem_1522306287251": {"subitem_1522306382014": "DOI", "subitem_1522306436033": "https://doi.org/10.1527/tjsai.F-H34"}}, {"subitem_1522306287251": {"subitem_1522306382014": "DOI", "subitem_1522306436033": "info:doi/10.1527/tjsai.F-H34"}}]}, "item_1617605131499": {"attribute_name": "File", "attribute_type": "file", "attribute_value_mlt": [{"accessrole": "open_access", "download_preview_message": "", "file_order": 0, "filename": "Vol37no5F.pdf", "future_date_message": "", "is_thumbnail": false, "mimetype": "", "size": 0, "url": {"objectType": "fulltext", "url": "https://u-ryukyu.repo.nii.ac.jp/record/2008835/files/Vol37no5F.pdf"}, "version_id": "3739ac18-b28c-4aa6-bf4a-d1745eaa8dbc"}]}, "item_title": "\u7573\u307f\u8fbc\u307f\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3092\u7528\u3044\u305f\u8868\u60c5\u8868\u73fe\u306e\u7372\u5f97\u3068\u9854\u7279\u5fb4\u91cf\u306e\u5206\u6790", "item_type_id": "15", "owner": "1", "path": ["1642838338003", "1642838406845"], "permalink_uri": "http://hdl.handle.net/20.500.12000/37607", "pubdate": 
{"attribute_name": "PubDate", "attribute_value": "2018-01-29"}, "publish_date": "2018-01-29", "publish_status": "0", "recid": "2008835", "relation": {}, "relation_version_is_last": true, "title": ["\u7573\u307f\u8fbc\u307f\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3092\u7528\u3044\u305f\u8868\u60c5\u8868\u73fe\u306e\u7372\u5f97\u3068\u9854\u7279\u5fb4\u91cf\u306e\u5206\u6790"], "weko_shared_id": -1}
  1. Journal Articles
  2. Others
  1. Departmental Index
  2. Faculty of Engineering

畳み込みニューラルネットワークを用いた表情表現の獲得と顔特徴量の分析
(Feature Acquisition and Analysis for Facial Expression Recognition Using Convolutional Neural Networks)

http://hdl.handle.net/20.500.12000/37607
Name / File: Vol37no5F.pdf
Item type Default Item Type (Full) (1)
Public release date 2018-01-29
Title
Title 畳み込みニューラルネットワークを用いた表情表現の獲得と顔特徴量の分析
Language ja
Title Feature Acquisition and Analysis for Facial Expression Recognition Using Convolutional Neural Networks
Language en
Creator
西銘, 大喜 (ja)
遠藤, 聡志 (ja)
當間, 愛晃 (ja)
山田, 考治 (ja)
赤嶺, 有平 (ja)
Nishime, Taiki (en)
Endo, Satoshi (en)
Toma, Naruaki (en)
Yamada, Koji (en)
Akamine, Yuhei (en)

Access Rights
Access Rights open access
Access Rights URI http://purl.org/coar/access_right/c_abf2
Subject
Language en
Subject Scheme Other
Subject facial expression
Language en
Subject Scheme Other
Subject convolutional neural networks
Description
Description Type Other
Description Facial expressions play as important a role in communication as words do. Human facial expression recognition is hard to judge uniquely, because recognition sways with individual differences and subjective perception. It is therefore difficult to evaluate the reliability of a result from recognition accuracy alone, and analysis that explains the results and the features learned by Convolutional Neural Networks (CNNs) is important. In this study, we carried out facial expression recognition from facial expression images using a CNN, and we analysed the CNN to understand the learned features and the prediction results. The emotions we focused on are "happiness", "sadness", "surprise", "anger", "disgust", "fear" and "neutral". Using 32,286 facial expression images, we obtained an emotion recognition score of about 57%; for two emotions (happiness, surprise) the recognition score exceeded 70%, but for anger and fear it was less than 50%. In the analysis of the CNN, we focused on the learning process and on the input and intermediate layers. Analysis of the learning progress confirmed that the emotions become recognizable in the following order: "happiness", "surprise", "neutral", "anger", "disgust", "sadness" and "fear". From the analysis of the input and intermediate layers, we confirmed that features of the eyes and mouth strongly influence facial expression recognition, that intermediate-layer neurons had activation patterns corresponding to facial expressions, and that these activation patterns do not respond to partial features of facial expressions. From these results, we concluded that the CNN learned the partial features of the eyes and mouth from the input and recognizes facial expressions using hidden-layer units whose activity corresponds to each facial expression. (An illustrative classifier sketch appears after the bibliographic information below.)
Description Type Other
Description Article
Publisher
Language ja
Publisher 社団法人 人工知能学会
Language en
Publisher THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE
Language
Language jpn
Resource Type
Resource Type journal article
Resource Type Identifier http://purl.org/coar/resource_type/c_6501
Version Type
Version Type VoR
Version Type Resource http://purl.org/coar/version/c_970fb48d4fbd8a85
Identifier
Identifier http://hdl.handle.net/20.500.12000/37607
Identifier Type HDL
Relation
Related Identifier
Identifier Type DOI
Related Identifier https://doi.org/10.1527/tjsai.F-H34
Related Identifier
Identifier Type DOI
Related Identifier info:doi/10.1527/tjsai.F-H34
Source Identifier
Source Identifier Type ISSN
Source Identifier 1346-0714
Source Title
Language ja
Source Title 人工知能学会論文誌 (Transactions of the Japanese Society for Artificial Intelligence)
Bibliographic Information
Volume 32, Issue 5
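The Description above outlines a seven-class CNN classifier together with an analysis of intermediate-layer activations. As a rough, non-authoritative illustration, the sketch below shows one way such a setup could look in PyTorch. The input size (48x48 grayscale), the layer widths, and the hook-based activation capture are all assumptions of this sketch; the record does not describe the authors' actual architecture.

import torch
import torch.nn as nn

# Hypothetical 7-class expression classifier ("happiness", "sadness",
# "surprise", "anger", "disgust", "fear", "neutral"). All architecture
# details are assumptions; the record does not specify them.
class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 1-channel (grayscale) input assumed
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 24x24 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# The abstract also analyses intermediate-layer activations; a forward
# hook is one standard way to capture them for that kind of inspection.
model = ExpressionCNN()
activations = {}
model.features.register_forward_hook(
    lambda module, inputs, output: activations.update(feat=output.detach())
)
logits = model(torch.randn(1, 1, 48, 48))   # dummy 48x48 grayscale image
print(logits.shape)                         # torch.Size([1, 7])
print(activations["feat"].shape)            # torch.Size([1, 64, 12, 12])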

Versions

Ver.1 2022-01-28 07:18:28.037208


Export

OAI-PMH
  • OAI-PMH JPCOAR
  • OAI-PMH DublinCore
  • OAI-PMH DDI
Other Formats
  • JSON
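The OAI-PMH links above expose this record in machine-readable form. Below is a minimal harvesting sketch, assuming the endpoint lives at the path /oai (common for WEKO3 installations, but not stated on this page); the OAI identifier is taken from this record's metadata.

import urllib.request
import xml.etree.ElementTree as ET

# Assumed endpoint path; verify it against the repository before relying on it.
ENDPOINT = "https://u-ryukyu.repo.nii.ac.jp/oai"
# OAI identifier from this record.
IDENTIFIER = "oai:u-ryukyu.repo.nii.ac.jp:02008835"

url = (
    f"{ENDPOINT}?verb=GetRecord"
    f"&identifier={IDENTIFIER}"
    "&metadataPrefix=oai_dc"  # DublinCore; the JPCOAR format is also offered above
)
with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

# Print every Dublin Core <dc:title> element in the response.
for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)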



Powered by WEKO3