{"id":7655,"date":"2019-02-12T16:43:00","date_gmt":"2019-02-12T15:43:00","guid":{"rendered":"https:\/\/nes.aau.at\/?p=7655"},"modified":"2020-07-01T07:14:24","modified_gmt":"2020-07-01T06:14:24","slug":"self-calibration-of-visual-sensor-networks","status":"publish","type":"post","link":"https:\/\/nes.aau.at\/?p=7655","title":{"rendered":"Self-calibration of visual sensor networks"},"content":{"rendered":"<p>Many multi-camera applications rely on the knowledge of the spatial relationship among the individual nodes. However, establishing such a network-wide calibration is typically a time-consuming task and requires user interaction. In her recent work, Jennifer Simonjan developed a decentralized and resource-aware algorithm for estimating the poses of all camera nodes without any user interaction. &#8220;Self-calibration is achieved in two steps&#8221;, she explains. &#8220;First, overlapping camera pairs estimate relative positions and orientations by exchanging locally measured distances and angles to detected objects. 
Second, calibration information of overlapping cameras is spread throughout the network such that poses of non-overlapping cameras can also be estimated.&#8221;<\/p>\n<figure id=\"attachment_7656\" aria-describedby=\"caption-attachment-7656\" style=\"width: 300px\" class=\"wp-caption alignleft\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7656 size-medium\" src=\"https:\/\/nes.aau.at\/wp-content\/uploads\/2019\/02\/VSN-calibration-300x238.jpg\" alt=\"\" width=\"300\" height=\"238\" srcset=\"https:\/\/nes.aau.at\/wp-content\/uploads\/2019\/02\/VSN-calibration-300x238.jpg 300w, https:\/\/nes.aau.at\/wp-content\/uploads\/2019\/02\/VSN-calibration-768x610.jpg 768w, https:\/\/nes.aau.at\/wp-content\/uploads\/2019\/02\/VSN-calibration-1024x814.jpg 1024w, https:\/\/nes.aau.at\/wp-content\/uploads\/2019\/02\/VSN-calibration-69x55.jpg 69w, https:\/\/nes.aau.at\/wp-content\/uploads\/2019\/02\/VSN-calibration-800x636.jpg 800w, https:\/\/nes.aau.at\/wp-content\/uploads\/2019\/02\/VSN-calibration.jpg 1800w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><figcaption id=\"caption-attachment-7656\" class=\"wp-caption-text\">Self-calibration determines the position and orientation of all camera nodes in the network.<\/figcaption><\/figure>\n<p>Her PhD supervisor <a href=\"https:\/\/nes.aau.at\/?page_id=5984\">Bernhard Rinner<\/a> points out: &#8220;This approach does not rely on a priori topological information and delivers the extrinsic camera parameters with respect to a common coordinate system.&#8221; Such network-wide calibration is important for many multi-camera applications. It helps to automate the network setup and to account for topology changes during network operation. A fully decentralized approach was realized to support scalability and network dynamics. 
In the recently published journal paper, they evaluate their approach in a simulation study, analyzing the achieved spatial accuracy and computational effort under noisy measurements and different communication schemes.<\/p>\n<p>&nbsp;<\/p>\n<h5>Publication<\/h5>\n<p>Jennifer Simonjan and Bernhard Rinner. <span class=\"title-text\"><a href=\"https:\/\/doi.org\/10.1016\/j.adhoc.2019.01.007\">Decentralized and resource-efficient self-calibration of visual sensor networks<\/a>. Ad Hoc Networks. 88: 212-228. 2019.<br \/>\n<\/span><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Many multi-camera applications rely on the knowledge of the spatial relationship among the individual nodes. However, establishing such a network-wide calibration is typically a time-consuming task and requires user interaction. In her recent work, Jennifer Simonjan developed a decentralized and resource-aware algorithm for estimating the poses of all camera nodes 
[&hellip;]<\/p>\n","protected":false},"author":8,"featured_media":6123,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[2],"tags":[240,351,220],"class_list":["post-7655","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-publications","tag-camera-networks","tag-self-calibration","tag-sensor-networks"],"jetpack_featured_media_url":"https:\/\/nes.aau.at\/wp-content\/uploads\/2017\/01\/pervasive-computing-800.jpg","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/nes.aau.at\/index.php?rest_route=\/wp\/v2\/posts\/7655","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nes.aau.at\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nes.aau.at\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nes.aau.at\/index.php?rest_route=\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/nes.aau.at\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7655"}],"version-history":[{"count":14,"href":"https:\/\/nes.aau.at\/index.php?rest_route=\/wp\/v2\/posts\/7655\/revisions"}],"predecessor-version":[{"id":8101,"href":"https:\/\/nes.aau.at\/index.php?rest_route=\/wp\/v2\/posts\/7655\/revisions\/8101"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nes.aau.at\/index.php?rest_route=\/wp\/v2\/media\/6123"}],"wp:attachment":[{"href":"https:\/\/nes.aau.at\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7655"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nes.aau.at\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7655"},{"taxonomy":"post_tag","embed
dable":true,"href":"https:\/\/nes.aau.at\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7655"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}