{"id":977,"date":"2017-06-14T11:08:19","date_gmt":"2017-06-14T11:08:19","guid":{"rendered":"http:\/\/blog.tiran.info\/?p=977"},"modified":"2017-12-04T14:56:25","modified_gmt":"2017-12-04T13:56:25","slug":"reseaux-de-neurones-avec-r-3","status":"publish","type":"post","link":"https:\/\/blog.tiran.stream\/?p=977","title":{"rendered":"R\u00e9seaux de neurones avec R (#3)"},"content":{"rendered":"<p style=\"text-align: justify;\">Il peut \u00eatre int\u00e9ressant de visualiser la topologie d&rsquo;un ANN. Le blog suivant indique une m\u00e9thode simple pour y parvenir: <a href=\"https:\/\/beckmw.wordpress.com\/2013\/11\/14\/visualizing-neural-networks-in-r-update\/\" target=\"_blank\" rel=\"noopener\">https:\/\/beckmw.wordpress.com\/2013\/11\/14\/visualizing-neural-networks-in-r-update\/<\/a><\/p>\n<p style=\"text-align: justify;\">Le code de la fonction plot.nnet est disponible sur <a href=\"https:\/\/gist.github.com\/fawda123\/7471137\" target=\"_blank\" rel=\"noopener\">https:\/\/gist.github.com\/fawda123\/7471137<\/a><\/p>\n<p>Essayons \u00e0 l&rsquo;aide de cette fonction de repr\u00e9senter \u00a0le r\u00e9seau de neurones construit dans <a href=\"http:\/\/blog.tiran.info\/reseaux-de-neurones-avec-r-1\">ce billet<\/a>\u00a0:<\/p>\n<pre class=\"brush: js; ruler: true;\">\u00a0\r\n&gt; library(devtools)\r\nWarning message:\r\npackage \u2018devtools\u2019 was built under R version 3.2.5 \r\n&gt; source(\"C:\/RTI\/Stats\/nnet_plot_update.r\")\r\n&gt;\r\n&gt; plot.nnet(nn_model)\r\n&gt;\r\n<\/pre>\n<p>La repr\u00e9sentation s&rsquo;affiche alors:<\/p>\n<p><a href=\"https:\/\/blog.tiran.stream\/wp-content\/uploads\/2017\/09\/ANN.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-993\" src=\"https:\/\/blog.tiran.stream\/wp-content\/uploads\/2017\/09\/ANN.png\" alt=\"\" width=\"600\" height=\"484\" \/><\/a><\/p>\n<p>On retrouve bien:<\/p>\n<ul>\n<li style=\"text-align: justify;\">les 9 pr\u00e9dicteurs en entr\u00e9es (associ\u00e9s \u00e0 9 
neurones d&rsquo;entr\u00e9e I1 -&gt; I9)<\/li>\n<li style=\"text-align: justify;\">les 5 neurones de la couche cach\u00e9e (H1 -&gt; H5)<\/li>\n<li style=\"text-align: justify;\">le neurone de sortie O1 (entrain\u00e9 pour estimer V11)<\/li>\n<li style=\"text-align: justify;\">les neurones de biais pour chaque couche (B1 et B2)<\/li>\n<\/ul>\n<p style=\"text-align: justify;\">Les poids synaptiques ne sont pas figur\u00e9s car cela alourdirait consid\u00e9rablement le graphique. En revanche, l&rsquo;\u00e9paisseur des liens est proportionnelle \u00e0 l&rsquo;importance du poids. La couleur informe du signe associ\u00e9: gris pour un poids n\u00e9gatif, noir pour un poids positif.<\/p>\n<p style=\"text-align: justify;\">Par exemple, dans le graphique ci-dessus, on voit que le neurone H3 a le poids positif le plus important.<\/p>\n<p style=\"text-align: justify;\">On peut le confirmer ais\u00e9ment avec les donn\u00e9es num\u00e9riques:<\/p>\n<pre class=\"brush: js; highlight: [15, 16]; ruler: true;\">\u00a0\r\n&gt; summary(nn_model)\r\na 9-5-1 network with 56 weights\r\noptions were -\r\n b-&gt;h1 i1-&gt;h1 i2-&gt;h1 i3-&gt;h1 i4-&gt;h1 i5-&gt;h1 i6-&gt;h1 i7-&gt;h1 i8-&gt;h1 i9-&gt;h1 \r\n  1.04 -11.14 -14.36 -14.44 -12.92 -12.05 -14.21 -12.57 -14.14  -7.14 \r\n b-&gt;h2 i1-&gt;h2 i2-&gt;h2 i3-&gt;h2 i4-&gt;h2 i5-&gt;h2 i6-&gt;h2 i7-&gt;h2 i8-&gt;h2 i9-&gt;h2 \r\n  2.07 -11.58 -12.34 -12.19  -9.69 -10.64 -12.48 -11.75 -10.62  -5.45 \r\n b-&gt;h3 i1-&gt;h3 i2-&gt;h3 i3-&gt;h3 i4-&gt;h3 i5-&gt;h3 i6-&gt;h3 i7-&gt;h3 i8-&gt;h3 i9-&gt;h3 \r\n -1.50   3.37   3.68   3.90   3.02   2.87   4.01   3.29   3.28   1.58 \r\n b-&gt;h4 i1-&gt;h4 i2-&gt;h4 i3-&gt;h4 i4-&gt;h4 i5-&gt;h4 i6-&gt;h4 i7-&gt;h4 i8-&gt;h4 i9-&gt;h4 \r\n -0.42  -1.53  -2.51  -1.69  -1.37  -1.76  -3.17  -1.76  -1.90  -1.75 \r\n b-&gt;h5 i1-&gt;h5 i2-&gt;h5 i3-&gt;h5 i4-&gt;h5 i5-&gt;h5 i6-&gt;h5 i7-&gt;h5 i8-&gt;h5 i9-&gt;h5 \r\n -1.60   5.33   7.01   6.26   5.65   5.71   6.81   6.44   5.68   2.15 \r\n  
b-&gt;o  h1-&gt;o  h2-&gt;o  h3-&gt;o  h4-&gt;o  h5-&gt;o \r\n -5.80 -35.31  -5.04  41.26  24.80   2.44 \r\n&gt;\r\n<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Il peut \u00eatre int\u00e9ressant de visualiser la topologie d&rsquo;un ANN. Le blog suivant indique une m\u00e9thode simple pour y parvenir:<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[2,12,17],"tags":[],"class_list":["post-977","post","type-post","status-publish","format-standard","hentry","category-ann","category-r","category-visualisation"],"_links":{"self":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/977","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=977"}],"version-history":[{"count":1,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/977\/revisions"}],"predecessor-version":[{"id":1150,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/977\/revisions\/1150"}],"wp:attachment":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=977"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=977"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=977"}],"curies":[{"name":"wp","href":"https:\/\/api.w.or
g\/{rel}","templated":true}]}}
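<p style="text-align: justify;">Rather than reading the value off the summary, the same check can be scripted. A minimal sketch, assuming nn_model is the nnet object from the earlier post: coef() on an nnet model returns the flat weight vector with the same connection labels as summary() (b-&gt;h1, i1-&gt;h1, ..., h5-&gt;o), so the hidden-to-output weights can be filtered by name.</p>
<pre class="brush: js; ruler: true;">
&gt; library(nnet)
&gt;
&gt; # Named weight vector, labels as in summary(nn_model)
&gt; w &lt;- coef(nn_model)
&gt;
&gt; # Keep only the hidden -&gt; output connections (h1-&gt;o ... h5-&gt;o)
&gt; out_w &lt;- w[grep("^h[0-9]+-&gt;o$", names(w))]
&gt;
&gt; # Hidden neuron with the largest positive weight on the output
&gt; names(which.max(out_w))
[1] "h3-&gt;o"
</pre>
<p style="text-align: justify;">This returns h3-&gt;o (weight 41.26 in the summary above), matching what the link thickness already suggested on the plot.</p>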