{"id":1091,"date":"2017-11-01T14:43:14","date_gmt":"2017-11-01T14:43:14","guid":{"rendered":"http:\/\/blog.tiran.info\/?p=1091"},"modified":"2017-11-01T14:43:14","modified_gmt":"2017-11-01T14:43:14","slug":"annore-5","status":"publish","type":"post","link":"https:\/\/blog.tiran.stream\/?p=1091","title":{"rendered":"ANN\/ORE #5 &#8211; Manual forward-propagation"},"content":{"rendered":"<p style=\"text-align: justify;\">We saw in <a href=\"http:\/\/blog.tiran.info\/annore-3\">a previous post<\/a> that building the neural network with the neuralnet package was noticeably faster than with nnet. However, the resulting model does not seem reusable once saved to the ORE datastore, because it is far too large&#8230; (~24GB). \ud83d\ude41<\/p>\n<p style=\"text-align: justify;\">I do not know exactly what ore.save stores but, in my view, the most important elements of the model (besides its topology, which is fixed a priori) are the synaptic weights. 
In any case, that should not amount to such a mass of data&#8230;<\/p>\n<p style=\"text-align: justify;\">Indeed, with the topology used in the previous posts, namely:<\/p>\n<p style=\"text-align: justify; padding-left: 30px;\"><strong>Input Layer<\/strong> (784 input values)<\/p>\n<p style=\"text-align: justify; padding-left: 60px;\">\u2192 <strong>First Hidden Layer<\/strong> (300 neurons)<\/p>\n<p style=\"text-align: justify; padding-left: 90px;\">\u2192 <strong>Second Hidden Layer<\/strong> (100 neurons)<\/p>\n<p style=\"text-align: justify; padding-left: 120px;\">\u2192 <strong>Output Layer<\/strong> (10 neurons)<\/p>\n<p style=\"text-align: justify;\">this represents <strong>266610 synaptic weights<\/strong> [(784 + 1) * 300 + (300 + 1) * 100 + (100 + 1) * 10].<\/p>\n<p style=\"text-align: justify;\">My idea is therefore to retrieve these weights directly and store them in regular tables. 
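<\/p>\n<p style=\"text-align: justify;\">As a quick check, this count can be recomputed from the layer sizes, since each weight matrix has one row per input plus one bias row:<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; # layer sizes: input, two hidden layers, output\n&gt; sizes &lt;- c(784, 300, 100, 10)\n&gt; # each layer contributes (inputs + 1 bias) * neurons weights\n&gt; sum((head(sizes, -1) + 1) * tail(sizes, -1))\n[1] 266610\n<\/pre>\n<p style=\"text-align: justify;\">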
The implementation of <a href=\"https:\/\/fr.wikipedia.org\/wiki\/R%C3%A9seau_de_neurones_artificiels#Propagation_de_l.E2.80.99information\" target=\"_blank\" rel=\"noopener\">forward-propagation<\/a> will then be carried out &#8220;manually&#8221; using this data.<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; library(ORE)\nLoading required package: OREbase\nLoading required package: OREcommon\n\nAttaching package: \u2018OREbase\u2019\n\nThe following objects are masked from \u2018package:base\u2019:\n\n    cbind, data.frame, eval, interaction, order, paste, pmax, pmin, rbind, table\n\nLoading required package: OREembed\nLoading required package: OREstats\nLoading required package: MASS\nLoading required package: OREgraphics\nLoading required package: OREeda\nLoading required package: OREmodels\nLoading required package: OREdm\nLoading required package: lattice\nLoading required package: OREpredict\nLoading required package: ORExml\n&gt; \n&gt; ore.connect(user=&quot;c##rafa&quot;, password=&quot;Password1#&quot;, conn_string=&quot;\/\/clorai2-scan:1521\/pdb_hodba08&quot;)\n&gt; \n&gt; library(tictoc)\n&gt;\n&gt; tic()\n&gt; \n&gt; ore.doEval(function() {\n+   library(ORE)\n+   library(neuralnet)\n+   set.seed(3456)\n+   ore.sync(table = &quot;MNIST_TRAINING_SET&quot;)\n+   mnist_training &lt;- ore.pull(ore.get(&quot;MNIST_TRAINING_SET&quot;))\n+   \n+   # -- One-hot encoding of the IMG_LBL field\n+   mnist_training_ohe &lt;- as.data.frame(model.matrix(~.-1,mnist_training))\n+   \n+   # -- Build the formula, listing the fields explicitly\n+   f &lt;- as.formula(paste(paste(paste(&quot;IMG_LBL&quot;,seq(0,9),sep=&quot;&quot;),collapse=&quot;+&quot;), &quot; ~&quot;, paste(paste(&quot;P&quot;,seq(1,784),sep=&quot;&quot;),collapse=&quot;+&quot;)))\n+   \n+   # -- Build the model\n+   nn_neuralnet &lt;- neuralnet(f, data=mnist_training_ohe,\n+                             hidden=c(300, 100),\n+                             linear.output=FALSE)\n+ \n+   # -- Retrieve the first layer&#039;s weights, add a sort column and save them to the database\n+   L1 &lt;- as.data.frame(nn_neuralnet$weights[[1]][[1]])\n+   L1$&quot;R1$$&quot; &lt;- seq_len(nrow(L1))\n+   ore.drop(table = &quot;L1&quot;)\n+   ore.create(L1, table = &quot;L1&quot;)\n+   ore.exec(&quot;alter table L1 add constraint L1_PK primary key (\\&quot;R1$$\\&quot;)&quot;)\n+   \n+   # -- Retrieve the second layer&#039;s weights, add a sort column and save them to the database\n+   L2 &lt;- as.data.frame(nn_neuralnet$weights[[1]][[2]])\n+   L2$&quot;R1$$&quot; &lt;- seq_len(nrow(L2))\n+   ore.drop(table = &quot;L2&quot;)\n+   ore.create(L2, table = &quot;L2&quot;)\n+   ore.exec(&quot;alter table L2 add constraint L2_PK primary key (\\&quot;R1$$\\&quot;)&quot;)\n+   \n+   # -- Retrieve the output layer&#039;s weights, add a sort column and save them to the database\n+   L3 &lt;- as.data.frame(nn_neuralnet$weights[[1]][[3]])\n+   L3$&quot;R1$$&quot; &lt;- seq_len(nrow(L3))\n+   ore.drop(table = &quot;L3&quot;)\n+   ore.create(L3, table = &quot;L3&quot;)\n+   ore.exec(&quot;alter table L3 add constraint L3_PK primary key (\\&quot;R1$$\\&quot;)&quot;)\n+   \n+   }, ore.connect = TRUE)\nNULL\n&gt; \n&gt; toc()\n1973.29 sec elapsed\n&gt; \n\n<\/pre>\n<p style=\"text-align: justify;\">Here we built the nn_neuralnet model but, instead of saving it to the ORE datastore, we simply retrieve the values of the computed synaptic weights (nn_neuralnet$weights). These are then stored in regular tables (L1, L2 and L3) in the database.
A column (R1$$) is added so that the values can be ordered (and so that this order is preserved in the database, thanks to a primary key).<\/p>\n<p style=\"text-align: justify;\">We can then score the model. To do so, we load the mnist_test dataframe locally (in an interactive R session) from the MNIST_TEST_SET table. We also load the P1, P2 and P3 dataframes from L1, L2 and L3 respectively.<\/p>\n<p>We also define a sigmoid function in charge of computing the <a href=\"https:\/\/fr.wikipedia.org\/wiki\/Fonction_logistique\" target=\"_blank\" rel=\"noopener\">logistic function<\/a>:<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; ore.sync(table = &quot;MNIST_TEST_SET&quot;)\n&gt; mnist_test &lt;- ore.pull(ore.get(&quot;MNIST_TEST_SET&quot;))\n&gt; \n&gt; ore.sync(table = &quot;L1&quot;)\n&gt; P1 &lt;- ore.pull(ore.get(&quot;L1&quot;))\n&gt; \n&gt; ore.sync(table = &quot;L2&quot;)\n&gt; P2 &lt;- ore.pull(ore.get(&quot;L2&quot;))\n&gt; \n&gt; ore.sync(table = &quot;L3&quot;)\n&gt; P3 &lt;- ore.pull(ore.get(&quot;L3&quot;))\n&gt; \n&gt; \n&gt; sigmoid = function(x) {\n+   1 \/ (1 + exp(-x))\n+ }\n&gt; \n<\/pre>\n<p style=\"text-align: justify;\">The propagation itself is then fairly easy to implement. R&rsquo;s <a href=\"https:\/\/stat.ethz.ch\/R-manual\/R-devel\/library\/base\/html\/matmult.html\" target=\"_blank\" rel=\"noopener\">matrix product<\/a> operator can be used to multiply each layer&rsquo;s values by the associated synaptic weights. 
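<\/p>\n<p style=\"text-align: justify;\">To make one propagation step concrete, here is a minimal sketch on toy values (arbitrary dimensions, not the MNIST ones): prepend a bias column of 1s, multiply by the weight matrix, then apply the logistic function element-wise:<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; sigmoid &lt;- function(x) 1 \/ (1 + exp(-x))   # same as defined above\n&gt; X &lt;- matrix(c(0.1, 0.2, 0.3, 0.4, 0.5, 0.6), nrow=2, byrow=TRUE)  # 2 rows x 3 inputs\n&gt; W &lt;- matrix(0.5, nrow=4, ncol=2)            # (3 inputs + 1 bias) x 2 neurons\n&gt; H &lt;- sigmoid(cbind(rep(1, 2), X) %*% W)     # the layer&#039;s activations\n&gt; dim(H)\n[1] 2 2\n<\/pre>\n<p style=\"text-align: justify;\">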
The bias terms, for their part, are added by prepending a column of 1s with <a href=\"http:\/\/stat.ethz.ch\/R-manual\/R-devel\/library\/base\/html\/cbind.html\" target=\"_blank\" rel=\"noopener\">cbind<\/a>.<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; \n&gt; mnist_test_biais &lt;- cbind(rep(1,10000), mnist_test[,c(-1,-2)])\n&gt; \n&gt; HL1 &lt;- as.matrix(mnist_test_biais) %*% as.matrix(P1[,-ncol(P1)])\n&gt; HL1_sigmoid &lt;- sigmoid(HL1)\n&gt; HL1_sigmoid &lt;- cbind(rep(1,10000), HL1_sigmoid)\n&gt; \n&gt; HL2 &lt;- HL1_sigmoid %*% as.matrix(P2[,-ncol(P2)])\n&gt; HL2_sigmoid &lt;- sigmoid(HL2)\n&gt; HL2_sigmoid &lt;- cbind(rep(1,10000), HL2_sigmoid)\n&gt; \n&gt; OL &lt;- HL2_sigmoid %*% as.matrix(P3[,-ncol(P3)])\n&gt; OL_sigmoid &lt;- sigmoid(OL)\n&gt; \n&gt; colnames(OL_sigmoid) &lt;- c(&quot;0&quot;,&quot;1&quot;,&quot;2&quot;,&quot;3&quot;,&quot;4&quot;,&quot;5&quot;,&quot;6&quot;,&quot;7&quot;,&quot;8&quot;,&quot;9&quot;)\n&gt; \n<\/pre>\n<p style=\"text-align: justify;\">The OL_sigmoid matrix holds the output layer; its dimensions are 10000 x 10. 
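<\/p>\n<p style=\"text-align: justify;\">The shapes can also be sanity-checked end to end on random stand-in weights (hypothetical values, but the same topology), since every matrix product has to conform:<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; sigmoid &lt;- function(x) 1 \/ (1 + exp(-x))\n&gt; n &lt;- 5                                   # a few toy test rows\n&gt; X  &lt;- matrix(rnorm(n * 784), n, 784)     # stand-in pixel inputs\n&gt; W1 &lt;- matrix(rnorm(785 * 300), 785, 300) # (784 + bias) x 300\n&gt; W2 &lt;- matrix(rnorm(301 * 100), 301, 100) # (300 + bias) x 100\n&gt; W3 &lt;- matrix(rnorm(101 * 10), 101, 10)   # (100 + bias) x 10\n&gt; H1 &lt;- sigmoid(cbind(rep(1, n), X) %*% W1)\n&gt; H2 &lt;- sigmoid(cbind(rep(1, n), H1) %*% W2)\n&gt; OL &lt;- sigmoid(cbind(rep(1, n), H2) %*% W3)\n&gt; identical(dim(OL), c(5L, 10L))\n[1] TRUE\n<\/pre>\n<p style=\"text-align: justify;\">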
For each row, the prediction corresponds to the column whose activation value is the highest:<\/p>\n<p><span style=\"font-family: terminal, monaco, monospace; font-size: 8pt;\">&gt; head(OL_sigmoid, n=2)<br \/>\n<span style=\"color: #ffffff;\">aaaaa<\/span>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a00\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 2\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 3\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 4\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 5\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 6\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 7\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 8\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 9<br \/>\n[1,] 5.034350e-21 2.567005e-20 5.541205e-21 1.909965e-23 4.049345e-09 2.895403e-22 4.900381e-17 <span style=\"color: #ff0000;\"><strong>1.000000e+00<\/strong><\/span> 1.094959e-27 8.415124e-12<br \/>\n[2,] 2.304927e-16 1.343347e-14 <strong><span style=\"color: #ff0000;\">1.000000e+00<\/span><\/strong> 1.133973e-17 5.348067e-18 1.887870e-14 7.471780e-11 1.250825e-17 3.126613e-17 5.096724e-24<br \/>\n&gt;\u00a0<\/span><\/p>\n<p style=\"text-align: justify;\">To build a contingency table, we populate the &#8220;pred&#8221; vector with the name of the column holding the largest value in each row. 
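<\/p>\n<p style=\"text-align: justify;\">Once pred is filled, the overall accuracy is simply the fraction of matching labels; a minimal sketch on hypothetical vectors (with the real vectors, which cover all ten digits, sum(diag(table(pred, lbl))) gives the same count of correct predictions):<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; pred_toy &lt;- c(&quot;7&quot;, &quot;2&quot;, &quot;1&quot;, &quot;0&quot;, &quot;4&quot;)   # hypothetical predictions\n&gt; lbl_toy  &lt;- c(&quot;7&quot;, &quot;2&quot;, &quot;1&quot;, &quot;0&quot;, &quot;9&quot;)   # hypothetical true labels\n&gt; mean(pred_toy == lbl_toy)\n[1] 0.8\n<\/pre>\n<p style=\"text-align: justify;\">On the contingency table below, the diagonal sums to 9566 correct predictions out of 10000 test images, i.e. 95.66% accuracy.<\/p>\n<p style=\"text-align: justify;\">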
This value can then be compared with the known label (mnist_test$IMG_LBL):<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; pred &lt;- colnames(OL_sigmoid)[max.col(OL_sigmoid,ties.method=&quot;first&quot;)]\n&gt; lbl &lt;- mnist_test$IMG_LBL\n&gt;\n&gt; table(pred, lbl)\n    lbl\npred    0    1    2    3    4    5    6    7    8    9\n   0  966    0    8    2    1    3    8    3    9    9\n   1    0 1121    2    3    2    1    3    3    1    4\n   2    3    2  977    8    6    3    3   12    9    0\n   3    1    2   11  952    2   13    1    6   17   11\n   4    1    1    4    1  934    4    5    4    5   15\n   5    3    0    1    7    1  847   12    1   12    4\n   6    2    3    5    1    7    7  924    0    5    0\n   7    2    2   10   11    5    2    0  987    4   12\n   8    1    4   13   18    2    8    2    2  909    5\n   9    1    0    1    7   22    4    0   10    3  949\n&gt;\n<\/pre>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We saw in a previous post that building the neural network with the neuralnet package was noticeably<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[3,9,12],"tags":[],"class_list":["post-1091","post","type-post","status-publish","format-standard","hentry","category-classification","category-oracle-r-enterprise","category-r"],"_links":{"self":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/1091","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1091"}],"version-history":[{"count":0,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/1091\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1091"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1091"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1091"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}