{"id":966,"date":"2017-06-12T07:20:44","date_gmt":"2017-06-12T07:20:44","guid":{"rendered":"http:\/\/blog.tiran.info\/?p=966"},"modified":"2017-06-12T07:20:44","modified_gmt":"2017-06-12T07:20:44","slug":"reseaux-de-neurones-avec-r-2","status":"publish","type":"post","link":"https:\/\/blog.tiran.stream\/?p=966","title":{"rendered":"R\u00e9seaux de neurones avec R (#2)"},"content":{"rendered":"<p style=\"text-align: justify;\">Dans le <a href=\"http:\/\/blog.tiran.info\/reseaux-de-neurones-avec-r-1\" target=\"_blank\" rel=\"noopener\">post pr\u00e9c\u00e9dent<\/a>, la classification d&rsquo;un dataset relativement simple a \u00e9t\u00e9 r\u00e9alis\u00e9e de mani\u00e8re tr\u00e8s efficace avec un r\u00e9seau de neurones mais \u00e9galement avec une technique beaucoup plus conventionnelle: une r\u00e9gression logistique.<\/p>\n<p style=\"text-align: justify;\">Ici, on va voir que les deux approches se d\u00e9tachent lorsque la <a href=\"https:\/\/fr.wiktionary.org\/wiki\/dimensionnalit%C3%A9\" target=\"_blank\" rel=\"noopener\">dimensionnalit\u00e9<\/a> augmente. 
Indeed, ANNs are much less sensitive to the phenomenon known as the <a href=\"https:\/\/fr.wikipedia.org\/wiki\/Fl%C3%A9au_de_la_dimension\" target=\"_blank\" rel=\"noopener\">curse of dimensionality<\/a>.<\/p>\n<p style=\"text-align: justify;\">This time we will use another dataset from the UCI &quot;<a href=\"https:\/\/archive.ics.uci.edu\/ml\/datasets\/Breast+Cancer+Wisconsin+%28Diagnostic%29\" target=\"_blank\" rel=\"noopener\">Breast Cancer Wisconsin (Diagnostic)<\/a>&quot; study: <a href=\"https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/breast-cancer-wisconsin\/wdbc.data\" target=\"_blank\" rel=\"noopener\">wdbc.data<\/a>, whose description can be found here: <a href=\"https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/breast-cancer-wisconsin\/wdbc.names\" target=\"_blank\" rel=\"noopener\">wdbc.names<\/a>.<\/p>\n<p style=\"text-align: justify;\">The study contains 569 cases, 37% of which correspond to a malignant tumor. 
This time there are <strong>30 predictors<\/strong> (versus 9 in the previous post).<\/p>\n<p style=\"text-align: justify;\">We load the dataset into a dataframe and then make a few adjustments:<\/p>\n<ul style=\"text-align: justify;\">\n<li>Removal of the first column (V1), which corresponds to a patient ID.<\/li>\n<li>Binarization (0\/1) of column V2, which corresponds to the diagnosis (originally coded B\/M).<\/li>\n<li>Centering and scaling, with the &quot;scale&quot; function, of all values in the dataframe (except those of column V2, which is binarized).<\/li>\n<\/ul>\n<pre class=\"brush: js; ruler: true;\"> \n&gt; breastcancer &lt;- read.csv(url(&quot;https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/breast-cancer-wisconsin\/wdbc.data&quot;), header=FALSE)\n&gt; \n&gt; breastcancer &lt;- breastcancer[-1]\n&gt; breastcancer$V2 &lt;- as.character(levels(breastcancer$V2))[breastcancer$V2]\n&gt; breastcancer$V2 &lt;- as.numeric(ifelse(breastcancer$V2==&quot;B&quot;,0,1))\n&gt; breastcancer[,-1] &lt;- scale(breastcancer[,-1])[,]\n&gt; summary(breastcancer)\n       V2               V3                V4                V5                V6                V7                 V8                V9         \n Min.   :0.0000   Min.   :-2.0279   Min.   :-2.2273   Min.   :-1.9828   Min.   :-1.4532   Min.   :-3.10935   Min.   :-1.6087   Min.   
:-1.1139  \n 1st Qu.:0.0000   1st Qu.:-0.6888   1st Qu.:-0.7253   1st Qu.:-0.6913   1st Qu.:-0.6666   1st Qu.:-0.71034   1st Qu.:-0.7464   1st Qu.:-0.7431  \n Median :0.0000   Median :-0.2149   Median :-0.1045   Median :-0.2358   Median :-0.2949   Median :-0.03486   Median :-0.2217   Median :-0.3419  \n Mean   :0.3726   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  \n 3rd Qu.:1.0000   3rd Qu.: 0.4690   3rd Qu.: 0.5837   3rd Qu.: 0.4992   3rd Qu.: 0.3632   3rd Qu.: 0.63564   3rd Qu.: 0.4934   3rd Qu.: 0.5256  \n Max.   :1.0000   Max.   : 3.9678   Max.   : 4.6478   Max.   : 3.9726   Max.   : 5.2459   Max.   : 4.76672   Max.   : 4.5644   Max.   : 4.2399  \n      V10               V11                V12               V13               V14               V15               V16               V17         \n Min.   :-1.2607   Min.   :-2.74171   Min.   :-1.8183   Min.   :-1.0590   Min.   :-1.5529   Min.   :-1.0431   Min.   :-0.7372   Min.   :-1.7745  \n 1st Qu.:-0.7373   1st Qu.:-0.70262   1st Qu.:-0.7220   1st Qu.:-0.6230   1st Qu.:-0.6942   1st Qu.:-0.6232   1st Qu.:-0.4943   1st Qu.:-0.6235  \n Median :-0.3974   Median :-0.07156   Median :-0.1781   Median :-0.2920   Median :-0.1973   Median :-0.2864   Median :-0.3475   Median :-0.2201  \n Mean   : 0.0000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  \n 3rd Qu.: 0.6464   3rd Qu.: 0.53031   3rd Qu.: 0.4706   3rd Qu.: 0.2659   3rd Qu.: 0.4661   3rd Qu.: 0.2428   3rd Qu.: 0.1067   3rd Qu.: 0.3680  \n Max.   : 3.9245   Max.   : 4.48081   Max.   : 4.9066   Max.   : 8.8991   Max.   : 6.6494   Max.   : 9.4537   Max.   :11.0321   Max.   : 8.0229  \n      V18               V19               V20               V21               V22               V23               V24                V25         \n Min.   :-1.2970   Min.   :-1.0566   Min.   :-1.9118   Min.   :-1.5315   Min.   
:-1.0960   Min.   :-1.7254   Min.   :-2.22204   Min.   :-1.6919  \n 1st Qu.:-0.6923   1st Qu.:-0.5567   1st Qu.:-0.6739   1st Qu.:-0.6511   1st Qu.:-0.5846   1st Qu.:-0.6743   1st Qu.:-0.74797   1st Qu.:-0.6890  \n Median :-0.2808   Median :-0.1989   Median :-0.1404   Median :-0.2192   Median :-0.2297   Median :-0.2688   Median :-0.04348   Median :-0.2857  \n Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.00000   Mean   : 0.0000  \n 3rd Qu.: 0.3893   3rd Qu.: 0.3365   3rd Qu.: 0.4722   3rd Qu.: 0.3554   3rd Qu.: 0.2884   3rd Qu.: 0.5216   3rd Qu.: 0.65776   3rd Qu.: 0.5398  \n Max.   : 6.1381   Max.   :12.0621   Max.   : 6.6438   Max.   : 7.0657   Max.   : 9.8429   Max.   : 4.0906   Max.   : 3.88249   Max.   : 4.2836  \n      V26               V27               V28               V29               V30               V31               V32         \n Min.   :-1.2213   Min.   :-2.6803   Min.   :-1.4426   Min.   :-1.3047   Min.   :-1.7435   Min.   :-2.1591   Min.   :-1.6004  \n 1st Qu.:-0.6416   1st Qu.:-0.6906   1st Qu.:-0.6805   1st Qu.:-0.7558   1st Qu.:-0.7557   1st Qu.:-0.6413   1st Qu.:-0.6913  \n Median :-0.3409   Median :-0.0468   Median :-0.2693   Median :-0.2180   Median :-0.2233   Median :-0.1273   Median :-0.2163  \n Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  \n 3rd Qu.: 0.3573   3rd Qu.: 0.5970   3rd Qu.: 0.5392   3rd Qu.: 0.5307   3rd Qu.: 0.7119   3rd Qu.: 0.4497   3rd Qu.: 0.4504  \n Max.   : 5.9250   Max.   : 3.9519   Max.   : 5.1084   Max.   : 4.6965   Max.   : 2.6835   Max.   : 6.0407   Max.   : 6.8408  \n&gt; \n<\/pre>\n<p style=\"text-align: justify;\">We then split the breastcancer dataset into a training sample (70%) and a test sample (30%). 
The call to set.seed makes the results reproducible.<\/p>\n<pre class=\"brush: js; ruler: true;\"> \n&gt; set.seed(1234)\n&gt; sub &lt;- sample(nrow(breastcancer), floor(nrow(breastcancer) * 0.70))\n&gt; breastcancer_train &lt;- breastcancer[sub,]\n&gt; breastcancer_test &lt;- breastcancer[-sub,]\n&gt; \n<\/pre>\n<p style=\"text-align: justify;\">We try to build a logistic regression model:<\/p>\n<pre class=\"brush: js; highlight: 4; ruler: true;\"> \n&gt; breastcancer_train$V2 &lt;- as.factor(breastcancer_train$V2)\n&gt; log_model &lt;- glm(V2 ~ .,family=binomial,data=breastcancer_train)\nWarning messages:\n1: glm.fit: algorithm did not converge \n2: glm.fit: fitted probabilities numerically 0 or 1 occurred \n&gt; \n<\/pre>\n<p style=\"text-align: justify;\">The algorithm does not converge.<\/p>\n<p style=\"text-align: justify;\">We can also try a backward stepwise regression, allowing a very large number of steps (10000) in order to maximize the probability of finding a relevant subset of predictors:<\/p>\n<pre class=\"brush: js; highlight: [16, 18, 20]; ruler: true;\"> \n&gt; step(log_model, direction=&quot;backward&quot;, trace=0, steps=10000)\n\nCall:  glm(formula = V2 ~ V3 + V8 + V9 + V12 + V14 + V16 + V17 + V22 + \n    V23 + V24 + V30, family = binomial, data = breastcancer_train)\n\nCoefficients:\n(Intercept)           V3           V8           V9          V12          V14          V16          V17          V22          V23          V24          V30  \n      197.7       -841.6       -532.0        319.8        543.8       -125.8       1135.4        145.3       -481.3       1860.9        367.8        700.4  \n\nDegrees of Freedom: 397 Total (i.e. 
Null);  386 Residual\nNull Deviance:\t    531.2 \nResidual Deviance: 5.285e-06 \tAIC: 24\nThere were 50 or more warnings (use warnings() to see the first 50)\n&gt; tail(warnings())\nWarning messages:\n1: glm.fit: algorithm did not converge\n2: glm.fit: fitted probabilities numerically 0 or 1 occurred\n3: glm.fit: algorithm did not converge\n4: glm.fit: fitted probabilities numerically 0 or 1 occurred\n5: glm.fit: algorithm did not converge\n6: glm.fit: fitted probabilities numerically 0 or 1 occurred\n&gt;\n<\/pre>\n<p style=\"text-align: justify;\">Same thing here: the algorithm still fails to converge&#8230;<\/p>\n<p style=\"text-align: justify;\">Let us now try to build an ANN with the nnet function. As in the previous post, I arbitrarily use 15 neurons in the hidden layer:<\/p>\n<pre class=\"brush: js; ruler: true;\"> \n&gt; library(nnet)\n&gt; breastcancer_train$V2 &lt;- as.numeric(levels(breastcancer_train$V2))[breastcancer_train$V2]\n&gt; nn &lt;- nnet(V2 ~ ., data=breastcancer_train, size=15)\n# weights:  481\ninitial  value 50.360967 \niter  10 value 23.745479\niter  20 value 18.998525\niter  30 value 16.996355\niter  40 value 15.005363\niter  50 value 15.000040\niter  60 value 14.998821\niter  70 value 14.003261\niter  80 value 14.000612\niter  90 value 14.000243\niter 100 value 14.000087\nfinal  value 14.000087 \nstopped after 100 iterations\n&gt; pred &lt;- predict(nn, newdata=breastcancer_test)\n&gt; pred.bin &lt;- ifelse(pred&gt;0.5,1,0)\n&gt; table(pred.bin, breastcancer_test$V2)\n        \npred.bin  0  1\n       0 89  1\n       1 12 69\n&gt;\n<\/pre>\n<p style=\"text-align: justify;\">Unlike logistic regression, the ANN model achieves excellent classification performance (92%) on the test sample.<\/p>\n<p style=\"text-align: justify;\">In the next post, we will look at a much larger-scale example&#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the previous post, a relatively simple dataset was classified very effectively with a neural<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[2,3,12,13],"tags":[],"class_list":["post-966","post","type-post","status-publish","format-standard","hentry","category-ann","category-classification","category-r","category-regression"],"_links":{"self":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/966","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=966"}],"version-history":[{"count":0,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/966\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=966"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=966"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=966"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}