{"id":655,"date":"2016-07-31T13:49:49","date_gmt":"2016-07-31T12:49:49","guid":{"rendered":"http:\/\/blog.tiran.info\/?p=655"},"modified":"2017-12-22T14:44:15","modified_gmt":"2017-12-22T13:44:15","slug":"classification-bayesienne-naive-avec-r","status":"publish","type":"post","link":"https:\/\/blog.tiran.stream\/?p=655","title":{"rendered":"Classification Bay\u00e9sienne Na\u00efve avec R"},"content":{"rendered":"<p style=\"text-align: justify;\">Pour faire suite au <a href=\"http:\/\/blog.tiran.info\/classification-bayesienne-naive-avec-oracle\" target=\"_blank\">billet pr\u00e9c\u00e9dent<\/a>, une analyse similaire est r\u00e9alis\u00e9e cette fois-ci \u00e0 l&rsquo;aide de R.<\/p>\n<p style=\"text-align: justify;\">La mise en forme textuelle est r\u00e9alis\u00e9e \u00e0 l&rsquo;aide des fonctions de la librairie tm (text mining) et l&rsquo;analyse bay\u00e9sienne \u00e0 l&rsquo;aide de l&rsquo;impl\u00e9mentation disponible dans la librairie e1071.<\/p>\n<h3>Chargement des donn\u00e9es<\/h3>\n<p style=\"text-align: justify;\">A partir des donn\u00e9es stock\u00e9es en base, on r\u00e9cup\u00e8re sous forme de data-frames le dataset d&rsquo;apprentissage (train_set), celui de test (test_set) ainsi que la liste des mots vides.<\/p>\n<pre class=\"brush: js; ruler: true;\">&gt; library(ROracle)\nLe chargement a n\u00e9cessit\u00e9 le package : DBI\nWarning message:\nle package \u2018DBI\u2019 a \u00e9t\u00e9 compil\u00e9 avec la version R 3.2.1 \n&gt; ora = Oracle()\n&gt; cnx = dbConnect(ora, username=&quot;rafa&quot;, password=&quot;rafa&quot;, dbname=&quot;S1401037:1521\/STATPDB&quot;)\n&gt; train_set &lt;- dbGetQuery(cnx, &quot;select * from TRAIN_SET_BOOKS&quot;)\n&gt; test_set &lt;- dbGetQuery(cnx, &quot;select * from TEST_SET_BOOKS&quot;)\n&gt; mots_vides &lt;- dbGetQuery(cnx, &quot;select * from MOTS_VIDES&quot;)\n&gt; dbDisconnect(cnx)\n[1] TRUE\n&gt; \n<\/pre>\n<h3>Mise en forme des donn\u00e9es<\/h3>\n<pre class=\"brush: js; ruler: true;\">&gt; train_set$CATEGORIE &lt;- as.factor(train_set$CATEGORIE)\n&gt; test_set$CATEGORIE &lt;- as.factor(test_set$CATEGORIE)\n&gt; summary(train_set)\n                    CATEGORIE      EXTRAIT         \n litterature-sentimentale:5419   Length:16104      \n philo-socio             :4578   Class :character  \n polar                   :6107   Mode  :character  \nWarning message:\nIn summary(train_set) : bytecode version mismatch; using eval\n&gt;\n<\/pre>\n<p style=\"text-align: justify;\">On cr\u00e9e ensuite une fonctions CreeCorpus permettant de mettre en forme le texte sous la forme d&rsquo;une repr\u00e9sentation appel\u00e9e <a href=\"https:\/\/en.wikipedia.org\/wiki\/Text_corpus\" target=\"_blank\">Corpus<\/a>.<\/p>\n<p style=\"text-align: justify;\">Les appels \u00e0 tm_map permettent d&rsquo;appliquer des transformations (minuscule, suppression de la ponctuation, des nombres, des mots vides et des espaces) aux tokens. 
<p>We then create a function CreeCorpus that shapes the text into a representation called a <a href="https://en.wikipedia.org/wiki/Text_corpus" target="_blank">Corpus</a>.</p>

<p>The calls to tm_map apply transformations to the tokens: lower-casing, removal of punctuation, numbers, stop words and extra whitespace. Note that we prefer to write our own replacePunctuation function rather than use tm_map with removePunctuation, because the latter does not replace punctuation marks with a space (celui-ci -> celuici).</p>

<pre>
> library(tm)
Le chargement a nécessité le package : NLP
>
> replacePunctuation <- function(x) {
+ gsub("[[:punct:]]+", " ", x)
+ }
> 
> CreeCorpus <- function(x) {
+ Crp <- Corpus(VectorSource(x))
+ Crp <- tm_map(Crp, content_transformer(tolower))
+ Crp <- tm_map(Crp, content_transformer(replacePunctuation))
+ Crp <- tm_map(Crp, removeNumbers)
+ Crp <- tm_map(Crp, removeWords, mots_vides$MOT)
+ Crp <- tm_map(Crp, stripWhitespace)
+ return(Crp)
+ }
> 
</pre>

<p>Both the training sample and the test sample are converted into word Corpora:</p>

<pre>
> TrainCorpus <- CreeCorpus(train_set$EXTRAIT)
> TestCorpus <- CreeCorpus(test_set$EXTRAIT)
> 
</pre>

<p>The Corpora are then turned into <a href="https://en.wikipedia.org/wiki/Document-term_matrix" target="_blank">Document-Term matrices</a> (dropping words shorter than 3 letters along the way):</p>

<pre>
> TrainDTM <- DocumentTermMatrix(TrainCorpus, control=list(minWordLength=3))
> TestDTM <- DocumentTermMatrix(TestCorpus, control=list(minWordLength=3))
>
> TrainDTM
<<DocumentTermMatrix (documents: 16104, terms: 73329)>>
Non-/sparse entries: 1091034/1179799182
Sparsity           : 100%
Maximal term length: 29
Weighting          : term frequency (tf)
> 
</pre>

<p>The training matrix contains 16,104 documents and 73,329 distinct terms. Almost all of this matrix is empty (the reported sparsity rounds to 100%), since any given word appears in only a small fraction of the documents.</p>
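<p>Before reducing the matrix, it can be useful to check which terms dominate the corpus. A minimal sketch, not part of the original session, that sums the term frequencies directly on the sparse matrix; it assumes the slam package (a dependency of tm) is installed:</p>

<pre>
# Total frequency of each term across the training corpus,
# computed on the sparse matrix without densifying it
library(slam)
term_counts <- col_sums(TrainDTM)
head(sort(term_counts, decreasing = TRUE), 20)   # the 20 most frequent terms
</pre>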
<p>We will therefore try to reduce this matrix by focusing on the most frequent words. To do this we use the findFreqTerms function; here we keep the terms that occur at least 100 times in the training corpus. There are 2,170 of them, which we store in the vector mots_freq:</p>

<pre>
> length(findFreqTerms(TrainDTM, 100))
[1] 2170
> mots_freq <- findFreqTerms(TrainDTM, 100)
> 
</pre>

<p>We then restrict the Document-Term matrices to this list of frequent words:</p>

<pre>
> TrainDTMFreq <- TrainDTM[,mots_freq]
> TestDTMFreq <- TestDTM[,mots_freq]
> 
</pre>

<p>Finally, to run the Bayesian analysis, the data must be "binarized": instead of keeping the number of occurrences of a word in a document, we only keep whether the word is present or not.</p>

<p>The result is stored as matrices of "TRUE"/"FALSE" values:</p>

<pre>
> TrainDTMFreq.bin <- TrainDTMFreq > 0
> TrainDTMFreq.bin <- apply(TrainDTMFreq.bin, MARGIN = 2, as.factor)
> 
> TestDTMFreq.bin <- TestDTMFreq > 0
> TestDTMFreq.bin <- apply(TestDTMFreq.bin, MARGIN = 2, as.factor)
> 
> TrainDTMFreq.bin[c(222,900,281),c(10,200,800)]
     Terms
      accepte beauté  force  
  222 "TRUE"  "FALSE" "FALSE"
  900 "FALSE" "FALSE" "TRUE" 
  281 "FALSE" "TRUE"  "FALSE"
> 
</pre>

<h3>Naive Bayes classification</h3>

<p>We build the classifier with the naiveBayes function from e1071:</p>

<pre>
> library(e1071)
> nb_classifier <- naiveBayes(TrainDTMFreq.bin,train_set$CATEGORIE, laplace=1)
> 
</pre>

<p>Scoring is then performed on the test sample:</p>

<pre>
> nb_predict <- predict(nb_classifier, TestDTMFreq.bin)
>
</pre>

<p>The classifier performs very well, with a prediction accuracy of about <strong>92%</strong>:</p>

<pre>
> table(nb_predict,test_set$CATEGORIE)
                          
nb_predict                 litterature-sentimentale philo-socio polar
  litterature-sentimentale                     1599           3   143
  philo-socio                                     4        1513    12
  polar                                         229          47  1878
> 
> 100*sum(diag(table(nb_predict,test_set$CATEGORIE)))/dim(test_set)[1]
[1] 91.93073
> 
</pre>
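<p>As a possible follow-up, not shown in the original post, the same confusion matrix can be broken down into per-class precision and recall to see where the remaining ~8% of errors concentrate; a minimal sketch:</p>

<pre>
# Per-class precision and recall derived from the confusion matrix
# (rows = predicted class, columns = true class, as in the table above)
conf <- table(nb_predict, test_set$CATEGORIE)
precision <- diag(conf) / rowSums(conf)   # among documents predicted in a class, share truly in it
recall    <- diag(conf) / colSums(conf)   # among documents truly in a class, share correctly predicted
round(cbind(precision, recall), 3)
</pre>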