{"id":782,"date":"2016-12-16T07:05:43","date_gmt":"2016-12-16T07:05:43","guid":{"rendered":"http:\/\/blog.tiran.info\/?p=782"},"modified":"2016-12-16T07:05:43","modified_gmt":"2016-12-16T07:05:43","slug":"fonctions-approximatives-oracle-12-2-1","status":"publish","type":"post","link":"https:\/\/blog.tiran.stream\/?p=782","title":{"rendered":"Oracle 12.2 approximate functions (#1)"},"content":{"rendered":"<p style=\"text-align: justify;\">After recently reading several articles about activity on Wikipedia (<a href=\"http:\/\/www.slate.fr\/story\/132275\/pages-wikipedia-2016-editees\" target=\"_blank\">most-edited pages<\/a>, <a href=\"http:\/\/www.lesoir.be\/1019940\/article\/soirmag\/actu-soirmag\/2015-10-18\/l-etrange-quotidien-du-plus-grand-contributeur-wikipedia\" target=\"_blank\">most prolific contributors<\/a>&#8230;), I wanted to get a more precise picture of the numbers. For that purpose, the site <a href=\"https:\/\/dumps.wikimedia.org\/\" target=\"_blank\">dumps.wikimedia.org<\/a> provides access to a wealth of raw tracking data.<\/p>\n<p style=\"text-align: justify;\">I also had in mind to take this opportunity to try out the new <a href=\"http:\/\/docs.oracle.com\/database\/122\/DWHSG\/data-warehouse-optimizations-techniques.htm#DWHSG-GUID-F7E7DEA6-B225-43E6-97ED-CB3DBE86CD54\" target=\"_blank\">approximate query processing features of Oracle 12.2<\/a>.<\/p>\n<p style=\"text-align: justify;\">To do so, I used the change log file of the English version of Wikipedia (enwiki-&lt;date&gt;-pages-logging.xml.gz, available in one of the subfolders of <a href=\"https:\/\/dumps.wikimedia.org\/enwiki\" target=\"_blank\">https:\/\/dumps.wikimedia.org\/enwiki<\/a>). 
Once decompressed, this yields a 30GB XML file:<\/p>\n<pre class=\"brush: sql; ruler: true;\">$ ls -lh enwiki-20161201-pages-logging.xml\n-rw-r--r-- 1 oracle oinstall 30G Dec 16 16:47 enwiki-20161201-pages-logging.xml\n$\n<\/pre>\n<p style=\"text-align: justify;\">Working with an XML file of this size quickly becomes complex, and the purpose of this post is to explain how to load it into the database so that the data can be accessed efficiently &#8211; which will be the topic of a future article.<\/p>\n<p style=\"text-align: justify;\">The first attempt was a direct load via the <a href=\"http:\/\/docs.oracle.com\/database\/121\/ARPLS\/t_xml.htm#ARPLS71992\" target=\"_blank\">xmltype\/bfile constructor<\/a>:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; CREATE OR REPLACE DIRECTORY d_load AS &#039;\/base\/oracle\/oraflash\/wikipedia&#039;;\n\nDirectory created.\n\nElapsed: 00:00:01.50\nSQL&gt;\nSQL&gt; CREATE TABLE T_WIKI\n  2  AS\n  3      SELECT XMLTYPE (\n  4                 BFILENAME (&#039;D_LOAD&#039;, &#039;enwiki-20161201-pages-logging.xml&#039;),\n  5                 NLS_CHARSET_ID (&#039;UTF8&#039;))\n  6                 xml_f\n  7        FROM DUAL;\n    SELECT XMLTYPE (\n           *\nERROR at line 3:\nORA-31011: XML parsing failed\nORA-19213: error occurred in XML processing at lines 30730673\nLPX-00217: invalid character 4103061439 (U+8FBFBF)\nORA-06512: at &quot;SYS.XMLTYPE&quot;, line 296\nORA-06512: at line 1\n\n\nElapsed: 00:17:05.84\nSQL&gt;\n<\/pre>\n<p style=\"text-align: justify;\">This brought to light the presence, in the file, of characters not supported by the XML parser. 
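<\/p>\n<p style=\"text-align: justify;\">To illustrate the kind of failure involved (a minimal sketch, not taken from the dump itself): any character that is illegal in XML 1.0, such as a control character, makes the XMLTYPE constructor fail the same way, even on a tiny document:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; SELECT XMLTYPE (&#039;&lt;logitem&gt;ok&lt;\/logitem&gt;&#039;) FROM DUAL;   -- parses fine\n\nSQL&gt; SELECT XMLTYPE (&#039;&lt;logitem&gt;&#039; || CHR (8) || &#039;&lt;\/logitem&gt;&#039;) FROM DUAL;\n-- raises ORA-31011: XML parsing failed \/ LPX-00217: invalid character\n<\/pre>\n<p style=\"text-align: justify;\">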
As a second step, I tried to load the data block by block, using the XML tag &lt;\/logitem&gt; as a delimiter.<\/p>\n<p style=\"text-align: justify;\">Indeed, the structure of the file is a succession of &lt;logitem&gt;&#8230;&lt;\/logitem&gt; blocks:<\/p>\n<pre class=\"brush: sql; ruler: true;\">$ tail -25 enwiki-20161201-pages-logging.xml\n  &lt;logitem&gt;\n    &lt;id&gt;79165294&lt;\/id&gt;\n    &lt;timestamp&gt;2016-12-04T05:17:33Z&lt;\/timestamp&gt;\n    &lt;contributor&gt;\n      &lt;username&gt;Rauk81&lt;\/username&gt;\n      &lt;id&gt;29812253&lt;\/id&gt;\n    &lt;\/contributor&gt;\n    &lt;type&gt;newusers&lt;\/type&gt;\n    &lt;action&gt;create&lt;\/action&gt;\n    &lt;logtitle&gt;User:Rauk81&lt;\/logtitle&gt;\n    &lt;params xml:space=&quot;preserve&quot;&gt;a:1:{s:9:&amp;quot;4::userid&amp;quot;;i:29812253;}&lt;\/params&gt;\n  &lt;\/logitem&gt;\n  &lt;logitem&gt;\n    &lt;id&gt;79165295&lt;\/id&gt;\n    &lt;timestamp&gt;2016-12-04T05:17:43Z&lt;\/timestamp&gt;\n    &lt;contributor&gt;\n      &lt;username&gt;ShadySjin&lt;\/username&gt;\n      &lt;id&gt;29812254&lt;\/id&gt;\n    &lt;\/contributor&gt;\n    &lt;type&gt;newusers&lt;\/type&gt;\n    &lt;action&gt;create&lt;\/action&gt;\n    &lt;logtitle&gt;User:ShadySjin&lt;\/logtitle&gt;\n    &lt;params xml:space=&quot;preserve&quot;&gt;a:1:{s:9:&amp;quot;4::userid&amp;quot;;i:29812254;}&lt;\/params&gt;\n  &lt;\/logitem&gt;\n&lt;\/mediawiki&gt;\n$\n<\/pre>\n<p style=\"text-align: justify;\">For this, I used an external table:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; CREATE OR REPLACE DIRECTORY d_load AS &#039;\/base\/oracle\/oraflash\/wikipedia&#039;;\n\nDirectory created.\n\nSQL&gt;\nSQL&gt; CREATE TABLE ext_wikilog\n  2  (\n  3      xmlblock   CLOB\n  4  )\n  5  ORGANIZATION EXTERNAL\n  6      (TYPE oracle_loader\n  7            DEFAULT DIRECTORY d_load\n  8                ACCESS PARAMETERS 
(\n  9                    RECORDS DELIMITED BY &#039;&lt;\/logitem&gt;&#039;\n 10                    CHARACTERSET al32utf8\n 11                    SKIP 1\n 12                    FIELDS\n 13                    (line CHAR(4000))\n 14                    COLUMN TRANSFORMS (\n 15                        xmlblock FROM CONCAT (line, CONSTANT &#039;&lt;\/logitem&gt;&#039;))\n 16                )\n 17            LOCATION (&#039;enwiki-20161201-pages-logging.xml&#039;));\n\nTable created.\n\nSQL&gt;\n<\/pre>\n<p style=\"text-align: justify;\">This external table thus performs the splitting of the file: each returned row corresponds to one &lt;logitem&gt;&#8230;&lt;\/logitem&gt; block. The first record (SKIP 1), which contains the XML header preceding the first block, is skipped, and each extract is exposed as a CLOB.<\/p>\n<p style=\"text-align: justify;\">Unfortunately, a direct conversion to <a href=\"https:\/\/docs.oracle.com\/database\/121\/ARPLS\/t_xml.htm#ARPLS369\" target=\"_blank\">XMLTYPE<\/a> is not possible, because that type is not supported in external tables (ORA-30656: column type not supported on external organized table).<\/p>\n<p style=\"text-align: justify;\">At this point, to convert the data to <a href=\"https:\/\/docs.oracle.com\/database\/121\/ARPLS\/t_xml.htm#ARPLS369\" target=\"_blank\">XMLTYPE<\/a>, my first idea was to use an &#8220;insert &#8230; as select &#8230;&#8221; statement combined with a <a href=\"https:\/\/docs.oracle.com\/database\/121\/ADMIN\/tables.htm#ADMIN-GUID-36DB026B-4702-477A-92C4-EA2795D2B37F\" target=\"_blank\">DML error logging clause<\/a>:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; CREATE TABLE wikilog\n  2  (\n  3      logitem   XMLTYPE\n  4  )\n  5  XMLTYPE logitem STORE AS SECUREFILE BINARY XML\n  6  NOLOGGING\n  7  PARALLEL 16;\n\nTable created.\n\nSQL&gt;\nSQL&gt; BEGIN\n  2      
DBMS_ERRLOG.create_error_log (dml_table_name =&gt; &#039;wikilog&#039;, skip_unsupported =&gt; true);\n  3  END;\n  4  \/\n\nPL\/SQL procedure successfully completed.\n\nSQL&gt;\nSQL&gt; alter session enable parallel DML;\n\nSession altered.\n\nSQL&gt; \nSQL&gt; set timing on;\nSQL&gt; INSERT INTO wikilog\n  2      SELECT \/*+ parallel(t 16) *\/\n  3            xmltype (xmlblock) logitem\n  4        FROM ext_wikilog t\n  5          LOG ERRORS INTO err$_wikilog;\nINSERT INTO wikilog\n*\nERROR at line 1:\nORA-12801: error signaled in parallel query server P009, instance\npsu888:IOSHR88D1_2 (2)\nORA-29913: error in executing ODCIEXTTABLEFETCH callout\nORA-31011: XML parsing failed\nORA-19213: error occurred in XML processing at lines 6\nLPX-00217: invalid character 4103061439 (U+8FBFBF)\nORA-06512: at &quot;SYS.XMLTYPE&quot;, line 272\nORA-06512: at line 1\n\nElapsed: 00:02:40.02\nSQL&gt; \n<\/pre>\n<p style=\"text-align: justify;\">Unfortunately, it turned out that the DML error logging feature does not handle this kind of exception. In fact, only a few &#8220;classic&#8221; exceptions are covered (constraint violations, column capacity problems, etc.).<\/p>\n<p style=\"text-align: justify;\">To tolerate exceptions without compromising the whole load, the only remaining option is a PL\/SQL approach with dedicated exception handling.<\/p>\n<p style=\"text-align: justify;\">Each XML fragment will thus be read, converted from CLOB to XMLTYPE, and the result parsed with <a href=\"https:\/\/docs.oracle.com\/database\/121\/ADXDB\/xdb04cre.htm#ADXDB4230\" target=\"_blank\">XQUERY expressions<\/a>. The results will be stored in a LOG_CONTRIBS table. 
If a problem occurs (notably the exception ORA-31011: XML parsing failed), the record will be logged in a REJETS table and processing will move on to the next row.<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; CREATE TABLE rejets\n  2  (\n  3      errcode     NUMBER,\n  4      xml_rejet   CLOB\n  5  )\n  6  TABLESPACE tbs01;\n\nTable created.\n\nSQL&gt;\nSQL&gt; CREATE TABLE log_contribs\n  2  (\n  3      id          NUMBER,\n  4      tmstamp     TIMESTAMP,\n  5      uname       VARCHAR2 (150),\n  6      logtype     VARCHAR2 (50),\n  7      logaction   VARCHAR2 (50)\n  8  )\n  9  TABLESPACE tbs01\n 10  NOLOGGING;\n\nTable created.\n\nSQL&gt;\n<\/pre>\n<p style=\"text-align: justify;\">The problem is that this approach is inherently very slow (row-by-row processing), while the volume to load is substantial (&gt;30GB).<\/p>\n<p style=\"text-align: justify;\">Two optimizations will be put in place:<\/p>\n<ul style=\"text-align: justify;\">\n<li><a href=\"https:\/\/docs.oracle.com\/database\/121\/LNPLS\/tuning.htm#LNPLS01205\" target=\"_blank\">array DML processing with BULK COLLECT\/FORALL<\/a>, to limit the row-by-row effect as much as possible<\/li>\n<li>&#8220;manual&#8221; parallelization via the <a href=\"https:\/\/docs.oracle.com\/database\/121\/ARPLS\/d_parallel_ex.htm#ARPLS233\" target=\"_blank\">DBMS_PARALLEL_EXECUTE<\/a> API<\/li>\n<\/ul>\n<p style=\"text-align: justify;\">Much like parallel query (PX), this package splits an operation into parallelizable units. The work is divided by calling one of the CREATE_CHUNKS_* procedures, which delimits the scope of each parallel session. 
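<\/p>\n<p style=\"text-align: justify;\">As a side note (a hedged sketch, not used in this load): besides ROWID ranges, the package can also chunk on a numeric column via CREATE_CHUNKS_BY_NUMBER_COL. This assumes the source table has a suitable numeric key, which is not the case here since the staging data consists of a single CLOB column:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; BEGIN\n  2      -- hypothetical example: chunk a table SOME_TABLE on its numeric column ID\n  3      DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col (\n  4          task_name      =&gt; &#039;SOME_TASK&#039;,\n  5          table_owner    =&gt; USER,\n  6          table_name     =&gt; &#039;SOME_TABLE&#039;,\n  7          table_column   =&gt; &#039;ID&#039;,\n  8          chunk_size     =&gt; 100000);\n  9  END;\n 10  \/\n<\/pre>\n<p style=\"text-align: justify;\">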
Here, CREATE_CHUNKS_BY_ROWID will be used, but this first requires copying the data from the external table into a regular table, because the rows of an external table have no ROWID:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; DROP TABLE wikilog PURGE;\n\nTable dropped.\n\nSQL&gt;\nSQL&gt; set timing on;\nSQL&gt;\nSQL&gt; CREATE TABLE wikilog\n  2  NOLOGGING\n  3  PARALLEL 16\n  4  AS\n  5      SELECT \/*+ parallel(t 16) *\/\n  6            xmlblock logitem\n  7        FROM ext_wikilog t;\n\nTable created.\n\nElapsed: 00:04:49.48\nSQL&gt;\nSQL&gt; set timing off;\nSQL&gt;\n<\/pre>\n<p style=\"text-align: justify;\">This table can then be used as the parallelization source: each parallel session will process a chunk delimited by a ROWID range (start\/end).<\/p>\n<p style=\"text-align: justify;\">At this point, we can create the PL\/SQL procedure that takes a ROWID range as argument and will be invoked by DBMS_PARALLEL_EXECUTE:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt;\nSQL&gt; CREATE OR REPLACE PROCEDURE PARSE_XML (p_start_rid ROWID, p_end_rid ROWID)\n  2  IS\n  3      CURSOR C_SRC\n  4      IS\n  5          SELECT logitem\n  6            FROM wikilog\n  7           WHERE ROWID BETWEEN p_start_rid AND p_end_rid;\n  8\n  9      TYPE logitem_array IS TABLE OF c_src%ROWTYPE;\n 10\n 11      l_xml2parse     logitem_array;\n 12\n 13      array_dml_err   EXCEPTION;\n 14      PRAGMA EXCEPTION_INIT (array_dml_err, -24381);\n 15\n 16      l_errcode       NUMBER;\n 17      l_erridx        NUMBER;\n 18  BEGIN\n 19      OPEN c_src;\n 20\n 21      LOOP\n 22          FETCH c_src BULK COLLECT INTO l_xml2parse LIMIT 1000;\n 23\n 24          BEGIN\n 25              FORALL i IN 1 .. 
l_xml2parse.COUNT SAVE EXCEPTIONS\n 26                  INSERT INTO log_contribs\n 27                      SELECT XMLCAST (\n 28                                 XMLQUERY (\n 29                                     &#039;\/logitem\/id&#039;\n 30                                     PASSING logitem_xml RETURNING CONTENT) AS NUMBER),\n 31                             TO_TIMESTAMP (\n 32                                 XMLCAST (\n 33                                     XMLQUERY (\n 34                                         &#039;\/logitem\/timestamp&#039;\n 35                                         PASSING logitem_xml RETURNING CONTENT) AS VARCHAR (30)),\n 36                                 &#039;YYYY-MM-DD&quot;T&quot;HH24:MI:SS&quot;Z&quot;&#039;),\n 37                             XMLCAST (\n 38                                 XMLQUERY (\n 39                                     &#039;\/logitem\/contributor\/username&#039;\n 40                                     PASSING logitem_xml RETURNING CONTENT) AS VARCHAR (150)),\n 41                             XMLCAST (\n 42                                 XMLQUERY (\n 43                                     &#039;\/logitem\/type&#039;\n 44                                     PASSING logitem_xml RETURNING CONTENT) AS VARCHAR (50)),\n 45                             XMLCAST (\n 46                                 XMLQUERY (\n 47                                     &#039;\/logitem\/action&#039;\n 48                                     PASSING logitem_xml RETURNING CONTENT) AS VARCHAR (50))\n 49                        FROM (SELECT xmltype (l_xml2parse (i).logitem)\n 50                                         logitem_xml\n 51                                FROM DUAL);\n 52          EXCEPTION\n 53              WHEN array_dml_err\n 54              THEN\n 55                  FOR i IN 1 .. 
SQL%BULK_EXCEPTIONS.COUNT\n 56                  LOOP\n 57                      l_errcode := SQL%BULK_EXCEPTIONS (i).ERROR_CODE;\n 58                      l_erridx := SQL%BULK_EXCEPTIONS (i).ERROR_INDEX;\n 59\n 60                      INSERT INTO rejets (errcode, xml_rejet)\n 61                           VALUES (l_errcode, l_xml2parse (l_erridx).logitem);\n 62                  END LOOP;\n 63          END;\n 64\n 65          EXIT WHEN c_src%NOTFOUND;\n 66      END LOOP;\n 67\n 68      CLOSE c_src;\n 69  END;\n 70  \/\n\nProcedure created.\n\nSQL&gt;\n<\/pre>\n<p style=\"text-align: justify;\">Here, the operation runs on an HP ProLiant BL460c machine (2s24c48t) with no activity other than my test. I can therefore use a substantial DOP, 48 for example. This makes it possible to estimate the number of blocks each parallel session will have to process:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; SELECT blocks \/ 48\n  2    FROM tabs\n  3   WHERE table_name = &#039;WIKILOG&#039;;\n\n BLOCKS\/48\n----------\n200571.833\n\nSQL&gt;\n<\/pre>\n<p style=\"text-align: justify;\">We can then create the parallelized task via the CREATE_TASK procedure of DBMS_PARALLEL_EXECUTE, then define the scope of each parallel session with the CREATE_CHUNKS_BY_ROWID call:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; BEGIN\n  2      DBMS_PARALLEL_EXECUTE.create_task (task_name =&gt; &#039;PARSE_XML_PARAL&#039;);\n  3  END;\n  4  \/\n\nPL\/SQL procedure successfully completed.\n\nSQL&gt; BEGIN\n  2      DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid (\n  3          task_name     =&gt; &#039;PARSE_XML_PARAL&#039;,\n  4          table_owner   =&gt; &#039;C##RAF&#039;,\n  5          table_name    =&gt; &#039;WIKILOG&#039;,\n  6          by_row        =&gt; FALSE,\n  7          chunk_size    =&gt; 201000);\n  8  END;\n  9  \/\n\nPL\/SQL procedure 
successfully completed.\n\nSQL&gt; \n<\/pre>\n<p style=\"text-align: justify;\">All that remains is to run the RUN_TASK procedure:<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; set timing on;\nSQL&gt;\nSQL&gt; DECLARE\n  2      l_sql_stmt   VARCHAR2 (32767);\n  3  BEGIN\n  4      l_sql_stmt := &#039;BEGIN parse_xml(:start_id, :end_id); END;&#039;;\n  5\n  6      DBMS_PARALLEL_EXECUTE.run_task (task_name        =&gt; &#039;PARSE_XML_PARAL&#039;,\n  7                                      sql_stmt         =&gt; l_sql_stmt,\n  8                                      language_flag    =&gt; DBMS_SQL.NATIVE,\n  9                                      parallel_level   =&gt; 48);\n 10  END;\n 11  \/\n\nPL\/SQL procedure successfully completed.\n\nElapsed: 00:46:47.22\nSQL&gt;\n<\/pre>\n<p style=\"text-align: justify;\">The load takes a little over 50 minutes (4 minutes to transfer EXT_WIKILOG to WIKILOG, then 46 minutes to parse the data into LOG_CONTRIBS). 
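<\/p>\n<p style=\"text-align: justify;\">While the task is running (or afterwards), progress can be checked chunk by chunk through the USER_PARALLEL_EXECUTE_CHUNKS view, and once everything is done the task metadata can be cleaned up with DROP_TASK (a possible follow-up, not part of the original run):<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt; SELECT status, COUNT (*)\n  2    FROM user_parallel_execute_chunks\n  3   WHERE task_name = &#039;PARSE_XML_PARAL&#039;\n  4  GROUP BY status;\n\nSQL&gt; EXEC DBMS_PARALLEL_EXECUTE.drop_task (task_name =&gt; &#039;PARSE_XML_PARAL&#039;)\n<\/pre>\n<p style=\"text-align: justify;\">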
In the end, there are 250 unparseable blocks out of 77 million records, and the LOG_CONTRIBS table reaches a size of roughly 4GB.<\/p>\n<pre class=\"brush: sql; ruler: true;\">SQL&gt;   SELECT errcode, COUNT (*)\n  2      FROM rejets\n  3  GROUP BY errcode;\n\n   ERRCODE   COUNT(*)\n---------- ----------\n     31011        250\n\nSQL&gt;\nSQL&gt; SELECT COUNT (*) FROM log_contribs;\n\n  COUNT(*)\n----------\n  77662802\n\nSQL&gt;\nSQL&gt; SELECT bytes \/ POWER (1024, 2)\n  2    FROM user_segments\n  3   WHERE segment_name = &#039;LOG_CONTRIBS&#039;;\n\nBYTES\/POWER(1024,2)\n-------------------\n               3904\n\nSQL&gt;\n<\/pre>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>After recently reading several articles about activity on Wikipedia (most-edited pages, most prolific contributors&#8230;), I<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[6,10],"tags":[],"class_list":["post-782","post","type-post","status-publish","format-standard","hentry","category-oracle","category-preparation-des-donnees"],"_links":{"self":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts\/782","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=782"}],"version-history":[{"count":0,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route
=\/wp\/v2\/posts\/782\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=782"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=782"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.tiran.stream\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=782"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}