
The Particularities of Celiac Disease in Children

Celiac disease, or gluten intolerance, is a chronic autoimmune condition that affects the intestines following the ingestion of gluten. More precisely, it is an intolerance (not an allergy) to a component of gluten, gliadin (gluten being the group of proteins that make up the flours of certain cereals, including wheat, rye, barley, and oats). Diagnosis of this disorder is often difficult and late. There is still no curative treatment, and the only solution lies in excluding all gluten from the diet.

In very young children (under 3 years), diarrhea, abdominal distension, and growth delay are more common. Older children and adolescents are more likely to present with other gastrointestinal symptoms (recurrent abdominal pain, constipation, or vomiting) or with extra-intestinal symptoms.

THE SYMPTOMS OF CELIAC DISEASE IN CHILDREN

In children, the disease essentially manifests in two ways:
- celiac disease with gastrointestinal manifestations, involving poor growth, a distended abdomen, diarrhea, vomiting, and growth disorders with a break in the height-weight curve;
- atypical celiac disease, with less marked manifestations, disorders affecting organs other than the intestine, poor growth, and quite often iron or folic acid deficiency anemia.

The child may also present with autoimmune manifestations or be diagnosed with an autoimmune condition. Since these conditions can be associated with celiac disease, antibody testing should then be performed, particularly in children with type 1 diabetes, thyroiditis, or alopecia areata. This screening is also necessary for children with rare diseases such as Down syndrome or Turner syndrome.

BLOOD TESTS FOR DETECTING THE DISEASE

The diagnosis of celiac disease must be made rigorously, according to international protocols, using specific blood tests and, if necessary, gastroscopy with biopsy samples. The measurement of anti-transglutaminase antibodies specific to celiac disease, when detected at a high level, is the most appropriate blood test for a suspected diagnosis of celiac disease. It must be combined with the determination of total IgA immunoglobulins.

A GENETIC PREDISPOSITION

Celiac disease is a disease with a strong genetic predisposition. It is related to our biological identity card: the HLA system (Human Leukocyte Antigen), a set of molecules located on the surface of cells that allow the immune system to recognize them. The presence of the specific HLA genes DQ2 and DQ8 in almost all celiac patients is a necessary but not sufficient condition for developing the disease, since these genes are also found on average in 35% of the population, while the disease affects only about 1%.

THE TREATMENT: THE GLUTEN-FREE DIET

The gluten-free diet (GFD) should only be started after the diagnosis has been confirmed, because removing gluten from the child's diet leads to negative tests and resolution of symptoms, compromising any later diagnosis of certainty. For the time being, the only treatment is to follow this gluten-free diet (GFD).
Gluten and related proteins are present in most cereals (wheat, barley, and rye). Gluten is also found in many very diverse and often unsuspected products: medicines, lipstick, mouthwash, toothpaste, glue, sweets, salad dressings, ready-made meals, and more. Patients must therefore turn to gluten-free substitutes such as rice (white, semi-whole, whole), legumes (lentils, chickpeas, red beans, etc.), as well as non-toxic cereals and ancient pseudo-cereals or ones from other continents (buckwheat, millet, quinoa, or amaranth, which originates from South America). In Morocco, good adherence to the gluten-free diet is made difficult by the fact that there is no mandatory labeling on this subject.

THE MOROCCAN ASSOCIATION OF PEOPLE WITH GLUTEN INTOLERANCE AND ALLERGY (AMIAG)

Founded in 2013, AMIAG quickly established itself as the national reference association for celiac disease in Morocco and is recognized as such by its partners abroad. It is chaired by Mrs. Jamila Cherif Idrissi. With nearly 1,000 members, it has set up or organizes: the national celiac disease day every year in May; cooking workshops; a large annual party for children with celiac disease; scientific conferences with national and international experts, particularly at events for health professionals; and food aid and donations of grain mills to the poorest families.

Dr Moussayer Khadija, specialist in internal medicine and geriatrics, vice-president of the Moroccan Association of People with Gluten Intolerance and Allergy (AMIAG)

OVERVIEW

Celiac disease, defined as permanent intolerance to gluten, is an autoimmune disease in which the immune system attacks, in genetically predisposed individuals, the intestinal villi. The resulting atrophy of the intestinal wall causes malabsorption of nutrients and many other complications. Autoimmune diseases are a broad range of related diseases in which a person's immune system produces an inappropriate response against its own cells, tissues and/or organs, resulting in inflammation and damage. There are over 100 different autoimmune diseases, ranging from common to very rare conditions. Among them are lupus, type 1 diabetes, scleroderma, multiple sclerosis, Crohn's disease, autoimmune hepatitis, rheumatoid arthritis, Graves' disease, myasthenia gravis, myositis, antiphospholipid syndrome (APS), Sjögren's syndrome, uveitis, polymyositis, Raynaud's phenomenon, and demyelinating neuropathies.


Dr MOUSSAYER KHADIJA الدكتورة خديجة موسيار
Specialist in internal medicine and geriatrics in private practice in Casablanca. President of the Alliance Maladies Rares Maroc (AMRM) and of the Moroccan Association of Autoimmune and Systemic Diseases (AMMAIS), Vice-President of the Moroccan Autoimmunity Group (GEAIM)




Chapter 5: Formalize & Systemize

A working implementation begins with a narrowly defined document type. The unit of construction is a skill, which combines input schema, feature computation, semantic rules, generation constraints, and validation logic into a single packaged pipeline.

The input schema defines the structure of accepted data. Each field has a fixed type and meaning. Inputs outside this structure are rejected or normalized before processing. This step removes ambiguity at the entry point.

The feature layer computes derived values from the input schema. These computations are deterministic and expressed in standard tooling such as SQL or Python. The outputs include numerical transformations, aggregations, and formatted representations. Once computed, these values are stored and reused across all downstream operations for the same input.

The semantic layer maps computed features into categorical labels. These mappings are expressed as explicit rules that define thresholds and conditions. The rules function as a translation layer between raw computation and narrative intent. Changes in business definition are reflected by modifying rules rather than rewriting logic.

The generation layer receives three inputs: original data, computed features, and semantic labels. It produces structured text under strict constraints. The model is restricted to expressing provided values. No additional facts are introduced. Output formats are predefined, often as structured JSON containing narrative sections.

The validation layer compares generated text against deterministic outputs. It extracts numerical values, categorical claims, and references, then checks them against the feature and semantic layers. Any deviation indicates failure. Output is either accepted or routed for correction.

A complete skill behaves like a compiled artifact. Input enters through a fixed interface. Output is produced in a predictable format. Internal logic remains inspectable and versioned. Once a single skill is stable, the same structure can be replicated across multiple document types. Financial reports, product summaries, operational dashboards, and compliance documents follow identical architectural patterns. Variation exists only in schema definitions, feature logic, and semantic rules.
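To make the structure concrete, a minimal sketch in Python follows. Everything in it is illustrative: the schema fields, thresholds, labels, and helper names (RevenueRecord, compute_features, SEMANTIC_RULES) are hypothetical placeholders, and a fixed template stands in for the constrained model call in the generation layer.

```python
from dataclasses import dataclass
import re

# Input schema: fixed fields and types (hypothetical example).
@dataclass(frozen=True)
class RevenueRecord:
    period: str
    revenue: float
    prior_revenue: float

# Feature layer: deterministic derived values, reusable for the same input.
def compute_features(rec: RevenueRecord) -> dict:
    growth = (rec.revenue - rec.prior_revenue) / rec.prior_revenue
    return {"revenue": round(rec.revenue, 2), "growth_pct": round(growth * 100, 1)}

# Semantic layer: explicit threshold rules, kept as data rather than buried in code.
SEMANTIC_RULES = [
    ("strong growth", lambda f: f["growth_pct"] >= 10),
    ("moderate growth", lambda f: 0 <= f["growth_pct"] < 10),
    ("decline", lambda f: f["growth_pct"] < 0),
]

def classify(features: dict) -> str:
    return next(label for label, rule in SEMANTIC_RULES if rule(features))

# Generation layer: a template stands in for a constrained model call;
# only provided values may appear, no new facts.
def generate(rec: RevenueRecord, features: dict, label: str) -> str:
    return (f"In {rec.period}, revenue was {features['revenue']} "
            f"({features['growth_pct']}% vs prior period), indicating {label}.")

# Validation layer: every computed value must appear verbatim in the text.
def validate(text: str, features: dict) -> bool:
    numbers = {float(n) for n in re.findall(r"-?\d+\.?\d*", text)}
    return all(float(v) in numbers for v in features.values())

record = RevenueRecord("Q3 2025", 120.0, 100.0)
feats = compute_features(record)
label = classify(feats)
doc = generate(record, feats, label)
assert validate(doc, feats)  # any mismatch would route the output to correction
```

The sketch compresses each layer to a few lines, but the boundaries are the point: the model-facing step never computes anything, and the final assertion is the reconciliation gate described above.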
As the number of skills increases, duplication appears in semantic definitions. Terms such as "strong performance," "declining trend," or "high risk" recur across domains, often with subtle differences in meaning depending on context. A static rule system cannot represent these contextual variations efficiently. Each skill encodes its own version of definitions, which leads to inconsistency and maintenance overhead.

A knowledge graph introduces a shared semantic layer. Concepts are represented as nodes, and relationships between them are explicitly defined. Each concept carries attributes such as context, domain, and threshold values. This allows meaning to vary based on surrounding conditions rather than fixed rule files embedded in individual skills.

In this structure, a query retrieves the appropriate definition of a concept based on context parameters such as industry, market state, or organizational role. The semantic layer no longer evaluates rules directly. It resolves references into context-specific definitions drawn from the graph.

Feature computation remains unchanged. Inputs are still transformed into deterministic values. The difference lies in how those values are interpreted. Instead of fixed thresholds embedded in code or configuration files, interpretation depends on graph queries that return context-aware mappings.

This creates composability across systems. Multiple skills reference the same underlying semantic nodes. A change in definition propagates through the graph without modifying individual pipelines. Consistency emerges from shared structure rather than replicated configuration.

The generation layer remains unchanged. It still receives features and resolved semantic labels. The difference lies upstream, where those labels are derived from a shared semantic space rather than isolated rule sets.

Validation also extends naturally. Outputs can be traced not only to feature computations but also to the specific semantic definitions used during interpretation. This adds a second layer of provenance, linking each statement to both numerical derivation and contextual meaning.

The system shifts from isolated pipelines to a connected network of shared meaning, where document generation becomes an application of structured knowledge rather than repeated local interpretation.
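A small sketch shows how a shared semantic node can carry context-dependent definitions. The in-memory dictionary standing in for the graph, the concept name strong_performance, and the threshold values are assumptions made for illustration; a real deployment would query a graph store rather than a Python dict.

```python
# Minimal sketch of a shared semantic layer, assuming a hypothetical in-memory graph.
# Concept nodes carry context-specific threshold definitions; skills resolve a
# concept for their context instead of embedding their own rule files.
SEMANTIC_GRAPH = {
    "strong_performance": {
        # (industry, market_state) -> threshold definition (illustrative values)
        ("retail", "stable_market"): {"metric": "growth_pct", "min": 5.0},
        ("software", "stable_market"): {"metric": "growth_pct", "min": 15.0},
        ("default", "default"): {"metric": "growth_pct", "min": 10.0},
    }
}

def resolve(concept: str, industry: str, market_state: str) -> dict:
    """Return the context-specific definition of a concept from the graph."""
    node = SEMANTIC_GRAPH[concept]
    return (node.get((industry, market_state))
            or node.get((industry, "default"))
            or node[("default", "default")])

def applies(concept: str, features: dict, industry: str, market_state: str) -> bool:
    definition = resolve(concept, industry, market_state)
    return features[definition["metric"]] >= definition["min"]

features = {"growth_pct": 8.0}
# The same computed value is labeled differently depending on context.
print(applies("strong_performance", features, "retail", "stable_market"))    # True
print(applies("strong_performance", features, "software", "stable_market"))  # False
```

Updating the node updates every skill that references it, which is the propagation property described above.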

Chapter 4: Tokenomics & Failure

Token usage in direct generation scales with both input size and document count. When identical datasets are used repeatedly, the same information is reintroduced into prompts and reprocessed each time. This creates redundancy across runs.

A staged pipeline changes this behavior by separating computation from generation. Feature computation runs once per dataset. The results are stored and reused. The generation step receives only derived values and semantic tags rather than raw input data.

Let T_in represent the original input size and T'_in the reduced representation produced after feature extraction. For n documents derived from the same dataset, direct generation cost scales with n · T_in. In the staged system, cost splits into a one-time computation cost plus n · T'_in. As n increases, the amortized cost of preprocessing becomes negligible relative to repeated generation savings.

This structure also changes verification cost. When outputs depend on raw inputs embedded inside prompts, validation requires rechecking both computation and interpretation. When outputs depend on precomputed features, verification reduces to checking alignment between text and deterministic values. This reduces the scope of manual review.

A second effect concerns failure containment. In end-to-end generation, errors in reasoning, calculation, and phrasing occur in the same process, making attribution difficult. A staged pipeline isolates these responsibilities. Feature computation is deterministic and testable. Semantic classification is rule-based and auditable. Generation is constrained to express only pre-validated inputs. Validation operates as a final comparison layer between text and deterministic outputs.

In practical terms, this structure prevents entire classes of errors that arise when models are allowed to both compute and express facts. Numerical inconsistencies, misapplied rules, and unsupported claims can be traced back to specific layers and eliminated without affecting unrelated parts of the system.

The result is a system where cost and correctness are both controlled through separation of responsibilities rather than increased model complexity.
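A back-of-the-envelope calculation makes the amortization visible. The token counts below are assumptions chosen only to illustrate the formula; the function names and figures are not drawn from any measured system.

```python
# Direct vs. staged token cost, using the notation from this chapter.
# All figures are illustrative assumptions.
T_in = 20_000         # tokens of raw input data per dataset
T_in_reduced = 1_500  # tokens of derived features + semantic tags (T'_in)
C_prep = 20_000       # one-time cost of processing the raw input into features

def direct_cost(n_docs: int) -> int:
    # Every document re-reads the full raw input.
    return n_docs * T_in

def staged_cost(n_docs: int) -> int:
    # Raw input is processed once; generation sees only the reduced representation.
    return C_prep + n_docs * T_in_reduced

for n in (1, 10, 100):
    print(n, direct_cost(n), staged_cost(n))
# 1    20000    21500   -> staged is slightly more expensive for a single document
# 10   200000   35000
# 100  2000000  170000  -> preprocessing becomes negligible as n grows
```

Under these assumed numbers the crossover arrives by the second document, which is the amortization argument stated above in general form.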

Chapter 3: Prior Art and Pipeline Structure

The problem of translating structured input into structured output has been addressed in other domains through staged processing. Compiler design separates parsing, semantic analysis, transformation, and code generation into distinct phases, each operating on well-defined representations. Natural language generation research formalized a similar sequence, separating content selection, organization, lexical choice, and surface realization. These designs isolate responsibilities and prevent later stages from altering the assumptions established earlier in the pipeline.

End-to-end neural generation replaced these staged systems with a single model that maps input directly to output. This removes explicit intermediate representations and shifts all responsibilities into one probabilistic process. While this simplifies implementation, it removes the boundaries that make verification and auditing feasible. When a model both computes values and expresses them, there is no clear point at which correctness can be enforced.

A staged approach restores those boundaries. Data is transformed into a set of derived values using deterministic computation. These values are then mapped to semantic categories using explicit rules. Only after these steps are complete is text generated, and the generation step is constrained to use the prepared inputs. A final validation stage compares the generated text against the deterministic outputs to detect discrepancies. This structure ensures that computation, classification, and expression are handled independently. The model is not responsible for deriving facts, only for expressing them. Each stage produces artifacts that can be inspected, tested, and reused.

The framework operates as a directed sequence of transformations from input data to validated text. Each layer has a defined input and output, and data flows forward without feedback into earlier stages.

The input layer accepts structured records or extracts them from unstructured sources into a predefined schema. When extraction is required, it is limited to identifying and normalizing explicit facts without inference or aggregation. The goal is to produce a stable, typed representation of the data that downstream stages can consume.

The feature layer performs deterministic computation. This includes arithmetic operations, aggregations, formatting, and lookups. The implementation can use SQL, Python, or any environment that produces consistent outputs for identical inputs. Results from this layer are cacheable and reusable, since they depend only on the input data.

The semantic layer applies rule-based classification to the computed features. Rules encode domain definitions such as thresholds, categories, or states. These rules are externalized as data so they can be modified without changing application code. The output of this layer is a set of labels or tags that describe the state of the input according to business logic.

The generation layer receives the original inputs, computed features, and semantic tags. The prompt specifies exactly which values must be included and prohibits the introduction of additional facts. Structured output constraints restrict the format of the response. The model converts the provided values into text without performing new calculations or introducing new data.

The validation layer inspects the generated text and compares it against the outputs of the feature and semantic layers. Numeric values, percentages, and categorical statements are extracted and checked for agreement.
Any mismatch results in rejection or routing to review. No document proceeds without passing this reconciliation step. This sequence enforces separation between computation, interpretation, and expression. It also creates a complete lineage from each statement in the text back to a deterministic source.
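A minimal sketch of this reconciliation step might look as follows. The extraction regex, the structure of the features and labels inputs, and the accept/reject result are illustrative assumptions rather than a prescribed implementation; a production system would also check references and formatting.

```python
import re

# Reconciliation between generated text and the deterministic layers.
# `features` comes from the feature layer, `labels` from the semantic layer,
# and `text` from the generation layer. Names and structures are illustrative.
def reconcile(text: str, features: dict, labels: list[str]) -> dict:
    found_numbers = {float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", text)}
    issues = []

    # Every required feature value must appear verbatim in the text.
    for name, value in features.items():
        if float(value) not in found_numbers:
            issues.append(f"missing or altered value for '{name}': {value}")

    # Every required semantic label must appear in the text.
    for label in labels:
        if label not in text:
            issues.append(f"missing semantic label: '{label}'")

    return {"accepted": not issues, "issues": issues}

result = reconcile(
    text="Margin was 12.5% in Q2, a moderate improvement.",
    features={"margin_pct": 12.5},
    labels=["moderate improvement"],
)
print(result)  # {'accepted': True, 'issues': []}
```

Because the check runs against stored layer outputs rather than the raw data, each rejected issue also identifies which layer the discrepancy traces back to.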

Chapter 2: Why Agents, MCP, and RAG Fail for Data-to-Text

The current default approach to generating documents from data combines agents, multi-step prompting, and retrieval. These methods are often grouped together in practice, but they introduce the same structural issue: the model repeatedly interprets and transforms the same data without a fixed, verifiable intermediate state.

Start with agent workflows. A typical setup assigns roles such as writer, reviewer, and editor. Each role operates on text produced by the previous step while also referencing the original data. The data is not processed once and stored as a stable representation; it is re-read and reinterpreted at every stage. Derived values are recomputed multiple times, sometimes with small differences. The final document depends on a chain of generated text rather than a single transformation from source data. When a number is incorrect, there is no clear point in the process where the error can be isolated, because each stage mixes interpretation with generation.

Multi-chain prompting attempts to impose order by splitting the task into explicit steps within a single workflow. One step extracts information, another computes metrics, another organizes structure, and a final step generates the document. This looks closer to a pipeline, but the boundaries are not enforced. Each step still depends on the model to preserve exact values from the previous step. Intermediate outputs remain probabilistic. A value that is slightly altered during extraction will be used as input for all subsequent steps. The system accumulates small inconsistencies rather than preventing them.

Retrieval-augmented generation changes how data is accessed, not how it is processed. Relevant documents or records are retrieved and inserted into the prompt. The model then reads and synthesizes them. For data-to-text tasks, this means that the model is responsible for selecting, combining, and expressing values from retrieved sources. If multiple sources contain overlapping or conflicting information, the model resolves them implicitly during generation. There is no requirement that the output match any single source exactly. Retrieval improves coverage but does not enforce consistency.

These methods are often combined. A system may retrieve data, process it through multiple prompting steps, and coordinate the process with agents. The number of transformations applied to the same data increases. Each transformation introduces another opportunity for deviation. Token usage grows because the same information is processed repeatedly. The final output reflects a sequence of interpretations rather than a controlled mapping from input to output.

Data-to-text generation requires a different structure. Numerical values must remain exact. Classifications must follow defined rules. Every statement must be traceable to a source. These requirements assume that data is processed once, stored in a stable form, and then used consistently throughout the pipeline. Agents, MCP, and RAG do not provide this property because they rely on iterative interpretation. They remain useful in earlier stages where the goal is to gather information, explore alternatives, or synthesize unstructured inputs. In those contexts, variation is acceptable and often necessary. Once the data is fixed and the task is to produce a document that must align exactly with that data, the process must shift to a deterministic pipeline where computation, classification, and generation are separated and verified.
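The property these methods lack, data processed once and carried forward unchanged, can be illustrated with a small sketch. The artifact format, the digest check, and the stage names below are assumptions for illustration; the point is only that downstream steps read and verify a frozen intermediate state instead of re-deriving values at each hop.

```python
import hashlib
import json

# Derived values are computed once, frozen as an immutable artifact, and every
# later stage reads that artifact instead of reinterpreting the raw data.
def freeze_features(features: dict) -> tuple[str, str]:
    payload = json.dumps(features, sort_keys=True)
    return payload, hashlib.sha256(payload.encode()).hexdigest()

def check_unchanged(payload: str, expected_digest: str) -> dict:
    # Any stage that silently altered the intermediate state would fail here.
    assert hashlib.sha256(payload.encode()).hexdigest() == expected_digest, \
        "intermediate state was altered between stages"
    return json.loads(payload)

payload, digest = freeze_features({"revenue": 120.0, "growth_pct": 20.0})

# Each downstream stage (outline, draft, review) receives the same frozen
# payload and verifies it before use; nothing is recomputed or reinterpreted.
for stage in ("outline", "draft", "review"):
    features = check_unchanged(payload, digest)
    print(stage, features)
```

Agent and multi-chain setups have no equivalent of this fixed, checkable artifact, which is why small deviations accumulate across their steps.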

Chapter 1: Setting the Stage - The Deloitte AI Scandal

In December 2024, the Australian government paid Deloitte $290,000 for a report that appeared complete and professionally written but contained fabricated material throughout. Several citations referred to sources that do not exist, some quotations were attributed to judges who never made them, and multiple references pointed to academic work that cannot be found in any database. The content was generated using GPT-4o and delivered to the client without these issues being identified during internal review. The problems were later discovered by a university researcher after the report had already been submitted, which led Deloitte to issue a corrected version and return the final payment.

The failure originates from how current systems handle data-to-text generation. A single prompt is expected to read structured data, compute derived values, apply classification logic, organize content, and produce readable prose while preserving exact numerical and factual accuracy. These steps require different forms of reasoning, yet they are executed inside one probabilistic generation process without separation or verification between them. The result is text that is coherent at the surface level but unreliable when examined against the underlying data.

This becomes a scaling problem rather than a one-off mistake. When document production relies on this approach, teams must allocate time to verify outputs, reconcile inconsistencies, and correct numerical or factual errors. As volume increases, the cost of review grows in proportion, often offsetting the time saved during generation. Attempts to improve reliability by adding more prompts or introducing agent-based workflows tend to increase repetition of the same operations without establishing a stable mechanism for verification.

The approach presented in this series replaces that structure with a defined pipeline in which data processing, classification, generation, and validation are separated into distinct stages. Each stage has a fixed role, and outputs from earlier stages are treated as immutable inputs for later ones. The model is limited to producing language from already verified inputs rather than participating in computation or decision-making about the data itself.

Renault Restructuring: Social Threat or Industrial Opportunity for Morocco?

Renault's announcement of a drastic reduction in the number of engineers fits into a global dynamic of transformation in the automotive sector. Cost pressures, the shift to electric vehicles, and the digitalization of industrial processes are pushing major manufacturers to overhaul their internal structures, particularly in engineering roles. In Renault's case, the announced reduction still amounts to nearly 25% of those roles.

At this stage, nothing indicates that Moroccan sites, particularly the Renault Tanger plant and the Renault Casablanca plant (SOMACA), will be affected, but the hypothesis deserves serious consideration. Above all, it opens up a field of strategic reflection. What if this potential wave of released expertise represented a historic opportunity for Morocco?

For several years, major automotive groups have been redirecting their investments toward high-value-added areas such as embedded software, artificial intelligence, and electric batteries. This shift mechanically reduces the need for generalist engineers while creating strong demand for specialized profiles. It's a true global transformation redefining engineering in this industry. Renault's strategic plan, particularly through its electric subsidiary Ampere, illustrates this evolution. It's not just about cutting headcounts, but about redeploying skills.

Morocco is no longer merely a low-cost assembly site. Over two decades, the Kingdom has built one of Africa's best-performing automotive ecosystems. It has evolved from an industrial assembly workshop into an integrated platform, with local integration rates exceeding 60% in certain segments, the presence of major global tier-one suppliers, competitive logistics infrastructure (Tanger Med Port), and targeted training through highly effective specialized institutes. Groups like Stellantis and Lear Corporation have strengthened this ecosystem, consolidating Morocco's position as a regional industrial hub.

If workforce reductions were to impact Morocco, they would release highly qualified profiles such as process engineers, quality specialists, industrial logistics experts, and applied R&D managers: a true pool of underutilized engineers. This human capital, trained to international standards, represents a rare strategic resource. In many countries, such a concentration of skills would be immediately absorbed by a dense local industrial fabric. In Morocco, the challenge is precisely to create these outlets.

The hypothesis of a Moroccan automotive brand then imposes itself, with a central question: why not turn this constraint into a lever for industrialization? Morocco today has several assets:
- A solvent domestic market. The Moroccan middle class, though under pressure, remains capable of supporting demand for affordable, robust vehicles adapted to local realities.
- A near-complete supply chain. Wiring harnesses, seats, plastic components, cabling: the majority of constituent elements are already produced locally.
- Industrial legitimacy. The "Made in Morocco" automotive label is no longer an abstraction.

In this context, the emergence of a national brand, with models symbolically named Taroudante, Fassia, or Itto, is no longer utopian, even if it poses several structural challenges: access to financing (patient capital, sovereign or private), mastery of intellectual property, the ability to develop a competitive technical platform, and an export strategy. There are precedents from comparable emerging countries worth examining closely.
Several countries have succeeded in this gamble: Dacia in Romania, successfully relaunched (irony of history, under Renault's impetus), Tata Motors in India, and Proton in Malaysia. These examples show that a national automotive industry can emerge provided there is clear alignment between the state, private capital, and technical expertise. It's truly a matter of political and industrial will.

The real question, therefore, is not technical but strategic. Does Morocco wish to remain a high-performing link in a globalized value chain, or does it aspire to become a full-fledged player capable of designing, producing, and marketing its own vehicles? The answer requires a proactive industrial policy, incentives for innovation, mobilization of national capital, and above all, confidence in local skills. It's about transforming uncertainty into an ambitious national project.

If Renault's restructurings were to affect Morocco, they would rightly be perceived as a social threat. But they could also become a founding moment. Because behind every potentially released engineer lies a brick of industrial sovereignty. Stacked together, these bricks can form a true edifice.

Morocco today has a rare alignment: skills, infrastructure, market, international credibility. What it still lacks, perhaps, is the audacity to take the final step: moving from being the world's factory to becoming a brand creator. And in a country where the collective imagination is powerful, it's no small thing to envision that one day, owning a car named Fassia, Hada, or Itto becomes more than a purchase: truly an act of adherence to a Moroccan national industrial project.

Éliphas Lévi

Éliphas Lévi (1810–1875), whose real name was Alphonse Louis Constant, was a French occult philosopher, writer, and former Catholic seminarian who played a major role in the revival of Western esoteric traditions during the nineteenth century. He was born in Paris, France, in 1810 and grew up in a modest family. As a young man, he entered a Catholic seminary with the intention of becoming a priest. However, he eventually left the religious path after becoming involved in political and social movements of the time.

During the early part of his life, Lévi was interested in social reform and political ideas, and he even spent time in prison because of his writings. Over time, his interests shifted toward philosophy, mysticism, and the study of ancient traditions. He became fascinated with subjects such as Kabbalah, alchemy, ceremonial magic, astrology, and Hermetic philosophy, and he began studying how these traditions related to religion and human spirituality.

Lévi believed that magic was not superstition, but rather a hidden science that explained the relationship between the spiritual and physical worlds. He argued that ancient traditions preserved symbolic knowledge about the structure of the universe and human consciousness. According to Lévi, symbols, rituals, and sacred texts were ways of expressing deeper truths about nature.

His most famous work is Dogme et Rituel de la Haute Magie (1854–1856), or Dogma and Ritual of High Magic. In this book, he explained his theories about magic, symbolism, and the spiritual forces that connect all things. The book became very influential among later occultists and helped shape modern ceremonial magic.

Lévi is also famous for creating the well-known image of Baphomet, a symbolic figure with a goat's head, wings, and both male and female characteristics. Contrary to popular belief, Lévi did not present Baphomet as a devil. Instead, he described it as a symbol of balance and unity, representing the harmony between opposites such as light and darkness, spirit and matter, and male and female energies.

Another important idea promoted by Lévi was the connection between the Tarot and the Kabbalah. He suggested that the Tarot cards contained hidden spiritual knowledge and that the 22 Major Arcana corresponded to the 22 letters of the Hebrew alphabet. Although historians debate the accuracy of this idea, it became extremely influential and later shaped the teachings of groups like the Hermetic Order of the Golden Dawn.

Throughout his life, Lévi wrote several books on magic and philosophy, including The History of Magic (1860) and The Key of the Mysteries (1861). His writings combined religion, symbolism, philosophy, and mysticism, making him one of the most important figures in the development of modern occultism.

Today, Éliphas Lévi is remembered as a key thinker who helped transform magic from something associated with superstition into a philosophical and symbolic system. His ideas influenced many later occult traditions, writers, and magical orders, and his work continues to be studied by people interested in esotericism, mysticism, and Western magical traditions.

Doping: Move Beyond Fiction, Confront the Public Health Issue...

It's tempting to dismiss the recent doping cases in Moroccan football with a wave of the hand, reducing them to individual errors, mishaps, or even injustices. It's tempting, but dangerous. What's at stake today goes far beyond a few disciplinary sanctions. Doping, in its contemporary form, is no longer just cheating: it's a brutal revealer of a deeper dysfunction, an out-of-control sports and health ecosystem sustained by a comfortable illusion: "football isn't affected."

For a long time, football has sheltered itself behind a convenient fiction: that of a sport relatively spared from doping, an illusion maintained on a global scale despite well-documented precedents. In Morocco, this fiction persists: every case is treated as an anomaly, never as a signal. That said, what has recently come to light does concern football, but it's far from the only sport affected. The rise of the Moroccan Anti-Doping Agency (AMAD) and the significant increase in controls have changed the game: what we're seeing today isn't necessarily more doping, but more truth. And that truth is unsettling.

The narrative of "accidental doping" is increasingly holding up poorly against the facts. The dominant discourse is well-rehearsed: athletes are victims of involuntary doping, from contaminated supplements, poorly prescribed medications, and good-faith errors. This discourse isn't entirely false. It's simply incomplete. Because behind "involuntary doping" lies a more troubling reality: a widespread normalization of substance ingestion, in a culture where presumed immediate performance gains take precedence over knowledge, caution, and medical oversight. Yet it's nearly impossible to prove that ingesting this or that substance enhances sports performance. What is certain and proven, however, are the inevitable health consequences.

Anti-doping law is implacable: the athlete is responsible for everything they consume, whether they intended to cheat or not. This principle of strict liability isn't an injustice, it's a safeguard. But athletes must first be given the real means to understand what they're ingesting. Clearly, that's not the case for a large portion of them today. For elite athletes, controls are there to deter and sanction when necessary. The problem becomes even graver for young people, and not-so-young, who train for themselves, outside the most visible circuits.

That's where supplements represent a new gray area and the heart of the issue, widely underestimated. Supplements have become the gateway to a diffuse, invisible, insidious form of doping. Uncertified products, uncontrolled imports, aggressive marketing: everything conspires to maintain an illusion of safety, while these products are a sanitary blind spot. Their massive consumption among young people is rarely medically supervised. It relies on informal recommendations, locker-room advice, impromptu sellers, and sometimes even social media "influencers." You can even find them in some souks and dairies. The result is unequivocal: careers shattered over a few grams of unidentified powder, but above all, and most alarmingly, weakened bodies, hormonal disorders, and metabolic imbalances appearing earlier and earlier. Doping is no longer just a sports fraud; it's becoming a full-fledged public health issue.

The silence and sometimes passive complicity of clubs and gyms is another blind spot in the system. It takes courage to ask the uncomfortable question: where are the clubs in all this? Few gyms are truly spared.
Some don't hesitate to sell, without the slightest scruple, products whose true composition and potential effects on users' bodies are known only to their suppliers. And how do you respond to a young person who challenges you: "You tell us these products aren't good, but the coach says we have to take them"?

In many cases, medical oversight is insufficient, if not nonexistent. Young people evolve in an environment where physical appearance is glorified, but scientific and medical culture remains marginal. This void is filled by improvisation and, worse, by a form of collective abdication of responsibility. When the scandal breaks, the athlete faces the sanction alone. The club vanishes from the story. Yet the law clearly defines the various levels of responsibility: products don't fall from the sky. This asymmetry is no longer sustainable. Responsibility can no longer be considered solely individual.

Doping in Moroccan football, now that two high-level players have been implicated, can no longer be analyzed solely through the lens of personal fault. It's the product of an insufficiently regulated supplements market, a lack of structured medical oversight, increasingly early performance pressure, and a sports culture that values results over understanding, in denial of an existing law. In response, the AMAD, based on strict rules, has been tasked with implementing the national anti-doping policy, and it does so brilliantly. Yet mechanically applying rules, without fine-tuned adaptation to local realities and without massive education, isn't enough. Sanctioning without educating treats symptoms while ignoring the disease.

What needs to change now is no longer marginal correction: the system must be rethought. Concretely:
- Mandate medical oversight in all clubs.
- Create a national list of certified, controlled, and traceable supplements.
- Systematically train young athletes and their coaches on substance risks.
- Hold clubs and staff legally accountable, so they can no longer hide behind ignorance or good faith.
And above all: drop the general hypocrisy and face reality.

Morocco isn't an isolated case. It's simply at a turning point. What's at play today is the shift from marginal doping to a systemic form, not organized, but diffuse, cultural, almost unconscious. Refusing to see it is accepting that a generation of young people will pay the price for this blindness. Doping isn't just a matter of cheating. It's a public health issue, and now, a matter of collective responsibility.