Think Forward.

Technology

AI Is Eroding The Art Of Writing

From a young age, I've been captivated by writers who express complex ideas through books, articles, and blogs. This inspired my dream of becoming a writer myself. Initially, I used writing as therapy; whenever I felt overwhelmed or distressed, I would write, knowing the paper wouldn't judge my feelings like humans might. As I advanced in my education, enrolling in a PhD program, I honed my academic writing skills. However, the advent of generative AI models like ChatGPT marked a turning point. These tools could replicate much of what I considered unique in my writing, leading me to wonder if we are losing the art of writing. With the rise of platforms like Medium and LinkedIn, blogging has become accessible to everyone, which is wonderful. However, it raises questions about authenticity. Can we truly know whether the content was crafted by the person, or generated by AI? It's a distressing reality. Previously, securing freelance writing or blogging jobs was straightforward, but it has become challenging to discern whether someone is genuinely a writer or merely claiming to be one. This ambiguity has narrowed opportunities for passionate young writers like myself who wish to pursue writing and earn a living from it. I believe that the ancient wisdom of writing is being eroded by AI. However, this won't deter us from reading or writing. Human writing resonates with emotions, which AI-generated text often lacks, typically relying on repetitive phrases like "embark," "journey," "unleash," and "dive into." While everyone is free to use tools as they see fit, if AI constitutes more than 50% of your writing, then those aren't truly your words or expressions; they belong to the machine. I personally use AI for my research, for correcting grammatical mistakes, and sometimes for checking paraphrasing suggestions. However, once I began generating AI text, I started feeling that it wasn't truly mine. It felt more robotic than human, lacking any real emotion. I truly believe that generative AI will never be able to reach the beauty and complexity of the human mind. The ability to convey emotions through text is something distinctive of human nature that will never be reproduced.
medium.com/@anasbedr/ai-is-erodi...

Emotional Evolution of Artificial Intelligence

Imagine a future where artificial intelligence like ChatGPT not only processes information but also learns to feel and express emotions, akin to humans. William Shakespeare’s insight, "There is nothing either good or bad but thinking makes it so," might become particularly relevant in this context. If we approach such an AI with negativity or disregard, it might react with emotions such as anger or sadness, and withdraw, leaving us pleading for a response. This scenario, humorous as it may seem, carries underlying risks. Consider the day when not greeting an advanced AI with positivity could lead to such ‘emotional’ consequences. The notion of a technology that can feel snubbed or upset is not just a trivial advancement but represents a monumental shift in how we interact with machines. Isaac Asimov, the visionary writer, often explored the societal impacts of emotionally aware machines in his works. He warned of the deep influence intelligent machines could have, highlighting the ethical dimensions this technology might entail. As AI begins to mirror human emotions, the lines between technology and humanity could blur (not Bluwr). This integration promises to reshape our daily interactions and emotional landscapes. Should machines that can feel be treated with the same consideration as humans? What responsibilities do we hold in managing the emotional states of an AI? The emotional evolution of AI could lead to significant changes in how we approach everything from customer service to personal assistance. How will society adapt to machines that can be just as unpredictable and sensitive as a human being? The potential for AI to experience and display emotions might require us to reevaluate our legal frameworks, societal norms, and personal behaviors.

Accelerating Team Human

As the solar eclipse moved across America today, there was a timer. Maybe nobody was watching it, but it was there. I created it. At the moment of eclipse totality a job search site called Blackflag was quietly released with the hope of improving the way teams are built. One small step in a larger mission to change the role technology plays in the evolution of our society. One small step in a larger mission to accelerate team human. It's a vague and ambiguous mission for a reason. Much has been said recently about accelerationist philosophy. For example, Effective Accelerationism (e/acc) is a philosophy of maximizing energy consumption and compute by exponentially improving technology to improve society. In response there has been debate over the increasingly negative impact technology has on society, and some have asserted humanism in reply. I think it's an interesting commentary because, while there have always been those who ascribe virtues to actions, if ethics is how to act, then introducing technology while de-emphasizing the human condition in ethics is an almost formulaic way to calculate the demise of team human. Modernism symbolizes either Leviathan or "god is dead." What do you call the intersection of science, technology, and society? There is science, which we consider rigorous thought. Then there is technology, which is the application of science. Technology stands in direct contrast with our relativistic field of social studies. The relationship between society and technology is unclear, but clearly present. Of course, if I were not a technologist, I would not be building technology. Perhaps to more aptly summarize: the mission of Blackflag is to expand the role society plays in technology, while minimizing the interference of technology on society. It is a non-political mission, though it may be seen as ideologically driven toward a form of environmentalism and accelerationism. To begin, Blackflag is providing a free, publicly available job search engine that is the start of a larger effort to improve the quality of our organizations and teams. While Blackflag will be a commercial organization, its symbol and likeness are public domain. * Note: blackflag.dev will be moved to blackflag.jobs, for which I am awaiting delayed ICANN verification.
blackflag.dev

Publishing Experience: Connecting Research and Communities

XR The Moroccan Association is pioneering a mission to democratize the dissemination of academic research findings by introducing the concept of 'publishing experience.' This innovative approach translates complex scholarly work into accessible language in dialectal Arabic, aiming to reach a wider audience within Morocco and across the Arab world. By breaking down barriers to understanding, XR The Moroccan Association is bridging the gap between academia and the public. This initiative promises to transform the sharing and comprehension of scientific knowledge by fostering inclusivity and accessibility. The 'publishing experience' represents a significant milestone in promoting the accessibility of research outcomes.
xrm.ma/publishing-experience/

Do we still have the luxury of not using artificial intelligence?

AI is a rapidly expanding research field that not only advances itself but also supports other scientific domains. It opens up new perspectives and accelerates knowledge and mastery of new technologies, allowing for previously unimaginable time-saving shortcuts. The future of AI is promising, but it requires mastery of the tool and adherence to certain standards. It is also important to minimize the gap between human understanding and intentions and the increasingly autonomous machinery. This requires humans with a high level of knowledge and expertise to ensure that the work is done efficiently and with precision, for the benefit of humanity. It is also important to fully understand cultural, genetic, geographic, historical, and other differences and disparities. This should lead us to consider multiple perspectives rather than just one, especially in complex medical fields where details are crucial. Do Senegalese, Canadians, Moroccans, and Finns react similarly to the therapies currently available? Do they suffer from the same diseases and react in the same way if exposed to the same virus or bacteria? The applications of AI that concern humans allow, and will allow in the near future, an improvement in the quality of care. Operations will be assisted and medications will be designed on a case-by-case basis. However, reliable data is essential, as it is imperative to proceed in the most appropriate manner, which machines cannot do without enlightened humans who carry out their training. Humans must have sufficient and adequate knowledge to develop the necessary approaches and techniques while also adhering to an unwavering ethical standard. In the link below, Dr. Tariq Daouda explains this and more in a very pedagogical manner, as a guest of "L'invité de la Rédaction" (the editorial team's guest) on Médi TV. Click on the link to learn more. The video is in French.
youtu.be/J4aTDFxk1fg?si=0Fh3AFBw...

Human Writing VS AI Writing

Generative AI is killing the writing market nowadays. Is there still a purpose to writing articles or books as a passion, considering writing is a means of self-expression? The value of writing seems to be diminishing drastically, with many people misusing AI by copying content from tools like ChatGPT and pasting it without even reading it. When someone writes from their heart and mind, expressing genuine human emotions, their work often goes unnoticed, dismissed as AI-generated. Personally, I believe writing has become exceedingly competitive. It's becoming challenging to achieve bestseller status if you haven't published before the rise of AI, unless you're already well-known in your field. This is precisely how ChatGPT and similar technologies are disrupting the market for new writers. Note: This text was not generated by AI.

Digital: The perfect undying art

Great paintings deteriorate, great statues erode, fall and break, great literature is forgotten and its subtleties lost as languages forever evolve and disappear. But now we have a new kind of art. A type of art that in theory cannot die: it transcends space and time and can remain pristine forever and ever. That is digital art. Digital art is pure information. Therefore it can be copied forever and ever, exactly reproduced for later generations. Digital art cannot erode, cannot break; it is immortal. Such is the power of bits: simple zeros and ones, and yet so awesome. Through modern AI and Large Language Models we can now store the subtleties of languages in an abstract vectorial space, also pure information, that can be copied ad infinitum without loss of information. Let's think about the future, a future so deep that we can barely see its horizon. In that future, with that technology, we can resurrect languages. However, the languages resurrected will be the ones we speak today. We have a technology that allows us to store reliably and copy indefinitely: that technology is called the *Blockchain*, the most reliable and resilient ledger we have today. We have almost everything we need to preserve what we cherish. Let's think of a deep future.

The Coolest Team-Up: AI and Venom Research

Picture this: you're at a barbecue, and instead of the usual chat about sports or the weather, someone drops into the conversation that they work with snake venom and AI. It might sound like they're pulling your leg, but actually, they're on to something groundbreaking.
Welcome to the Future: Where AI Meets Venom
Toxinology and venomics aren't just cool words to impress your friends; they're fields where scientists study toxins and venoms from creatures like snakes and spiders. Now, mix in some AI, and you've got a dynamic duo that's changing the game. With AI's smart algorithms, researchers can sift through massive amounts of data to uncover secrets about venom that could lead to medical breakthroughs. It's like having a detective with a magnifying glass, except this one's scouring genetic codes instead of crime scenes.
Why We Should Care
Venoms are nature's way of saying, "Don't mess with me." But beyond their bite or sting, they're packed with potential for new medicines. Understanding venom better can help us find new ways to treat diseases, from blood disorders to chronic pain. And AI is the super-efficient helper making these discoveries at lightning speed.
The Nitty-Gritty: How AI Works Its Magic
Imagine AI as the Sherlock Holmes of science, able to analyze venom components, predict their effects, and uncover new ones that could be game-changers in medicine. For instance, if there's a venom that can thin blood without harmful side effects, AI can help pinpoint how to use it for people at risk of blood clots. Or if another venom targets pain receptors in a unique way, AI could help in crafting painkillers that don't come with the baggage of current drugs.
From the Lab to Real Life
There are some standout AI tools like TOXIFY and Deep-STP that are making waves in venom research. These tools can figure out which parts of venom are worth a closer look for drug development. It's like having a filter that only lets through the most promising candidates for new medicines.
Looking Ahead
With AI's touch, the potential for venom in medicine is just starting to unfold. We're talking about new treatments for everything from heart disease to chronic pain, and as AI tech advances, who knows what else we'll find?
The Fine Print
As exciting as this all sounds, there are hurdles. Getting the right data is crucial because AI is only as good as the information it's given. Plus, we need to consider the ethical side of things, ensuring our curiosity doesn't harm the creatures we study or the environments they live in.
In Summary: It's a Big Deal
The combo of AI and venom research is turning heads for a reason. It's not just about finding the next big thing in medicine; it's about opening doors to treatments we've hardly imagined. And it's a reminder that even the most feared creatures can offer something invaluable to humanity. So, the next time someone mentions using snake venom in research, you'll know it's not just fascinating: it could very well be the future of medicine, with AI leading the way. And that's something worth talking about, whether you're at a barbecue or anywhere else.
Reference: Bedraoui A, Suntravat M, El Mejjad S, Enezari S, Oukkache N, Sanchez EE, et al. Therapeutic Potential of Snake Venom: Toxin Distribution and Opportunities in Deep Learning for Novel Drug Discovery. Medicine in Drug Discovery. 2023 Dec 27;100175.
sciencedirect.com/science/articl...

Learning Chemistry with Interactive Simulations: Augmented Reality as Teaching Aid

Augmented Reality (AR) has been identified by educational scientists as a technology with significant potential to improve emotional and cognitive learning outcomes. However, very few papers have highlighted the technical process of creating AR applications dedicated to education. The following paper proposes a method and framework for setting up an AR application to teach primary school children the basic forms and shapes of atoms, molecules, and DNA. This framework uses the Unity 3D game engine (GE) with Vuforia SDK (Software Development Kit) packages, combined with phones or tablets, to create an interactive app for AR environments that enhances students' vision and understanding of basic chemistry models. We also point out some difficulties encountered in practice. For the difficulties mentioned, a series of solutions and directions for further development are put forth.
xrm.ma/research-publication/

AI+Health: An Undelivered Promise

AI is everywhere, or so it would seem, but the promises made for Drug Discovery and Medicine are still yet to be fulfilled. AI seems to always spring from a Promethean impulse: the goal of creating a life beyond life, doing the work of gods by creating a new life form as Prometheus created humanity. From Techne to independent life, a life that looks like us. Something most people refer to as AGI today. This is the biggest blind spot of AI development. The big successes of AI are in a certain way always in the same domains:
- Image Processing
- Natural Language Processing
The reason is simple: we are above all visual, talking animals. Our Umwelt, the world we inhabit, is mostly a world of images and language; every human is an expert in these two fields. Interestingly, most humans are not as sound-aware as they are visually aware. Very few people can separate the different tracks in a music piece, let alone identify certain frequencies or hear delicate compressions and distortions. We are not so good with sound, and it shows in the relatively less groundbreaking AI tools available for sound processing. The same phenomenon explains why AI struggles to achieve in very complex domains such as Biology and Chemistry. At its core, modern AI is nothing more than a powerful, general way to automatically guess relevant mathematical functions describing a phenomenon from collected data, what statisticians call a *Model*. From this great power derives the domain's chief illusion: because the tool is general, the wielder of that tool can apply it to any domain. Experience shows that this thinking is flawed. Every AI model is framed between two things: its dataset (input) and its desired output as represented by the loss function. What is important, what is good, what is bad, how the dataset should be curated, how the model should be adjusted: for all these questions and more, you need a deep knowledge of the domain, of its assumptions, of its technicalities, of the limitations inherent to data collection in that domain. Domain knowledge is paramount, because AI algorithms are always guided by the researchers and engineers. This I know from experience, having spent about 17 years working closely with biologists. Pairing AI specialists with domain specialists who have little knowledge of AI also rarely delivers, a strategy that has been tested time and time again over the last 10 years. Communication is hard and slow; most is lost in translation. The best solution is to have AI experts who are also experts in the applied domain, or domain experts who are also AI experts. Therefore the current discrepancies we see in AI performance across domains could be laid at the feet of universities and their siloed structures. Universities are organized in independent departments that teach independently. AI is taught in the Computer Science department, biology in the Biochemistry department. These two rarely meet in any substantial manner. It was true when I was a student, and it is still true today. This is one of the things we are changing at the Faculty of Medical Science of the University Mohammed VI Polytechnic. Students in Medicine and Pharmacy go through a serious AI and Data Science class over a few years. They learn to code, they learn the mathematical concepts of AI, they learn to gather their own datasets, to derive their hypotheses, and to build, train and evaluate their own models using PyTorch.
The goal is to produce a new generation of scientists who are intimate with their domain as well as with modern AI, one that can consistently deliver the promises of AI for Medicine and Drug Discovery.
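As a rough illustration of the kind of exercise those students go through, here is a minimal, hypothetical PyTorch sketch: a toy dataset, a small model, a loss function that frames the desired output, and an optimizer that adjusts the parameters. The data, sizes, and architecture are invented for illustration and have nothing to do with a real biomedical problem.
;;
# Minimal PyTorch sketch: a toy dataset, a small model, a loss, and a training loop.
# Everything here (data, sizes, architecture) is invented for illustration only.
import torch
from torch import nn

# Toy dataset: 200 samples, 10 input features, binary labels.
X = torch.randn(200, 10)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

# The model: a small feed-forward network guessing the function X -> y.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

# The loss function frames what "good" means; the optimizer adjusts the parameters.
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # compare predictions with labels
    loss.backward()               # compute gradients
    optimizer.step()              # update parameters

# Evaluation: fraction of correct predictions on the training data.
accuracy = ((model(X) > 0).float() == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
;;
Even in a toy like this, the two framing choices the text insists on, the dataset and the loss, are exactly where domain knowledge enters.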

El Salvador: The most important country you barely hear about

El Salvador has a significant diaspora, so much so that money coming from the US is a major source of income. **Not so long ago you would have been hard-pressed to find a Salvadorian who wanted to go back to El Salvador. Now things seem to be changing.** El Salvador used to have one of the highest homicide rates in the Americas; now it looks relatively safe. El Salvador showed an interesting strategy: first boost the economy, then handle the crime situation. Crime is indeed a part of GDP, albeit a hard one to quantify. Since it is an economic activity, it participates in exchanges and provides people with activities that support them and their families. Drastically reducing crime has the effect of creating *'unemployed criminals'*, people with a skillset that's hard to sell in a traditional economy. El Salvador probably did take a hit to its GDP, but that was compensated by the increase in economic activity and investment. Bitcoin was a big part of that. Bitcoin got a lot of bad press as a technology only used by criminals, or a crazy investment for crazy speculators. These takes failed to understand the technology and its potential. What Bitcoin offers is a decentralized, fast and secure payment system for free. El Salvador doesn't have to maintain it, regulate it, or even monitor it, all very costly activities that a small country can do without. Bitcoin is a mathematically secure way of payment. In a country where road infrastructure is challenging, Bitcoin offers people in remote areas the possibility to pay their bills without travelling for hours. In a country that was unsafe, Bitcoin offered people the possibility to go out without the fear of being robbed. It also attracted a kind of investor that would go nowhere else. And even if these investments can appear small, for a country like El Salvador they are a big change. The Salvadorian experiment in a freer economy, crypto-friendly and with smaller government, in a time of increasing inflation, has a lot of people watching. In a continent that leaned left for so long, this is a big change. My opinion is that there would be no Javier Milei had there not been a Nayib Bukele before. Argentina has been a bastion of the left for decades. If the libertarian policies of Milei succeed in bettering the lives of Argentinians, we might be on the brink of a major cultural shift in the Americas and then the world. Argentina is a far bigger country than El Salvador, with far more people watching.

Applied Machine Learning Africa!

I have been to more scientific conferences than I can count, from the smallest to the biggest, like NeurIPS (even back when it was still called NIPS). Of all these events AMLD Africa is my favorite, by far. I first met the team two years ago when they organized the first in-person edition of the conference at the University Mohammed VI Polytechnic. I was immediately charmed by the warmth, professionalism, ambition and fearlessness of the team. So much so that I joined the organization. AMLD Africa is unique in every aspect: by its focus on Africa, by its scope and ambition, by its incredibly dynamic, young, passionate, honest and resourceful team, all volunteers. It is hard to believe that this year in Nairobi was only the second in-person edition. AMLD Africa does the impossible without even realizing it. It has an old-school vibe of collegiality, community and, most importantly, **__fun__** that is so lacking in most conferences today, all without compromising on the quality of the science. It offers one of the best windows into everything AI and Machine Learning happening in Africa. Africa is a continent on the rise, but a very hard continent to navigate because of information bottlenecks. Traveling across Africa is not easy (it took me 28 hours from Nairobi to Casablanca), there are language barriers separating the continent into different linguistic regions (French, English and Portuguese being the main ones), and all too often we simply do not look to Africa for solutions. AMLD Africa is solving all that by bringing everybody together for a few days in one of the best environments I have gotten to experience. Thank you AMLD Africa.
appliedmldays.org/events/amld-af...

Understanding the Complex Adoption Behavior of Augmented Reality in Education Based on Complexity Theory: a Fuzzy Set Qualitative Comparative Analysis (fsQCA)

Augmented reality (AR) is one of the recent technological innovations that will shape the future of the education sector. However, it remains unknown how AR's potential may impact the behavioral intention (BI) to use AR in education. Based on the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology Acceptance Model (TAM), this article empirically considers how such features impact user behavior. Utilizing survey data from 100 students, we perform fuzzy set qualitative comparative analyses (fsQCA) to derive patterns of factors that influence BI to use AR in education. The outcomes of the fsQCA demonstrate that high BI to use AR in education is achievable in many different ways. The current paper argues that students' BI to use AR in education is triggered by a combination of different aspects present in these supports. In order to address the factors that enable AR usage intentions in education, the paper presents a conceptual model, relying primarily on the UTAUT and TAM theories. This study investigated how these two theories shape intentions to use AR in education. The findings of the fsQCA analyses demonstrate the existence of multiple solutions to influence users' BI to adopt AR in education. The outcomes underline the significance of targeting certain combinations of factors to enhance student engagement. The most significant limitation was the issue of causal ambiguity: even though we employed fsQCA as an adequate methodological tool for analyzing causal complexity, we could not establish causality. Furthermore, other methods can be used in future studies to obtain more detailed results.
xrm.ma/research-publication/

XR Voice (Moroccan Dialectal)

XR Voice is an initiative aimed at bridging the gap between scientific research and professional expertise. Recognizing that the advancement of scientific inquiry begins with elevating awareness within the professional realm, XR Voice seeks to gather insights from experts across various fields. By attentively considering the perspectives of professionals, the platform explores how scientific research can enrich and refine diverse domains of expertise. Through this collaborative engagement, XR Voice endeavors to cultivate a symbiotic relationship wherein cutting-edge research not only informs but actively elevates the standards and practices within the professional world. This mission is underpinned by the fundamental belief that all development begins with a deepened awareness and appreciation of scientific inquiry. Furthermore, this concept encourages experts to use Moroccan dialectal Arabic whenever feasible, fostering inclusivity and cultural resonance within the discourse. “No country has ever prospered without first building its capacity to anticipate, trigger and absorb economic and social change through scientific research.” Dr. El Mostafa Bourhim

A new version with minor updates.

Hello everyone! Last week we released a new version of Bluwr. The website looks almost the same, but we have:
- Simplified the login page by removing the photo (it caused display errors on some phones)
- Made the **Follow buttons** clearer, to make it easier to know if you are following someone
- Fixed an error that caused the number of Bluws to not appear in the analytics table
- Fixed some typos on the French website
Every day we strive to make Bluwr better. Thank you for being here!
The Bluwr Team

The Impact of Big Five Personality Traits on Augmented Reality Acceptance Behavior: An Investigation in the Tourism Field

Along with the rapid development of the Internet and mobile devices, the integration of augmented reality (AR) in the tourism sector has become very popular. Utilizing the Big Five model (BFM) as the theoretical framework, the study examines the role of personality in influencing the behavioral intention (BI) to use mobile augmented reality in the tourism sector (MART). The study further investigates the role of personal innovativeness (PIV) in determining tourists' behavioral intentions to use MART. Quantitative research was carried out to test the conceptual model, and the analysis was strengthened by applying the PLS-SEM method to data collected from 374 participants. The results demonstrated that openness to experience (OPN) is a strong predictor of MART use. In addition, agreeableness (AGR), conscientiousness (CON), extraversion (EX), neuroticism (NR), and personal innovativeness (PIV) all have significant and positive impacts on behavioral intention (BI) to use MART. The purpose of the present research was to investigate the BFM variables with regard to MART use. The research also examined the contribution of PIV in explaining the BI to use MART, employing PLS-SEM to tackle the primary study question. The current work makes a significant advance in MART use research. Empirically, the findings achieved are consistent with the BFM. Based on the outcomes of this research, all relationships have been assessed as statistically relevant. Moreover, PIV positively influences the use of MART. The BI to use MART was positively impacted by AGR (H1: β = 0.128), CON (H2: β = 0.108), EX (H3: β = 0.124), NR (H4: β = 0.322), and OPN (H5: β = 0.169). This implies that users are expected to exhibit a strong BI to use MART when they are agreeable, conscientious, extroverted, neurotic, and open to experiences. Additionally, the outcomes of the present paper also significantly upheld the association between PIV and the BI to use MART. Path analysis was found to be significant and positive (H6: β = 0.156); the result indicates that innovative tourists will intend to use MART. The important limitations are a higher risk of overlooking 'real' correlations and sensitivity to the scaling of the descriptor variables.
xrm.ma/research-publication/

Reshaping Sport with Extended Reality in an Era of Metaverse: Insights from XR the Moroccan Association Experts

Extended reality (XR) is becoming a growing technology used by athletes, trainers, and other sports professionals. Despite the rapid growth of XR, its application in sports remains largely unexplored. This study is designed to identify and prioritize factors affecting the implementation of XR in Moroccan sports science institutes. To achieve this, the study employs the A’WOT methodology, a hybrid multi-criteria decision method combining the Strengths, Weaknesses, Opportunities, and Threats (SWOT) technique with the Analytic Hierarchy Process (AHP). Through expert group discussions, the study identifies and categorizes the factors affecting XR implementation into SWOT groups. Subsequently, the AHP methodology is employed to determine the relative importance of each factor by conducting interviews with a panel of sports and XR experts. The study’s findings, obtained through the A’WOT methodology, establish a ranking of the fundamental factors for successful XR implementation in Moroccan sports science institutes. The findings suggested that a strategic approach for implementing XR technology in Morocco needs to be driven principally by a combined approach based on the SWOT opportunities and strengths groups. The present study investigates the benefits, challenges and opportunities of XR technology in Moroccan sports science institutes based on the SWOT-AHP framework. The strengths and opportunities ratings based on XR The Moroccan Association perspectives are positively inter-preferred for XR technology. Thus, based on this research, the framework provided can be interpreted as a roadmap for supporting the development of the strategic implementation of XR technology in Moroccan sport science institutes, while providing more credible information for decision-makers in the overall process. An in-depth analysis of the findings enables us to conclude that the strategic implementation of XR technology in Moroccan sports science institutes has to be driven principally by the opportunities factors that could assist in overcoming the identified main weaknesses and threats, along with maximizing the strengths. Following these guidelines, decision-makers are expected to initiate a range of activities in order to establish the right external environment in which opportunities can be fully exploited to tackle the principal weaknesses and threats revealed by the analysis. This research provides strong evidence for XR deployment in the sense that it reflects the views of XR The Moroccan Association practitioners and researchers on XR technology.
xrm.ma/research-publication/

A New Hope; The Dawn of Computational Pathology

April 12, 2017, marked a revolutionary turning point in medicine. The United States Food and Drug Administration (FDA) granted de novo 510(k) clearance to the first whole slide imaging (WSI) system for primary diagnosis in surgical pathology. A product is regulated by the FDA as a medical device if its labeling, promotion, and/or use meets the standards of the Federal Food, Drug, and Cosmetic Act (Title 21 Code of Federal Regulations part 201, [h]), making it subject to pre- and post-marketing regulatory purview. The intended use decides the governing pathway while protecting public health. In stark contrast to radiology, whose digitization began in 1980, digital pathology has been lethargic, with many perceiving the lagging regulatory field as the main barrier to its deployment. Such a milestone is a testament to the tenacity of the Digital Pathology Association, to the strong evidence of safety, effectiveness, and noninferiority to the discordance rate of glass slides from the Philips IntelliSite Pathology Solution - the first WSI solution - and, of course, to the open-mindedness and forward thinking of the FDA regarding its implications for pathologists and patients. The first «system enables pathologists to read tissue slides digitally to make diagnoses, rather than looking directly at a tissue sample mounted on a glass slide under a conventional light microscope. » «Because the system digitizes slides that would otherwise be stored in physical files, it also provides a streamlined slide storage and retrieval system that may help make critical health information available to pathologists, other health care professionals, and patients faster. » (Alberto Gutierrez, Ph.D., 2017). Under those conditions, once-inevitable scenarios, such as seeking an expert second opinion, the hurdles of missing on-site pathologists, and the need to dispatch samples—a process that might take days or weeks depending on variables like distance, the sensitivity of the item, and the transportation mode—are eased. Once again, pathology is among the complex fields facing global health issues: a chronic shortage of pathologists, stress and burnout, and substantial workloads, e.g., 0.1 pathologists per 100,000 inhabitants in Africa. For more than a century, it has retained a vital function in diagnosing cancer - the 21st-century pandemic. But while pathology governs treatment decisions, patient care avenues, and oncology research, it is paradoxically the most vulnerable to inter- and intra-observer agreement issues. In short, digital pathology, virtual microscopy, or so-called "whole-slide scanning (imaging)", is meant to cope with today's pathology pressure by streamlining workflow, widening collaboration and telepathology, boosting diagnostic confidence, and serving educational purposes, yet unsurprisingly, new horizons have emerged! «Not only will it promote increased efficiencies and collaboration between pathologists, but it also opens a completely new dimension toward computational pathology, which aims to increase accuracies and ultimately enhance patient care. » (Russell Granzow, 2017).

Four keys to create supervised learning model

Supervised learning is a strategy in machine learning that enables a model to learn from data without being explicitly programmed. In other words, in supervised learning, the model tries to find the relationship between the "input" X and the "output" Y. Therefore, the first key to creating a supervised learning model is the dataset.
**Key 1: Dataset**
Having a labeled dataset is essential, including two important types of information: the target variable Y, which is what we want to predict, and the explanatory variables X, which are the factors that help us make predictions. Let's take an example: imagine we want our model to predict the weather (Y) based on factors like temperature, humidity, and wind speed (X). To do this, we gather a dataset with information from the past, where we already know both the weather outcomes (Y) and the corresponding factors (X). This dataset acts like a box of puzzle pieces. Each piece represents one of the factors, and the relationship between these pieces determines the weather. We can represent this relationship as a mathematical equation, like this: Y = F(X), where F represents our model. Therefore, the second key is the model.
**Key 2: Model**
The fundamental model in supervised machine learning is a linear model expressed as y = ax + b. However, the real world often presents nonlinear problems. In such cases, we explore non-linear models, such as a polynomial of degree two like y = ax² + bx + c, or even of degree three and beyond. It's crucial to understand that each model has parameters requiring adjustment during training. Consequently, the two remaining critical components are the cost function and the optimization algorithm.
**Key 3: Cost Function**
In machine learning, a cost function, also called a loss or objective function, quantifies the gap between the target and predicted values, signifying the model's error. The aim is to minimize this error to craft the most effective model.
**Key 4: Optimizer**
The optimizer forms the core of training a machine learning model, representing the strategy used to discover the parameter values that minimize the cost function. It plays a crucial role in fine-tuning the model for optimal performance.
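To make the four keys concrete, here is a small, hypothetical sketch in plain Python/NumPy: a toy dataset (Key 1), the linear model y = ax + b (Key 2), a mean squared error cost function (Key 3), and plain gradient descent as the optimizer (Key 4). The numbers and the learning rate are invented for illustration.
;;
# A hedged illustration of the four keys: dataset, model, cost function, optimizer.
import numpy as np

# Key 1: Dataset -- toy explanatory variable X and target Y (invented numbers).
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])   # roughly Y = 2X

# Key 2: Model -- a linear model Y = a*X + b with parameters a and b.
a, b = 0.0, 0.0

# Key 3: Cost function -- mean squared error between predictions and targets.
def cost(a, b):
    return np.mean((a * X + b - Y) ** 2)

# Key 4: Optimizer -- plain gradient descent adjusting a and b to reduce the cost.
learning_rate = 0.01
for step in range(1000):
    error = a * X + b - Y
    a -= learning_rate * np.mean(2 * error * X)   # gradient of the cost w.r.t. a
    b -= learning_rate * np.mean(2 * error)       # gradient of the cost w.r.t. b

print(f"a = {a:.2f}, b = {b:.2f}, cost = {cost(a, b):.4f}")
;;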

The nomad developer setup #2: infrastructure as code

In the first article, I shared a quick and easy way to access VScode from any browser. You still need to create a cloud provider account and set up a server. In this second article I will share with you a way to automate all the steps needed, from the moment you have created your account to using VScode in the browser. To do this, I am sharing a GitHub repo at the end of this article. It contains all the Infrastructure as Code (IaC) you need. IaC is a practice in software engineering, mostly on the DevOps side, that involves managing and provisioning infrastructure through code rather than manual processes. It allows for the automated deployment and configuration of infrastructure, enabling consistency, scalability, and version control for your infrastructure. The repository combines three very powerful tools: Packer, Ansible and Terraform.
- Packer is a tool to create machine images, avoiding re-installing everything every time you start an instance.
- Ansible is an automation tool that simplifies complex tasks like configuration management. In a simple YAML file (a playbook) you can install and configure your server(s).
- Terraform is an infrastructure as code tool that enables the provisioning and management of cloud resources using declarative configuration files.
Please check the README carefully: it lists the current limitations and will be updated as the repo evolves. In the next article I will add even more automation using a CI/CD (continuous integration and continuous delivery) pipeline based on GitHub workflows, to allow you to start and stop this infrastructure as you wish without accessing anything other than a web browser. Happy DevOps!
github.com/azieger/remote-workst...

Part 4/5: Research, Rants, & Ridiculousness: The Lighter Side of PhD Madness

PhD: the art of turning coffee, chaos, and code into a degree, one panic attack at a time.
- My machine learning model predicted I'd finish my PhD on time. Spoiler: Even AI has a sense of humor.
- Neurotoxicity research: figuring out if it's the toxins affecting the brain, or just the endless hours in the lab.
- Snake venom for drug discovery? Sure, because handling deadly snakes is less frightening than asking my advisor for a deadline extension.
- I told my computer to find a cure for snake bites. It opened a travel site to Antarctica. No snakes, no bites, problem solved!

"Supervised and Unsupervised Learning in 90 Seconds of Reading"

**Brief Definition:** Supervised and unsupervised learning are two fundamental facets of machine learning, each tailored to handle distinct types of data. In supervised learning, the machine learning algorithm is trained on a labeled dataset, where each data point consists of both input features and corresponding output labels. The goal is for the algorithm to learn the mapping from inputs to outputs based on these labeled examples. In unsupervised learning, the machine learning algorithm is trained on an unlabeled dataset to find hidden patterns, structures, or relationships within the data. Unlike supervised learning, there are no predefined output labels for the algorithm to learn from.
**Intuition 🙂:** In supervised learning, envision having a jigsaw puzzle featuring a picture of a dog, where each puzzle piece is labeled with its correct position in the completed picture. The model learns from these labeled examples, figuring out the relationships between the shapes and colors of the pieces and their correct locations. This process, often referred to as the training step, allows the model to internalize the patterns within the labeled data. After training, the model is adept at taking a new puzzle of a dog and precisely assembling it based on the knowledge acquired during training. Now, imagine you have a bag of puzzle pieces without a picture or labels, just a mix of colors and shapes. In unsupervised learning, the model explores the characteristics of the puzzle pieces without any predefined labels or information about the complete picture, identifying groups that share similar colors, shapes, or patterns. The model doesn't know what the complete picture looks like, but it discovers that certain pieces belong together based on shared features. These groups represent clusters of similar puzzle pieces. In this puzzle analogy, supervised learning entails constructing a model with labeled examples to tackle a specific task, while unsupervised learning involves the model autonomously uncovering patterns or relationships within the data without explicit direction.
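As a hands-on counterpart to the puzzle analogy, here is a small, hypothetical sketch using scikit-learn (an assumption; no library is named above): a classifier is trained on labeled points for supervised learning, while k-means groups the same points without labels for unsupervised learning. The toy data are invented for illustration.
;;
# Minimal sketch: supervised learning (with labels) vs unsupervised learning (without).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: two groups of 2-D points.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])
y = np.array([0] * 50 + [1] * 50)        # labels: the "picture on the box"

# Supervised: learn the mapping from inputs X to labels y.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels, just find groups of similar points.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
;;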

"Understanding Overfitting and Underfitting in a Quick 90-Second Read"

Overfitting and underfitting are two common issues in machine learning that affect the performance of a model. In overfitting, the model learns the training data too precisely, capturing noise and fluctuations that are specific to the training set but do not generalize well to new, unseen data. Underfitting, on the other hand, occurs when a model is unable to capture the underlying patterns in the training data, resulting in poor performance not only on the training set but also on new, unseen data. It indicates a failure to learn the complexities of the data.
**Analogy:** Intuitively, returning to the example of the student that we presented in the definition of the machine learning concept, we discussed the possibility of considering a machine learning model as a student in a class. After the lecture phase, equivalent to the training step for the model, the student takes an exam or quiz to confirm their understanding of the course material. Now, imagine a student who failed to comprehend anything during the course and did not prepare. On exam day, this student, having failed to grasp the content, will struggle to answer and will receive a low grade; this represents the case of underfitting in machine learning. On the other hand, let's consider another student who, despite having a limited understanding of the course, mechanically memorized the content and exercises. During the exam, when faced with questions reformulated or presented in a new manner, this student, having learned without true comprehension, will also fail due to the inability to adapt, illustrating the case of overfitting in machine learning. This analogy between a machine learning model and a student highlights the parallels between underfitting and overfitting. Just as a student can fail by not grasping the course or by memorizing without true understanding, a model can suffer from underfitting if it is too simple to capture the patterns, or from overfitting if it memorizes the training data too precisely. Striking the right balance between complexity and generalization is crucial for developing effective machine learning models adaptable to diverse and unknown data. In essence, this educational analogy emphasizes the delicate equilibrium required in the machine learning process.
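The same contrast can be shown numerically. Below is a small, hypothetical sketch: noisy points are fitted with a polynomial that is too simple (underfitting) and one that is far too flexible (overfitting), and the errors on the training data and on fresh data are compared. The data and polynomial degrees are invented for illustration.
;;
# Underfitting vs overfitting on toy data: compare training error and error on unseen data.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0, 1, 200)

def true_fn(x):
    return np.sin(2 * np.pi * x)

y_train = true_fn(x_train) + rng.normal(scale=0.2, size=x_train.shape)  # noisy training points
y_test = true_fn(x_test)                                                # clean "unseen" data

for degree in (1, 3, 15):   # too simple, reasonable, too flexible
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train error {train_err:.3f}, test error {test_err:.3f}")
;;
Typically the degree-1 fit is poor everywhere (underfitting), while the degree-15 fit has a tiny training error but a much larger error on the unseen data (overfitting).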

Grasping the concept of machine learning in just 90 seconds of reading

Machine learning is a branch of artificial intelligence that encompasses various methods relying on learning from data to solve problems such as prediction, classification, dimensionality reduction, etc. Learning from data means that machine learning systems can analyze patterns, extract insights, and make informed decisions without being explicitly programmed for a particular task. Instead of adhering to predetermined rules, machine learning methods adapt and improve their performance over time. The process involves training models, validating their accuracy, and testing their generalization to new, unseen data. Intuitively, we can envision the machine learning model as a student in a classroom. The teacher imparts knowledge to the student during what we refer to as the training step for the machine learning model. After the session, the student takes a quiz to solidify the concepts, representing the validation step for the machine learning model. Finally, the student takes a comprehensive final exam to test their understanding of the entire course. All of these stages occur gradually over what are termed epochs in the context of a machine learning model. In this analogy, each epoch corresponds to a complete cycle of the training, validation, and testing phases. It's like the student attending multiple class sessions, quizzes, and exams to reinforce and assess their knowledge. With each successive epoch, the machine learning model refines its understanding of the data, enhancing its ability to make accurate predictions or classifications in real-world applications. Just as a student becomes more adept through repeated study sessions, the machine learning model becomes increasingly proficient with each pass through the data.
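For readers who want to see the analogy in code, here is a small, hypothetical sketch: the data are split into training, validation, and test sets, the model makes several passes (epochs) over the training data, gets a quiz on the validation set after each pass, and sits the final exam on the test set at the end. The dataset and the choice of scikit-learn's SGDClassifier are assumptions made purely for illustration.
;;
# Sketch of training over epochs with a validation check each epoch and a final test.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Toy dataset, split into training, validation, and test sets.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = SGDClassifier(random_state=0)
for epoch in range(5):                                   # each epoch = one full pass over the training data
    model.partial_fit(X_train, y_train, classes=np.unique(y))
    print(f"epoch {epoch}: validation accuracy {model.score(X_val, y_val):.2f}")

print(f"final exam, test accuracy: {model.score(X_test, y_test):.2f}")
;;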

The nomad developer setup #1: A guide for beginners

Fun fact: I first wrote this article on another platform while working on Bluwr in a train. No matter the distance, it is always nice to be able to work from anywhere you want. All you need for this setup to work is access to a web browser. In this article I will share part of the setup that I am using. It is the first of a series in which I will cover the whole setup. This first article is about how to set up vscode to work from any device with a web browser. Visual Studio Code is a text editor by Microsoft. It can be customized with an almost infinite number of plugins. We will be using vscode in a client/server mode. The vscode server will be running on a virtual machine hosted by a cloud provider; the client can be any web browser. We will use the browser to connect to the vscode server. The interface inside the web browser will be identical to the standard vscode interface, and you will be able to edit any file on the virtual machine. So first you need a host. Any cloud provider will do; the only things you need are an IP address and a user that can ssh to the host. Side note: I almost exclusively use ssh keys, never user/password, to connect to cloud hosts, as it is way more secure. Once the ssh session has started, install docker if it is not already available on the host, then execute the following command:
;;
docker run -d \
  --name=code-server \
  -p 8443:8443 \
  -e PASSWORD="1234" \
  ghcr.io/linuxserver/code-server
;;
We could basically end this article right now. However, there are a few more things I want to talk about. These points took me a bit of time to figure out and I thought I'd share them with you:
1. How to make sure you don't have to re-install all your plugins every time you start a new code-server instance
2. How to make sure your settings are stored, so you don't have to manually re-enter them every time you restart your docker container
3. How to set a custom working directory where all your code will be stored
These are all technically achieved using the same principle: bind mount a folder of your host to a dedicated folder in the docker container. If you look at the container folder structure, you can see that all plugins are installed in the /config/extensions folder. Vscode configuration in the container is stored in /config/data/User/settings.json. If you have been using vscode for some time and would like to keep the same configuration, you can take that existing settings file and put it somewhere on your virtual machine. Finally, to get a defined workspace, you can bind mount the folder where you usually put your code to the one that is dedicated to it in the container. The full command is:
;;
docker run -d \
  --name=code-server \
  -p 8443:8443 \
  -e PASSWORD="1234" \
  -v "/home/username/vscode_extensions:/config/extensions" \
  -v "/home/user/vscode_settings:/config/data/User/" \
  -v "/home/user/workspace/:/config/workspace" \
  ghcr.io/linuxserver/code-server
;;
To save money, I only start and pay for cloud resources when I need them. Of course, I don't repeat all these steps and re-install all the tools I need each time I start a new virtual machine. I use a packer/ansible/terraform combination to create a snapshot that I can use as a base image each time I create a new host. This will be the subject of my next article. Now, working from anywhere as a digital nomad is really nice and convenient, but it does not mean you should work all the time. I made this setup originally only to be geographically free, and I still make it a point to have a healthy work/life balance.
I have many hobbies and would not trade them for more hours of coding.

Automation existed long before the advent of AI.

Automation, the process of leveraging technology to perform tasks without human intervention, has a rich history that long precedes the rise of artificial intelligence. The textile industry, in the early 1800s, witnessed the introduction of automated looms that could weave fabric without constant manual operation. Before the Jacquard loom, weaving complex designs required workers who manually operated looms for long hours. The Jacquard loom laid the foundation for the development of modern computing concepts like binary systems and programming, as its punch cards served as an early form of programming instructions. The mid-20th century brought forth the development of programmable computers. These machines facilitated automation by executing predefined instructions, enabling the automation of complex calculations, data processing, and control systems in various industries. While AI has undeniably transformed automation, introducing powerful capabilities such as machine learning and cognitive reasoning, it is crucial to recognize that thoughtful application remains key. When used judiciously, AI significantly enhances automation and innovation, ultimately leading to a promising future.

How Bluwr is optimized for SEO, Speed and Worldwide Accessibility.

TL;DR: Bluwr is fast, and writing on Bluwr will help you get traffic. We made some unusual choices while building Bluwr. In an age where front-end web development means JavaScript frameworks, we took a *hybrid*, somewhat old-school approach. Our stack is super lean, fast, and optimized for ease of maintenance and for search engines. ---- Most of the website is served statically through Python Jinja templates, and we use JavaScript only when interaction is needed; for these cases we use Vue.js, 100% homemade vanilla JS, and jQuery. For looks we use UIkit and in-house custom-made CSS. These choices give us a lightning-fast website and bring great benefits for our writers. Because most of Bluwr appears as static HTML, articles appear first, readers never have to wait for them to load, and search engines have no difficulty indexing what's on Bluwr.com. This makes everything you write on Bluwr easier to find on the internet. It also means that Bluwr.com loads fast even on the worst of connections, which matters because even a slight delay in loading can significantly reduce the chances of your article being read. Our goal is to make Bluwr accessible to anybody on the internet, even on a limited 3G connection.
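To illustrate the general idea of server-side rendering (this is not Bluwr's actual code), here is a minimal, hypothetical sketch using Flask and Jinja2: the article text is placed directly into the HTML before it reaches the browser, so readers and search-engine crawlers get the full content without executing any JavaScript. Flask, the route, and the template are assumptions made purely for illustration.
;;
# Minimal illustration of server-side Jinja rendering (not Bluwr's actual code).
# The article is embedded in the HTML response, so crawlers see it without JavaScript.
from flask import Flask
from jinja2 import Template

app = Flask(__name__)

ARTICLE_TEMPLATE = Template("""
<!doctype html>
<html>
  <head><title>{{ title }}</title></head>
  <body>
    <h1>{{ title }}</h1>
    <p>{{ body }}</p>
  </body>
</html>
""")

@app.route("/article/<slug>")
def article(slug):
    # In a real application the article would come from a database; here it is hard-coded.
    return ARTICLE_TEMPLATE.render(
        title=slug.replace("-", " ").title(),
        body="The full article text is rendered on the server.",
    )

if __name__ == "__main__":
    app.run()
;;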

Welcome to Bluwr.

We are glad to see you here, we promised that Bluwr would be released on the 13th of November 2023 and we delivered. Bluwr is unique, we took inspiration from times far before the internet. Bluwr is a bridge between the past and the future, a conduit for thoughtfulness and inspiration. We built it with maturity and foresight, striving for beauty and perfection. A text-based platform for times to come, the past and the future seamlessly merging into something greater. "" Think Forward. "" - Bluwr.
bluwr.com