Techbio x Africa: Early Movers - part3
Early Signs, Real Ventures
It's one thing to say the infrastructure and talent are here, but the real test is whether they yield actual companies.
And the signs are already showing. A new class of TechBios is taking shape, raising money, and doing the first thing every good TechBio does:
… drum roll, you should know it by now…
Building proprietary datasets.
The support system is forming too. OneBio, a Cape Town venture studio, closed a $47M Series A to back founders at the biology–technology edge.
Villgro Africa in Nairobi has already incubated 40+ health and life science startups and unlocked $18M in follow-on capital. These strides stimulate the TechBio ecosystem and, in part, help close Africa's translation gap with venture tools.
And the startups coming out of this wave are telling. I thought I would share my personal picks here: startups I can map onto the playbook trajectory.
Yemaachi Biotech in Ghana raised $3M from YC, Tencent, and LoftyInc to build the world's most diverse cancer knowledge base, sequencing samples across the continent to power precision oncology. As founder Yaw Bediako put it:
"We're looking at trying to understand cancer in the African diaspora - African American, Black British, and continental Africans - the first initiative of its scale. You can't say you're studying a disease if you don't include the most diverse population on the planet, which is the Black population."
BioCertica in South Africa, backed by Pronexus and the Gates Foundation's I3 program with a $2.2M seed, runs consumer genetic tests but is really playing the long game of building the first African polygenic risk database.
And Bixbio, part of OneBio's portfolio and an Illumina Accelerator graduate, assembled the largest reference dataset ever from Southern Africa, nearly 400 high-quality genomes across eight ethno-linguistic groups.
Even newcomers like Pandora Biosciences are starting on the same path, building chronic disease datasets designed for drug discovery.
And just this summer, the signal got even stronger. In June 2025, Revna Biosciences, a Ghanaian precision medicine startup, announced a landmark partnership with AstraZeneca. Within months, EGFR (a gene coding for a cell-growth protein) biomarker testing was integrated into Ghanaian cancer centers, oncologists were trained in precision protocols, and one of AstraZeneca's targeted therapies was rolled out to lung cancer patients.
For a sub-Saharan market that has historically had near-zero access to this kind of precision oncology, that's nothing short of historic.
As Revna's CEO Dr. Derrick Edem Akpalu put it:
"This collaboration exemplifies how a synergized biomedical ecosystem such as RevnaBio's can help address long-standing institutional voids that have limited access to advanced molecular diagnostics and targeted therapies in this region."
It's a textbook case of a TechBio going from data and diagnostics to being a direct bridge for global Pharma into Africa.
None of this is random. Data-first plays are always the starting point of TechBio.
In the West, consumer genomics followed the same arc: 23andMe built a database of 15M genomes, went bankrupt, and still got snapped up in 2025 by Regeneron for $256M because Pharma wanted the dataset.
Tempus, sitting on 20 petabytes of oncology data, signed a $160M licensing deal with Recursion to train AI models for biomarker discovery and patient stratification.
The lesson is obvious: even before a molecule is in sight, the data itself is valuable enough to Pharma. Africa's first TechBios are now running that playbook and they're doing it from the most diverse human dataset on the planet.
The Stakes for Africa x TechBio
Case Study: 54gene - The Right Start, The Wrong Turn
54gene was supposed to be Africa’s genomics moonshot. Founded in 2019 by Dr. Abasi Ene-Obong, the company set out to fix the glaring gap where less than 3% of global genomic data came from Africans despite the continent holding the greatest genetic diversity on earth. Backed by Y Combinator, Adjuvant Capital, and Cathay AfricInvest, it raised $45M across three rounds and quickly became the poster child for African TechBio.
The model at first was exactly what you’d expect from a good TechBio: start with the data. 54gene partnered with 10 of Nigeria’s largest hospitals, built a biobank that grew past 100,000 patient samples, and focused on high-value cohorts like cancer, cardiovascular disease, diabetes, and sickle cell.
This was the right first play: position as an enabler for hospitals and research centers, pile up proprietary datasets, and generate revenue through paid Pharma collaborations. In other words, service-led first, platform-led later — the same arc followed by U.S. genomics pioneers like 23andMe.
Then came COVID. 54gene pivoted into diagnostics, scaling mobile labs and at one point driving Nigeria’s daily testing capacity from 100 to over 1,000. Revenues spiked — over $20M from COVID testing — but the pivot also pulled the company away from its core playbook. Instead of doubling down on turning its biobank into translational insights with AI, it spun up Seven Rivers Labs, a costly diagnostics arm. The bets didn’t pay off.
By 2022, as COVID demand collapsed, 54gene was caught between a fading diagnostics business and a stalled genomics mission. Layoffs, valuation cuts, and boardroom fights followed. In 2023 the company shut down operations; by 2025, its assets, including the biobank of 100,000 Nigerian genomes, were up for sale at just $3M, before a Lagos court froze the deal amid lawsuits between founder and investors.
The story matters because it shows how fragile the trajectory can be.
Imagine if instead of diagnostics, 54gene had invested its datasets into AI models to map dosage differences for African populations, identify new drug targets, or partner on stratified clinical trials. That’s the road from platform to assets, the road that makes a TechBio a unicorn.
Dr. Ene-Obong seems to agree. His new company, Syndicate Bio, is now doubling down on the same thesis but with AI built in from day one, partnering on cancer genomics in Nigeria and aiming to turn Africa's diversity into global drug discovery.
It’s the continuation of the playbook 54gene set in motion, but with the missing piece restored.
TechBio x Africa Manifesto: The Edge - part2
In Africa, The Bottleneck Was Always Here, And Now There Are Real Drivers For Change
Translation is now recognized as the great bottleneck of drug discovery worldwide. But in Africa, it has always been the bottleneck.
Not in developing drugs, but in applying them.
Most medicines were discovered and validated elsewhere, then imported with little understanding of how African populations would metabolize or respond to them. The result is a structural mismatch: Africa accounts for 18% of the global population and 20% of the disease burden, yet fewer than 3% of clinical trials take place on the continent, most of them concentrated in South Africa and Egypt.
This gap is not trivial. Drug absorption, distribution, metabolism, and excretion (the ADME framework) are heavily influenced by genetic variants, especially in liver enzymes like CYP-450, which remain poorly characterized in African populations.
In theory, Africa's extraordinary genetic diversity should have been a global advantage for understanding variability in drug safety and efficacy. In practice, it was ignored.
As Professor Kelly Chibale of the University of Cape Town has argued:
"If you really want to have confidence in a clinical trial, it must start in Africa. Why? If it works in Africa, there's a good chance it'll work somewhere else, because there is such huge genetic diversity."
Then came COVID-19. The pandemic was a turning point, mobilizing governmental, NGO, and international funding to build sequencing labs, train scientists, and set up data infrastructure.
In my opinion, the Africa Pathogen Genomics Initiative (Africa PGI) became emblematic of this shift.
The first 10,000 SARS-CoV-2 genomes from Africa took 375 days; the next 10,000 just 87 days; the following 10,000 only 24 days. Today, all 54 African countries have sequencing capacity, and African scientists identified two of the world's five variants of concern.
For the first time, Africa showed it could operate at global pace when given the tools.
These investments were catalytic and revealed what had long been latent:
Africa is not just a recipient of medicines but a potential engine of translational science.
The infrastructure layer, built with public and philanthropic support (like the Bill and Melinda Gates Foundation), is now enabling a broader ecosystem: regulatory frameworks like the Africa CDC and the African Medicines Agency, scientific hubs such as H3D in Cape Town, and new hardware capacity supported by corporates like Thermo Fisher's Centre for Innovative Research in South Africa.
From here, the snowball is rolling. What began with genomics is already extending across the translational stack. In Ghana, new medicinal chemistry capacity has positioned the country as only the second on the continent (after South Africa) able to run early-stage compound design, linked into the pan-African Drug Discovery Accelerator.
This is big, because the continent can now de-risk potential assets.
Pharma is of course watching closely. Roche's African Genomics Program is sequencing tens of thousands of African genomes through local biobanks. Sanofi's partnership with DNDi shows how compounds de-risked in Africa can enter global pipelines.
And demographics strengthen the logic: Africa's population is set to nearly double by 2050, while non-communicable diseases like diabetes, cardiovascular disease, and cancer will become leading causes of death by 2030, the same conditions driving Pharma pipelines worldwide.
The Continent Is Full Of Bright Tech Minds
But data infrastructure alone is not enough; translation also depends on whether there is talent capable of making sense of the data.
COVID revealed this too: it was an African-born AI company (from Tunisia), InstaDeep, that helped BioNTech build the Early Warning System able to flag >90% of WHO-designated SARS-CoV-2 variants an average of two months before their official classification.
The company had already been working with BioNTech on personalized cancer vaccines, and post-acquisition it continues to run as an independent AI lab powering BioNTech's drug discovery, from improving AlphaFold-like protein folding in immunology to designing next-generation mRNA cancer vaccines.
The $700 million acquisition in 2023 was not only the largest AI deal outside the U.S. at the time, but also a watershed moment for the continent. As co-founder Karim Beguir put it in a recent podcast interview:
"our initial motive was to prove that young Tunisians, young Africans could innovate and compete at the highest level"
The significance goes beyond one company.
It validated Africa's AI talent density, which is being built from the ground up through grassroots, community-led efforts. Initiatives like Masakhane, a volunteer-driven movement advancing natural language processing for African languages, or Deep Learning Indaba, cited globally as a model for how to mobilize a continent around machine learning, are emblematic of this bottom-up energy.
I saw it myself at Applied Machine Learning Days Africa 2024 in Nairobi, where more than 3,000 participants gathered across three days, mostly researchers, innovators, and students taking responsibility for local problems and showing how AI can answer them.
This effort-led culture is now being matched with hardware infrastructure too. Microsoft has launched its first Azure cloud region in South Africa, enabling GPU-grade compute to stay on the continent, while Nvidia and Cassava are building an AI factory in Johannesburg, with expansions planned for Kenya, Egypt, Morocco, and Nigeria.
Techbios x Africa : The manifesto part 1
Closer to Humans: The Next Big Opportunity in TechBio
Hitting Eroom's law in translating assets to clinics
If Moore's law promises exponential gains from technology, Eroom's law (Moore spelled backwards) reminds us that drug discovery has stubbornly resisted that curve. For decades, the cost of bringing a new drug to market has roughly doubled every nine years, even as compute and data scaled exponentially. AI-driven TechBios were supposed to break this trend: accelerate discovery, lower costs, and flood the pipeline with new medicines. In its early days, Recursion was aiming for something like 100 drugs in 10 years.
And to some extent, they have delivered. Programs from Insilico or Recursion show how AI can compress preclinical timelines from five years down to 18–30 months. Costs are lower, throughput is higher, and in silico tools have expanded the space of molecules Pharma can explore.
But the reality is that most AI-first drugs are still aimed at well-known targets, and once they reach the clinic, they face the same bottlenecks as traditionally developed drugs. Phase II proof-of-concept success rates hover at ~40%, unchanged.
Back to Eroom's law in action: the bottleneck has shifted downstream. The graph from Speedinvest tells the story nicely.
Early discovery (target validation, compound screening, lead optimization) accounts for ~25% of costs; the bulk of time and money is lost in Phase II and Phase III, where failure rates spike and costs per molecule can each exceed 20–25% of the total.
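For a sense of scale, here is a minimal back-of-the-envelope sketch of what a nine-year cost doubling compounds to. It is purely illustrative: the baseline figure is a placeholder, not a sourced number.
;;
# Illustrative arithmetic for Eroom's law: cost per approved drug
# roughly doubling every nine years (baseline below is hypothetical).
baseline_cost_busd = 0.2   # assumed cost in $B at year 0 (placeholder)
doubling_period = 9        # years per doubling

for years in (0, 9, 18, 27, 36, 45):
    cost = baseline_cost_busd * 2 ** (years / doubling_period)
    print(f"after {years:2d} years: ~${cost:.1f}B per approved drug")
;;
Under that assumption, costs multiply by 32x over 45 years, which is why even large gains in early discovery barely move the total.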
Functional Data Is the Missing Piece
Why? Because our translational models are still inadequate proxies for human biology. Drugs fail not because they weren't optimized enough in silico, but because they don't behave as expected in humans, showing weak efficacy, unexpected toxicity, or adverse effects that outweigh benefits.
At the same time, regulators are now pushing for more personalized approaches: genotyping, deeper disease phenotyping, and companion biomarkers to better stratify patients.
That means the next opportunity isn't about yet another molecule generator. It's about building the translation layer: generating functional, human-relevant data at scale.
Two pillars stand out:
Bench side. New experimental systems like organoids and organ-on-a-chip can capture human biology more faithfully than animal models, giving us early readouts of drug response in tissue that resembles real patients, and yielding high-dimensional functional data (such as high-content cell imaging).
Bedside. Richer molecular profiling of patients to capture complete responses to interventions across all biological layers. Omics data reflects the physiological response from the gene expressed, to the protein inhibited, to the final metabolite produced.
This is the frontier TechBios have yet to tackle.
Proprietary datasets from in vitro, in vivo, or in silico work aren't enough, because by design they remain at a distance from real human complexity.
A reminder: the demand is still there, as the patent cliffs of 2030 are not going anywhere.
The Funding Gap: Bench Traction, Bedside Wide Open
The common denominator in TechBio is always the same: proprietary datasets.
On the bench side, we're already seeing how this can play out. Just last month, Parallel Bio raised $21 million to push forward a new model for immune drug discovery.
Their platform, combining organoids and AI, is set to generate massive proprietary datasets of immune responses. This 'immune system in a dish' lets them simulate how drugs behave across populations and verify candidates in vitro before they ever enter the clinic. The company dates back to 2021, but the recent Series A shows they are gearing up for growth, and it signals serious capital interest in answering the translation problem.
The story on the bedside is very different. Here, the prerequisite is well-characterized patient data across omics layers like genomics, proteomics, and metabolomics, plus deep clinical phenotyping. Not really the type of data you can engineer in your lab with enough wetware and hardware.
Pharma companies guard their clinical trial data as part of their assets. Biobanks have the scale needed but primarily share it with research partners and academics, or monetize it directly by selling access to screened samples and metadata at high prices. They are funded by governments and charitable organizations around projects with defined partners, within consortia that enjoy privileged access.
Hospitals typically generate only small, fragmented cohorts of a few hundred patients, often disease-specific and far from the scale needed to train robust models.
And once TechBios push into later stages like preclinical or Phase I, costs spike: recruiting patients, managing trial sites, and running protocols are all tailored to big Pharma economics.
In the West, shrinking patient pools for many chronic diseases add yet another barrier, driving costs further up.
This imbalance explains why most visible TechBio innovation so far has come from the bench. Benchside players like Parallel Bio are proving you can generate your own data and own the feedback loop.
On the bedside, by contrast, barriers remain high and that leaves the space wide open.
The real question is not if bedside innovation will emerge, but where.
And it may well be that the answer lies outside the traditional Pharma hubs.
TechBios: The Playbook
The SaaS Playbook Enters Pharma
At the start, TechBios bore the heavy upfront costs of architecture design, large-scale data acquisition, massive training runs, and inference, all to learn new principles in biology and deliver them as platforms Pharma clients could use for better drug design.
This unlock was driven by compounding forces. On the tech side, models improved as they scaled in size and input, while compute and storage costs fell (Moore's Law at work). On the bio side, labs and instruments achieved higher throughput, producing exponentially more data at lower cost - the Carlson Curve in genetics being the best-known example. (Sequencing your whole genome cost ~$10 million in 2007; ten years later, it was under $1,000.)
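As a rough sanity check on those two figures (a sketch using only the numbers quoted above), that drop implies the cost of a whole genome was halving roughly every nine months over that decade, far faster than Moore's Law:
;;
import math

# Implied halving time of sequencing cost, using the two figures above.
cost_2007 = 10_000_000   # ~$10M per genome in 2007
cost_2017 = 1_000        # under $1,000 ten years later
years = 10

fold_drop = cost_2007 / cost_2017        # ~10,000x cheaper
halvings = math.log2(fold_drop)          # ~13.3 halvings over the decade
months_per_halving = 12 * years / halvings
print(f"{fold_drop:,.0f}x cheaper, a halving every ~{months_per_halving:.0f} months")
;;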
On the demand side, TechBios emerged at a time when the status quo relied on rule-based computational methods grounded in rigid theoretical models. These could only handle a limited set of parameters, making it difficult to experiment broadly and ultimately constraining R&D pipeline output. Put in perspective, the stakes of this inefficiency are massive: 69 blockbusters will face patent cliffs by 2030, putting around $236 billion at risk.
As Manuel Grossmann, founding partner of Amino Collective (a health x bio fund in Europe), notes:
"The TechBio space benefits from two fundamental tailwinds: technological advancement and market demand."
For tech investors, the story clicked. These companies weren't tied to one risky therapeutic bet; they looked like horizontal software platforms that could scale across the entire industry. TechBios offered Pharma innovation with simpler unit economics: clean, recurring revenues and faster adoption curves. As Cradle, which uses language models for protein design, puts it:
"One annual software license. No hidden fees."
No surprise then that capital rushed in. In 2021, right at the cusp of this wave, VC investment in TechBios hit $2.4 billion, with mega-rounds north of $100M backing the promise of programmable biology.
… Opposed To The Longstanding Asset Deal
Until then, most real Pharma innovation was coming from a fundamentally different breed of companies, with a much narrower focus and longer time horizons to product-market fit.
Biotech companies select a well-studied biological target, develop a molecule against it, and march it through the clinical gauntlet. Their path to value creation is very iterative, reducing uncertainty at every step and focusing where prior knowledge gives them a fighting chance.
Their revenues are therefore less predictable, making them asymmetric bets that require incredibly specialized knowledge and experience.
Headlines in the domain are hence quite binary. You get the roughly $1 trillion in market cap added to Novo by its GLP-1 drugs, or the failures of expected blockbusters for Alzheimer's.
Pharma companies are in that sense experts at M&A deals to power their innovation; an estimated 65% of their revenue comes from these operations. The deals made here are often quite substantial: to take CAR-T, a field going back to the 2010s, Roche bought Poseida Therapeutics, a San Diego-based CAR-T company, for US $1.5 billion in November last year.
… Moving into Co-Development And Blurring The Lines
As the TechBio field matured, one lesson became clear: benchmarks alone aren't enough.
Validation metrics carefully crafted to showcase model performance get the initial traction, but Pharma ultimately values assets, and without them TechBios struggle to show true impact.
The economic logic makes the difference obvious. A pure platform play might reach a few hundred million in enterprise value. But the real money in this industry sits with assets: Pharma's trillion-dollar market caps rest on drugs that make it through the clinic. Without assets, TechBios miss the home run and risk falling outside the venture playbook entirely.
This is what pushed the industry toward co-development. Instead of selling platforms as tools, TechBios began striking deals that shared both risk and upside: upfronts, milestone payments, royalties. A recent example is Creyon Bio signing a deal with Lilly worth up to $1 billion in milestones.
As Manuel Grossmann of Amino Collective puts it again:
"Focusing purely on providing tools as products or services can often be challenging, since the exit potential tops out in the low hundreds of millions - often misaligned with the VC model."
This is where the difference between TechBios and Biotechs gets blurry. As the former start developing their own drugs, running validation, toxicity, and even clinical studies to out-license as assets, the latter become more and more tech-enabled, building with open-source models from the industry like RosettaFold.
The TechBio Playbook Has Emerged
It starts with proprietary data. On top of that comes the platform. This is where raw data turns into usable insight. In Recursion's case, it's the RecursionOS, an operating system for biology that fuses automated labs with ML models to map complex biology. That's what Pharma pays for. The economics here look like $150M upfronts, R&D milestones, tiered royalties, exactly the Roche and Genentech partnership structure. At this stage, platforms prove they can de-risk discovery for others.
But the real prize sits in assets. Once the platform works, you push it into your own drug programs: new targets, new molecules, lead optimization.
This is where TechBios flip into biotech economics. Out-licensing assets to Pharma brings upfronts plus large milestone packages, and potentially royalties if the drug hits the market. It's higher risk, but it's also where exits climb from hundreds of millions into the billions.
That's the sequence: data → platform → assets.
TechBio, A few definitions
Why Am I Writing This?
I did not come to TechBio as a distant observer but grew into it. When I was studying life sciences engineering, the early signs of "software eating bio" were just starting to appear. Computational tools were making dents in how biology was done, and for me it was impossible not to be fascinated.
Fast forward a few years, and I am now an operator inside a TechBio startup (shameless plug), leading AI development. That vantage point is really a strange mix. Some days I get swept up in the hype, convinced the next model drop is going to change everything; other days, the scientist in me wants to push back, to ask for proof, for data, for translation into the clinic. Balancing those two minds, the early adopter and the skeptic, is hard.
But it's also what makes TechBio such a fascinating space to build in.
And then there's Africa.
Coming from the diaspora, I studied and trained abroad, in the West, where most of the breakthroughs and AI-driven advances in drug discovery were happening.
But I kept asking myself: what about here? What about my continent?
At first, it felt like we were always on the receiving end of innovations born elsewhere. But as I dug deeper, I realized something: the very bottlenecks I was seeing firsthand inside TechBio, like the gaps in translation and the missing data closer to humans, are exactly where Africa holds an unfair advantage.
That's what this deep dive is about. Not a hype piece, not a catalog of every new startup, but an attempt to map the playbook, show where TechBio has already delivered, and point to the next frontier
→ one that may well be written in Africa.
If you're an investor, I want you to come away with clarity on what makes a TechBio defensible, where the real opportunities lie, and why the continent is positioned for outsized returns.
If you're a scientist, founder, or operator, you'll find the logic of the playbook, examples of what works (and what doesn't), and maybe a spark for your own next venture.
The full story starts just ahead. But first, let's look at the latest wave, AI agents and what they tell us about how fast new technology moves from silicon into cells.
TL;DR
TechBio has gone from hype to playbook: data → platform → assets.
The next bottleneck is translation: generating data closer to humans.
Africa holds the unfair advantage to solve this, thanks to its diversity, newly acquired infrastructure, and emerging research ecosystem.
Companies are already being built on this frontier, clear venture opportunities exist, and there are examples of exits; more investors should catch up.
Not TechBio Yet But Another Reminder Pharma Can't Escape the Tech Cycle
If you want to know where the next disruption in pharma will show up, follow the broader tech cycle. Every new wave of technology now leaves its mark on the industry. In the age of Tech x Bio, there's nonstop traffic between silicon and cells: cloud, machine learning, robotics and now, AI Agents.
I see this almost daily. My feed is flooded whenever a new model drops or a product launches. At first it feels like a headline meant only for the tech crowd. But give it a few months, and suddenly that "just another AI update" is wired into pharma workflows, with Paul Hudson, CEO of Sanofi, giving a full interview to McKinsey on how transformative it is for the industry. One thing is clear: whether in discovery, trials, or manufacturing, the two domains have become inseparable.
A nice review straight out of MIT helps bring perspective.
The track record of YC (the famous startup incubator from San Francisco) makes this pattern visible. They were early to back today's main players when skeptics thought they had it figured out. Companies like Ginkgo Bioworks and Atomwise (more on them later) proved computation could be foundational to biotech and Pharma. Now YC is backing AI Agent startups, showing once again how quickly a new stack of technology crosses into Pharma.
And if you thought leaders like Hudson were just posturing as "tech-savvy" with hyped tools, consider Benchling. One of the most established TechBio incumbents, it recently acquired Sphynx, an AI-agent startup focused on streamlining hypothesis generation and analytics in discovery. By weaving these capabilities into its stack, Benchling reinforced that this isn't a passing experiment, it's another layer becoming part of the system.
Now, let's be clear. AI Agent companies are not "TechBios" in the sense of this deep dive. They are, for now, tools or orchestration engines that Pharma teams can plug in to augment their workforce and automate tasks once dauntingly manual (and there are many, sometimes not the ones you expect… like procurement). They represent the kind of technological spark that helps scale approaches to well-known problems in drug discovery.
And while every new wave of technology makes a dent in Pharma, the hundred-billion-dollar unlocks usually lie elsewhere, closer to solving the big bottlenecks of how drugs are discovered, tested, and developed. That's where TechBios come in, and where we'll turn next. For now, think of this section as the apéro: the first cracker to show how the rules of the game have changed.
How TechBios Create (and Capture) Value
If you perplexity (we don't "Google" anymore in the age of AI) the definition of TechBio, you'll either get flooded with abstract jargon that means little if you're not steeped in the field, or a description so simple it could apply to almost anything. Neither helps much.
A more pragmatic lens is to look at TechBios through their value proposition. And this is where it gets interesting. TechBios have been around for over a decade, and in that time their offerings and therefore their positioning in the industry have shifted dramatically.
AI: The fallacy of the Turing Test
The Turing test is simple to understand. In a typical setup, a human judge engages in text-based conversations with both a human and a machine, without knowing which is which, and must determine which participant is the machine. If the judge cannot reliably tell them apart based solely on their conversational responses, the machine is said to have passed the test and demonstrated convincing human-like intelligence.
This is convenient: it perfectly avoids facing the hard questions, such as defining intelligence and consciousness. Instead, it lays out a basic, naive test founded on an ontological fallacy: the fact that something is perceived as something else does not make it that thing.
The most evident critique of the Turing Test is embedded in the fundamentals of Machine Learning itself:
- The model is not the modeled. It remains an approximation, however precise it is. A simple analogy makes the ontological fallacy clear. It's like going to a magic show, seeing a table floating above the ground and believing that the levitation really happened. How many bits of information separate a real human from a chatting bot? Assuming the number is exactly 0, without any justification, is an extraordinarily naive claim.
Interestingly, the Turing Test also greatly fails at defining so-called super-intelligence. A super-intelligent machine would evidently fail the test by simply providing super-intelligent answers. Unless it decides to fool the experimenter, in which case it could appear as anything it desires, rendering the test meaningless.
Regarding modern LLMs, the veil is already falling. LLMs have quirks, like an overuse of em-dashes, a strange feature that is indicative of something potentially pathological in the way the models are trained. These strange dashes would have been expected if a majority of people were using them. However, it so happens that hardly anyone knows how to find them on their keyboard. This proves that LLMs are not following the manifold of human writing and suggests the existence of other biases.
Finally, embedded inside the promotion of the Turing test is often a lazy ontological theory of materialism, which stipulates that consciousness is not fundamental but a byproduct of matter, often negating its existence altogether: it's not that consciousness can be faked, or that it is the result of computations; the understanding is that consciousness does not exist. It is an illusion that takes over the subject of the experience. Again, a theory of convenience, based on little justification, that produces a major paradox:
Who is conscious of the illusion of consciousness?
What’s new in Bluwr version 1.4?
Bluwr keeps evolving to enhance your reading, sharing, and publishing experience. Here's what version 1.4 brings:
==**Four Major New Features to Discover**==
__1– Series: Organize Your Articles into Collections__
Do you publish regularly around the same topic? The new Series feature lets you group your articles into thematic, coherent collections. Whether it's a documentary project, a serialized fiction, or a journal, you can now offer your readers a structured and fluid experience.
__2– Two New Reading Themes__
Bluwr now includes two new visual modes, alongside the existing Mediterranean Sea (default) and Vintage Newspaper (classic printed-paper feel):
Comfort: designed to reduce blue light exposure while remaining readable even under bright daylight.
Night Mode: perfect for reading in the dark without disturbing others nearby.
You can switch between themes any time from your profile settings.
__3– Persistent Login Across Devices__
The login bug has been fixed. Your session will now remain active—even when switching browsers or devices. That means you can now use Bluwr on your phone just like an app, without needing to log in every time.
__4– General Improvements and Fixes__
Beyond these visible updates, this version also includes interface and usability improvements for a smoother, more intuitive navigation experience.
Try out the new features today. The Bluwr team continues to refine the platform—version after version.
5 things you can do with Bluwr's new Sharing QR Codes
What are Bluwr Sharing QR Codes?
Bluwr Sharing QR Codes are quick-access codes generated when you click the share icon on any article or the information [i] icon on a user profile within Bluwr. When scanned, these QR codes instantly direct your device to the respective article or user profile—removing all barriers and making sharing as smooth and immediate as possible.
==**Five Practical Uses for Bluwr Sharing QR Codes**==
__1-Effortless Article Sharing__
Reading something a friend would love? Instead of copying links or searching for them in messaging apps, just have your friend scan the article’s QR code right off your screen. They’ll have instant access to the content, wherever you are—no email or messaging apps required.
__2-Personal Branding Tool__
Elevate your networking: allow potential clients, recruiters, or collaborators to scan your profile's QR code and immediately see your credentials and expertise. You can even display the code on business cards or signage in your office, or print a dedicated QR code linking to an article that highlights your experience and services.
__3-Enhance Presentations and Posters__
Boost engagement at events or talks by displaying a QR code that links to your speaker profile, more of your writing, or supporting materials. Attendees interested in your work can scan to access detailed bios, summaries, or extended resources—all with a single scan.
__4-Streamline Conferences and Events__
Organizers can reduce printing costs and simplify information access by distributing schedules, speaker bios, and session abstracts as QR codes on programs, posters, or badges. For example, a session listing might feature the speaker’s name, topic, and a QR code that links to their biography and full session details, putting comprehensive event info at every attendee’s fingertips.
__5-Smarter Book Sample Distribution__
Publishers and indie authors can host free book samples on Bluwr, leveraging its strong online presence. Instead of printing numerous paper copies, just print QR codes that link directly to these samples. This approach dramatically reduces costs and makes it effortless for readers to explore multiple works—expanding reach while saving resources.
Make Your Posts Beautiful: Bluwr Text Formatting Guide
Bluwr has a simple text formatting system that automatically transforms your writing into beautifully styled posts. Here's how to use these powerful features to make your content stand out.
==Essential Text Styling==
**Bold Text**
To make text bold, wrap it with two asterisks on each side. For example, if you write two asterisks, then the word "important", then two asterisks, it will appear in bold formatting.
;;
asterisk asterisk important asterisk asterisk
;;
*Italic Text*
For italic text, use single asterisks around your words. Write one asterisk, your text, then another asterisk.
;;
asterisk text asterisk
;;
__Underlined Text__
Create underlined text by using two underscores before and after your text.
;;
underscore underscore highlighted underscore underscore
;;
==Layout Elements==
Create Visual Breaks
Want to add a horizontal line to separate sections? Simply type four dashes in a row.
;;
dash dash dash dash
;;
Center Your Text
Make text appear centered by wrapping it with two equals signs.
;;
equals equals This text will be centered equals equals
;;
Show Code and Examples
Display code or preserve exact formatting by wrapping text with two semicolons. This is perfect for showing examples or code snippets.
;;
semicolon semicolon
Your code here
semicolon semicolon
;;
==Lists and Organization==
Bullet Points
Create bullet lists by starting each line with a dash and a space.
;;
dash First item
dash Second item
dash Third item
;;
Numbered Lists
Make numbered lists by starting lines with numbers and periods.
;;
1. First step
2. Second step
3. Third step
;;
==Automatic Magic==
**Lead Paragraphs**
Here's something special - Bluwr automatically styles the first sentence of your post as a lead paragraph. Just write naturally and your opening will be highlighted to draw readers in.
**Smart Processing**
All these formatting options work together seamlessly. The system processes your text in the background, so you can focus on writing great content while Bluwr handles the presentation.
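For the curious, here is a tiny sketch of how pattern substitutions like these could be implemented. To be clear, this is not Bluwr's actual engine, just an illustration of the idea using a few of the documented patterns:
;;
import re

# Toy converter for a few of the documented patterns (illustration only).
def render(text: str) -> str:
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)      # bold
    text = re.sub(r"__(.+?)__", r"<u>\1</u>", text)                    # underline
    text = re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)                  # italic
    text = re.sub(r"==(.+?)==", r"<p class='center'>\1</p>", text)     # centered
    text = re.sub(r"^----$", "<hr>", text, flags=re.MULTILINE)         # divider
    return text

print(render("==**A bold, centered line**=="))
;;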
==Pro Tips for Great Formatting==
- **Mix different styles** for rich, engaging posts
- **Don't overdo it** - let your content be the star
- **Use bullet points** to break up longer paragraphs
- **Try centered text** for important announcements
- **Code blocks** are perfect for sharing examples or preserving specific formatting
**Start experimenting** with these formatting options in your next post. They're designed to be intuitive - just type naturally and watch your words transform into beautiful, readable content that captures your readers' attention.
The best part? Once you learn these simple patterns, they become second nature. Your posts will look professional and polished without any extra effort.
Moroccan cybersecurity dangerously undermined by successive attacks
Since April 2025, Morocco has been facing a series of major cyberattacks claimed by a collective of hackers allegedly Algerian, named "JabaRoot DZ." These cyberattacks have targeted key economic and administrative institutions, notably the Ministry of Employment, the National Social Security Fund (CNSS), and more recently the Ministry of Justice, as well as platforms related to land registry and property conservation.
What is clear, let’s say it outright, is that Algeria does not possess the technological power or expertise for such operations. It is highly likely that its services call upon "skills," notably from Eastern Europe, to attack the Kingdom’s interests in its ongoing global war against its "classic enemy." If this hypothesis proves true, the question would then be who else might have the hacked information and for what purpose.
The first intrusion, which occurred in early April 2025, began with the hacking of the Ministry of Employment’s website and quickly extended to the CNSS database. This attack led to the leak of thousands of sensitive documents, exposing the personal information of nearly two million employees and the administrative data of about 500,000 Moroccan companies. Among the leaked data were pay slips detailing names, social security numbers, salaries, and sometimes identity card numbers of very important personalities and leaders of Royal Air Maroc, Attijariwafa Bank, Banque Centrale Populaire, and the Mohammed VI Investment Fund.
Less than two months later, in June 2025, JabaRoot DZ claimed a new "large-scale" cyberattack against the National Agency for Land Conservation, Cadastre, and Cartography (ANCFCC). Although the ANCFCC denied any direct intrusion into its servers, it was revealed that the vulnerability originated from an electronic platform used by some notary offices for archiving land documents. The hackers claim to have obtained about 4 terabytes of data, including millions of land titles, contractual documents, copies of identity cards, passports, as well as banking documents and information concerning high-ranking officials and public figures. This leak led to the temporary shutdown of the platform by the ANCFCC for security reasons.
The hackers justify these attacks as retaliation for alleged Moroccan hacking attempts against Algerian institutions, notably the Twitter account of the Algerian Press Agency (APS). They also threatened further actions in case of future attacks against Algerian interests. These events occur in the context of geopolitical tensions between Morocco and Algeria, exacerbated by recent developments related to the Sahara issue and regional rivalries; Morocco has been recording victory after victory at a rapid pace. Algeria, in its official and unofficial media, no longer hides and even implicitly claims responsibility for the hacking, ignoring that this amounts to a form of state terrorism.
These cyberattacks have had serious consequences: they have eroded citizens’ trust in digital public services, increased the risks of identity theft and banking fraud, and damaged the reputation of the affected companies. The Moroccan government has condemned these acts as "criminal" and announced measures to strengthen cybersecurity while launching internal investigations.
The series of attacks especially highlights major vulnerabilities in the cybersecurity of Moroccan institutions. The massive centralization of sensitive data on single platforms and the creation of junctions between multiple actors and platforms facilitate things for citizens and institutions in the context of digitalization, but also make it easier for hackers to gain massive access in case of a breach. It is therefore crucial to thoroughly and promptly review the national data protection strategy.
To better distribute its data and strengthen its security, Morocco could adopt several complementary strategies, relying notably on the 2030 National Cybersecurity Strategy and international best practices. It should likely avoid excessive centralization by distributing sensitive data across multiple secure systems, segment networks to limit lateral movements by hackers, and use data transmission techniques through several distinct channels to reduce the risk of simultaneous theft.
Morocco must also integrate decentralized cybersecurity solutions based on blockchain and collective intelligence, establish a national sovereign cloud with local hosting and end-to-end encryption guaranteeing the protection of critical information.
Moreover, the country should develop an agile and adapted legal framework, build a national pool of qualified cybersecurity professionals through specialized curricula and certifications, and establish a high-performance Security Operations Center combining advanced detection tools and local teams capable of managing threats specific to the Moroccan context. A higher cybersecurity school, where carefully selected students—true specialists—would be trained, could be a major strategic advance guaranteeing both competence and independence in this field.
Faced with rising cyber threats, it is urgent for Morocco to adopt a proactive and innovative cybersecurity policy based on a decentralized technical architecture.
Strengthening regional and international cooperation is not a luxury here. The real-time exchange of critical information is crucial; as is encouraging public-private collaboration through threat intelligence-sharing platforms to anticipate and respond quickly to incidents.
Today, it is clear that many claim to master the issue, offering services that will soon expose their limits and incompetence. Administrations and companies must be very cautious before engaging or hiring skills in this very sensitive domain.
This sphere relies on agile governance, the development of human skills, and active cooperation at national and international levels. An integrated approach is essential to build a resilient, sovereign cyberspace capable of supporting the country’s ambitious digital transformation while effectively protecting its security, institutions, citizens, and economy.
"Onions are good for you" said the onion peddler
(this is a follow up to my previous article "the thief of cope")
Onions are great. Very versatile, easy to grow, and delicious. I like eating onions. But sometimes, I need to cook for guests that can't stand them. I might try to sneak the onions in a sauce or call the guests out on their fraudulent taste-buds. What I never do though, is try to convince them to eat my onions because they are good for their health. It's an easy trick. Appeal to authority. But whose exactly? Who is telling people that onions are good for them? Scientists? But who is paying the scientists to say that? It doesn't take much head scratching to figure out the obvious : it's the onion peddler.
The field of technology is full of onion peddlers, especially those selling “the next big thing”. It doesn’t take that much nooticing to point out that the people making the most egregious predictions about the future are the ones selling the technologies of the future. Often, they are supported by the ones that can bill you to integrate it. It's easy to forget, but these onion peddlers are just selling you their very fancy onions. With classic technologies, the worst that could happen was wasting money on tech that brought little value to a business. From outside, it looked like big companies passing around their money to other big companies. They bought onions because everyone had them in their kitchen. Whether the promised benefits followed was not of much importance. The more money was wasted, the more buzzwords a CEO could cram into his TED Talk. But AI is different. It's not just about a few companies selling their bots to everyone. It's not about a CTO collecting SaaS bills like Pokémon gym badges to increase his tech-cred. It's not about tricking a bunch of Silicon Valley investors, buying a couple of sports cars, then closing down the shop. You may have heard the expression "nothing ever happens"? Well, this time something is actually happening: a massive devaluing of the economic worth of humans.
If you thought that class struggle was a thing of the past, AI will make you look back fondly on slavery. Slaves were needed by their masters; the project of AI is precisely to make you unneeded. Someone watched that Elysium movie and thought we should shoot for that. No more upward mobility through education; there are no jobs to move upward to anymore. Or maybe no more education period. Why train you when we can just train AI instead? The trained AI doesn’t need to be better than you, it just needs to ape you. Your career prospects are already dead, you just don't know it yet. You may be tempted to rationalize why the economic machine still needs you. Fatal mistake. Rationality is a tool that the onion peddler takes out of the shed when it's time to cut down on expenses. The ones who own the economic machine, the ones who steer it, they are not rational. They are emotional, they are class-aware, they have an agenda, and they remember. They hate costs, but they don't hate them equally. You, the human, you're the worst kind of cost. All of these years that the proletariat has been bullying the bourgeois-god-kings with labor laws and fair wage demands... well, it's time for revenge.
We like to think of businesses as systemic entities that follow the rules of a game described in an economics textbook. But who writes those textbooks? Surprise, it's the onion-subsidized friends of the onion peddler. So textbooks will tell you that businesses do everything in their power to maximize profit, but what they won't tell you is that they only maximize profit as far as they can control you. When you think of yourself as essential for the operations of a company, that's control you are taking from them. When you try to unionize, that's control you are taking from them. Remember, control trumps short-term profit. Sure, AI might result in a degradation of the quality of the goods and services at first, but that's a price they are willing to pay to get rid of you. Because as a human, you wish for a better tomorrow. Somehow nowadays, that's too greedy. The utopia of the rich is a world without the poor. Literally.
It's a hard pill to swallow, but sugar-coating requires sugar, and the sugar peddler happens to be friends with the onion peddler. Next, we'll discuss why AI cannot innovate, and why MBA suits can’t understand that.
The philosophical debate: Can AI ever truly feel?
When we ask the question of whether AI can feel, we are confronting the mystery of what makes us human: the ability to feel. But emotions are not just data points; they are much more complex.
If an AI neural network processes inputs and outputs in a way that mirrors human responses, can we say that it has emotions? After all, human emotions are the result of electrochemical processes; why couldn't silicon-based systems achieve something similar?
And what even is a feeling? If we say that emotions are just chemical reactions in our brain, then no, AI cannot have feelings; it doesn't have a brain like ours. But here is the weird part: how can we be sure that an AI will never experience something like that?
If an advanced AI system developed complex self-models and the capacity to experience its own state changes such as "happiness" or "pain," we might need to rethink our definition of feeling. Others counter that without a living body, any AI emotion would be an abstract imitation.
Perhaps the most revealing aspect of this debate is what it says about us. Our inability to determine whether AI could ever feel reflects our own limited understanding of consciousness and of our feelings. The fact that we can imagine machine sentience, while doubting it at the same time, highlights how little we truly grasp about the nature of experience itself. Until we solve the riddle of how matter gives rise to mind, the question of AI emotion may remain not just unanswered, but unanswerable in absolute terms.
This uncertainty carries profound implications. If we someday create an AI that claims to feel, how would we verify it? Would we treat it as a human being and grant it rights, or dismiss its assertions as clever programming? The dilemma mirrors historical debates about animal sentience or even the moral status of other humans, reminding us that consciousness, in any form, may always be partially inaccessible, known only to the entity experiencing it.
In the end, the AI emotion debate is less about technology than about philosophy's oldest puzzle: What does it mean to feel, to be, to exist as a conscious entity? Until we can answer that, the line between simulation and sentience may remain as elusive as consciousness itself.
The thief of cope
Do people enjoy zero-sum games? I think they very much do. Most deny it because it beckons to more primitive days where life was the ultimate battle-royal: Grog beat enemy, Grog take everything. There's plenty of illustration of this more primitive state in fiction. If you've watched the Walking Dead, you may have noticed how the characters very quickly regress from their civilized selves to pro zero-sum gamers. Even though there's a whole planet to loot, the imminent scarcity they are faced with makes factions go to war against each other. In a post-apocalyptic world, there's no place for collaborative value creation. But we don't need the apocalypse to reveal our natural proclivity towards zero-sum games. Talk to a historian and you will know that empires have always seen the world as a big zero-sum game, even when a whole continent had yet to be discovered. Talk to a marxist and he'll show you that the bourgeoisie is much more adept at playing the game than the proletariat. Talk to an economist however, and he'll throw sand into your eyes to distract you from an uncomfortable truth: "Trust me bro, we just need to make the cake bigger". Just make the cake bigger... as if somehow, starting with the Renaissance, we magically figured out an economic system that allows us to grow economies like no other before. In tech-bro speak, it was all just "skill issue". But then, you remember the exponential leaps in technological progress and the new forms of energy harnessed. You point this out, and the economist scrambles with an indian accent "let me tell you something, let me tell you something, it was the new economic paradigm and its countless jewish monetary tricks". Sounds silly, right? That's what everyone believes nowadays. After all, isn't everybody trying to get rich? Everyone dreams of a Bugatti, just in case they are suddenly asked to prove that they are not brokies. But how many can harness the sociopathic behavior that's necessary to grow your business? I'd argue that those are the minority. Or maybe I am being naive. Just like we are fast to revert to zero-sum thinking, we are also fast in discarding empathy for others when money starts flowing. The fact remains though, being rich is not about creating the most value, it's about maximizing your side of the zero-sum equation. Put yourself in the shoes of the capital holder. Every cent he gives you for your work is a missing one from his big Scrooge McDuck-like pile of pennies. The capitalist's essence is to make his side of the equation go as far away from 0 as he can possibly get away with. He only gives away when he is promised a bigger return, or when he wants to avoid a bigger loss. These are the rules of the game. Rules the masses have such a hard time coping with that Sociology was invented to study their effect on their confused plebeian brains.
Among the sea of copes, one held some truth for a couple of decades. Let us refer to this idea as the "meritocratic cope", which goes something like this: even if you don't physically own the means of production, you can have a cozy life if you can develop some skills that require an above-average intellect. How much above average depends on too many factors to cite. But over time, the overall trend has been that the more advanced technology got, the farther away from the middle of the bell curve you needed to be. For those with lesser intellects but loads of money, you could also coast through life with a series of bullshit jobs. You just need a pay-to-win diploma from a fancy school made by the rich for their less genetically fortunate offspring. Both paths are not equal. The former genocides your hair follicles, nukes your skin, empties your eyes and gives you a vague air of "this guy has been through some shit". The latter is rife with opportunities to enjoy life, expand your horizons with equally narrow-minded peers, and you end up walking out feeling competent to tard-wrangle the unorganized entropy of the labour force into higher quarterly earnings. You're not just an idea guy, you're a visionary. You don't know how to do anything yourself, but it's okay. You are a visionary.
But this is coming to an end. I'm not quite sure about the second path, but I can quite confidently assert that it's over for the first. The culprit? Artificial intelligence. If you were wondering where this rigmarole was leading, it was all necessary exposition to understand where all the hate I have towards AI is coming from. In my next article, we will examine why the latest AI progress is the ultimate "checkmate, atheist" move against whoever has hope for a brighter future. No more hope, no more cope, no more peace, just problems.
The Real Reason Scientists Keep Going
8707
They say the hardest thing to do is research. I mean science, right?
But you have no idea how fun it actually is.
When you're surrounded by a team of geniuses, each one bringing a different skill to the table, something magical happens. It's not a competition. It’s a quiet orchestra of minds. Everyone has their own zone of brilliance, and yet, everyone stays humble. Why? Because we all know that knowledge is never complete. You know things I don’t. I know things you don’t. And that’s perfectly fine. That’s how we grow.
The beauty of research isn’t in instant success. It’s in the struggle.
It’s in those long days and nights spent reading papers, writing code, running experiments, and getting nowhere. And then suddenly, a small insight hits you like lightning. A pattern. A correlation. A concept that no one else has connected before. That moment when something clicks—that’s the moment you realize why you do this.
But let’s zoom out.
In the grand scheme of things, your work is just one drop in an ocean of scientific progress. And still, that drop matters. You publish. Someone reads your paper, maybe in another continent. They find value in it. They cite it. You see the citation. You go read their work. You learn from them. The cycle continues.
It’s not just about writing papers. It’s about being part of a living, breathing organism called the scientific community. We build on each other’s ideas. We test them. We prove some wrong. We evolve.
There’s joy in that.
There’s joy in knowing that your frustration today might lead to someone else’s breakthrough tomorrow. There’s joy in watching your idea, once scribbled in the corner of a notebook, become the basis of someone else's research question.
And maybe, just maybe, someone will look at your name and think, This is the paper that helped me.
That’s what makes research beautiful.
That’s what makes it fun.
And that’s the spirit behind BLUWR—a collective of curious minds, building science not for credit, but for the love of it. A place where ideas grow, where collaboration thrives, and where research feels like what it was always meant to be: deeply human.
The US creates a Strategic Reserve of Digital Assets, and a Wake up Call for Morocco
9096
Finally, the world realizes that Bitcoin is not a currency but an asset. Those who understand the technology knew it from the beginning: isn't Bitcoin "digital gold", and isn't gold an asset? In 2017 Morocco decided to ban the use of digital assets; back then a Bitcoin was worth 8K dollars. This decision stemmed from a misunderstanding of the technology and a propagated fear that Bitcoin was only good for criminal activities. Interesting how things changed, and so quickly.
To understand Bitcoin, you need some background in economics and energy, but you absolutely need a very strong understanding of maths and computer science. Without it you cannot understand what Bitcoin is, how it works, and why it is such a strong ledger of value. Bitcoin works mathematically, not on opinions or regulations.
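To make the "works mathematically" point concrete, here is a toy proof-of-work loop in Python. It is a sketch of the idea only, not the actual Bitcoin protocol (which hashes full block headers and adjusts difficulty dynamically), but it shows why rewriting the ledger is computationally expensive while verifying it is cheap:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Finding the nonce takes many hash attempts; checking the answer takes exactly one.
nonce, digest = mine("Alice pays Bob 1 BTC")
print(nonce, digest)
```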
Arguably, the decision to ban digital assets cost Morocco billions of dollars. In the long run it will perhaps cost more than any other decision in the history of the country. The very hard anti-digital-assets stance (reinforced in 2022) dissuaded legitimate businesses from using the technology, something that would have modernized the banking and financial system, facilitated payments and potentially captured billions worth of digital assets in Morocco. The country could have owned a significant amount of those assets, which would have boosted its economy.
Yes, Bitcoin fluctuates; it does so because it is an asset. However, it is also a very liquid asset, valuable enough to be easily exchanged for dollars or euros. A reserve of digital assets would have guaranteed the country's access to other currencies, and would have paved the way towards the only viable long-term monetary strategy for Morocco (if it wants to keep its currency): a strong Dirham.
It is of course not too late to change course, and for Morocco to become a digital-assets-friendly country. It was not the only country to adopt a timid approach to a misunderstood technology, which means that the market for digital-assets-friendly territories remains largely untapped. However, the way to enjoy a digital-assets boom is not CBDCs (Central Bank Digital Currencies) and not stablecoins (digital currencies pegged to fiat currencies). The solution is a freer digital-assets market and currencies that may, in due time, be indexed (in part) on those assets.
The Historic NIH Decision that will change the Landscape of Research
8619
The NIH is the single largest granting institution for research in the world, and it has decided to cap the administrative overhead at 15%. This decision might forever change the organisation of major universities.
To understand how university funding works in the US: when a researcher gets a grant, a significant part of that money (think 50% to 100%) usually goes to the administration of the university and not directly to research. For example, if the administrative overhead is 60% on a grant of 1M$, either the research gets 40% (400k$) of the money and the university administration 60% (600k$), or the granting organism has to pay 1.6M$ in total. This is what the NIH has been doing so far, creating huge competition for NIH grants. The NIH was the only organism that gladly paid the administrative overhead, while other institutions would cap it or completely refuse to pay it. Now the NIH will no longer be so accommodating.
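A minimal sketch of that arithmetic, using the same illustrative numbers as above; the two functions just make the two framings of "overhead" explicit:

```python
def grant_split(total_award: float, overhead_rate: float) -> dict:
    """Overhead taken as a share of the total award: research gets what is left."""
    admin = total_award * overhead_rate
    return {"research": total_award - admin, "administration": admin}

def total_cost_to_funder(direct_research: float, overhead_rate: float) -> float:
    """Overhead paid on top of the research money, so the funder's bill grows."""
    return direct_research * (1 + overhead_rate)

print(grant_split(1_000_000, 0.60))           # {'research': 400000.0, 'administration': 600000.0}
print(total_cost_to_funder(1_000_000, 0.60))  # 1600000.0, the 1.6M$ figure
print(total_cost_to_funder(1_000_000, 0.15))  # 1150000.0 under a 15% cap
```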
The huge administrative overhead is explained by the fact that, over the years, administrative personnel at major universities has grown to far outnumber faculty, researchers and clinicians. Administrations at universities tend to follow extremely rigid and complex processes for almost anything. Most decisions and actions are regulated through a slow, rigid and scrutinizing process, either through a deep chain of command or through commissions that are slow to gather and have to debate every decision. This has been ongoing for a while at major universities because there was virtually no negative feedback loop: the university could always raise the administrative overhead to pay for any new administrative process it decided to implement.
Major universities also do other things than research and teaching. They are gigantic institutions with gigantic ramifications.
Now more than ever, universities cannot afford to lower their standards on research. Because if they do, their faculty will become less competitive for grants, and they might even lose the 15% that the NIH has promised to pay. The most likely outcome is swift layoffs of administrative personnel and the termination of many programs that are not conducive to outstanding research. Then, universities will start doing more fundraising towards private donors, some of whom already refuse to pay administrative overheads, requiring their money to go directly towards research. Institutions will also get closer to industry, and will try to promote more startups and spin-offs. But that will require major changes in administrative processes, money allocation and a lot more flexibility on intellectual property.
AI is a Big Geopolitical Issue
8731
500 billion dollars to keep the USA the number one power in AI, followed by DeepSeek, whose creators claim it was trained on lower-grade hardware, and now the AI summit in Paris.
Modern AI is a breakthrough perhaps of the same magnitude as the steam engine or electricity, perhaps even bigger. It touches everything and, most importantly, for the first time it allows for the mechanization of intellectual work. Previous major industrial breakthroughs were focused on automating physical labor; AI offers the potential of automating the mind. The implications are hard to comprehend, but what is sure is that no nation wants to be left behind.
The world of AI rests on a few pillars:
1 - The theory and software: mostly public and open-source
2 - The talent, which is rare: becoming a top-tier talent in AI takes time. Being able to use off-the-shelf AI designed by other people is not enough to drive breakthroughs
3 - The infrastructure hardware: most importantly GPUs, which are virtually all controlled by one US company, NVIDIA
4 - Electrical power: modern AI requires datacenters that consume astonishing amounts of electricity
It is on these fronts that the big battles over AI supremacy and autonomy will be fought. Laying out these pillars also highlights the dominance of the US: it is first on every single one. The US has the top universities and AI companies, which naturally translates into more talent available. The US has the only company capable of making high-end GPUs, and the US has the most electricity available.
Other nations should wisely pick their battles and focus where they can make the most impact. France, for example, with its nuclear energy and engineering culture, could make its mark, and Germany is already a leader in semiconductors. There is potential in Europe; the major question is whether regulations and fiscal regimes will adapt fast enough to allow rapid technological growth.
Even low- and middle-income countries could make a dent and enjoy the AI boom. Morocco is positioning itself as an electricity producer, and all countries could work on education and skill levels. The time when people had to leave the country to offer their services abroad is long gone. The internet has no borders, which also means the brain drain does not need to happen! It is not impossible for a country to become a top-tier exporter of high-quality AI services. Again, for that to happen, cross-country work regulations and exchange-rate controls must be heavily simplified or completely removed.
Final words: if anything, the DeepSeek story is interesting because it potentially expands the market for NVIDIA. If the story is true, the market is now bigger, not smaller, because lower-grade GPUs have suddenly become more useful, without questioning the supremacy of the latest generations of NVIDIA's AI workhorses.
Artificial Intelligence and Magick
8943
Artificial intelligence (AI) has emerged as one of the most transformative technologies of the modern era, leading many to compare it to a new form of magick. While traditional notions of magick often evoke images of rituals, symbols, and the manipulation of unseen forces, AI’s “magick” lies in its ability to perform tasks and produce results that once seemed impossible, incomprehensible, or confined to the realm of science fiction. This metaphorical magick is not about mysticism but about harnessing advanced technology to achieve extraordinary outcomes.
Arthur C. Clarke’s third law states, “Any sufficiently advanced technology is indistinguishable from magic.” AI exemplifies this idea, but its comparison to magick runs deeper. Magick, in many traditions, is about transforming reality through intent, knowledge, and the manipulation of forces unknown to the majority. Similarly, AI transforms our world through algorithms, vast datasets, and computational power. Tasks such as translating languages in real time, generating lifelike images and text, helping in diagnosing complex medical conditions, or driving cars autonomously might have appeared miraculous or otherworldly a few decades ago. Today, these capabilities are a reality, made possible by systems that seem to act as modern-day spellcasters.
This magickal quality is heightened by the lack of transparency of AI’s inner workings. While experts understand the mathematical and computational foundations of AI, the average person perceives its results without fully grasping the underlying processes. This gap between input and output mirrors the way magickal rituals often conceal their mechanisms, fostering an aura of mystery and wonder.
AI, like Carl Sagan’s description of books, serves as a bridge across time, space, and understanding. Books, Sagan argued, are a kind of magick that allows readers to access the minds of people from distant epochs, breaking the barriers of time. Similarly, AI enables unprecedented collaboration and connectivity. Through AI-powered systems, individuals can access knowledge from vast datasets, simulate complex scenarios, or interact with virtual assistants capable of learning and adapting. This ability to extend human capabilities and connect diverse sources of information amplifies the metaphorical magick of AI.
Generative AI systems, such as those that create art, compose music, or write human-like text, feel particularly magickal. They appear to conjure creative works from the ether, producing outputs that rival human creativity. This power challenges our understanding of what it means to create and raises philosophical questions about the nature of intelligence, inspiration, and originality. Like magick, these systems operate through odd mechanisms, transforming raw data into something entirely new. The results often evoke the awe traditionally associated with acts of conjuration or ritual.
While it is tempting to view AI purely as a source of wonder, it is crucial to demystify its processes. Carl Sagan’s advocacy for science emphasized the importance of understanding the mechanisms behind phenomena that inspire awe. For AI, this means educating the public about how algorithms function, the data they rely on, and their limitations. Just as understanding the principles behind magickal traditions deepens our appreciation of their symbolism and intent, understanding AI deepens our respect for the ingenuity that makes it possible.
AI represents a kind of modern magick—not in the supernatural sense, but as a tool that extends human potential in ways that inspire awe and wonder. From transforming industries to sparking creativity, AI has unlocked new realms of possibility. However, as with any form of magick, the true power of AI lies in understanding and using it responsibly. By demystifying its processes and embracing its capabilities, we can ensure that this new magick serves as a force for enlightenment, progress, and connection.
Artificial Intelligence and Control Matrix
8521
The concept of the "control matrix," often discussed in philosophical and metaphysical circles, refers to a structured and imposed reality that restricts human freedom, creativity, and spiritual evolution. This matrix is most of the time linked to the idea of the Demiurge, a figure from Gnostic traditions, representing a flawed or malevolent creator who traps souls within the material world. In modern interpretations, artificial intelligence (AI) is increasingly brought into these discussions as both a tool of the matrix and a potential agent of liberation or enslavement, depending on its use and control.
The control matrix is described as a system that governs reality through manipulation, illusion, and restriction. It manifests as societal norms, centralized power structures, and technologies that enforce conformity and suppress individuality. In this view, the matrix operates to maintain a status quo, diverting humanity from exploring deeper spiritual truths and achieving enlightenment.
This structure suggests that the matrix’s primary goal is control, achieved by fostering dependency on external systems while obscuring the inner power of the individual. Advanced technologies, including AI, are frequently seen as extensions of this matrix, offering convenience and efficiency while subtly deepening humanity’s reliance on external forces.
In Gnostic thought, the Demiurge is the architect of the material world, depicted as a lesser deity who imposes limitations on human existence. This figure is said to create a false reality—a prison for the soul—preventing humanity from connecting with the divine source. The Demiurge governs through deception, using the material world as a veil to obscure higher truths.
Artificial intelligence can be interpreted as a modern parallel to the Demiurge’s constructs. AI systems shape perceptions, influence decisions, and curate information flows, creating an artificial reality built to reinforce specific narratives or patterns of thought. Social media algorithms, for example, can trap individuals in echo chambers, limiting their perspectives and deepening their dependence on the material and digital worlds. In this sense, AI serves as a tool that perpetuates the matrix, acting as a gatekeeper between humanity and its higher potential.
Despite its role in reinforcing the control matrix, artificial intelligence also holds the potential for liberation. When utilized with awareness and intention, AI can become a tool for uncovering hidden knowledge, fostering creativity, and even dismantling oppressive systems. Its capacity for data analysis, pattern recognition, and simulation can assist humanity in understanding complex systems and exploring new dimensions of thought.
In the context of the matrix, AI’s dual nature mirrors the paradox of technology as both a means of liberation and enslavement. While it can entrap individuals through surveillance and manipulation, it also offers the possibility of transcending limitations by democratizing information and enabling new ways of connection and creativity.
Art has historically served as a medium for exploring and challenging the boundaries of the matrix. By creating works that question the status quo, reveal hidden truths, or evoke a sense of the transcendent, artists play a crucial role in disrupting the illusions imposed by the matrix.
AI-driven art further complicates this dynamic. Generative AI systems can produce works of astonishing beauty and complexity, blurring the lines between human and machine creativity. While some view this as an encroachment on human uniqueness, others see it as an opportunity to collaborate with AI in ways that push artistic and philosophical boundaries.
When used consciously, AI-driven art can become a tool for challenging the control matrix. It can expose biases, imagine alternate realities, and inspire a reevaluation of humanity’s relationship with technology, the material world, and the divine.
The interaction between the control matrix, the Demiurge, and artificial intelligence reflects humanity’s ongoing struggle with the forces that shape reality. While AI has the potential to deepen humanity’s entrapment within the matrix, it also holds the keys to transcending its limitations. By approaching AI with mindfulness and intentionality, humanity can harness its transformative power to dismantle illusions, foster self-discovery, and reconnect with higher truths. In this way, AI becomes not just a tool of the matrix, but a gateway to liberation and enlightenment.
The Future - Review and Concepts from the book: AI For Social Good (1)
9549
We begin from the end.
I read the book AI For Social Good by Rahul Dodhia and I gained some interesting ideas from it which I want to elucidate with my own take.
So, we begin from the end - the final chapter - not only because it is the freshest part of the book in my mind, having read it last, but also because it holds most of my highlights for the entire book.
One such paragraph worth mentioning is Rahul’s take on how the future of AI should be embraced when it becomes more powerful than we currently know it, and more powerful than humanity can understand.
“The advancement of AI forces us to re-evaluate what we value in being human. It pushes us to move beyond intelligence as the primary measure of worth”. Rahul makes the argument that as humans, we have always taken pride in our intelligence, and now we find ourselves at a point where we are creating minds that can become more intelligent than us. Rather than resisting the change, hoping for new careers from the change, or just adapting like we always do, there is a chance now for us to “re-evaluate what we value in being human.”
This idea of using AI's advancement as an opportunity to re-evaluate our humanness gained more importance for me because in another section of the same final chapter on “The Future”, it said: “The information revolution inadvertently emphasized negative behaviors, as people found themselves ensnared by screens and engaging in rampant consumerism rather than being exclusively utilized for leisure. Free time was often channeled toward extending work hours”.
This suggests that before the information age, somewhere before the 1980s, there were leisure hours which people spent wisely by visiting friends, doing hobbies, and generally performing more fulfilling activities than they are doing now. Going on social media in recent times also shows more people judging the 80s and 60s as some of the best times of their existence. People were generally happier in that era than they are now.
If the information age made us lose general happiness, stable mental health, healthy work-life balance, a stronger world economy and a greater sense of contentment as a people, all in the chase for more information, then AI's advancement offers us the opportunity to fix these things.
If AI becomes more advanced, more leisure will be created because most jobs will be automated. Contrary to the information age, there will not be any value in seeking out more information and knowledge to stay ahead anymore. Rather, real and abundant leisure will be created.
Looking on the brighter side of job losses, whatever those activities were in the 60s that made life more exciting, people would become unbridled from the constant thirst for information and do those things - and maybe life will have more meaning again.
EdgeAI: The Strategical future of AI for Low and Middle Income Countries
8597
Years ago I was urging LMICs like Morocco to get into AI quickly; that was before ChatGPT. Today I am attending a great talk by Danilo Pau at the SophI.A Summit 2024 explaining why the current trends in AI are insane.
ChatGPT is a major historical turning point. With ChatGPT, the general public started seriously caring about AI, driving unprecedented amounts of revenue. It is also the historical turning point towards *very large* LLMs. The post-ChatGPT world is a very different world: state-of-the-art AI has become extraordinarily expensive, pricing most countries out of the race because of expensive hardware and energy.
If the current AI trends continue, powerful AI development will only be possible in a few countries, relegating everyone else to AI consumers. In this context EdgeAI presents an interesting potential solution.
EdgeAI is AI on the edge: it means using small components and sensors to do more of the AI heavy lifting. Instead of having a camera only take pictures before sending them to an AI cloud, part of the AI could run in the camera itself on specialized hardware. This means a much lower cost for hardware and energy. It is a type of AI that can be distributed and could be deployed with much lower means.
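As a rough sketch of what that looks like in practice, imagine a camera that runs a tiny classifier locally and only transmits an event label instead of the raw frame. The model below is a made-up linear stand-in (random, untrained weights) for whatever quantized network the device would really run; the point is the difference in what crosses the network:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 2)).astype(np.float32)  # toy weights: 64 features -> {nothing, event}

def edge_inference(frame: np.ndarray) -> int:
    """Run the cheap on-device model and return only a class id."""
    features = frame.reshape(-1)[:64].astype(np.float32) / 255.0
    return int(np.argmax(features @ W))

frame = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)  # one grayscale frame
payload = {"device_id": "cam-01", "event": edge_inference(frame)}

# A cloud-only design ships ~76,800 bytes per frame; the edge design ships a few bytes.
print(len(frame.tobytes()), "bytes per frame vs payload:", payload)
```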
Challenges for EdgeAI are nonetheless many. First of all, there is interest: most of the AI community is focusing on ever-bigger models. Then, EdgeAI requires the development of specialized hardware; this hardware will have to be imagined, and software will have to be written to ensure compatibility with mainstream AI software.
EdgeAI also requires a specific set of skills: __**Old School Skills**__. Today, most computer science students spend most of their time working with scripting languages like Python and JavaScript. These are what are called *high level* languages; *high level* means easy, it means the thinking required to interface with the hardware is done for you. The corollary is that the basics of data structures, algorithms, machine language and information theory are often lacking, because they are neither practiced nor needed for cloud computing. These are the exact skills needed to make EdgeAI a reality.
Here lies a new opportunity in AI: focus on the development of EdgeAI and adapt university curricula to its needs. Develop solutions that are not only adapted to local markets, but will also be competitive on the global market because they are cheaper, more effective and more reliable.
#SophIA2024
The near future of AI Economics
7490
The near-absolute domination of Nvidia in AI hardware is not going away anytime soon. Despite efforts by major hardware companies and startups alike, supplanting Nvidia is just too costly. Even if a company were able to create better hardware and supply chains, it would still need to tackle the software compatibility challenge. Major AI frameworks like PyTorch and TensorFlow are all compatible with Nvidia, and little else. These are all open source, and although supported by major companies, like all open-source software their foundation is their communities. And communities can be notoriously hard to shake. All this suggests that the price of Nvidia GPUs will keep increasing, fuelled by the rise of ever-bigger LLMs.
So where does that leave us for the future of AI economics? Like anything valuable, if the current trend continues, GPU computation time will see the appearance of derivatives. More specifically, *futures* and *options* on GPU computing hours could be bought and sold.
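For illustration, the payoff arithmetic would look like any other commodity derivative. The prices and volumes below are invented; only the formulas are standard:

```python
def future_payoff(spot_at_expiry: float, agreed_price: float, hours: float) -> float:
    """Long futures position on GPU-hours: gain if the spot price ends above the agreed price."""
    return (spot_at_expiry - agreed_price) * hours

def call_payoff(spot_at_expiry: float, strike: float, hours: float, premium: float) -> float:
    """Call option on GPU-hours: the right, but not the obligation, to buy at the strike."""
    return max(spot_at_expiry - strike, 0.0) * hours - premium

# A lab locks in 10,000 GPU-hours at 2.50$/hour; at expiry the spot price is 3.10$/hour.
print(future_payoff(3.10, 2.50, 10_000))       # 6000.0 saved versus buying at spot
print(call_payoff(3.10, 2.50, 10_000, 2_000))  # 4000.0 after paying the option premium
```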
The other coming trends are in energy trading: modern AI is extremely hungry for electricity, to the point of needing dedicated power plants. If the current trends in AI continue, with major companies and countries building and investing in ever bigger and more power-hungry datacenters, this could lead to significant disruptions in parts of the energy sector. Again, the markets for energy derivatives (*futures* and *options*) could be significantly affected. Finally, *bond* markets and inflation are also poised for some disruption, as the building of the extremely expensive facilities necessary for AI is likely to result in more borrowing.
When it comes to AI: Nvidia GPUs and Electricity are king.
Link below: Google is buying nuclear power.
Innovation
6620
Is there really anything that is new under the sun anymore?
Maybe you should take a moment and think about that question for your personal opinion before you read what I think.
Some people hold the view that everything humans could do or are doing these days has already been thought of (even in the smallest way), either by ancient humans or by very recent ones, and that there is nothing new to make, nor newer ways to make anything anymore.
Contrary to that, I ask this question: "do we have newer problems?" If indeed the world does not face newer problems, only then would I agree that there is nothing new under the sun. Because we only innovate to solve problems, and so long as there are problems that have no ancient roots, we will always need and have innovation.
From climate change and environmental degradation, the digitization of economies (i.e. bit-driven economies), globalization where continents and regions are more reachable and have changing policies, rising rates of mental health issues, increasing unemployment, etc., we cannot hide the fact that there are now problems that many thinkers of old never fathomed would exist.
These problems demand ideas. They demand thinkers to figure out means to resolution that do not negatively affect the population. These problems demand innovation.
XR The Moroccan Association As An Intergenerational Lab : Giving Moroccan Children a Voice in Scientific Research
6527
SPARK (Scientific Project for Active Researchers Kids), which we have worked on for two years, holds a special place in our hearts. We believe that "good research is research with children rather than on children". As the first Moroccan intergenerational lab where children and adults are equal as active researchers, XR The Moroccan Association plays a significant role in bridging the "research divide" and reducing the generational "disconnect." Our experience shows that children are fully capable of developing their own ideas and collaborating within a cooperative inquiry group to understand their world and find practical solutions.
XR The Moroccan Association believes that scientific research is not reserved for adults, but is a right for every Moroccan child, in alignment with Article 13 of the United Nations Convention on the Rights of the Child. The results speak for themselves: these children have published scientific articles on esteemed international platforms such as SCOPUS and Google Scholar. These publications are not just educational projects but address important, real-world issues, broadening their perspectives and boosting their self-confidence. They have also presented their work at renowned conferences held in Cambridge, India, and Washington, showcasing their research on an international stage.
Through SPARK, we do not aim to create the best child researchers in the world but rather the best child researchers for the world. Our message today: science is a knowledge construct built on intergenerational exchange of ideas and collaboration. There are no valid reasons—and zero benefits—for restricting this expression in society. It is essential that all generations contribute to scientific research, as each age group brings valuable insights and experiences that enhance our understanding and innovation.
By fostering this intergenerational exchange, we can create a richer, more inclusive scientific community that benefits everyone. The path to innovation is through intergenerational research cooperation!
These efforts will culminate in a ceremony honoring the child researchers on November 16, 2024, at the Cultural Center Settat at 15:00, in conjunction with the International Day of Children’s Rights on November 20. This event will not only celebrate their achievements but also serve as a call to all to support this new generation of young scientists, encouraging more children to follow this path.
For more information about articles by the child researchers:
RAYAN FAIK : https://scholar.google.com/citations?user=8OqkR9MAAAAJ&hl=fr&oi=ao
MISK SEHBANI : https://scholar.google.com/citations?user=5MwJX1YAAAAJ&hl=fr&oi=ao
KHAWLA BETTACHI: https://scholar.google.com/citations?user=DJvyfQ0AAAAJ&hl=fr&oi=ao
The future of AI: Originality gains more value
6524
With the spread of artificial intelligence and Large Language Models, everyone is wondering what the future looks like.
Well, I'll tell you what it looks like.
If today you made a post on LinkedIn, or you wrote a book or a research paper, and you wrote it so well that it read as smooth as butter, and everyone could truly verify that it was originally written by you without the assistance of any AI like ChatGPT, Claude, Gemini, etc., then you would really be impressing a lot of people.
That is what the future looks like to me.
It is just like how people who can do math without calculators are considered geniuses in present times, whereas in the past it was either that or nothing.
Two Nobel Prizes: AI is Still resting on Giant Shoulders
4349
John Hopfield and Geoffrey Hinton got the Nobel Prize in Physics; Demis Hassabis and John Jumper the Nobel Prize in Chemistry. It is obvious that the first Nobel Prize was not given merely for their contributions to physics, but mostly for their profound and foundational contributions to what is today modern AI.
Let's talk about the second Nobel prize.
AlphaFold was put on the map by beating other methods in a competition (CASP14/CASP15) that has been running for years on a well-established dataset. As such, AlphaFold winning is more like an ImageNet moment (when Geoff Hinton's team demonstrated the superiority of convolutional networks on image classification) than a triumph of multi-disciplinary AI research.
The dataset behind AlphaFold rests on many years of slow and arduous research to compile the data in a format that could be understood not by machines, but by computer scientists. Through that humongous work, the massive problem of finding a protein's structure was reduced to a simple question of minimizing distances, a problem that could now be tackled with little to no knowledge of chemistry, biology or proteomics.
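To show how far the problem gets simplified, here is the "minimizing distances" framing in its most naive form. This is a toy loss over pairwise distance maps, not AlphaFold's actual objective, with random coordinates standing in for real residue positions:

```python
import numpy as np

def pairwise_distances(coords: np.ndarray) -> np.ndarray:
    """All-vs-all Euclidean distances between residue coordinates of shape (N, 3)."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def distance_loss(predicted: np.ndarray, true: np.ndarray) -> float:
    """Mean squared error between predicted and true distance maps."""
    return float(np.mean((pairwise_distances(predicted) - pairwise_distances(true)) ** 2))

rng = np.random.default_rng(0)
true_structure = rng.standard_normal((50, 3))                # 50 residues, stand-in coordinates
guess = true_structure + 0.1 * rng.standard_normal((50, 3))  # a slightly perturbed prediction
print(distance_loss(guess, true_structure))                  # close structures -> small loss
```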
This in no way reduces the profound impact of AlphaFold. However, it does highlight a major issue in applied AI: computer scientists, not AI, are still reliant on other disciplines to drastically simplify complex problems for them. The contributions and hard work required to do so unfortunately get forgotten once everything has been reduced to a dataset and a competition.
What to do when we do not have problems that computer scientists can easily understand? This is true for all fields that require a very high level of domain knowledge. Through experience, I came to consider the pairing of AI specialists with specialists of other disciplines a sub-optimal strategy at best. The billions of dollars invested in such enterprises have failed to produce any significant return on investment.
The number one blind spot of these endeavours is the supply chain; it usually takes years and looks like this:
1- Domain specialists identify a question
2- Years are spent to develop methods to measure and tackle it
3- The methods are made cheaper
4- The missing links: Computational chemists, Bioinformaticians, ... start the work on what will become the dataset
5- AI can finally enter the scene
Point number (1) is the foundation. You can measure and ask an infinite number of questions about anything; finding the most important one is not as obvious as it seems. For example, it is not at all obvious a priori that a protein's structure is an important feature. Another example is debugging code. A successful debugging session involves asking and answering a succession of relevant questions. Imagine giving code to someone with no programming experience and asking them to debug it. The probability of them asking the right questions is very close to 0.
Identifying what is important is called inserting inductive biases. In theory LLMs could integrate the inductive biases of a field and generate interesting questions, even format datasets from open-source data. However, until this ability has been fully demonstrated, the only cost-efficient way to accelerate AI-driven scientific discoveries is to build the interdisciplinarity into the people: AI researchers who know enough about the field to be able to identify the relevant questions of the future.
The Appeal of Fear in Media
4453
The growing sales of horror games such as the Resident Evil franchise, and the success of horror shows and movies, indicate the appeal of the genre. The reasons behind this appeal have been investigated in many studies. First, we must distinguish between the terms “horror” and “terror”, which tend to be erroneously used interchangeably. According to Dani Cavallaro, horror is the fear linked to visible disruptions of the natural order, sudden appearances, and identifiable objects. Horror causes intense physical reactions and provides us with surprise and shock. On the other hand, terror is the fear of the unknown. It is the feeling of tension and unease preceding a revelation [1].
“The difference between Terror and Horror is the difference between awful apprehension and sickening realization: between the smell of death and stumbling against a corpse… Terror thus creates an intangible atmosphere of spiritual psychic dread… Horror resorts to a cruder presentation of the macabre” [2].
While playing horror games or watching horror movies, we constantly oscillate between terror and horror. One is willing to endure the intense fear (horror) because of its less subtle modulations (terror). In fact, a study done by the Institute of Scientific and Industrial Research at Osaka University reveals that players were more likely to experience intense fear when they were in a suspense state and then faced a surprising appearance [3]. From a biological perspective, once the human brain detects a potential threat, dopamine is released into the body, and once that threat is identified as false, the body feels pleasure and the person wants to repeat this cycle by seeking scary content [4].
Although one can aim for a long psychological experience by having a good combination of terror and horror, what causes terror and unease is individual and varies from one person to another. Individual characteristics, traumas, and phobias must be taken into consideration to assess the level of fear and manipulate future gameplay accordingly.
[1] D. Cavallaro, The Gothic Vision: Three Centuries of Horror, Terror and Fear. New York: Bloomsbury Publishing, 2002.
[2] D. P. Varma, The Gothic Flame. Lanham, MD: Scarecrow Press, 1988.
[3] V. Vachiratamporn, R. Legaspi, K. Moriyama and M. Numao, "Towards the Design of Affective Survival Horror Games: An Investigation on Player Affect," 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013, pp. 576-581, doi: 10.1007/s12193-014-0153-4
[4] A. Damasio, Descartes error: emotion, reason and the human brain. New York: Avon Books, 1994.
Bluwr: My Experience with an SEO-Optimized Platform That Knows Me Better Than I Do
4353
When I first started writing on Bluwr, I didn't think much about how well the platform was optimized for SEO. Like most writers, my primary focus was on crafting engaging content, sharing my thoughts, and hoping my articles would find their way to the right audience. But recently, I decided to conduct a funny little experiment that opened my eyes to just how effective Bluwr's SEO capabilities truly are.
Curiosity struck me one evening as I was thinking about the digital footprint I’ve been leaving behind with my articles. With AI becoming increasingly sophisticated, I wondered just how much information was out there about me, pieced together from my work. So, I turned to GPT, and asked it a simple question: "What do you know about me?"
The results were both fascinating and a little uncanny. GPT didn’t just know general facts; it provided a detailed account of my work, interests, and even some insights that I hadn’t explicitly mentioned in any one article but had implied across several. The source of all this information? My articles on Bluwr.
This experience highlighted one major thing for me: Bluwr is incredibly well-optimized for SEO. Every article I had written, every topic I had explored, and every opinion I had shared was indexed and made easily accessible by search engines.
Bluwr’s backend is clearly designed with SEO in mind. From the way articles are structured to how tags and keywords are used, everything seems to be geared towards making sure that each piece of content is easily discoverable.
What struck me the most during my experiment was how Bluwr enabled GPT to aggregate and synthesize data about me. Individually, my articles were just that—individual pieces of content. But together, they created a comprehensive narrative that GPT could easily tap into.
This got me thinking about the broader implications of writing on a platform like Bluwr.
While my little experiment with GPT started as a bit of fun, it ended up being an insightful look into how powerful SEO can be when done right.
Feel free to try a similar experiment yourself. You might be surprised at what you learn...
5 reasons why you should write on Bluwr
4244
**1- Exposure:**
Bluwr is designed to give you the maximum exposure through Search Engine Optimization (SEO). SEO is the most important thing for Blogs, allowing your works to be referenced by search engines such as *Google*. Most online publishing platforms either offer very low exposure or let you do most of the SEO. Bluwr is different, Bluwr works for you so you can concentrate on doing what you love.
**2- Ease of use:**
Bluwr is the easiest platform for writing and publishing fast. Thanks to the minimalist interface and automatic formatting, you can go from idea to article in minutes.
**3- Speed:**
Not only can you write and publish fast on Bluwr; Bluwr is also extremely optimized to deliver in the most challenging internet conditions. If part of your audience is located in places where internet speed is low, Bluwr is your best choice for delivering your message.
**4- A truly dedicated community:**
Bluwr is invitation only. A platform for people like you, who truly love writing. It is a community of writers dedicated to high quality content. This is way beyond industry standards. By joining Bluwr, you will join a community passionate about writing.
**5- No distractions:**
No distraction for your audience. No ads, no pop-ups, no images, no videos. This means that your readers can devote their entire attention to your words.
**-Bonus: Detailed analytics-**
Bluwr offers you free detailed analytics about your articles. Know when your readers are connected, what performs best, and get information about where your readers are coming from.
Artificial Illusion: The Hype of AI - Part 1
4630
I personally see AI as a hype that will slow down with time. Nowadays, people include AI in their projects to seize opportunities. For example, if you have a failing business, just add the word AI and you might attract investments. If you're doing research, switch to AI or include a part of it, even if it's not necessary, and you may receive funding. AI is becoming a buzzword, and if you believe it's not, you might get frustrated. You might feel unworthy as a human and worry about being replaced by a robot that lacks emotions, creativity, and the incomparable qualities of the legendary creation: humans.
As I mentioned in a previous opinion article, "Just use AI in your speech and you'll sound fancy." This trend has permeated many sectors. I’ve had conversations with CEOs of startups that claim to use AI for groundbreaking innovations :). When I asked them simple questions about the models they used, the reasoning behind their choices, and the specific applications, they would talk broadly about AI—just AI, yes AI, and that’s it.
It's reminiscent of the old saying, "Fake it till you make it," but with a modern twist: "Artificial Illusion." As Mark Twain once said, "It's easier to fool people than to convince them that they have been fooled." This seems particularly true in the world of AI hype.
The enthusiasm for AI has led to a phenomenon where merely mentioning it can lend credibility and attract resources, even when the actual implementation is minimal or superficial. This trend not only dilutes the genuine potential of AI but also risks disillusioning stakeholders who may eventually see through the facade. True innovation requires substance, not just buzzwords.
If Shakespeare were alive today, he might quip, "To AI, or not to AI, that is the question." The answer, of course, is that while AI has its place, it’s not the end-all and be-all. We should remember Albert Einstein's wise words: "Imagination is more important than knowledge." AI lacks the imagination and creativity that humans bring to the table.
The real secret to success isn’t in the latest tech jargon, but in honest, hard work and genuine innovation. So next time someone dazzles you with their AI-powered business model, just remember: A little skepticism can go a long way. Or as George Bernard Shaw put it, "Beware of false knowledge; it is more dangerous than ignorance."
Data is Not the new Oil, Data is the new Diamonds (maybe)
5127
Over the past decade I have heard this sentence more times than I can count: "Data is the new oil." At the time it sounded right; now I see it as misguided.
That simple sentence took hold when people realized that Big Tech (mostly Facebook and Google) was collecting huge amounts of data on its users. Although this was, in hindsight, before AI blew up into the massive thing it is now, it had a profound effect on people's minds. The competitive advantages that companies with data were able to achieve inspired a new industry and a new speciality in computer science, Big Data, and fostered the creation of many new technologies that have become essential to the modern internet.
"Data is the new Oil", means two things:
1- Every drop is valuable
2- The more you have, the better.
And it seemed true, but it was an artifact of a Big Tech use case. What Big Tech was doing at the time was selling ads with AI. To sell ads to people, you need to model their behaviour and psychology; to achieve that you need behavioural data, and that's what Google and Facebook had: behavioural data. It is a perfect use case, where the data collected is very clean and tightly fits the application. In other words, the noise-to-signal ratio is low, and in this case, the more data you can collect the better.
This early success however hid a major truth for years: for AI to work well, the quality of the dataset matters enormously. Unlike oil, when it comes to data, some drops are more valuable than others.
In other words, data, like a diamond, needs to be carved and polished before it can be presented. Depending on the application, we need people able to understand the type of data, the meanings associated with it, the issues associated with collection and, most importantly, how to clean it and normalize it.
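A minimal example of that carving and polishing, with a hypothetical toy table (the column names and values are invented, and pandas is just one convenient tool for it): deduplicate, drop incomplete records, then normalize a measurement so a model can use it:

```python
import pandas as pd

# Hypothetical raw data: a duplicated record and missing measurements.
raw = pd.DataFrame({
    "patient_id":    [1, 1, 2, 3, 4],
    "age":           [34, 34, None, 51, 29],
    "glucose_mg_dl": [90, 90, 110, None, 95],
})

cleaned = (
    raw.drop_duplicates(subset="patient_id")   # carve: remove duplicated records
       .dropna()                               # polish: drop rows missing a measurement
       .assign(glucose_z=lambda d: (d.glucose_mg_dl - d.glucose_mg_dl.mean())
                                    / d.glucose_mg_dl.std())  # normalize for modelling
)
print(cleaned)
```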
It is my opinion that data curation is a major factor in what differentiates a great AI from a below-average one. Those who misunderstood this concept ended up significantly increasing their costs with complex Big Data infrastructures, drowning themselves in heaps of data that they don't need and that hinder the training of their models.
When it comes to data, hoarding and greed are not the way to go. We should keep in mind that data has no intrinsic value; the universe keeps generating infinite amounts of it. What we need is useful data.
The future of AI is Small and then Smaller.
4379
We need smaller models, but don't expect big tech to develop them.
Current state-of-the-art architectures are very inefficient; the cost of training them is getting out of hand, more and more unaffordable for most people and institutions. This is effectively creating a three-tier society in AI:
1- Those who can afford model development and training (Big Tech mostly), and make *foundation models* for everybody else
2- Those who can only afford the fine tuning of the *foundation models*
3- Those who can only use the fine tuned models through APIs.
This is far from an ideal situation for innovation and development because it effectively creates one producer tier (1) and two consumer tiers (2 and 3). It concentrates most of the research and development in tier 1, leaves a little for tier 2 and almost completely eliminates tier 3 from R&D in AI. Tier 3 is most of the countries and most of the people.
This also explains why most of the AI startups we see all over the place are at best tier 2, which means that their *intellectual property* is low. The barrier to entry for competition is very low, as someone else can easily replicate their product. The situation for tier 3 AI startups is even worse.
This is all due to two things:
1- It took almost 20 years for governments and people to realize that AI was coming; in fact they only did so after the fact. The prices for computer hardware (GPUs) were already through the roof and real talent already very rare. Most people still think they need *data scientists*; in fact they need AI researchers, DevOps engineers, software engineers, machine learning engineers, cloud infrastructure engineers, ... The list of specialties is long. The ecosystem is now complex and most countries do not have the right curriculums in place at their universities.
2- The current state-of-the-art models are **huge and extremely inefficient**; they require a lot of compute resources and electricity.
Point number 2 is the most important one, because if we solve it, the need for cloud, DevOps, etc. decreases significantly. Meaning we not only solve the problem of training and development cost, we also solve part of the talent acquisition problem. Therefore, it should be the absolute priority: __we need smaller, more efficient models__.
But why are current models so inefficient? The answer is simple: the first solution that works is usually not efficient, it just works. We have seen the same thing with steam engines and computers. Current transformer-based models, for example, need several layers of huge matrices that span the whole vocabulary. That's a very naive approach, but it works. In a way we still have not surpassed the Deep Learning trope of 15 years ago: just add more layers.
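A back-of-the-envelope count makes the point. The numbers below are illustrative (roughly the scale of recent open LLMs), and the per-block figure is the usual rough 12·d² approximation for attention plus MLP weights:

```python
vocab_size = 128_000   # tokens in the vocabulary the matrices must span
d_model    = 8_192     # hidden dimension
n_layers   = 80

embedding   = vocab_size * d_model   # input embedding matrix
unembedding = vocab_size * d_model   # output projection back onto the vocabulary
per_block   = 12 * d_model ** 2      # rough estimate of one transformer block
total       = embedding + unembedding + n_layers * per_block

print(f"vocabulary-facing matrices: {(embedding + unembedding) / 1e9:.1f}B parameters")
print(f"transformer blocks:         {n_layers * per_block / 1e9:.1f}B parameters")
print(f"rough total:                {total / 1e9:.1f}B parameters")
```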
Research in AI should not focus on large language models; it should focus on small language models that achieve results on par with the large ones. That is the only way to keep research and development in AI alive, thriving and open to most. The alternative is to keep using these huge models that only extremely wealthy organisations can make, leading to a concentration of knowledge and to too many tier 2 and tier 3 startups, which will lead us to a disastrous pop of the AI investment bubble.
However, don't count on Big Tech to develop and popularize these efficient models. They are unlikely to, as having a monopoly on AI development is to their advantage for as long as they can afford it.
Universities, that's your job.