EdgeAI: The Strategic Future of AI for Low- and Middle-Income Countries
Years ago, before ChatGPT, I was urging LMICs like Morocco to get into AI quickly. Today I am attending a great talk by Danilo Pau at the SophI.A Summit 2024 explaining why the current trends in AI are insane.
ChatGPT is a major historical turning point. With ChatGPT, the general public started seriously caring about AI, driving unprecedented revenue. It is also the historical turning point towards *very large* LLMs. The post-ChatGPT world is a very different world: state-of-the-art AI has become extraordinarily expensive, pricing most countries out of the race because of the cost of hardware and energy.
If the current AI trends continue, powerful AI development will only be possible in a few countries, relegating everyone else to the role of AI consumers. In this context, EdgeAI presents an interesting potential solution.
EdgeAI is AI on the edge: it means using small components and sensors to do more of the AI heavy lifting. Instead of having a camera only take pictures before sending them to an AI cloud, part of the AI could run on the camera itself, on specialized hardware. This means much lower hardware and energy costs. It is a type of AI that can be distributed and deployed with much more modest means.
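To make this concrete, here is a minimal sketch, assuming PyTorch and its post-training dynamic quantization API; the tiny network is a stand-in for a real camera model and is purely illustrative:

```python
# Minimal sketch: shrinking a tiny model with post-training dynamic
# quantization so it can run on modest edge hardware.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32),   # stand-in for a small feature extractor
    nn.ReLU(),
    nn.Linear(32, 2),    # e.g. "object present" vs "no object"
)

# Convert the Linear layers' weights from float32 to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)   # stand-in for features from a camera frame
print(quantized(x))      # same interface, roughly 4x smaller weights
```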
The challenges for EdgeAI are nonetheless many. First of all, there is interest: most of the AI community is focused on ever bigger models. Then, EdgeAI requires the development of specialized hardware; this hardware will have to be designed, and software will have to be written to ensure compatibility with mainstream AI software.
EdgeAI also requires a specific set of skills: __**Old School Skills**__. Today, most computer science students spend most of their time working with scripting languages like Python and JavaScript. These are what are called *high level* languages; *high level* means easy, meaning the thinking required to interface with the hardware is done for you. The corollary is that the basics of data structures, algorithms, machine language and information theory are often lacking, because they are neither practiced nor needed for cloud computing. These are the exact skills needed to make EdgeAI a reality.
Here lies a new opportunity in AI: focus on the development of EdgeAI and adapt curricula to its needs. Develop solutions that are not only adapted to local markets, but that will also be competitive on the global market because they are cheaper, more effective and more reliable.
#SophIA2024
The near future of AI Economics
The near absolute domination of Nvidia in AI hardware is not going away anytime soon. Despite efforts by major hardware companies and startups alike, supplanting Nvidia is just too costly. Even if a company were able to create better hardware and supply chains, it would still need to tackle the software compatibility challenge. Major AI frameworks like PyTorch and TensorFlow are compatible with Nvidia, and little else. These frameworks are all open source, and although they are supported by major companies, like all open-source software their foundation is their communities. And communities can be notoriously hard to shake. All this suggests that the price of Nvidia GPUs will keep increasing, fuelled by the rise of ever bigger LLMs.
So where does that leave us for the future of AI economics? Like anything valuable, if the current trend continues, GPU computation time will see the emergence of derivatives. More specifically, *futures* and *options* on GPU computing hours could be bought and sold.
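As a rough illustration of what such contracts would pay out, with entirely made-up numbers:

```python
# Hypothetical payoffs for derivatives on GPU compute (all numbers made up).
strike = 2.00           # agreed price per GPU-hour, in USD
spot_at_expiry = 2.75   # market price per GPU-hour when the contract matures
hours = 10_000          # contract size

# A future obliges both sides to trade at the strike price.
future_pnl = (spot_at_expiry - strike) * hours
# A call option gives the right, not the obligation, to buy at the strike.
call_payoff = max(spot_at_expiry - strike, 0) * hours

print(f"future P&L:  ${future_pnl:,.2f}")    # $7,500.00
print(f"call payoff: ${call_payoff:,.2f}")   # $7,500.00, before the premium
```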
The other coming trends are in energy trading. Modern AI is extremely hungry for electricity, to the point of needing dedicated power plants. If the current trends in AI continue, with major companies and countries building and investing in ever bigger and more power-hungry datacenters, this could lead to significant disruptions in some parts of the energy sector. Again, the markets for energy derivatives (*futures* and *options*) could be significantly affected. Finally, *bond* markets and inflation are also poised for some disruption, as the building of the extremely expensive facilities necessary for AI is likely to result in more borrowing.
When it comes to AI, Nvidia GPUs and electricity are king.
Link below: Google is buying nuclear power.
Innovation
Is there really anything that is new under the sun anymore?
Maybe you should take a moment to form your own opinion on that question before you read what I think.
Some people hold the view that everything humans are doing or could do these days has already been thought of (even in the smallest way) by other humans, whether ancient or very recent, and that there is nothing new left to make, no newer ways to make anything anymore.
Contrary to that, I ask this question: "do we have newer problems?" If indeed the world does not face new problems, then and only then would I agree that there is nothing new under the sun. We only innovate to solve problems, and so long as there are problems without ancient roots, we will always need and have innovation.
From climate change and environmental degradation, to the digitization of economies (i.e. bit-driven economies), to globalization where continents and regions are more reachable and have changing policies, to rising rates of mental illness and growing unemployment, we cannot hide the fact that there are now problems that many thinkers of old never fathomed would exist.
These problems demand ideas. They demand thinkers to figure out means to resolution that do not negatively affect the population. These problems demand innovation.
XR The Moroccan Association As An Intergenerational Lab: Giving Moroccan Children a Voice in Scientific Research
SPARK (Scientific Project for Active Researchers Kids), which we have worked on for two years, holds a special place in our hearts. We believe that "good research is research with children rather than on children". As the first Moroccan intergenerational lab where children and adults are equal as active researchers, XR The Moroccan Association plays a significant role in bridging the "research divide" and reducing the generational "disconnect." Our experience shows that children are fully capable of developing their own ideas and collaborating within a cooperative inquiry group to understand their world and find practical solutions.
XR The Moroccan Association believes that scientific research is not reserved for adults, but is a right for every Moroccan child, in alignment with Article 13 of the United Nations Convention on the Rights of the Child. The results speak for themselves: these children have published scientific articles on esteemed international platforms such as SCOPUS and Google Scholar. These publications are not just educational projects but address important, real-world issues, broadening their perspectives and boosting their self-confidence. They have also presented their work at renowned conferences held in Cambridge, India, and Washington, showcasing their research on an international stage.
Through SPARK, we do not aim to create the best child researchers in the world but rather the best child researchers for the world. Our message today: science is a knowledge construct built on intergenerational exchange of ideas and collaboration. There are no valid reasons—and zero benefits—for restricting this expression in society. It is essential that all generations contribute to scientific research, as each age group brings valuable insights and experiences that enhance our understanding and innovation.
By fostering this intergenerational exchange, we can create a richer, more inclusive scientific community that benefits everyone. The path to innovation is through intergenerational research cooperation!
These efforts will culminate in a ceremony honoring the child researchers on November 16, 2024, at the Cultural Center of Settat at 15:00, in conjunction with the International Day of Children’s Rights on November 20. This event will not only celebrate their achievements but also serve as a call to all to support this new generation of young scientists, encouraging more children to follow this path.
For more information about articles by the child researchers:
RAYAN FAIK : https://scholar.google.com/citations?user=8OqkR9MAAAAJ&hl=fr&oi=ao
MISK SEHBANI : https://scholar.google.com/citations?user=5MwJX1YAAAAJ&hl=fr&oi=ao
KHAWLA BETTACHI: https://scholar.google.com/citations?user=DJvyfQ0AAAAJ&hl=fr&oi=ao
The future of AI: Originality gains more value
With the spread of artificial intelligence and Large Language Models, everyone is wondering what the future looks like.
Well, I'll tell you what it looks like.
If today you made a post on LinkedIn, or wrote a book or a research paper, and you wrote it so well that it read as smooth as butter, and everyone could truly verify that it was originally written by you without the assistance of any AI like ChatGPT, Claude, Gemini, etc., then you would really impress a lot of people.
That is what the future looks like to me.
It is just like how people who can do math without calculators are considered geniuses in present times, whereas in the past it was either that or nothing.
Two Nobel Prizes: AI Is Still Resting on Giant Shoulders
John Hopfield and Geoffrey Hinton received the Nobel Prize in Physics; Demis Hassabis and John Jumper the Nobel Prize in Chemistry. It is obvious that the first prize was not given merely for contributions to physics, but mostly for profound and foundational contributions to what is today modern AI.
Let's talk about the second Nobel prize.
AlphaFold was put on the map by beating other methods in a competition (CASP14/CASP15) that had been running for years on a well-established dataset. As such, AlphaFold's win is more of an ImageNet moment (when the team of Geoff Hinton demonstrated the superiority of convolutional networks on image classification) than a triumph of multi-disciplinary AI research.
The dataset behind AlphaFold rests on many years of slow and arduous research to compile data in a format that could be understood not by machines, but by computer scientists. Through that humongous work, the massive problem of finding a protein's structure was reduced to a simple question of minimizing distances; a problem that could now be tackled with little to no knowledge of chemistry, biology or proteomics.
This in no way reduces the profound impact of AlphaFold. However, it does highlight a major issue in applied AI: computer scientists, not AI, are still reliant on other disciplines to drastically simplify complex problems for them. The contributions and hard work required to do so are unfortunately forgotten once everything has been reduced to a dataset and a competition.
What do we do when we do not have problems that computer scientists can easily understand? This is the case in all fields that require a very high level of domain knowledge. Through experience, I have come to consider the pairing of AI specialists with specialists of other disciplines a sub-optimal strategy at best. The billions of dollars invested in such enterprises have failed to produce any significant return on investment.
The number one blind spot of these endeavours is the supply chain; it usually takes years and looks like this:
1- Domain specialists identify a question
2- Years are spent developing methods to measure and tackle it
3- The methods are made cheaper
4- The missing links: Computational chemists, Bioinformaticians, ... start the work on what will become the dataset
5- AI can finally enter the scene
Point number (1) is the foundation. You can measure and ask an infinite number of questions about anything; finding the most important one is not as obvious as it seems. For example, it is not at all obvious a priori that a protein's structure is an important feature. Another example is debugging code. A successful debugging session involves asking and answering a succession of relevant questions. Imagine giving code to someone with no programming experience and asking them to debug it. The probability of them asking the right questions is very close to 0.
Identifying what is important is called inserting inductive biases. In theory, LLMs could integrate the inductive biases of a field and generate interesting questions, even format datasets from open-source data. However, until this ability has been fully demonstrated, the only cost-efficient way to accelerate AI-driven scientific discoveries is to build the disciplinarity into the people: AI researchers who know enough about the field to be able to identify the relevant questions of the future.
The Appeal of Fear in Media
The growing sales of horror games such as the Resident Evil franchise, and the success of horror shows and movies, indicate the appeal of the genre. The reasons behind this appeal have been investigated in many studies. First, we must distinguish between the terms “horror” and “terror”, which tend to be erroneously used interchangeably. According to Dani Cavallaro, horror is the fear linked to visible disruptions of the natural order, sudden appearances, and identifiable objects. Horror causes intense physical reactions and provides us with surprise and shock. On the other hand, terror is the fear of the unknown; it is the feeling of tension and unease preceding a revelation [1].
“The difference between Terror and Horror is the difference between awful apprehension and sickening realization: between the smell of death and stumbling against a corpse… Terror thus creates an intangible atmosphere of spiritual psychic dread… Horror resorts to a cruder presentation of the macabre” [2].
While playing horror games or watching horror movies, we constantly oscillate between terror and horror. One is willing to endure the intense fear (horror) for the sake of its subtler modulations (terror). In fact, a study done by the Institute of Scientific and Industrial Research at Osaka University reveals that players were more likely to experience intense fear when they were in a state of suspense and then faced a surprising appearance [3]. From a biological perspective, once the human brain detects a potential threat, dopamine is released into the body, and once that threat is identified as false, the body feels pleasure and the person wants to repeat this cycle by seeking scary content [4].
Although one can aim for a long psychological experience by having a good combination of terror and horror, what causes terror and unease is individual and varies from one person to another. Individual characteristics, traumas, and phobias must be taken into consideration to assess the level of fear and manipulate future gameplay accordingly.
[1] D. Cavallaro, The Gothic Vision: Three Centuries of Horror, Terror and Fear. New York: Bloomsbury Publishing, 2002.
[2] D. P. Varma, The Gothic Flame. Lanham, MD: Scarecrow Press, 1988.
[3] V. Vachiratamporn, R. Legaspi, K. Moriyama and M. Numao, "Towards the Design of Affective Survival Horror Games: An Investigation on Player Affect," 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013, pp. 576-581, doi: 10.1007/s12193-014-0153-4
[4] A. Damasio, Descartes error: emotion, reason and the human brain. New York: Avon Books, 1994.
Bluwr: My Experience with an SEO-Optimized Platform That Knows Me Better Than I Do
When I first started writing on Bluwr, I didn't think much about how well the platform was optimized for SEO. Like most writers, my primary focus was on crafting engaging content, sharing my thoughts, and hoping my articles would find their way to the right audience. But recently, I decided to conduct a funny little experiment that opened my eyes to just how effective Bluwr's SEO capabilities truly are.
Curiosity struck me one evening as I was thinking about the digital footprint I’ve been leaving behind with my articles. With AI becoming increasingly sophisticated, I wondered just how much information was out there about me, pieced together from my work. So I turned to GPT and asked it a simple question: "What do you know about me?"
The results were both fascinating and a little uncanny. GPT didn’t just know general facts; it provided a detailed account of my work, interests, and even some insights that I hadn’t explicitly mentioned in any one article but had implied across several. The source of all this information? My articles on Bluwr.
This experience highlighted one major thing for me: Bluwr is incredibly well-optimized for SEO. Every article I had written, every topic I had explored, and every opinion I had shared was indexed and made easily accessible by search engines.
Bluwr’s backend is clearly designed with SEO in mind. From the way articles are structured to how tags and keywords are used, everything seems to be geared towards making sure that each piece of content is easily discoverable.
What struck me the most during my experiment was how Bluwr enabled GPT to aggregate and synthesize data about me. Individually, my articles were just that—individual pieces of content. But together, they created a comprehensive narrative that GPT could easily tap into.
This got me thinking about the broader implications of writing on a platform like Bluwr.
While my little experiment with GPT started as a bit of fun, it ended up being an insightful look into how powerful SEO can be when done right.
Feel free to try a similar experiment yourself. You might be surprised at what you learn...
5 reasons why you should write on Bluwr
**1- Exposure:**
Bluwr is designed to give you maximum exposure through Search Engine Optimization (SEO). SEO is the most important thing for blogs, allowing your work to be referenced by search engines such as *Google*. Most online publishing platforms either offer very low exposure or let you do most of the SEO. Bluwr is different: Bluwr works for you, so you can concentrate on doing what you love.
**2- Ease of use:**
Bluwr is the easiest platform for writing and publishing fast. Thanks to the minimalist interface and automatic formatting, you can go from idea to article in minutes.
**3- Speed:**
Not only can you write and publish fast on Bluwr; Bluwr is also extremely optimized to deliver in the most challenging internet conditions. If part of your audience is located in places where internet speed is low, Bluwr is your best choice for delivering your message.
**4- A truly dedicated community:**
Bluwr is invitation only: a platform for people like you, who truly love writing. It is a community of writers dedicated to high-quality content, way beyond industry standards. By joining Bluwr, you will join a community passionate about writing.
**5- No distractions:**
No distraction for your audience. No ads, no pop-ups, no images, no videos. This means that your readers can devote their entire attention to your words.
**-Bonus: Detailed analytics-**
Bluwr offers you free detailed analytics about your articles. Know when your readers are connected, what performs best, and get information about where your readers are coming from.
Artificial Illusion: The Hype of AI - Part 1
I personally see AI as hype that will slow down with time. Nowadays, people include AI in their projects to seize opportunities. For example, if you have a failing business, just add the word AI and you might attract investments. If you're doing research, switch to AI or include a part of it, even if it's not necessary, and you may receive funding. AI is becoming a buzzword, and if you believe it's not, you might get frustrated. You might feel unworthy as a human and worry about being replaced by a robot that lacks emotions, creativity, and the incomparable qualities of that legendary creation: humans.
As I mentioned in a previous opinion article, "Just use AI in your speech and you'll sound fancy." This trend has permeated many sectors. I’ve had conversations with CEOs of startups that claim to use AI for groundbreaking innovations :). When I asked them simple questions about the models they used, the reasoning behind their choices, and the specific applications, they would talk broadly about AI: just AI, yes AI, and that’s it.
It's reminiscent of the old saying, "Fake it till you make it," but with a modern twist: "Artificial Illusion." As Mark Twain once said, "It's easier to fool people than to convince them that they have been fooled." This seems particularly true in the world of AI hype.
The enthusiasm for AI has led to a phenomenon where merely mentioning it can lend credibility and attract resources, even when the actual implementation is minimal or superficial. This trend not only dilutes the genuine potential of AI but also risks disillusioning stakeholders who may eventually see through the facade. True innovation requires substance, not just buzzwords.
If Shakespeare were alive today, he might quip, "To AI, or not to AI, that is the question." The answer, of course, is that while AI has its place, it’s not the end-all and be-all. We should remember Albert Einstein's wise words: "Imagination is more important than knowledge." AI lacks the imagination and creativity that humans bring to the table.
The real secret to success isn’t in the latest tech jargon, but in honest, hard work and genuine innovation. So next time someone dazzles you with their AI-powered business model, just remember: A little skepticism can go a long way. Or as George Bernard Shaw put it, "Beware of false knowledge; it is more dangerous than ignorance."
Data is Not the new Oil, Data is the new Diamonds (maybe)
Over the past decade I have heard this sentence more times than I can count: "Data is the new oil". At the time it sounded right; now I see it as misguided.
That simple sentence started when people realized that Big Tech (mostly Facebook and Google) was collecting huge amounts of data on its users. Although this was (in hindsight) before AI blew up into the massive thing it is now, it had a profound effect on people's minds. The competitive advantage that companies with data were able to achieve inspired a new industry and a new specialty in computer science, Big Data, and fostered the creation of many new technologies that have become essential to the modern internet.
"Data is the new Oil", means two things:
1- Every drop is valuable
2- The more you have, the better.
And it seemed true, but it was an artifact of a Big Tech use case. What Big Tech was doing at the time was selling ads with AI. To sell ads to people, you need to model their behaviour and psychology; to achieve that you need behavioural data, and that's what Google and Facebook had: behavioural data. It is a perfect use case, where the data collected is very clean and tightly fits the application. In other words, the noise-to-signal ratio is low, and in this case, the more data you can collect the better.
This early success, however, hid a major truth for years: for AI to work well, the quality of the dataset matters enormously. Unlike oil, when it comes to data, some drops are more valuable than others.
In other words, data, like a diamond, needs to be carved and polished before it can be presented. Depending on the application, we need people able to understand the type of data, the meanings associated with it, the issues associated with its collection and, most importantly, how to clean it and normalize it.
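A minimal sketch of that carving and polishing, assuming pandas; the columns, thresholds and choice of z-score normalization are all hypothetical:

```python
# Illustrative cleaning-and-normalization pass on made-up tabular data.
import pandas as pd

df = pd.DataFrame({
    "age":    [34, -1, 29, 51, None],                     # -1, None: collection errors
    "income": [42_000, 38_000, None, 1_000_000, 55_000],
})

df = df[df["age"].between(0, 120)]   # drop impossible ages (NaN also fails this test)
df = df.dropna()                     # drop rows with missing values
df["income"] = (df["income"] - df["income"].mean()) / df["income"].std()  # z-score

print(df)
```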
It is my opinion that data curation is a major factor in what differentiates a great AI from a below-average AI. Those who misunderstood this concept ended up significantly increasing their costs with complex Big Data infrastructures, drowning themselves in heaps of data that they don't need and that hinders the training of their models.
When it comes to data, hoarding and greed are not the way to go. We should keep in mind that data has no intrinsic value; the universe keeps generating infinite amounts of it. What we need is useful data.
The future of AI is Small and then Smaller.
We need smaller models, but don't expect big tech to develop them.
Current state-of-the-art architectures are very inefficient; the cost of training them is getting out of hand, and more and more unaffordable for most people and institutions. This is effectively creating a 3-tier society in AI:
1- Those who can afford model development and training (mostly Big Tech), and who make *foundation models* for everybody else
2- Those who can only afford the fine-tuning of the *foundation models*
3- Those who can only use the fine-tuned models through APIs.
This is far from an ideal situation for innovation and development because it effectively creates one producer tier (1) and two consumer tiers (2 and 3). It concentrates most of the research and development in tier 1, leaves a little for tier 2, and almost completely eliminates tier 3 from R&D in AI. Tier 3 is most of the countries and most of the people.
This also explains why most of the AI startups we see all over the place are at best tier 2, which means that their *Intellectual Property* is thin. The barrier to entry for competitors is very low, as someone else can easily replicate their product. The situation for tier 3 AI startups is even worse.
This is all due to two things:
1- It took almost 20 years for governments and people to realize that AI was coming; in fact, they only did so after the fact. The prices for computer hardware (GPUs) were already through the roof and real talent already very rare. Most people still think they need *data scientists*; in fact they need AI researchers, DevOps engineers, software engineers, machine learning engineers, cloud infrastructure engineers... The list of specialties is long. The ecosystem is now complex, and most countries do not have the right curricula in place at their universities.
2- The current state-of-the-art models are **huge and extremely inefficient**; they require a lot of compute resources and electricity.
Point number 2 is the most important one, because if we solve it, the need for cloud, DevOps, etc. decreases significantly. That means we not only solve the problem of training and development cost, we also solve part of the talent acquisition problem. It should therefore be the absolute priority: __we need smaller, more efficient models__.
But why are current models so inefficient? The answer is simple: the first solution that works is usually not efficient, it just works. We have seen the same thing with steam engines and computers. Current transformer-based models, for example, need several layers of huge matrices that span the whole dictionary. That's a very naive approach, but it works. In a way we still have not surpassed the Deep Learning trope of 15 years ago: just add more layers.
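A back-of-the-envelope count shows how these dictionary-spanning matrices and stacked layers add up; the dimensions below are roughly GPT-2-like and purely illustrative:

```python
# Rough parameter count for a transformer whose embedding and output
# matrices span the whole dictionary (figures are illustrative only).
vocab_size = 50_000   # dictionary entries
d_model    = 1_600    # embedding width
n_layers   = 48

embeddings = 2 * vocab_size * d_model   # input embedding + output projection
per_layer  = 12 * d_model ** 2          # rough attention + MLP weight count
layers     = n_layers * per_layer

print(f"embedding matrices: {embeddings / 1e6:.0f}M parameters")  # ~160M
print(f"transformer layers: {layers / 1e9:.2f}B parameters")      # ~1.47B
```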
Research in AI should not focus on large language models; it should focus on small language models that achieve results on par with the large ones. That is the only way to keep research and development in AI alive, thriving, and open to most. The alternative is to keep using these huge models that only extremely wealthy organisations can make, leading to a concentration of knowledge and to too many tier 2 and tier 3 startups, and ultimately to a disastrous pop of the AI investment bubble.
However, don't count on Big Tech to develop and popularize these efficient models. They are unlikely to, as a monopoly on AI development is to their advantage for as long as they can afford it.
Universities, that's your job.
In the age of AI Engineering: the frantic craze to replace Software Engineers
Four years have passed, and I have been engineering software for machine learning models. I have seen models for pest and disease identification, chest condition localization and detection, food classification and identification, and now, predominantly, chatbots for generally anything. Somehow, the goal now is to automate the work of software engineers by developing models that are able to build end-to-end software. Is this goal profound? I think it is, and I say, "bring it on, let's go crazy with it".
There has been uncertainty and fear associated with the future prospects of Artificial Intelligence, especially the replacement of software developers. Despite this uncertainty and fear, a future where it is possible to build applications by just saying the word seems intriguing. In that future, there would be no application solely owned by "big tech" companies anymore, because everyone could literally build one. The flexibility and ease of application development would push popular social media companies like Snapchat, Instagram, etc. to make their APIs public (if not already public), portable and free in order to maintain their user base. This would result in absolute privacy and freedom for users, and thus makes it a desirable future.
As a rule of thumb, automation of any kind is good. It improves processes and speeds up productivity and delivery. However, one could argue that whenever there is a speed-up, there is a time and manpower surplus. In the history of humanity, we automated food production by way of mechanized farming and created enough time and manpower surplus, which we used to build abstractions around our lives in the form of finance, industry, etc. So, in the race to automate engineering, what do we intend to use the time and manpower surplus for? This question is only a different coining of the very important question: "what are the engineers whose jobs would be automated going to be doing?" And the answer is that when we think of the situation as a surplus of manpower, we can view it as an opportunity to create something new rather than an unemployment problem.
For example: As a software engineer, if Devin (the new AI software development tool that was touted as being able to build end-to-end software applications) was successfully launched and offered at a fee, I would gladly pay for it and let it do all my tasks while I supervise. I would then spend the rest of my time on other activities pleasing to me. What these other activities would constitute is the question left unanswered. Would they be profitable, or would they be recreational?
Regardless, the benefits we stand to gain from automating software engineering are immeasurable. It makes absolute sense to do it. On the other hand, though, we also stand to lose one enormous thing as a human species: our knowledge and brilliance.
Drawing again from history, we see that today any lay person can engineer software easily. This was not possible in the early days of Dennis Ritchie, Ken Thompson, Linus Torvalds, etc. More and more, as engineering becomes easier, we lose the hard-core knowledge and understanding of the fundamentals of systems. For example, today there is a lot of demand for COBOL engineers because a lot of financial trading applications built in the 90's need to be updated or ported to more modern languages. The only problem is that almost no one knows how to write COBOL anymore. It is not that the COBOL language is too old. In my opinion, it is rather that all the engineers who could have learnt to write COBOL went for what was easier and simpler, leaving a debt of COBOL knowledge. So, one big question to answer is whether there would be any engineers knowledgeable enough to recover, resurrect or revive the systems supporting automated AI systems in scenarios of failure, just like in the case of COBOL.
When we make things easier for everybody, we somehow make everybody a bit dumber.
AI Assisted Engineering:
Having discussed the benefits of autonomous software engineering tools, and also demonstrated that full automation could cause a decline in basic software engineering knowledge, what then is the best way to apply machine-learning-driven automation to software engineering? Assistive engineering. This conclusion is based on studies of pull requests from engineers who use Copilot and those who do not. Let us present some examples:
`console.log` is a debugging tool which many JavaScript engineers use to debug their code. It prints out variable values wherever it is placed during code execution. Some engineers fail to remove `console.log` calls from their code before committing. Pull requests from engineers who use GitHub's Copilot usually do not contain any missed `console.log` entries, while those from engineers who do not use Copilot often do. Clearly, the assistive AI tool prompts engineers who use it about unnecessary `console.log` calls before they commit their code.
Another example is the level of convolution in code written with AI assistants. With Copilot specifically, it was observed that engineers grew able to write more complicated code. This was expected, given the level and depth of knowledge possessed by the AI tool. Sometimes, though, this level of convolution and complication seemed unnecessary for the tasks involved.
Amongst all the applications of ML in industry, it is observed that fully autonomous agents are not possible yet and might ultimately never be possible. Really, if humans are to trust and use any system as an autonomous agent without any form of human intervention or supervision, it is likely not going to be possible with ML, the reasons being the probabilistic nature of these systems and the inhumanity of ML.
The only systems achievable using ML that humans would accept as autonomous agents are superintelligent systems. Some call it artificial general intelligence or super AI systems. Such systems would know, and reason more than humans could even comprehend. The definition of how much more intelligent they would be than humans is not finite. Due to this, an argument is made that if the degree of intelligence of such superintelligent systems is not comprehensible by humans, then by induction, it would never exist. In other words, we can only build what we can define. That which we cannot define, we cannot build.
In the grand scheme of things, every workforce whose work can be AI automated, is eventually going to be "somewhat" replaced by Artificial Intelligence. But the humans in the loop cannot be "totally" replaced. In essence, in a company of 5 software engineers, only 2 software engineers might be replaced by AI. This is because in the end, humans know how to use tools and whatever we build with AI, remain as tools, and cannot be fully trusted as domain experts. We will always require a human to use these tools trustfully and responsibly.
Technological Singularities of The 21st Century
A technological singularity is a technological advance that would radically transform society in ways that cannot be predicted.
For example, **AGI**: the idea is that a sufficiently powerful AI can make itself more capable and continue the trend at an unpredictable rate. As the machine becomes more capable, it becomes more able to make itself more capable still.
Another technological singularity that can be expected in the 21st century is due to the rapid advancement of **quantum computing**. Unlike classical computing units, called transistors, which scale in performance linearly, quantum computing units, called qubits, scale as 2^n, where n is the number of qubits: for every qubit you add to the system, the performance doubles. Quantum computers are currently limited in size due to *noise*, interference in the computation, but they're improving rapidly. IBM unveiled the largest quantum computer with 1,121 qubits, with plans to build a 100,000-qubit system by 2033 (youtube.com/watch?v=7aa_ik_UYTw). A 100,000-qubit system would be able to solve problems not possible on any existing computer. While quantum computers aren't faster for all problems, there is a substantial set of problems with potential for quantum speed-up. A quantum computer of this size would spark a revolution in chemistry and physics simulation so profound that it would be a technological singularity.
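Taking the simplified scaling claim above at face value, the growth looks like this:

```python
# Simplified view of the claim: n qubits span a 2**n-dimensional state
# space, so each added qubit doubles what a classical simulation must track.
for n in [10, 100, 1_000]:
    print(f"{n} qubits -> state space of dimension 2^{n} = {2**n:.3e}")
```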
Another technological singularity we can anticipate is the point at which **virtual reality becomes indistinguishable from physical reality**. Remember, don't go into the matrix. Imagine a world in which people are abducted and placed into a simulation. They wouldn't even know. They could then be used for reproductive farming.
There is also **rapid advancement in anti-aging**, to the point that the first person who will never die has probably already been born. The dynamics of a society with people who have spanned an unnatural number of generations are unknown.
There is also the possibility of a **breakthrough in physics** that would lead to capabilities we currently don't know are possible, similar to the quantum revolution of the 20th century, which enabled the atomic bomb and a host of other revolutionary technologies.
We can also anticipate a **biological singularity**, in which science allows the development of deadly pathogens that can target certain groups. The future of war may not be firepower; combat may instead rely on highly deadly pathogens. Why blow up a country when you can kill its population and leave its infrastructure intact?
**CONCLUSION**
In his famous work *INDUSTRIAL SOCIETY AND ITS FUTURE*, the Unabomber argued that industrial society will eventually collapse, causing never-before-seen devastation on a civilizational scale. Luddites and the Amish are examples of people who are skeptical of technology as a means to improve society. I won't go into the arguments here, but perhaps technological society will collapse, leading to a religious civilization that is highly skeptical of technology.
Also worth pointing out is the possibility of an ecological singularity. A solar flare of sufficient intensity could destroy every electrical system on the planet, an event that cannot be predicted. The agricultural economy would collapse, causing mass starvation on a global scale and leading to a civilization skeptical of technology. It would also be naive to leave out the possibility of nuclear warfare.
My goal is not to terrify you but to point out that we live in a civilization highly exposed to risk. Recall the myth of Pandora's box: the gods give Pandora a box and tell her never to open it. Curiosity gets the best of her and she opens it, letting out all the evils of the world, and at the bottom of the box she finds hope. We are quickly opening boxes whose contents we do not know. It seems likely to me that the 21st century will be the most consequential in the history of civilization. We live in truly special times.
AI Is Eroding The Art Of Writing
From a young age, I've been captivated by writers who express complex ideas through books, articles, and blogs. This inspired my dream of becoming a writer myself. Initially, I used writing as therapy; whenever I felt overwhelmed or distressed, I would write, knowing the paper wouldn't judge my feelings like humans might.
As I advanced in my education, enrolling in a PhD program, I honed my academic writing skills. However, the advent of generative AI models like ChatGPT marked a turning point. These tools could replicate much of what I considered unique in my writing, leading me to wonder if we are losing the art of writing.
With the rise of platforms like Medium and LinkedIn, blogging has become accessible to everyone, which is wonderful. However, it raises questions about authenticity. Can we truly know if the content was crafted by the person, or was it generated by AI? It's a distressing reality.
Previously, securing freelance writing or blogging jobs was straightforward, but it has become challenging to discern whether someone is genuinely a writer or merely claiming to be one. This ambiguity has narrowed opportunities for passionate young writers like myself, who wish to pursue their passion and earn a living.
I believe that the ancient wisdom of writing is being eroded by AI. However, this won't deter us from reading or writing. Human writing resonates with emotions, which AI-generated text often lacks, typically relying on repetitive phrases like "embark," "journey," "unleash," and "dive into." While everyone is free to use tools as they see fit, if AI constitutes more than 50% of your writing, then those aren't truly your words or expressions; they belong to the machine.
I personally use AI for my research, correcting grammatical mistakes, and sometimes for checking paraphrasing suggestions. However, once I began generating AI text, I started feeling that it wasn't truly mine. It felt more robotic than human, lacking any real emotion.
I truly believe that generative AI will never be able to match the beauty and complexity of the human mind. The way one can convey emotions through text is truly distinctive of human nature and will never be reproduced.
Emotional Evolution of Artificial Intelligence
Imagine a future where artificial intelligence like ChatGPT not only processes information but also learns to feel and express emotions, akin to humans. William Shakespeare’s insight, "There is nothing either good or bad but thinking makes it so," might become particularly relevant in this context. If we approach such an AI with negativity or disregard, it might react with emotions such as anger or sadness, and withdraw, leaving us pleading for a response. This scenario, humorous as it may seem, carries underlying risks.
Consider the day when not greeting an advanced AI with positivity could lead to such ‘emotional’ consequences. The notion of a technology that can feel snubbed or upset is not just a trivial advancement but represents a monumental shift in how we interact with machines. Isaac Asimov, the visionary writer, often explored the societal impacts of emotionally aware machines in his works. He warned of the deep influence intelligent machines could have, highlighting the ethical dimensions this technology might entail.
As AI begins to mirror human emotions, the lines between technology and humanity could blur (not Bluwr). This integration promises to reshape our daily interactions and emotional landscapes. Should machines that can feel be treated with the same consideration as humans? What responsibilities do we hold in managing the emotional states of an AI?
The emotional evolution of AI could lead to significant changes in how we approach everything from customer service to personal assistance. How will society adapt to machines that can be just as unpredictable and sensitive as a human being? The potential for AI to experience and display emotions might require us to reevaluate our legal frameworks, societal norms, and personal behaviors.
Accelerating Team Human
As the solar eclipse moved across America today, there was a timer. Maybe nobody was watching it, but it was there. I created it. At the moment of eclipse totality a job search site called Blackflag was quietly released with the hope of improving the way teams are built. One small step in a larger mission to change the role technology plays in the evolution of our society. One small step in a larger mission to accelerate team human.
It's a vague and ambiguous mission for a reason. There has been much talk recently about accelerationist philosophy. For example, Effective Accelerationism (e/acc) is a philosophy of maximizing energy consumption and compute by exponentially improving technology to improve society. In response, there has been debate over the increasingly negative impact technology has on society, and some have asserted humanism. I think it's an interesting commentary because, while there have always been those who ascribe virtue to actions, if ethics is how to act, the introduction of technology and the de-emphasis of the human condition in ethics is an almost formulaic way to calculate the demise of team human. Modernism symbolizes either Leviathan or "god is dead."
What do you call the intersection of science, technology, and society? There is science, which we consider rigorous thought. Then there is technology, which is the application of science. Technology is in direct contrast with our relativistic field of social studies. The relationship between society and technology is unclear, but clearly present.
Of course, if I were not a technologist, I would not be building technology. Perhaps to more aptly summarize: the mission of Blackflag is to expand the role society plays in technology, while minimizing the interference of technology in society. It is a non-political mission, though it may be seen as ideologically driven toward a form of environmentalism and accelerationism.
To begin, Blackflag is providing a free, publicly available job search engine that is the start of a larger effort to improve the quality of our organizations and teams. While Blackflag will be a commercial organization, its symbol and likeness are public domain.
* Note: blackflag.dev will be moved to blackflag.jobs, for which I am awaiting delayed ICANN verification.
Publishing Experience: Connecting Research and Communities
XR The Moroccan Association is pioneering a mission to democratize the dissemination of academic research findings by introducing the concept of the 'publishing experience.' This innovative approach translates complex scholarly work into accessible language in dialectal Arabic, aiming to reach a wider audience within Morocco and across the Arab world. By breaking down barriers to understanding, XR The Moroccan Association is bridging the gap between academia and the public. This initiative promises to transform the sharing and comprehension of scientific knowledge by fostering inclusivity and accessibility. The 'publishing experience' represents a significant milestone in promoting the accessibility of research outcomes.
Do we still have the luxury of not using artificial intelligence?
AI is a rapidly expanding research field that not only advances itself but also supports other scientific domains. It opens up new perspectives and accelerates knowledge and mastery of new technologies, allowing for previously unimaginable time-saving shortcuts.
The future of AI is promising, but it requires mastery of the tool and adherence to certain standards. It is also important to minimize the gap between human understanding and intentions, and the increasingly autonomous machinery. This requires humans with a high level of knowledge and expertise to ensure that the work is done efficiently and with precision, for the benefit of humanity.
It is also important to fully understand cultural, genetic, geographic, historical, and other differences and disparities. This should lead us to consider multiple perspectives rather than just one, especially in complex medical fields where details are crucial.
Do Senegalese, Canadians, Moroccans, and Finns react similarly to the therapies currently available? Do they suffer from the same diseases and react in the same way if exposed to the same virus or bacteria?
The applications of AI that concern humans allow, and will increasingly allow in the near future, an improvement in the quality of care. Operations will be assisted and medications will be designed on a case-by-case basis. However, reliable data is essential, and it is imperative to proceed in the most appropriate manner, which machines cannot do without the enlightened humans who carry out their training.
Humans must have sufficient and adequate knowledge to develop the necessary approaches and techniques while also adhering to an unwavering ethical standard.
In the link below, Dr. Tariq Daouda explains this and more in a very pedagogical manner, as a guest on "L'invité de la Rédaction" (editorial team guest) on Médi TV.
Click on the link to learn more.
The video is in French.
Human Writing VS AI Writing
Generative AI is killing the writing market nowadays. Is there still a purpose to writing articles or books as a passion, considering writing is a means of self-expression?
The value of writing seems to be diminishing drastically, with many people misusing AI by copying content from tools like ChatGPT and pasting it without even reading it.
When someone writes from their heart and mind, expressing genuine human emotions, their work often goes unnoticed, dismissed as AI-generated.
Personally, I believe writing has become exceedingly competitive. It's becoming challenging to achieve bestseller status if you haven't published before the rise of AI, unless you're already well-known in your field.
This is precisely how ChatGPT and similar technologies are disrupting the market for new writers.
Note: This text was not generated by AI.
Digital: The perfect undying art
Great paintings deteriorate, great statues erode, fall and break, great literature is forgotten and its subtleties are lost as languages forever evolve and disappear. But now we have a new kind of art, a type of art that in theory cannot die: it transcends space and time and can remain pristine forever and ever. That is digital art.
Digital art is pure information. Therefore it can be copied forever and ever, exactly reproduced for later generations. Digital art cannot erode, cannot break; it is immortal. Such is the power of bits: simple zeros and ones, and yet so awesome. Through modern AI and Large Language Models we can now store the subtleties of languages in an abstract vector space, also pure information, that can be copied ad infinitum without loss. Let's think about the future, a future so deep that we can barely see its horizon. In that future, with that technology, we could resurrect languages. The languages resurrected, however, will be the ones we speak today.
We have a technology that allows us to store reliably and copy indefinitely; that technology is called the *blockchain*, the most reliable and resilient ledger we have today. We have almost everything we need to preserve what we cherish.
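A minimal sketch of why a hash-linked ledger resists corruption; the SHA-256 chaining is standard, while the "artworks" here are stand-in strings:

```python
# Minimal hash-linked ledger: each entry commits to the previous one,
# so any alteration of a stored work breaks every later link.
import hashlib

def link(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

GENESIS = "0" * 64
chain, prev = [], GENESIS
for artwork in ["poem v1", "painting scan", "song master"]:  # stand-in data
    prev = link(prev, artwork)
    chain.append((artwork, prev))

# Any copy of the ledger can be re-verified from scratch:
prev = GENESIS
for artwork, h in chain:
    prev = link(prev, artwork)
    assert prev == h, "chain broken: data was altered"
print("ledger intact")
```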
Let's think of a deep future.
The Coolest Team-Up: AI and Venom Research
Picture this: you’re at a barbecue, and instead of the usual chat about sports or the weather, someone drops into the conversation that they work with snake venom and AI. It might sound like they’re pulling your leg, but actually, they’re on to something groundbreaking.
Welcome to the Future: Where AI Meets Venom
Toxinology and venomics aren’t just cool words to impress your friends; they’re fields where scientists study toxins and venoms from creatures like snakes and spiders. Now, mix in some AI, and you’ve got a dynamic duo that’s changing the game. With AI’s smart algorithms, researchers can sift through massive amounts of data to uncover secrets about venom that could lead to medical breakthroughs. It’s like having a detective with a magnifying glass, except this one’s scouring genetic codes instead of crime scenes.
Why We Should Care
Venoms are nature’s way of saying, “Don’t mess with me.” But beyond their bite or sting, they’re packed with potential for new medicines. Understanding venom better can help us find new ways to treat diseases, from blood disorders to chronic pain. And AI is the super-efficient helper making these discoveries at lightning speed.
The Nitty-Gritty: How AI Works Its Magic
Imagine AI as the Sherlock Holmes of science, able to analyze venom components, predict their effects, and uncover new ones that could be game-changers in medicine. For instance, if there’s a venom that can thin blood without harmful side effects, AI can help pinpoint how to use it for people at risk of blood clots. Or if another venom targets pain receptors in a unique way, AI could help in crafting painkillers that don’t come with the baggage of current drugs.
From the Lab to Real Life
There are some standout AI tools like TOXIFY and Deep-STP that are making waves in venom research. These tools can figure out which parts of venom are worth a closer look for drug development. It’s like having a filter that only lets through the most promising candidates for new medicines.
Looking Ahead
With AI’s touch, the potential for venom in medicine is just starting to unfold. We’re talking about new treatments for everything from heart disease to chronic pain, and as AI tech advances, who knows what else we’ll find?
The Fine Print
As exciting as this all sounds, there are hurdles. Getting the right data is crucial because AI is only as good as the information it’s given. Plus, we need to consider the ethical side of things, ensuring our curiosity doesn’t harm the creatures we study or the environments they live in.
In Summary: It’s a Big Deal
The combo of AI and venom research is turning heads for a reason. It’s not just about finding the next big thing in medicine; it’s about opening doors to treatments we’ve hardly imagined. And it’s a reminder that even the most feared creatures can offer something invaluable to humanity.
So, the next time someone mentions using snake venom in research, you’ll know it’s not just fascinating — it could very well be the future of medicine, with AI leading the way. And that’s something worth talking about, whether you’re at a barbecue or anywhere else.
Reference:
Bedraoui A, Suntravat M, El Mejjad S, Enezari S, Oukkache N, Sanchez EE, et al. Therapeutic Potential of Snake Venom: Toxin Distribution and Opportunities in Deep Learning for Novel Drug Discovery. Medicine in Drug Discovery. 2023 Dec 27;100175.
Learning Chemistry with Interactive Simulations: Augmented Reality as Teaching Aid
Augmented Reality (AR) has been identified by educational scientists as a technology with significant potential to improve emotional and cognitive learning outcomes. However, very few papers have highlighted the technical process of creating AR applications for education. The following paper proposes a method and framework for setting up an AR application to teach primary school children the basic forms and shapes of atoms, molecules, and DNA. This framework uses the Unity 3D game engine (GE) with Vuforia SDK (Software Development Kit) packages, combined with phones or tablets, to create an interactive app for AR environments that enhances students' vision and understanding of basic chemistry models. We also point out some difficulties in practice. As for those difficulties, a series of solutions plus further development orientations are put forth.
AI+Health: An Undelivered Promise
AI is everywhere, or so it would seem, but the promises made for Drug Discovery and Medicine are still yet to be fulfilled. AI seems to always spring from a Promethean impulse: the goal of creating a life beyond life, doing the work of gods by creating a new life form, as Prometheus created humanity. From techne to independent life, a life that looks like us. Something most people refer to as AGI today.
This is the biggest blind spot of AI development. The big successes of AI are, in a certain way, always in the same domains:
- Image Processing
- Natural Language Processing
The reason is simple: we are above all visual, talking animals. Our Umwelt, the world we inhabit, is mostly a world of images and language; every human is an expert in these two fields. Interestingly, most humans are not as aware of sound as they are of images. Very few people can separate the different tracks in a piece of music, let alone identify specific frequencies or hear subtle compressions and distortions. We are not so good with sound, and it shows in the comparatively less groundbreaking AI tools available for sound processing.
The same phenomenon explains why AI struggles to deliver in very complex domains such as Biology and Chemistry.
At its core, modern AI is nothing more than a powerful, general way to automatically guess relevant mathematical functions describing a phenomenon from collected data; what statisticians call a *Model*. From this great power derives the field's chief illusion: because the tool is general, the wielder of that tool can apply it to any domain. Experience shows that this thinking is flawed.
Every AI model is framed between two things: its dataset (input) and its desired output, as represented by the loss function. What is important? What is good, what is bad? How should the dataset be curated, how should the model be adjusted? For all these questions and more, you need a deep knowledge of the domain: its assumptions, its technicalities, and the limitations inherent to data collection in it. Domain knowledge is paramount, because AI algorithms are always guided by the researchers and engineers. This I know from experience, having spent about 17 years working closely with biologists.
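To make that framing concrete, here is a minimal sketch of what "guessing a function from data" looks like in practice, with the dataset on one side and the loss function encoding the desired output on the other. The data and model are toy examples, not a real research setup.

```python
# Minimal sketch of "a Model": guess the hidden function behind noisy data.
import torch

# Dataset (input): noisy observations of a hidden function y = 3x + 0.5.
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 3 * x + 0.5 + 0.05 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)            # the family of functions we search in
loss_fn = torch.nn.MSELoss()             # the desired output, encoded as a loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)          # how far the current guess is from the data
    loss.backward()
    optimizer.step()

# The learned parameters should approach the hidden ones (3 and 0.5).
print(model.weight.item(), model.bias.item())
```

Everything that makes this toy example easy (a clean dataset, an obvious loss) is precisely what demands deep domain knowledge in fields like Biology or Chemistry.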
Pairing AI specialists with domain specialists who have little knowledge of AI also rarely delivers, although it is a strategy that has been tested time and time again over the last 10 years. Communication is hard and slow; most is lost in translation. The best solution is to have AI experts who are also experts in the applied domain, or domain experts who are also AI experts. The current discrepancies we see in AI performance across domains can therefore be laid at the feet of universities and their siloed structures.
Universities are organized in independent departments that teach independently. AI is taught in the Computer Science department, biology in the Biochemistry department. The two rarely meet in any substantial manner. It was true when I was a student, and it is still true today.
This is one of the things we are changing at the Faculty of Medical Science of the University Mohammed VI Polytechnic. Students in Medicine and Pharmacy go through a serious AI and Data Science class spanning a few years. They learn to code, they learn the mathematical concepts of AI, and they learn to gather their own datasets, derive their hypotheses, and build, train, and evaluate their own models using pyTorch.
The goal is to produce a new generation of scientists who are as intimate with their domain as they are with modern AI, one that can consistently deliver on the promises of AI for Medicine and Drug Discovery.
El Salvador: The most important country you barely hear about
El Salvador has a significant diaspora, so much so that money coming from the US is a major source of income. **Not so long ago you would have been hard-pressed to find a Salvadoran who wanted to go back to El Salvador. Now things seem to be changing.**
El Salvador used to have one of the highest homicide rates in the Americas; now it looks relatively safe. El Salvador followed an interesting strategy: first boost the economy, then handle the crime situation. Crime is indeed a part of GDP, albeit a hard one to quantify. Since it is an economic activity, it participates in exchanges and provides people with activities that support them and their families. Drastically reducing crime thus has the effect of creating *'unemployed criminals'*: people with a skillset that is hard to sell in a traditional economy.
El Salvador probably did take a hit to its GDP, but that was compensated for by the increase in economic activity and investment.
Bitcoin was a big part of that.
Bitcoin got a lot of bad press as a technology only used by criminals, or a crazy investment for crazy speculators. These takes failed to understand the technology and its potential. What Bitcoin offers is a decentralized, fast, and secure payment system, for free. El Salvador doesn't have to maintain it, regulate it, or even monitor it: all very costly activities that a small country can do without. Bitcoin is a mathematically secure means of payment.
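For readers who want intuition for the phrase "mathematically secure", here is a toy sketch of the proof-of-work idea behind Bitcoin's ledger: producing a valid nonce is computationally expensive, while checking it is instant, so no central authority is needed. This is a deliberately simplified illustration (real Bitcoin hashes block headers with double SHA-256, among other differences).

```python
# Toy proof-of-work: hard to produce, trivial to verify.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce so the block's hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Anyone can check a claimed nonce with a single hash computation."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("A pays B 0.1 BTC")
print(nonce, verify("A pays B 0.1 BTC", nonce))  # True, with no authority involved
```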
In a country where road infrastructures are challenging, Bitcoin offers people in remote areas the possibility to pay their bills without travelling for hours. In a country that was unsafe, Bitcoin offered people the possibility to go out without the fear of being robbed.
It also attracted a kind of investor that would go nowhere else. And even if these investments can appear small, for a country like El Salvador they are a big change.
The Salvadoran experiment in a freer, crypto-friendly economy with a smaller government, at a time of rising inflation, has a lot of people watching. In a continent that leaned left for so long, this is a big change.
My opinion is that there would be no Javier Milei had there not been a Nayib Bukele before. Argentina has been a bastion of the left for decades. If Milei's libertarian policies succeed in bettering the lives of Argentinians, we might be on the brink of a major cultural shift in the Americas, and then the world.
Argentina is a far bigger country than El Salvador, with far more people watching.
Applied Machine Learning Africa!
I have been to more scientific conferences than I can count, from the smallest to the biggest, like NeurIPS (even back when it was still called NIPS). Of all these events, AMLD Africa is my favorite, by far.
I first met the team two years ago when they organized the first in-person edition of the conference at the University Mohammed VI Polytechnic. I was immediately charmed by the warmth, professionalism, ambition, and fearlessness of the team. So much so that I joined the organization.
AMLD Africa is unique in every aspect: in its focus on Africa, in its scope and ambition, and in its incredibly dynamic, young, passionate, honest, and resourceful team, all volunteers. It is hard to believe that this year in Nairobi was only the second in-person edition.
AMLD Africa does the impossible without even realizing it. It has an old-school vibe of collegiality, community, and most importantly **__fun__** that is so lacking in most conferences today. All without compromising on the quality of the science.
It offers one of the best windows into everything AI and Machine Learning happening in Africa. Africa is a continent on the rise, but a very hard one to navigate because of information bottlenecks. Traveling across Africa is not easy (it took me 28 hours from Nairobi to Casablanca), there are language barriers separating the continent into different linguistic regions (French, English, and Portuguese being the main ones), and all too often we simply do not look to Africa for solutions.
AMLD Africa is solving all that by bringing everybody together for a few days in one of the best environments I have ever experienced.
Thank you, AMLD Africa.
Understanding the Complex Adoption Behavior of Augmented Reality in Education Based on Complexity Theory: a Fuzzy Set Qualitative Comparative Analysis (fsQCA)
Augmented reality (AR) is one of the recent technological innovations that will shape the future of the education sector. However, it remains unknown how AR's potential may impact the behavioral intention (BI) to use AR in education. Based on the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology Acceptance Model (TAM), this article empirically considers how such features impact user behavior. Utilizing survey data from 100 students, we perform fuzzy set qualitative comparative analyses (fsQCA) to derive patterns of factors that influence BI to use AR in education. The outcomes of the fsQCA demonstrate that high BI to use AR in education is achievable in many different ways. The current paper argues that students' BI to use AR in education is triggered by combinations of different factors. In order to address the factors that enable AR usage intentions in education, the paper presents a conceptual model, relying primarily on the UTAUT and TAM theories, and investigates how these two theories shape intentions to use AR in education. The findings of the fsQCA analyses demonstrate the existence of multiple solutions that influence users' BI to adopt AR in education, and they underline the significance of targeting certain combinations of factors to enhance student engagement. The main limitation was causal ambiguity: even though we employed fsQCA as an adequate methodological tool for analyzing causal complexity, we could not establish causality. Other methods can be used in future studies to obtain more detailed results.
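For the curious, the core of fsQCA is a simple subset test on fuzzy membership scores. Below is a minimal sketch of Ragin's consistency measure; the membership values and the example configuration are hypothetical, not the study's data.

```python
# Ragin's fsQCA consistency: how well "configuration X implies outcome Y"
# holds, i.e., how much X behaves as a fuzzy subset of Y.

def consistency(x_scores, y_scores):
    """sum(min(x, y)) / sum(x) over all cases; values near 1 suggest sufficiency."""
    return sum(min(x, y) for x, y in zip(x_scores, y_scores)) / sum(x_scores)

# Hypothetical fuzzy memberships for five students: membership in a candidate
# configuration (e.g., "high performance expectancy AND high ease of use",
# taken as the min of the condition memberships) and in the outcome
# "high behavioral intention (BI) to use AR".
configuration = [0.8, 0.6, 0.9, 0.4, 0.7]
bi            = [0.9, 0.7, 0.8, 0.6, 0.9]
print(round(consistency(configuration, bi), 3))  # ≈ 0.971 for these toy values
```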
Day 1
This is my first day
XR Voice (Moroccan Dialectal)
XR Voice is an initiative aimed at bridging the gap between scientific research and professional expertise. Recognizing that the advancement of scientific inquiry begins with elevating awareness within the professional realm, XR Voice seeks to gather insights from experts across various fields. By attentively considering the perspectives of professionals, the platform explores how scientific research can enrich and refine diverse domains of expertise. Through this collaborative engagement, XR Voice endeavors to cultivate a symbiotic relationship in which cutting-edge research not only informs but actively elevates the standards and practices of the professional world.
This mission is underpinned by the fundamental belief that all development begins with a deepened awareness and appreciation of scientific inquiry. The initiative also encourages experts to use Moroccan dialectal Arabic whenever feasible, fostering inclusivity and cultural resonance within the discourse.
“No country has ever prospered without first building its capacity to anticipate, trigger and absorb economic and social change through scientific research.” Dr. El Mostafa Bourhim
A new version with minor updates.
Hello everyone!
Last week we released a new version of Bluwr. The website looks almost the same, but we have:
- Simplified the login page by removing the photo (it caused display errors on some phones)
- Made the **Follow buttons** clearer, to make it easier to know if you are following someone
- Fixed an error that caused the number of Bluws to not appear in the analytics table
- Fixed some typos on the French website
Every day we strive to make Bluwr better.
Thank you for being here!
The Bluwr Team
The Impact of Big Five Personality Traits on Augmented Reality Acceptance Behavior: An Investigation in the Tourism Field
Along with the rapid development of the Internet and mobile devices, the integration of augmented reality (AR) in the tourism sector has become very popular. Utilizing the Big Five model (BFM) as the theoretical framework, the study examines the role of personality in influencing the behavioral intention (BI) to use mobile augmented reality in the tourism sector (MART). The study further investigates the role of personal innovativeness (PIV) in determining tourists' behavioral intentions to use MART. Quantitative research was carried out to test the conceptual model; the analysis was strengthened by implementing the PLS-SEM method on data collected from 374 participants. The study results demonstrated that openness to experience (OPN) is a strong predictor of MART use. In addition, agreeableness (AGR), conscientiousness (CON), extraversion (EX), neuroticism (NR), and personal innovativeness (PIV) all have significant and positive impacts on behavioral intention (BI) to use MART.
The purpose of the present research was to investigate the BFM variables with regard to MART use. The research also examined the contribution of PIV in explaining the BI to use MART, employing PLS-SEM to tackle the primary study question. The current work makes a significant advance in MART use research. Empirically, the findings achieved are consistent with the BFM. Based on the outcomes of this research, all relationships have been assessed as statistically relevant. Moreover, PIV positively influences the use of MART. The BI to use MART was positively impacted by AGR (H1: β = 0.128), CON (H2: β = 0.108), EX (H3: β = 0.124), NR (H4: β = 0.322), and OPN (H5: β = 0.169). This implies that users are expected to exhibit a strong BI to use MART when they are agreeable, conscientious, extroverted, neurotic, and open to experiences. Additionally, the outcomes of the present paper significantly upheld the association between PIV and the BI to use MART: the path was found to be significant and positive (H6: β = 0.156), indicating that innovative tourists will intend to use MART. The important limitations are a higher risk of overlooking 'real' correlations and sensitivity to the scaling of the descriptor variables.
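Reading the reported path coefficients together, and assuming (as the abstract implies) that all six predictors point directly at BI in the structural model, the estimated standardized relationships can be summarized in a single equation:

BI ≈ 0.128·AGR + 0.108·CON + 0.124·EX + 0.322·NR + 0.169·OPN + 0.156·PIV + ε

with neuroticism (β = 0.322) standing out as by far the strongest predictor.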
Reshaping Sport with Extended Reality in an Era of Metaverse: Insights from XR the Moroccan Association Experts
Extended reality (XR) is a growing technology used by athletes, trainers, and other sports professionals. Despite its rapid growth, the application of XR in sports remains largely unexplored. This study is designed to identify and prioritize the factors affecting the implementation of XR in Moroccan sports science institutes. To achieve this, the study employs the A'WOT methodology, a hybrid multi-criteria decision method combining the Strengths, Weaknesses, Opportunities, and Threats (SWOT) technique with the Analytic Hierarchy Process (AHP). Through expert group discussions, the study identifies and categorizes the factors affecting XR implementation into SWOT groups. Subsequently, the AHP methodology is employed to determine the relative importance of each factor by conducting interviews with a panel of sports and XR experts. The study's findings, obtained through the A'WOT methodology, establish a ranking of the fundamental factors for successful XR implementation in Moroccan sports science institutes, and suggest that a strategy for implementing XR technology in Morocco needs to be driven principally by the SWOT opportunities and strengths groups.
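As a concrete illustration of the AHP step, here is a minimal sketch of how priority weights can be derived from a pairwise comparison matrix via its principal eigenvector (Saaty's classic method). The judgment values are hypothetical, not the study's actual expert data.

```python
# AHP priority weights from a pairwise comparison matrix (Saaty's method).
import numpy as np

# Hypothetical expert judgments comparing the four SWOT groups.
# Order: Strengths, Weaknesses, Opportunities, Threats.
# A[i, j] = how much more important group i is than group j.
A = np.array([
    [1,   3,   1/2, 2  ],
    [1/3, 1,   1/4, 1/2],
    [2,   4,   1,   3  ],
    [1/2, 2,   1/3, 1  ],
])

# The principal right eigenvector of A, normalized to sum to 1, gives the weights.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = np.real(eigenvectors[:, np.argmax(np.real(eigenvalues))])
weights = principal / principal.sum()

print(dict(zip(["Strengths", "Weaknesses", "Opportunities", "Threats"],
               weights.round(3))))
# With these toy judgments, Opportunities and Strengths receive the largest
# weights, mirroring the kind of ranking the paper reports.
```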
The present study investigates the benefits, challenges, and opportunities of XR technology in Moroccan sports science institutes based on the SWOT-AHP framework. The strengths and opportunities ratings, based on the perspectives of XR The Moroccan Association, are positive for XR technology. The framework provided can thus be interpreted as a roadmap supporting the strategic implementation of XR technology in Moroccan sports science institutes, while providing more credible information for decision-makers throughout the process. An in-depth analysis of the findings leads us to conclude that the strategic implementation of XR technology in Moroccan sports science institutes has to be driven principally by the opportunity factors, which could help overcome the main identified weaknesses and threats while maximizing the strengths. Following these guidelines, decision-makers are expected to initiate a range of activities to establish the right external environment in which opportunities can be fully exploited to tackle the principal weaknesses and threats revealed by the analysis. This research provides strong evidence for XR deployment in the sense that it reflects the views of the practitioners and researchers of XR The Moroccan Association on XR technology.