Think Forward.

About: AI

by Tariq Daouda
Chapters: 10 · 14.0 min read
This series compiles works on AI and Machine Learning, offering a panoramic view of the field's rapid advances, ethical debates, geopolitics, and real-world impact. Articles range from the geopolitical stakes of artificial intelligence to AI software engineering and infrastructure. Pieces explore the rapid growth in compute power and data, and examine critical areas of application such as healthcare, sustainability, the arts, and manufacturing, where AI solutions are challenging creativity, optimizing processes, diagnosing illness, and pushing the boundaries of what's possible.

1: AI: The fallacy of the Turing Test

The Turing test is simple to understand. In a typical setup, a human judge engages in text-based conversations with both a human and a machine, without knowing which is which, and must determine which participant is the machine. If the judge cannot reliably tell them apart based solely on their conversational responses, the machine is said to have passed the test and demonstrated convincing human-like intelligence.

This is convenient: it neatly avoids facing the hard questions, such as defining intelligence and consciousness. Instead, it lays out a basic, naive test founded on an ontological fallacy: just because something is perceived as something else does not mean it is that thing. The most evident critique of the Turing Test is embedded in the fundamentals of Machine Learning itself: the model is not the modeled. It remains an approximation, however precise it is. A simple analogy makes the ontological fallacy clear: it is like going to a magic show, seeing a table floating above the ground, and believing that the levitation really happened. How many bits of information separate a real human from a chatting bot? Assuming the number is exactly 0, without any justification, is an extraordinarily naive claim.

Interestingly, the Turing Test also greatly fails at defining so-called super-intelligence. A super-intelligent machine would evidently fail the test by simply providing super-intelligent answers. Unless it decides to fool the experimenter, in which case it could appear as anything it desires, rendering the test meaningless.

Regarding modern LLMs, the veil is already falling. LLMs have quirks, like an overusage of em-dashes, a strange feature that is indicative of something potentially pathological in the way the models are trained. These strange dashes would have been expected if a majority of people were using them. However, it so happens that hardly anyone knows how to find them on their keyboard. This shows that LLMs are not following the manifold of human writing, and it suggests the existence of other biases (a simple way to quantify such a divergence is sketched below).

Finally, embedded inside the promotion of the Turing test is often a lazy ontological theory of materialism, which stipulates that consciousness is not fundamental but a byproduct of matter, often negating its existence altogether. It is not that consciousness can be faked, or that it is the result of computations; the understanding is that consciousness does not exist, that it is an illusion that takes over the subject of the experience. Again, a theory of convenience, based on little justification, that produces a major paradox: who is conscious of the illusion of consciousness?
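The "how many bits" question can be made concrete. Below is a minimal sketch, not a definitive measurement: it assumes two hypothetical files, human_corpus.txt containing human-written text and model_corpus.txt containing LLM output, and compares their punctuation statistics, reporting the divergence in bits per symbol. A divergence visible on something as crude as em-dash frequency is already evidence that the two distributions are not identical.

```python
import math
from collections import Counter

def char_distribution(text, alphabet):
    """Relative frequency of each symbol of interest, with add-one smoothing
    so that every symbol keeps a nonzero probability."""
    counts = Counter(ch for ch in text if ch in alphabet)
    total = sum(counts.values()) + len(alphabet)
    return {ch: (counts[ch] + 1) / total for ch in alphabet}

def kl_divergence_bits(p, q):
    """KL(p || q) in bits: the extra bits per symbol needed to encode
    samples from p with a code optimized for q."""
    return sum(p[ch] * math.log2(p[ch] / q[ch]) for ch in p)

# Hypothetical corpora: swap in real human text and real model output.
human_text = open("human_corpus.txt", encoding="utf-8").read()
model_text = open("model_corpus.txt", encoding="utf-8").read()

# Punctuation marks whose usage rates tend to differ between the two.
alphabet = {"—", "–", "-", ";", ":", ",", "."}

p = char_distribution(human_text, alphabet)
q = char_distribution(model_text, alphabet)
print(f"KL(human || model) = {kl_divergence_bits(p, q):.4f} bits/symbol")
print(f"em-dash rate, human: {p['—']:.5f}  model: {q['—']:.5f}")
```

Even a statistic this crude would expose a model whose punctuation habits drift away from the human distribution, and richer features (word choice, sentence length, discourse structure) would only widen the gap.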

2: AI is a Big Geopolitical Issue

500 billion dollars to keep the USA the number one power in AI, followed by DeepSeek, whose creators claim it was trained on lower-grade hardware, and now the AI summit in Paris. Modern AI is a breakthrough, perhaps of the same magnitude as the steam engine or electricity, perhaps even bigger. It touches everything and, most importantly, for the first time it allows for the mechanization of intellectual work. Previous major industrial breakthroughs focused on automating physical labor; AI offers the potential of automating the mind. The implications are hard to comprehend, but what is sure is that no nation wants to be left behind.

The world of AI rests on a few pillars:

1 - The theory and software: mostly public and open-source.
2 - The talent, which is rare: becoming a top-tier talent in AI takes time, and being able to use off-the-shelf AI designed by other people is not enough to drive breakthroughs.
3 - The hardware infrastructure: most importantly GPUs, which are virtually all controlled by one US company, NVIDIA.
4 - Electrical power: modern AI requires datacenters that consume astonishing amounts of electricity (a back-of-envelope estimate is sketched below).

It is on these fronts that the big battles over AI supremacy and autonomy will be fought. Laying out these pillars also highlights the dominance of the US: it is first on every single one. The US has the top universities and AI companies, which naturally translates into more available talent. The US has the only company capable of making high-end GPUs, and the US has the most electricity available.

Other nations should wisely pick their battles and focus where they can make the most impact. France, for example, with its nuclear energy and engineering culture, could make its mark, and Germany is already a leader in semiconductors. There is potential in Europe; the major question is whether regulations and fiscal regimes will adapt fast enough to allow for rapid technological growth. Even low- and middle-income countries could make a dent and enjoy the AI boom. Morocco is positioning itself as an electricity producer, and all countries could work on education and skill levels. The time when people had to leave their country to offer their services abroad is long gone. The internet has no borders, which also means the brain drain does not need to happen! It is not impossible for a country to become a top-tier exporter of high-quality AI services. Again, for that to happen, cross-country work regulations and exchange-rate controls must be heavily simplified or completely removed.

Final words: if anything, the DeepSeek story is interesting because it potentially expands the market for NVIDIA. If the story is true, the market is now bigger, not smaller, because lower-grade GPUs have suddenly become more useful, without questioning the supremacy of the latest generations of NVIDIA's AI workhorses.
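To give a sense of scale for the fourth pillar, here is a back-of-envelope sketch. Every input is an assumption picked for illustration rather than a measured figure: a high-end training GPU drawing roughly 700 W under load, a cluster of 20,000 of them, and a power usage effectiveness (PUE) of 1.3 to cover cooling and other facility overhead.

```python
# Back-of-envelope estimate of a training cluster's power draw.
# All inputs are illustrative assumptions, not measured figures.
gpu_power_w = 700      # assumed draw of one high-end training GPU, in watts
num_gpus = 20_000      # assumed cluster size
pue = 1.3              # assumed power usage effectiveness (cooling, networking, losses)

facility_power_mw = gpu_power_w * num_gpus * pue / 1e6
print(f"Facility draw: {facility_power_mw:.1f} MW")        # -> 18.2 MW

# Energy over a hypothetical 90-day training run, in gigawatt-hours.
hours = 90 * 24
energy_gwh = facility_power_mw * hours / 1000
print(f"Energy for a 90-day run: {energy_gwh:.1f} GWh")    # -> ~39 GWh
```

A continuous draw in the tens of megawatts is comparable to the output of a small power plant, which is why electricity sits alongside GPUs as a strategic pillar.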

3: The future of AI is Small and then Smaller.

We need smaller models, but don't expect Big Tech to develop them. Current state-of-the-art architectures are very inefficient; the cost of training them is getting out of hand, more and more unaffordable for most people and institutions. This is effectively creating a three-tier society in AI:

1- Those who can afford model development and training (mostly Big Tech), and who make *foundation models* for everybody else.
2- Those who can only afford to fine-tune the *foundation models*.
3- Those who can only use the fine-tuned models through APIs.

This is far from an ideal situation for innovation and development because it effectively creates one producer tier (1) and two consumer tiers (2 and 3). It concentrates most of the research and development in tier 1, leaves a little for tier 2, and almost completely eliminates tier 3 from R&D in AI. Tier 3 is most of the countries and most of the people. This also explains why most of the AI startups we see all over the place are at best tier 2, which means their *intellectual property* is low. The barrier to entry for competition is very low, as someone else can easily replicate their product. The situation for tier 3 AI startups is even worse.

This is all due to two things:

1- It took almost 20 years for governments and people to realize that AI was coming; in fact, they only did so after the fact. The prices for computer hardware (GPUs) were already through the roof, and real talent was already very rare. Most people still think they need *data scientists*, when in fact they need AI researchers, DevOps engineers, software engineers, machine learning engineers, cloud infrastructure engineers... The list of specialties is long. The ecosystem is now complex, and most countries do not have the right curriculums in place at their universities.

2- The current state-of-the-art models are **huge and extremely inefficient**; they require a lot of compute resources and electricity.

Point number 2 is the most important one, because if we solve it, the need for cloud, DevOps, etc. decreases significantly. We would not only solve the problem of training and development cost, we would also solve part of the talent acquisition problem. Therefore, it should be the absolute priority: __we need smaller, more efficient models__.

But why are current models so inefficient? The answer is simple: the first solution that works is usually not efficient, it just works. We have seen the same thing with steam engines and computers. Current transformer-based models, for example, need several layers of huge matrices that span the whole dictionary (the sketch after this chapter shows how quickly the parameter count grows). That is a very naive approach, but it works. In a way, we still have not surpassed the Deep Learning trope of 15 years ago: just add more layers.

Research in AI should not focus on large language models; it should focus on small language models that have results on par with the large ones. That is the only way to keep research and development in AI alive, thriving, and open to most. The alternative is to keep using these huge models that only extremely wealthy organisations can make, leading to a concentration of knowledge and to too many tier 2 and tier 3 startups, which will lead us to a disastrous pop of the AI investment bubble.

However, don't count on Big Tech to develop and popularize these efficient models. They are unlikely to, as having a monopoly on AI development is to their advantage for as long as they can afford it. Universities, that's your job.
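To see how quickly the parameter count grows, here is a minimal sketch using the standard rough formulas for a GPT-style decoder: a vocabulary-spanning embedding matrix of vocab_size × d_model weights, plus roughly 12 × d_model² weights per layer for attention and the MLP. The dimensions below are illustrative, chosen to land near publicly described model sizes; real architectures differ in the details.

```python
def transformer_params(vocab_size, d_model, n_layers):
    """Rough parameter count for a GPT-style decoder (biases and layer
    norms ignored; they are negligible at this scale)."""
    embeddings = vocab_size * d_model    # token embedding matrix spans the vocabulary
    attention = 4 * d_model * d_model    # Q, K, V and output projections
    mlp = 2 * d_model * (4 * d_model)    # up- and down-projection, 4x expansion
    per_layer = attention + mlp          # = 12 * d_model^2
    return embeddings + n_layers * per_layer

# Illustrative dimensions, in the range of publicly described GPT-class models.
for d_model, n_layers in [(768, 12), (1600, 48), (12288, 96)]:
    n = transformer_params(vocab_size=50_000, d_model=d_model, n_layers=n_layers)
    print(f"d_model={d_model:>5}, layers={n_layers:>2}: ~{n / 1e6:,.0f}M parameters")
```

Note how the vocabulary-spanning embedding matrix dominates the count at small scale, while the stacked layers take over as models grow: both terms inflate the bill, and neither is obviously the most efficient possible design.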