Think Forward.

About: AI

by Tariq Daouda
Chapters: 10 · 14.0 min read
This series compiles works on AI and Machine Learning, offering a panoramic view of the rapid advances, ethical debates, geopolitics and real-world impact. Articles range from the geopolitical stakes of artificial intelligence to AI software engineering and infrastructure, and explore the rapid growth in compute power and data. They also cover critical areas of application such as healthcare, sustainability, the arts and manufacturing, where AI solutions are challenging creativity, optimizing processes, diagnosing illness, and pushing the boundaries of what's possible.

6: AI+Health: An Undelivered Promise

AI is everywhere, or so it would seem, but the promises made for Drug Discovery and Medicine have yet to be fulfilled.

AI seems to always spring from a Promethean impulse: the goal of creating a life beyond life, doing the work of gods by creating a new life form, as Prometheus created humanity. From Techne to independent life, a life that looks like us. Something most people refer to as AGI today. This is the biggest blind spot of AI development.

The big successes of AI are, in a certain way, always in the same domains:
- Image Processing
- Natural Language Processing

The reason is simple: we are above all visual, talking animals. Our Umwelt, the world we inhabit, is mostly a world of images and language; every human is an expert in these two fields. Interestingly, most humans are not as sound-aware as they are visually aware. Very few people can separate the different tracks in a music piece, let alone identify certain frequencies or hear delicate compressions and distortions. We are not so good with sound, and it shows in the relatively less groundbreaking AI tools available for sound processing. The same phenomenon explains why AI struggles to deliver in very complex domains such as Biology and Chemistry.

At its core, modern AI is nothing more than a powerful, general way to automatically guess relevant mathematical functions describing a phenomenon from collected data: what statisticians call a *Model*. From this great power derives the domain's chief illusion: because the tool is general, the wielder of that tool can apply it to any domain. Experience shows that this thinking is flawed. Every AI model is framed by two things: its dataset (input) and its desired output, as represented by the loss function. What is important? What is good, what is bad? How should the dataset be curated, how should the model be adjusted? For all these questions and more, you need a deep knowledge of the domain, of its assumptions, of its technicalities, and of the limitations inherent to data collection in that domain. Domain knowledge is paramount, because AI algorithms are always guided by the researchers and engineers. This I know from experience, having spent about 17 years working closely with biologists.

Pairing AI specialists with domain specialists who have little knowledge of AI also rarely delivers, a strategy that has been tested time and time again over the last 10 years. Communication is hard and slow, and most is lost in translation. The best solution is to have AI experts who are also experts in the applied domain, or domain experts who are also AI experts. The current discrepancies we see in AI performance across domains could therefore be laid at the feet of universities and their siloed structures. Universities are organized in independent departments that teach independently: AI is taught in the Computer Science department, biology in the Biochemistry department. The two rarely meet in any substantial manner. It was true when I was a student; it is still true today.

This is one of the things we are changing at the Faculty of Medical Science of the University Mohammed VI Polytechnic. Students in Medicine and Pharmacy go through a serious AI and Data Science class over a few years. They learn to code, they learn the mathematical concepts of AI, they learn to gather their own datasets, to derive their hypotheses, and to build, train and evaluate their own models using pyTorch.
The goal being to produce a new generation of scientists who are intimate with their domain as well as with modern AI, one that can consistently deliver on the promises of AI for Medicine and Drug Discovery.
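To make that framing concrete (a model hemmed in by its dataset on one side and its loss function on the other), here is a minimal, hypothetical pyTorch sketch. The data, architecture and hyperparameters are placeholders for illustration, not anything from the curriculum described above.

```python
# Minimal sketch: a model framed by its dataset (input) and its desired
# output as encoded by the loss function. All numbers are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy data: 100 samples, 8 measurements each, binary label.
X = torch.randn(100, 8)
y = torch.randint(0, 2, (100,)).float()
loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()            # defines what a "good" output means
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for batch_x, batch_y in loader:
        optimizer.zero_grad()
        logits = model(batch_x).squeeze(1)
        loss = loss_fn(logits, batch_y)     # the loss frames the desired output
        loss.backward()
        optimizer.step()
```

Everything that matters scientifically sits outside this loop: which measurements go into X, which label counts as y, and why that loss is the right notion of "good" — all domain decisions.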

7: Two Nobel Prizes: AI is Still Resting on Giant Shoulders

John Hopfield and Geoffrey Hinton received the Nobel Prize in Physics, Demis Hassabis and John Jumper the Nobel Prize in Chemistry. It is obvious that the first Nobel Prize was not given merely for their contributions to physics, but mostly for their profound and foundational contributions to what is today modern AI. Let's talk about the second Nobel Prize.

AlphaFold was put on the map by beating other methods in a competition (CASP14/CASP15) that has been running for years on a well-established dataset. As such, AlphaFold's win is more like an ImageNet moment (when the team of Geoff Hinton demonstrated the superiority of Convolutional Networks on Image Classification) than a triumph of multi-disciplinary AI research. The dataset behind AlphaFold rests on many years of slow and arduous research to compile the data in a format that could be understood not by machines, but by computer scientists. Through that humongous work, the massive problem of finding a protein's structure was reduced to a simple question of minimizing distances, a problem that could now be tackled with little to no knowledge of chemistry, biology or proteomics. This in no way reduces the profound impact of AlphaFold. However, it does highlight a major issue in applied AI: computer scientists, not AI, are still reliant on other disciplines to drastically simplify complex problems for them. The contributions and hard work required to do so unfortunately get forgotten once everything has been reduced to a dataset and a competition.

What to do when we do not have problems that computer scientists can easily understand? This is true for all fields that require a very high level of domain knowledge. Through experience, I have come to consider the pairing of AI specialists with specialists of other disciplines a sub-optimal strategy at best. The billions of dollars invested in such enterprises have failed to produce any significant return on investment. The number one blind spot of these endeavours is the supply chain. It usually takes years and looks like this:
1- Domain specialists identify a question
2- Years are spent developing methods to measure and tackle it
3- The methods are made cheaper
4- The missing links (computational chemists, bioinformaticians, ...) start the work on what will become the dataset
5- AI can finally enter the scene

Point (1) is the foundation. You can measure and ask an infinite number of questions about anything; finding the most important one is not as obvious as it seems. For example, it is not at all obvious a priori that a protein's structure is an important feature. Another example is debugging code. A successful debugging session involves asking and answering a succession of relevant questions. Imagine giving code to someone with no programming experience and asking them to debug it. The probability of them asking the right questions is very close to 0. Identifying what is important is called inserting inductive biases. In theory, LLMs could integrate the inductive biases of a field and generate interesting questions, even format datasets from open-source data. However, until this ability has been fully demonstrated, the only cost-efficient way to accelerate AI-driven scientific discoveries is to build the interdisciplinarity into the people: AI researchers who know enough about the field to be able to identify the relevant questions of the future.
nobelprize.org/all-nobel-prizes-...
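To see why "reduced to minimizing distances" makes the problem approachable with generic tools, here is a toy sketch. This is emphatically not AlphaFold's architecture or loss; the shapes, features and network are hypothetical placeholders, and the point is only that once the target is a pairwise distance matrix, the training loop looks like any other.

```python
# Toy illustration: predict coordinates from per-residue features and
# minimize the discrepancy with a target pairwise distance matrix.
# NOT AlphaFold; all shapes and features are placeholders.
import torch
from torch import nn

n_residues, feat_dim = 64, 32
features = torch.randn(n_residues, feat_dim)          # hypothetical per-residue features
true_coords = torch.randn(n_residues, 3)              # toy "ground truth" 3D coordinates
true_dist = torch.cdist(true_coords, true_coords)     # target pairwise distances

encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(200):
    optimizer.zero_grad()
    pred_coords = encoder(features)                    # predicted 3D coordinates
    pred_dist = torch.cdist(pred_coords, pred_coords)  # implied pairwise distances
    loss = ((pred_dist - true_dist) ** 2).mean()       # minimize distance discrepancy
    loss.backward()
    optimizer.step()
```

The decades of structural biology are hidden in the two lines that define `features` and `true_dist`; the rest requires no proteomics at all, which is exactly the point made above.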

8: The near future of AI Economics

The near-absolute domination of Nvidia in AI hardware is not going away anytime soon. Despite efforts by major hardware companies and startups alike, supplanting Nvidia is just too costly. Even if a company were able to create better hardware and supply chains, it would still need to tackle the software compatibility challenge. Major AI frameworks like pyTorch and TensorFlow are all compatible with Nvidia, and little else. These are all open source, and although supported by major companies, like all open-source software their foundation is their communities. And communities can be notoriously hard to shake. All this suggests that the price of Nvidia GPUs will keep increasing, fuelled by the rise of ever bigger LLMs.

So where does that leave us for the future of AI economics? Like anything valuable, if the current trend continues, GPU computation time will see the appearance of derivatives. More specifically, *futures* and *options* on GPU computing hours could be bought and sold. The other coming trend is in energy trading: modern AI is extremely hungry for electricity, to the point of needing dedicated power plants. If the current trends in AI continue, with major companies and countries building and investing in bigger and more power-hungry datacenters, this could lead to significant disruptions in some parts of the energy sector. Again, the markets for energy derivatives (*futures* and *options*) could be significantly affected. Finally, *bond* markets and inflation are also poised for some disruption, as the building of the extremely expensive facilities necessary for AI is likely to result in more borrowing.

When it comes to AI: Nvidia GPUs and Electricity are king. Link below: Google is buying nuclear power.
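To give a rough sense of why electricity becomes a market-moving input, here is a back-of-the-envelope sketch. Every number (GPU count, power draw, utilization, price) is a hypothetical placeholder, not a figure from the article.

```python
# Back-of-the-envelope estimate of a GPU cluster's annual electricity demand.
# All numbers below are hypothetical placeholders for illustration only.
num_gpus = 50_000            # GPUs in the datacenter
power_per_gpu_kw = 1.0       # GPU plus its share of cooling/networking, in kW
utilization = 0.8            # average fraction of time under load
hours_per_year = 24 * 365
price_per_kwh = 0.10         # USD per kWh (placeholder)

energy_kwh = num_gpus * power_per_gpu_kw * utilization * hours_per_year
annual_cost = energy_kwh * price_per_kwh

print(f"Energy: {energy_kwh / 1e6:.1f} GWh/year")   # ~350 GWh/year
print(f"Cost:   ${annual_cost / 1e6:.1f}M/year")    # ~$35M/year
```

At that scale, a single facility's consumption is in the range of a small power plant's output, which is why dedicated generation and energy derivatives enter the picture.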

9: Applied Machine Learning Africa!

I have been to more scientific conferences than I can count, from the smallest to the biggest like NeurIPS (even back when it was still called NIPS). Of all these events, AMLD Africa is my favorite, by far. I first met the team two years ago when they organized the first in-person edition of the conference at the University Mohammed VI Polytechnic. I was immediately charmed by the warmth, professionalism, ambition and fearlessness of the team. So much so that I joined the organization.

AMLD Africa is unique in every aspect: by its focus on Africa, by its scope and ambition, by its incredibly dynamic, young, passionate, honest and resourceful team, all volunteers. It is hard to believe that this year in Nairobi was only the second in-person edition. AMLD Africa does the impossible without even realizing it. It has an old-school vibe of collegiality, community and most importantly **__fun__** that is so lacking in most conferences today, all without compromising on the quality of the science. It offers one of the best windows into everything AI and Machine Learning happening in Africa.

Africa is a continent on the rise, but a very hard continent to navigate because of information bottlenecks. Traveling across Africa is not easy (it took me 28 hours from Nairobi to Casablanca), there are language barriers separating the continent into different linguistic regions (French, English and Portuguese being the main ones), and all too often we simply do not look to Africa for solutions. AMLD Africa is solving all that by bringing everybody together for a few days in one of the best environments I have gotten to experience. Thank you AMLD Africa.
appliedmldays.org/events/amld-af...

10: Digital: The perfect undying art

Great paintings deteriorate, great statues erode, fall and break, great literature is forgotten and its subtleties lost as languages forever evolve and disappear. But now we have a new kind of art, a type of art that in theory cannot die: it transcends space and time and can remain pristine for ever and ever. That is digital art.

Digital art is pure information. It can therefore be copied for ever and ever, exactly reproduced for later generations. Digital art cannot erode, cannot break; it is immortal. Such is the power of bits: zeros and ones, so simple and yet so awesome. Through modern AI and Large Language Models we can now store the subtleties of languages in an abstract vectorial space, also pure information, that can be copied ad infinitum without loss.

Let's think about the future, a future so deep that we can barely see its horizon. In that future, with that technology, we can resurrect languages. However, the languages resurrected will be the ones we speak today. We also have a technology that allows us to store reliably and copy indefinitely: the *Blockchain*, the most reliable and resilient ledger we have today. We have almost everything we need to preserve what we cherish. Let's think of a deep future.
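As a small illustration of "pure information" being copied without loss, here is a hedged Python sketch (the file names are hypothetical): it copies a digital artwork byte for byte and uses a cryptographic hash to verify that the copy is identical to the original. The same kind of fingerprint is what a blockchain ledger could anchor for posterity.

```python
# Minimal sketch: copy a digital artwork bit for bit and prove the copy is
# identical using a cryptographic hash. File names are hypothetical.
import hashlib
import shutil

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

shutil.copyfile("artwork_original.png", "artwork_copy.png")  # exact byte copy

original_hash = sha256_of("artwork_original.png")
copy_hash = sha256_of("artwork_copy.png")
assert original_hash == copy_hash   # the copy carries exactly the same information
print(original_hash)                # a fingerprint that a ledger could record
```

A marble statue cannot be duplicated this way; a file can, any number of times, with a one-line proof that nothing was lost.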