8: The near future of AI Economics
The near-absolute domination of Nvidia in AI hardware is not going away anytime soon. Despite efforts by major hardware companies and startups alike, supplanting Nvidia is simply too costly. Even if a company managed to build better hardware and supply chains, it would still face the software compatibility challenge. Major AI frameworks like PyTorch and TensorFlow are compatible with Nvidia, and little else. These frameworks are open source, and although backed by major companies, like all open-source software their foundation is their communities. And communities are notoriously hard to shake. All this suggests that the price of Nvidia GPUs will keep increasing, fuelled by the rise of ever-bigger LLMs.
So where does that leave us for the future of AI economics? Like anything valuable, if the current trend continues, GPU computation time will see the emergence of derivatives. More specifically, *futures* and *options* on GPU computing hours could be bought and sold.
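As a rough sketch of what such contracts could look like, here is a toy payoff calculation for a cash-settled future and a call option on compute hours. All prices, quantities and function names are invented for illustration; no such standardized contracts exist today.

```python
# Toy sketch: settling hypothetical derivatives on GPU compute hours.

def futures_payoff(contract_price: float, spot_price: float, hours: int) -> float:
    """Payoff for the long side of a future: (spot - agreed price) * quantity."""
    return (spot_price - contract_price) * hours

def call_option_payoff(strike: float, spot_price: float, hours: int) -> float:
    """A call option pays only if the spot price rises above the strike."""
    return max(spot_price - strike, 0.0) * hours

# A buyer locks in 1,000 GPU-hours at $2.50/hour; spot settles at $3.00.
print(futures_payoff(2.50, 3.00, 1000))      # 500.0 gain for the long side
print(call_option_payoff(2.50, 3.00, 1000))  # 500.0
print(call_option_payoff(3.50, 3.00, 1000))  # 0.0 (option expires worthless)
```

The same payoff logic would apply whether the underlying is a barrel of oil or an hour on an H100; what is new is only the underlying asset.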
The other coming trend is in energy trading. Modern AI is extremely hungry for electricity, to the point of needing dedicated power plants. If current trends in AI continue, with major companies and countries building and investing in ever bigger and more power-hungry datacenters, significant disruptions could follow in parts of the energy sector. Again, the markets for energy derivatives (*futures* and *options*) could be significantly affected. Finally, *bond* markets and inflation are also poised for some disruption, as building the extremely expensive facilities AI requires is likely to mean more borrowing.
When it comes to AI, Nvidia GPUs and electricity are king.
Link below: Google is buying nuclear power.
9: Applied Machine Learning Africa!
I have been to more scientific conferences than I can count, from the smallest to the biggest, like NeurIPS (even back when it was still called NIPS). Of all these events, AMLD Africa is my favorite, by far.
I first met the team two years ago, when they organized the first in-person edition of the conference at the University Mohammed VI Polytechnic. I was immediately charmed by the warmth, professionalism, ambition and fearlessness of the team. So much so that I joined the organization.
AMLD Africa is unique in every respect: in its focus on Africa, in its scope and ambition, and in its incredibly dynamic, young, passionate, honest and resourceful team, all volunteers. It is hard to believe that this year in Nairobi was only the second in-person edition.
AMLD Africa does the impossible without even realizing it. It has an old school vibe of collegiality, community and most importantly **__fun__** that is so lacking in most conferences today. All without compromising on the quality of the science.
It offers one of the best windows into everything AI and machine learning happening in Africa. Africa is a continent on the rise, but a very hard one to navigate because of information bottlenecks. Traveling across Africa is not easy (it took me 28 hours to get from Nairobi to Casablanca), language barriers split the continent into different linguistic regions (French, English and Portuguese being the main ones), and all too often we simply do not look to Africa for solutions.
AMLD Africa is solving all of that by bringing everybody together, for a few days, in one of the best environments I have ever experienced.
Thank you AMLD Africa.
10: Digital: The perfect undying art
Great paintings deteriorate, great statues erode, fall and break, and great literature is forgotten, its subtleties lost as languages evolve and disappear. But now we have a new kind of art. A kind of art that in theory cannot die, that transcends space and time and can remain pristine forever and ever. That is digital art.
Digital art is pure information. It can therefore be copied forever, exactly reproduced for later generations. Digital art cannot erode, cannot break; it is immortal. Such is the power of bits: simple zeros and ones, and yet so awesome. Through modern AI and Large Language Models we can now store the subtleties of languages in an abstract vector space, also pure information, that can be copied ad infinitum without loss. Let's think about the future, a future so deep that we can barely see its horizon. In that future, with that technology, we could resurrect languages. The languages resurrected, however, will be the ones we speak today.
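A toy sketch of this idea, with made-up 4-dimensional vectors (real LLM embeddings have hundreds or thousands of dimensions): meaning lives in the geometry of the vectors, and the numbers themselves can be duplicated bit-exactly, forever.

```python
# Toy sketch: "meaning" stored as pure information in a vector space.
# The 4-dimensional vectors below are invented for illustration.
import math

def cosine_similarity(a, b):
    """Angle-based similarity: related meanings point in similar directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

king  = [0.9, 0.7, 0.1, 0.3]
queen = [0.8, 0.8, 0.1, 0.4]
stone = [0.1, 0.0, 0.9, 0.8]

# Copies of these numbers are bit-exact, so the encoded "meaning"
# survives duplication without any loss.
print(cosine_similarity(king, queen) > cosine_similarity(king, stone))  # True
```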
We also have a technology that lets us store reliably and copy indefinitely: the *Blockchain*, the most reliable and resilient ledger we have today. We have almost everything we need to preserve what we cherish.
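To illustrate the ledger idea, here is a minimal hash-chain sketch, not a real blockchain client. It only shows the core property: each record embeds the hash of the previous one, so tampering with any record anywhere is detectable.

```python
# Minimal hash-chain sketch of the ledger idea behind a blockchain.
import hashlib

def make_block(data: str, prev_hash: str) -> dict:
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"data": data, "prev_hash": prev_hash, "hash": digest}

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        # Every block's stored hash must match its recomputed hash...
        expected = hashlib.sha256(
            (block["prev_hash"] + block["data"]).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        # ...and each block must point at its predecessor's hash.
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("a digital artwork, encoded as bits", "0" * 64)
chain = [genesis, make_block("a second artwork", genesis["hash"])]
print(chain_is_valid(chain))    # True
chain[0]["data"] = "a forgery"  # tamper with history
print(chain_is_valid(chain))    # False: the alteration is detected
```

A real blockchain adds distributed consensus on top of this structure, which is what makes the ledger resilient rather than merely tamper-evident.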
Let's think of a deep future.
11: AI development has reached a limit and it is not hardware
There is a shortage of GPUs, there is a shortage of RAM, there is a shortage of electricity. Still, none of these is the real limiting factor: it is a skill and research issue.
For more than a decade now, the AI world has been dominated by an open-source arms race whose effect has been a near-total focus on engineering, to the detriment of research and meaningful developments. The result has been over-engineered proofs-of-concept, chief amongst them Transformers. The original paper mostly demonstrated that if you put attention over everything, several times over, you can beat LSTMs. Is that a surprising result? Not so much. It is morally similar to ResNets, which showed that the more you connect layers, the better the results. That is also not very surprising. Both significantly increased the size of models.
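The core operation the Transformer paper is built on reduces to one short formula, softmax(QKᵀ/√d)V. A minimal NumPy sketch with random toy matrices standing in for the learned projections of a real model:

```python
# Minimal sketch of scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
# Toy random matrices replace the learned projections of a real model.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how strongly each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V  # each output is a weighted mix of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed vector per query position
```

The mechanism itself is a few lines; the scale comes from stacking many of these layers, with learned projections, over billions of parameters.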
These are mostly engineering innovations. Although they did open interesting theoretical questions, they did not come from strong theoretical foundations. They came from trial and error: copy-pasting existing technologies and connecting them in new ways.
And then these technologies got themselves copy-pasted and reconnected. Fast-forward to today, and we have massive behemoths draining the computational resources of the world.
Even AI curricula followed this trend. Today, most only very quickly skim over the mathematical and theoretical foundations, focusing more and more on building pieces of increasing complexity while dodging any explanation of their inner workings. This has culminated in today's "AI builders" trend, where assembly lines of fully trained LLMs are strung together.
Here is the true limitation of AI: this mindset has been pushed so far that we have hit a physical limit. Now we can either build a much bigger Nvidia, produce 100× more RAM, and lower the price of a kWh to unseen levels, or go back to the theory and design models that are more optimal.
Optimal not because they are distilled, not because they use lower precision, but because they do not rely on Transformers, diffusion, or any of the very costly paradigms currently in use, in the shape and form they are used today.
Just as physical computers have been shrunk to fit in the palm of your hand, immaterial AI models can also be made smaller.