Your Essential AI Picks of the Week - Nov 15th, 2024
Fascinating AI articles, papers, and books I discovered this week
Every Friday, I aim to share a list of AI papers, blog articles, books, and videos that I find worth reading or watching. While many will be recent, I'll also include older but equally, if not more, significant works.
Two neurons connecting …
As a contemplative entry for this week's 'Essential AI Picks', take a moment to watch two neurons slowly approaching each other and finally forming a connection. A great inspiration to ponder the profound effect of such a simple interaction!
What happens to AI in the U.S.? The election of Trump and its potential effects on AI regulation
The recent U.S. elections stirred global waves, hinting at unpredictable changes. On AI and regulation, we see some early directions:
In October 2023, the Biden-Harris administration introduced the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Fact Sheet) to prepare for AI regulation and the development of standards (e.g. at NIST).
Trump, however, announced that “We will repeal Joe Biden's Dangerous Executive Order That Hinders AI Innovation and imposes Radical Left-wing Ideas on the Development of the Technology.”
Futurism.com: Trump Planning to Unleash Artificial Intelligence by Repealing Restrictions
Last week, I had planned to release a full blog post on Trump and AI, but a bout of COVID meant I had to pause. Meanwhile, I came across a video by Matt Wolfe that captures some of my thoughts on the topic. Learn more about Trump's plans and their potential outcomes for AI regulation in the U.S. here: YouTube - AI News: What Trump Means for AI?
Large Language Models
Is scaling current LLM architectures coming to an end?
With several companies, including OpenAI, delaying the release of their next LLM upgrades, experts speculate that scaling pre-training of the same decoder architecture with ever more GPUs may be reaching a plateau.
In my experience with large-scale simulations on top-tier supercomputers, I’ve seen similar shifts—where scaling alone stops yielding gains and sparks a period of theoretical exploration.
As Ilya Sutskever puts it:
“The 2010s were the age of scaling, now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters more now than ever.”
However, he is not yet willing to share details on how his team is addressing the issue, other than saying that his company SSI is working on an alternative approach to scaling up pre-training.
Read more here:
With Transformer architectures, including decoder-only models, now around the 7-year mark, it's really exciting to explore what advancements lie ahead. I'm looking forward to sharing more about innovations like Kolmogorov-Arnold Networks, xLSTMs, and Mamba, which promise new directions and capabilities beyond current models.
LLMs and the truth
Do LLMs have a legal duty to tell the truth? Prof. Sandra Wachter introduces a new measure called “careless speech”, which she defines against ‘ground truth’ in LLMs and related risks, including hallucinations, misinformation, and disinformation. She also investigates the existence of truth-related obligations in EU human rights law and in the Artificial Intelligence Act, the Digital Services Act, the Product Liability Directive, and the Artificial Intelligence Liability Directive. The paper concludes that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes.
Enjoy your read!