Resonance Calendar for 2023-03-20

What got my attention last week?

As anyone with an internet connection knows, the country’s primary pastime of the past week has been writing explanations of LLMs and generative AI (when they weren’t explaining the SVB bank failure and weighing in on its implications). A good deal of the output was not worth the expenditure of electrons, but a few pieces were jewels worth reading.

ChatGPT and Generative AI

Ted Chiang has won a serious number of awards for his science fiction. What always knocks me out about his work is the breadth of his knowledge and the depth of his thinking. Last week, I read his February 9 article in the New Yorker, ChatGPT is a Blurry JPEG of the Web. It’s brilliant.

Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

No less brilliant, and just as worth reading, is What is ChatGPT Doing … and Why Does it Work? by Stephen Wolfram. He introduces “reasonable continuation” as the principle that ‘drives’ an LLM like ChatGPT, and proceeds to discuss the voodoo of GPT’s “temperature” parameter, which determines how often ChatGPT, rather than always using the highest-ranked word in its assessment, will choose lower-ranked words. From there, he goes into increasing depth, like the statistician/mathematician he is. Again, it is really well done.
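The temperature idea Wolfram describes can be sketched in a few lines. This is not his code or OpenAI’s implementation, just a minimal illustration of the standard technique: scale the model’s raw next-word scores (logits) by 1/temperature before converting them to probabilities, so low temperatures concentrate on the top-ranked word and high temperatures let lower-ranked words through more often. The scores below are made up for the example.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick an index from raw scores, softened or sharpened by temperature.

    temperature -> 0 approaches always choosing the highest-ranked word;
    temperature > 1 flattens the distribution toward uniform.
    """
    # Scale by 1/temperature, then apply a numerically stable softmax.
    scaled = [s / temperature for s in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical scores for three candidate next words, best first.
logits = [4.0, 2.0, 1.0]
```

At temperature 0.01 the sampler effectively always returns index 0; at temperature 5 it regularly picks the second- and third-ranked candidates, which is the behavior Wolfram calls out as producing more “interesting” text.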

Neural Networks

I recently ran across 200-Year-Old Math Opens Up AI’s Mysterious Black Box from February 25. It describes the work of Pedram Hassanzadeh, a fluid dynamicist at Rice University in Houston. He and his team are hoping that their use of Fourier analysis can lead to better neural networks and help them better understand the underlying physics of climate and turbulence. This work is worth following.

“For years, we heard that neural networks are black boxes and there are too many parameters to understand and analyze. And sure, when we just looked at some of these parameters, they did not make much sense, and they all looked different,” Hassanzadeh says. However, after Fourier analysis of all these kernels, he says, “we realized they are … spectral filters.”

Scientists have for years tried combining these filters to analyze climate and (atmospheric) turbulence. However, these combinations often did not prove successful in modeling these complex systems. Neural networks learned ways to combine these filters correctly, Hassanzadeh says.
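The “spectral filter” observation has a simple concrete form, and a toy sketch (my own illustration, not the team’s code) may make it clearer: take the discrete Fourier transform of a convolution kernel, and its magnitude at each frequency tells you how strongly the kernel passes that spatial frequency. The averaging kernel below is a made-up example.

```python
import cmath

def dft_magnitudes(kernel):
    """Magnitude of the discrete Fourier transform of a 1-D kernel.

    Entry k is how strongly the kernel, used as a convolution filter,
    passes the k-th spatial frequency.
    """
    n = len(kernel)
    mags = []
    for k in range(n):
        s = sum(kernel[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

# A 3-tap averaging (smoothing) kernel, zero-padded to 8 taps.
kernel = [1/3, 1/3, 1/3, 0, 0, 0, 0, 0]
mags = dft_magnitudes(kernel)
# The response peaks at frequency 0 and falls off at higher
# frequencies: viewed this way, the smoothing kernel is a
# low-pass spectral filter.
```

The same transform applied to a trained network’s kernels is what let Hassanzadeh’s team recognize them as combinations of familiar low-pass, high-pass, and band-pass filters rather than inscrutable arrays of parameters.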

“… a song of lamentation”

My friend Per Bergman recently sent me a post by Mark Johnson entitled Meditations: A Requiem for Descartes Labs. It was all too familiar, and heartbreaking.

… here’s the story of Descartes Labs, how a remarkable geospatial startup went from flying high in orbit to end in a fiery crash back to Earth.

Rich Miller @rhm2k