State of AI with Google Notebook: will we escape the big enshittification?

I haven't been sharing much about the AI tools I've been testing these last few months, nor about the ones I currently use. That being said, I found Google's latest tool, Notebook, quite illustrative of the current state of AI.

Notebook is designed as a sort of mood board for your ideas. Collect texts, links, and various data within your personal space, and Google will help you digest them, organize them, ask questions about them, etc. Pretty much a full-on demo of all the AI use cases you could think of in late 2024. One of the most glamorous features involves giving a document or a link to Notebook and asking it to create an AI-generated podcast with two hosts presenting and discussing the content.

As LinkedIn gets more enshittified by the day with crappy content and AI-generated nonsense, I was morbidly fascinated by the offer of turning one of my articles into a lively discussion between two podcast hosts.

So, I went down the rabbit hole.

I took my August 28th article on the corporate IETS ratio and asked Notebook to do its magic. Give it a listen:

[Audio: IETS notebook, 4:31]

What do you think?

It's amazing, right? Lively. Engaging. Perfectly on par with what you would expect from a Bloomberg podcast, with the usual cues and back-and-forths to break down a business concept. This is clearly a tipping point where 80% of media content, whether text, audio, or video, can be perfectly AI-generated. And when I write perfectly, I mean perfectly, given the level of expectations we have about it.

And what is the level of expectations for most of the content we deal with? Pretty low. See above: enshittification.

In this little experiment, I'm not even going to try to understand why the AI changed my name to Philippe August (getting my proper last name was a bridge too far 🙃); more importantly, it didn't understand what the article was discussing. The IETS ratio, clearly explained as the Internal/External Time Spent ratio for innovation teams, was transformed with superb confidence into "Ideas, Experiment, Test and Scale."

Not only that, but armed with this concept, which made much more statistical sense to the large language model (it probably sounded like the venerable plan-do-check-act cycle of the eighties, or any design thinking staple), the 4-minute-30 podcast it generated completely rewrote the ideas and narrative of the original text around this new "IETS."
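For context (and since the AI clearly skipped this part), the actual ratio is nothing more than time spent inside the organization divided by time spent with the outside world. A back-of-the-envelope sketch, with illustrative numbers and variable names that are mine, not from the original article:

```python
# Illustrative sketch of the real IETS ratio: Internal vs. External Time Spent.
# The hours below are made up for the example; track your own team's numbers.

internal_hours = 32  # meetings, reporting, internal alignment in a given week
external_hours = 8   # time spent with customers, partners, the outside world

iets_ratio = internal_hours / external_hours
print(f"IETS ratio: {iets_ratio:.1f}")  # 4.0 -> four hours inside for every hour outside
```

Nothing exotic, which is precisely why the confident reinvention into "Ideas, Experiment, Test and Scale" is so telling.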

So, OK, OK... no one is surprised anymore that AI tools are not very reliable at the moment, and that when hitting a conceptual wall, they will make up stuff that sounds "good." But what is interesting here is that, when I listened to the AI-generated result for the first time, I was instantly certain of one core truth: this bullshit would have created much more engagement on LinkedIn. The content was perfect: perfectly reassuring, perfectly meeting the average platform expectations, and perfectly in tune with what should be said about innovation.

I can confidently predict that many (if not most) of us will not be able to use LinkedIn meaningfully before the end of 2025. The feedback loop is inescapable: massively mediocre content fed the AI tools, and those tools now give us back the same statistically poor results. More crap in, more crap out, at exponential speed.

All that being said, this very anecdotal experiment with Google Notebook still leaves me with a question: which tech players will have an incentive to break through the statistical crap barrier and push AI to a stage where we get 99% reliable results, not just 80% ones? Probably not Google or Apple (let alone Facebook)... but will Microsoft go the extra mile for its B2B customers?

And most importantly, will we have to pay a premium for 99%-accurate AI-generated content versus the merely 80%-OK kind? Will we have to become digital refugees, fleeing open forums, all social media, and any platform where average AI content floods everything else?