AI and journalism might not be that complicated after all
Zach Seward (editorial director of AI initiatives at The New York Times) just gave a cogent and well-informed deep-dive on how artificial intelligence has been (mostly) ill-used by media so far, and (in a few cases) used remarkably well.
One of the "good" use cases is what The New York Times has been experimenting with in its "Visual Investigations": feeding an AI a large volume of easily accessible satellite images of the war in Gaza and asking it to analyze the data at scale in order to establish facts (📌):
The Times programmed an artificial-intelligence tool to analyze satellite imagery of South Gaza to search for bomb craters. The AI tool detected over 1,600 possible craters. We manually reviewed each one to weed out the false, like shadows, water towers, or bomb craters from a previous conflict. We measured the remaining craters to find ones that spanned roughly 40 feet across or more, which experts say are typically formed only by 2,000-pound bombs. Ultimately, we identified 208 of these craters in satellite imagery and drone footage, indicating 2,000-pound bombs posed a pervasive threat to civilians seeking safety across South Gaza.
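The workflow described above is essentially a detect-then-filter pipeline: an AI flags candidate craters, humans weed out false positives, and a size threshold isolates the craters consistent with 2,000-pound bombs. A minimal sketch of that filtering step, with entirely hypothetical data (the Times' actual detection model and review process are not public):

```python
# Hypothetical sketch of the crater-filtering step: an AI detector proposes
# candidates, manual review marks false positives, and a diameter threshold
# keeps only craters consistent with 2,000-pound bombs.

FEET_PER_METER = 3.28084
MIN_DIAMETER_FT = 40  # threshold experts associate with 2,000-pound bombs

# Each candidate: (measured diameter in meters, verified-as-crater flag
# assigned during manual review). Values are invented for illustration.
candidates = [
    (13.0, True),   # ~42.6 ft, verified -> counts
    (9.0, True),    # ~29.5 ft, verified but too small
    (14.5, False),  # manual review: false positive (shadow, water tower, ...)
    (12.5, True),   # ~41.0 ft, verified -> counts
]

large_craters = [
    diameter for diameter, verified in candidates
    if verified and diameter * FEET_PER_METER >= MIN_DIAMETER_FT
]

print(len(large_craters))  # 2 candidates survive both filters
```

The same two-stage shape (cheap automated recall, then expensive human precision) is what makes the 1,600-to-208 reduction tractable: the AI only has to avoid missing craters, not to be right every time.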
The core application of AI in journalism, for Zach, does indeed fit my own picture of a Photoshop for the Mind: an augmentation of our analytical bandwidth, not a replacement:
(...) using generative AI, give us a sense of the technology's greatest promise for journalism (and, I'd argue, lots of other fields). Faced with the chaotic, messy reality of everyday life, LLMs are useful tools for summarizing text, fetching information, understanding data, and creating structure.
For the rest of us in 2024, these first insights from the front-runners of the AI wave are tremendously important: they map out the possible future of adjacent markets and where the bottlenecks will be.
Meanwhile... your coding job? Yeah, it might completely go away soon: