Link Roundup: August 26, 2024


August 26, 2024

This week, I write about some articles full of (smart, correct) technical skepticism, and about the benefits of removing, rather than adding, components to systems.

As always, I generate summaries using AI and edit those summaries for accuracy and usefulness. Then, I offer some thoughts of my own.

David Schmudde, “What if Data is a Bad Idea?”

Link to Article

Generated Summary (ChatGPT)

The article explores Alan Kay's provocative idea that “data” might be a fundamentally flawed concept. It discusses the ethical and political implications of data, particularly how it objectifies people and consolidates power in large organizations. The author suggests that data should be more flexible and context-aware, likening it to “ambassadors” in negotiations. The piece calls for rethinking data's role in representing human identity, emphasizing the need for a human-centric approach to data management that resists power consolidation.

My Thoughts

Part of me feels that, sometimes, computer scientists and software engineers are better at identifying the problems and stakes of media and digital tools than the academic humanists who purport to be necessary for ethical development. Really, though, I think those problems have simply become so clear that industry practitioners and academics are converging on them.

In any case, this article does an excellent job of laying out the dangers of data as an objectifying force in a way that coheres with some of the best academic scholarship on the topic. Schmudde’s observations that “data naturally reduces complex conceptions of identity into coarse representations” and “data about identity is generally held in systems far away from the people they identify” essentially summarize the two broad tendencies of media theory—representationalist and materialist perspectives (a spectrum, not a binary)—in an accessible manner. Definitely worth a read.

Justine Tunney, “AI Training Shouldn’t Erase Authorship”

Link to Article

Generated Summary (ChatGPT)

Justine Tunney argues that AI training often strips authorship from open-source code, erasing the contributions and identities of developers. She highlights the importance of acknowledging creators to maintain respect and inspiration within the tech community. Tunney warns that failing to recognize individual contributions could undermine the collaborative culture that has driven scientific and technological progress. She calls for a future where AI enhances attribution rather than erases it, preserving the connection between creators and their work.

My Thoughts

There are several ideas in this article that piqued my humanistic interest. What are the stakes of authorship? What does AI remember about us? Who are we in the now-literal machine of history?

While there are several things I like about scholars like Barthes and Foucault (and even more things I dislike about Foucault), their critique of authorship, and more so the lazy way in which many humanists read it, is obviously problematic and wrong. It enabled a cascade of bad takes from across ideologies, all premised on the assumed good of separating ideas from their authors. Another phrase for this: erasing context.

Tunney’s article highlights one example of this glaring error and shows how it is, sadly, one of the clear accidents (in Virilio’s sense) of AI tools. Tunney writes:

Open source is a gambit where you give up your leverage to make people want your thing. So why would anyone do it full time? It's because what I get paid in, is respect. Folks see my name at the top of each source code file, and they remember that I was someone who helped them. It's because if I'm respected and people are paying attention to me, that it becomes easy to find an honest way to survive in the modern economy. In the future, this might in fact become the only way to survive.

“Complicating authorship,” often (not always) cast as a positive phenomenon in new media and digital scholarship, is a mistake.

Greg Kogan, “Removing stuff is never obvious yet often better”

Link to Article

Generated Summary (ChatGPT)

The article discusses the benefits of removing unnecessary elements from products, projects, or companies to reduce complexity and improve results. It uses the example of a confusing pricing calculator on a company's website, which, when removed, led to higher user engagement and fewer misunderstandings. The author argues that people tend to add rather than remove things, even when subtraction could provide more value, and encourages questioning the necessity of each component to achieve simplicity and effectiveness.

My Thoughts

This one is easy! I agree.