• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 14th, 2023



  • There’s this podcast I used to enjoy (I still enjoy it, but they stopped making new episodes) called Build For Tomorrow (previously known as The Pessimists Archive).

    It’s all about times in the past where people have freaked out about stuff changing but it all turned out okay.

    After having listened to every single episode — some multiple times — I’ve got this sinking feeling that just mocking the worries of the past misses a few important things.

    1. The paradox of risk management. If you have a valid concern, and we collectively do something to respond to it and prevent the damage, it ends up looking as if you were worried over nothing.
    2. Even inventions that are beneficial overall can still bring new harms with them, and you can acknowledge both parts at once. When you invent trains, you also invent train crashes. When you invent electricity, you also invent electrocution. That doesn’t mean you need to reject the whole idea, but you do need to respond to the new problems.
    3. There are plenty of cases where we have unleashed horrors onto the world while mocking the objections of the pessimists. Lead, PFAS, CFCs, radium paint, etc.

    I’m not so sure that the concerns about AI “killing culture” actually are as overblown as the worry about cursive, or record players, or whatever. The closest comparison we have is probably the printing press. And things got so weird with that so quickly that the government claimed a monopoly on it. This could actually be a problem.


  • If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

    Sure, people shouldn’t blindly believe the output of their own prompt. But if a person can generate that output, a site can use the API to generate similar output for a similar request, and a bot can generate it and post it to social media.

    Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.


  • I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.

    The theme that I would highlight here though:

    More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.

    But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.

    How will you be ready for that if you don’t get practice along the way?



  • Basically this: Flying Too High: AI and Air France Flight 447

    Description

    Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done for.

    Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

    In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn’t, and that the consequences of asking the wrong question are disastrous.



  • I’m sympathetic to the reflexive impulse to defend OpenAI out of a fear that this whole thing results in even worse copyright law.

    I, too, think copyright law is already smothering the cultural conversation and we’re potentially only a couple of legislative acts away from having “property of Disney” emblazoned on our eyeballs.

    But don’t fall into their trap of seeing everything through the lens of copyright!

    We have other laws!

    We can attack OpenAI on antitrust, likeness rights, libel, privacy, and labor laws.

    Being critical of OpenAI doesn’t have to mean siding with the big IP bosses. Don’t accept that framing.



  • That’s the reason we got copyright, but I don’t think that’s the only reason we could want copyright.

    Two good reasons to want copyright:

    1. Accurate attribution
    2. Faithful reproduction

    Accurate attribution:

    Open source thrives on the notion that: if there’s a new problem to be solved, and it requires a new way of thinking to solve it, someone will start a project whose goal is not just to build new tools to solve the problem but also to attract other people who want to think about the problem together.

    If anyone can take the codebase and pretend to be the original author, that will splinter the conversation and degrade the ability of everyone to find each other and collaborate.

    In the past, this was pretty much impossible to get away with, because you could check a search engine or social media to find the truth. But with enshittification and bots at every turn, that looks less and less guaranteed.

    Faithful reproduction:

    If I write a book and make some controversial claims, yet it still provokes a lot of interest, people might be inclined to publish slightly different versions to advance their own opinions.

    Maybe a version where I seem to be making an abhorrent argument, in an effort to mitigate my influence. Maybe a version where I make an argument that the rogue publisher finds more palatable, to use my popularity to boost their own arguments.

    This actually happened during the early days of publishing, by the way! It’s part of the reason we got copyright in the first place.

    And again, it seems like this would be impossible to get away with now, buuut… I’m not so sure anymore.

    Personally:

    I favor piracy in the sense that I think everyone has a right to witness culture even if they can’t afford the price of admission.

    And I favor remixing, because the cultural conversation should be an active read-write two-way street, not just passive consumption.

    But I also favor some form of licensing, because I think we have a duty to respect the integrity of the work and the voice of the creator.

    I think AI training is very different from piracy. I’ve never downloaded a mega pack of songs and said to my friends “Listen to what I made!” I think anyone who compares OpenAI to pirates (favorably) is unwittingly helping the next set of feudal tech lords build a wall around the entirety of human creativity, and they won’t realize their mistake until the real toll booths open up.