Got bills to pay and mouths to feed, there ain’t nothin’ in this world for free. No I can’t slow down, I can’t hold back, though you know I wish I could. There ain’t no rest for the wicked… til we close our eyes for good.
There’s this podcast I used to enjoy (I still enjoy it, but they stopped making new episodes) called Build For Tomorrow (previously known as The Pessimists Archive).
It’s all about times in the past where people have freaked out about stuff changing but it all turned out okay.
After having listened to every single episode — some multiple times — I’ve got this sinking feeling that just mocking the worries of the past misses a few important things.
I’m not so sure that the concerns about AI “killing culture” actually are as overblown as the worry about cursive, or record players, or whatever. The closest comparison we have is probably the printing press. And things got so weird with that so quickly that the government claimed a monopoly on it. This could actually be a problem.
If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.
Sure, people shouldn’t blindly believe the output of their prompt. But even if you don’t, a site can use the API to generate similar output for a similar request. A bot can generate it and post it to social media.
Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.
I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.
The theme that I would highlight here though:
More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.
But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.
How will you be ready for that if you don’t get practice along the way?
Nor is losing your night vision to the glare of a car (it’s always a pickup) behind you with too-bright lights that fill your mirrors.
It really fucking is. Nothing is a bigger red flag to me than a pickup. 98% of pickup drivers are assholes.
Basically this: Flying Too High: AI and Air France Flight 447
Description
Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done for.
Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…
In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn’t - and that the consequences of asking the wrong question are disastrous.
Surprising number of people taking this seriously.
Don’t worry. Someone will soon come by to remind us that it’s pointless to regulate AI, and also harmful to do it, and it’s actually a good thing for everyone, and also we’ll be shoveling shit until we die if we don’t get on board, and please oh please just let me get off to one more deepfake of my classmate before you take away my toy it’s not faiiiiir.
Yeah… What a mess. A horrible, horrible idea.
Mass producing disguised explosives is risky business.
Obviously they wanna price them low, to attract buyers in the target market. But if you price them too low, they become an opportunity for middlemen to resell to another market.
And now you’ve spread several batches of explosives to who-knows-where.
Hopefully they thought of that and restricted the detonation trigger to specific country codes. But that doesn’t erase the fact that there are explosives in the device.
attendee’s
Here comes an s!
Arguably one of the most important groups to hear from if we’re gonna find the right balance between freedom to create and freedom from harm.
The pieces fit in my ass
I’m sympathetic to the reflexive impulse to defend OpenAI out of a fear that this whole thing results in even worse copyright law.
I, too, think copyright law is already smothering the cultural conversation and we’re potentially only a couple of legislative acts away from having “property of Disney” emblazoned on our eyeballs.
But don’t fall into their trap of seeing everything through the lens of copyright!
We have other laws!
We can attack OpenAI on antitrust, likeness rights, libel, privacy, and labor laws.
Being critical of OpenAI doesn’t have to mean siding with the big IP bosses. Don’t accept that framing.
Not even stealing cheese to run a sandwich shop.
Stealing cheese to melt it all together and run a cheese shop that undercuts the original cheese shops they stole from.
That’s the reason we got copyright, but I don’t think that’s the only reason we could want copyright.
Two good reasons to want copyright:
Accurate attribution:
Open source thrives on the notion that: if there’s a new problem to be solved, and it requires a new way of thinking to solve it, someone will start a project whose goal is not just to build new tools to solve the problem but also to attract other people who want to think about the problem together.
If anyone can take the codebase and pretend to be the original author, that will splinter the conversation and degrade the ability of everyone to find each other and collaborate.
In the past, this was pretty much impossible because you could check a search engine or social media to find the truth. But with enshittification and bots at every turn, that looks less and less guaranteed.
Faithful reproduction:
If I write a book and make some controversial claims, yet it still provokes a lot of interest, people might be inclined to publish slightly different versions to advance their own opinions.
Maybe a version where I seem to be making an abhorrent argument, in an effort to mitigate my influence. Maybe a version where I make an argument that the rogue publisher finds more palatable, to use my popularity to boost their own arguments.
This actually happened during the early days of publishing, by the way! It’s part of the reason we got copyright in the first place.
And again, it seems like this would be impossible to get away with now, buuut… I’m not so sure anymore.
—
Personally:
I favor piracy in the sense that I think everyone has a right to witness culture even if they can’t afford the price of admission.
And I favor remixing because the cultural conversation should be an active read-write two-way street, not just passive consumption.
But I also favor some form of licensing, because I think we have a duty to respect the integrity of the work and the voice of the creator.
I think AI training is very different from piracy. I’ve never downloaded a mega pack of songs and said to my friends “Listen to what I made!” I think anyone who compares OpenAI to pirates (favorably) is unwittingly helping the next set of feudal tech lords build a wall around the entirety of human creativity, and they won’t realize their mistake until the real toll booths open up.
I’ll believe it when GN says it.