on the other hand, when Putin’s done killing off most of Russia’s present and future workforce in a senseless war and completely tanking his own economy, that might be the equivalent of like $3
Socials and the Internet in general would be a much better place if people stopped believing and blindly resharing everything they read, AI-generated or not.
I’m not sure we, as a society, are ready to trust ML models to do things that might affect lives. This is true for self-driving cars and I expect it to be even more true for medicine. In particular, we can’t accept ML failures, even when they get to a point where they are statistically less likely than human errors.
I don’t know if this is currently true or not, so please don’t shoot me for this specific example, but IF we were to have reliable stats showing that, everything else being equal, self-driving cars cause fewer accidents than humans, a machine error would still feel weird and alien and harder for us to justify than a human one.
“He was drinking too much because his partner left him”, “she was suffering from a health condition and had an episode while driving”… we have the illusion that we understand humans and (to an extent) that this understanding helps us predict who we can trust not to drive us to our death or not to misdiagnose some STI and have our genitals wither. But machines? Even if they were 20% more reliable than humans, how would we know which ones we can trust?
Most things to do with Green Energy. Don’t get me wrong, I think solar panels or wind turbines are great. I just think that most of the reported figures are technically correct but chosen to give a misleadingly positive impression of the gains.
Relevant smbc: https://www.smbc-comics.com/comic/capacity
Just wanted to point out that the Pinterest examples are conflating two distinct issues: low-quality results polluting our searches (in that they are visibly AI-generated) and images that are not “true” but convincing enough to pass as real.
The first one (search results quality) should theoretically be Google’s main job, except that they’ve never been great at it with images. Better quality results should get closer to the top as the algorithm and some manual editing do their job; crappy images (including bad AI ones) should move towards the bottom.
The latter issue (the “reality” of the result) is the one I find more concerning. As AI-generated results get better and harder to tell from reality, how would we know that the search results for anything aren’t just a convincing spoof coughed up by an AI? But I’m not sure this is a search-engine or even an Internet-specific issue. The internet is clearly more efficient at spreading information quickly, but any video seen on TV or image quoted in a scientific article has to be viewed much more skeptically now as well.
I think I’m with him on this one. Replacing all the people on socials with AI agents would give us back so much free time! And we could even restart socializing for real.
Go on Zuckerberg, give us a Facebook made only of AI agents creating fake pictures of nonexistent gatherings and posting them, so other AIs can recommend them and millions of other AIs can comment on them!
You are an unsung hero, Zuckerberg, but one day they’ll understand and thank you.