• 0 Posts
  • 15 Comments
Joined 10 months ago
Cake day: November 22nd, 2023

  • Yes. That’s the remnants of a massive hurricane that just pushed through Florida. Hurricanes sometimes bring salt water with them in the form of rain many miles away from the coast. When I was very young, a massive hurricane came through here, bad enough that we were evacuated, and when we came back, the glass door at my dad’s office was covered in so much salt that it looked like frosted glass. And that office was miles away from the beaches.

    This is basically the only time those idiots with the “Salt Life” stickers hundreds of miles from the coast will see salt water.


  • I wish I had the source on hand, but you’ll just have to trust my word - after all, 47% of the time, it’s right 100% of the time!

    Joking aside, I do wish I had the link to the study as it was cited in an article from earlier this year about AI making stuff up even when it cited sources (literally lying about what was in the sources it claimed it got the info from) and how the companies behind these AI collectively shrugged their shoulders and said “there’s nothing we can do about it” when asked what they intend to do about these “hallucinations,” as they call them.







  • It’s not about what humans “like,” it’s about the human body’s internal operating temperature and using that as a reference point, the same way that Celsius is about the states of matter of water. Fahrenheit is useful in medicine for that reason, while Celsius is useful anytime a comparison to water is helpful, and beyond that, it’s really just whatever you grew up with. Using a system based on what water “likes” is equally useless unless you grew up using it as your reference point for temperature in your daily life. Neither 75 Fahrenheit nor 23.8889 Celsius tells me whether or not I’m going to need a jacket today unless I’ve already experienced said temperature and use that scale in my daily life.
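
    For reference, a quick sketch of the arithmetic behind those two numbers (nothing more than the standard linear conversion between the scales):

    ```python
    # Fahrenheit <-> Celsius conversion, matching the 75 °F ≈ 23.8889 °C figure above.
    def f_to_c(f: float) -> float:
        """Convert degrees Fahrenheit to Celsius."""
        return (f - 32) * 5 / 9

    def c_to_f(c: float) -> float:
        """Convert degrees Celsius to Fahrenheit."""
        return c * 9 / 5 + 32

    print(round(f_to_c(75), 4))  # 23.8889
    print(c_to_f(37))            # 98.6, roughly normal human body temperature
    ```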


  • Another Millennial here, so take that how you will, but I agree. I think that Gen Z is very tech literate, but only in specific areas that may not translate to other areas of competency that are what we think of when we say “tech savvy” - especially when you start talking about job skills.

    I think Boomers especially see anybody who can work a smartphone as some sort of computer wizard, while the truth is that Gen Z grew up with it and were immersed in the tech, so of course they’re good with it. What they didn’t grow up with was having to type on a physical keyboard and monkey around with the finer points of how a computer works just to get it to do the thing, so of course they’re not as skilled at it.



  • Because we’re talking pattern recognition levels of learning. At best, they’re the equivalent of parrots mimicking human speech. They take inputs and output data based on the statistical averages from their training sets - collaging pieces of their training into what they think is the right answer. And I use the word think here loosely, as this is the exact same process that the Gaussian blur tool in Photoshop uses.

    This matters in the context of the fact that these companies are trying to profit off of the output of these programs. If somebody with an eidetic memory is trying to sell pieces of works that they’ve consumed as their own - or even somebody copy-pasting bits from CliffsNotes - then they should get in trouble; the same as these companies.

    Given A and B, we can understand C. But an LLM will only be able to give you AB, A(b), and B(a). And they’ve even been just spitting out A and B wholesale, proving that they retain their training data and will regurgitate the entirety of copyrighted material.



  • The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

    And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess what model of image generator was used based on the same repeated mistakes that they make every time. Take a look at any generated image, and you won’t be able to identify where a light source is because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting, they just know that statistically lighter pixels are followed by darker pixels of the same hue and that some places have collections of lighter pixels.

    I recently heard about an AI that scientists had trained to identify pictures of wolves that was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to determine whether a picture was of a wolf.


  • I think this misses 2 possibilities. The first one being the unlikely scenario where a species’ space travel program outpaces the ecological collapse of their planet, necessitating a jump into an interplanetary civilization, and the second being the rarity of certain materials required for a technological civilization to continue to exist. The rare earth metals are so named because of their rarity on the planet, with most deposits being the result of meteorite impacts, and even things like iron only exist in finite quantities. There’s been talk for years now of capturing asteroids in orbit around the planet for mining purposes, and of atmospheric “scooping” to harvest gases such as hydrogen from the gravity wells of other planets.

    Unless a civilization achieves 100% efficiency in a closed cycle of material use, they will need to look to the stars by necessity eventually.


  • When it comes to AI art, the Photoshop/invention-of-the-camera argument doesn’t really compare, because there are really two or three things people are actually upset about, and it’s not the tool itself. It’s the way the data is sourced, the people who are using it/what they’re using it for, and the lack of meaning behind the art.

    As somebody said elsewhere in here, sampling in music draws on content made explicitly for use as samples, or on material used under license. AI art generators do neither. They fill their data sets with art used without permission and no licensing, and given the right prompting, you can get them to spit out that data verbatim.

    This compounds into the next issue, the people using it, and more specifically, how those people are using it. If it was being used as a tool to help make the creation process more efficient or easier, that would be one thing. But it’s largely being used by people to replace the artist and people who think that being able to prompt an image and use it unedited makes them just as good an artist as anybody working by hand, stylus, etc. They’re “idea” guys, who care nothing for the process and only the output (and how much that output is gonna cost). But anybody can be an “idea” guy, it’s the work and knowledge that makes the difference between having an idea for a game and releasing a game on Steam. To the creative, creating art (regardless of the kind - music, painting, stories, whatever) is as much about the work as it is the final piece. It’s how they process life, the same as dreaming at night. AI bros are the middle managers of the art world - taking credit for the work of others while thinking that their input is the most important part.

    And for the last point, as Adam Savage said on why he doesn’t like AI art (besides the late-stage capitalism bubble of it putting people out of work), “They lack, I think they lack a point of view. I think that’s my issue with all the AI generated art that I can see is…the only reason I’m interested in looking at something that got made is because that thing that got made was made with a point of view. The thing itself is not as interesting to me as the mind and heart behind the thing and I have yet to see in AI…I have yet to smell what smells like a point of view.” He later goes on to talk about how at some point a student film will come out that does something really cool with AI (and then Hollywood will copy it into the ground until it’s stale and boring). But we are not at that point yet. AI art is just Content. In the same way that corporate music is Content. Shallow and vapid and meaningless. Like having a machine that spits out elevator music. It may be very well done elevator music on a technical level, but it’s still just elevator music. You can take that elevator music and do something cool with it (like Vaporwave), but on its own, it exists merely for the sake of existing. It doesn’t tell a story or make a statement. It doesn’t have any context.

    To quote Bennett Foddy in one of the most rage inducing games of the past decade, “For years now, people have been predicting that games would soon be made out of prefabricated objects, bought in a store and assembled into a world. And for the most part that hasn’t happened, because the objects in the store are trash. I don’t mean that they look bad or that they’re badly made, although a lot of them are - I mean that they’re trash in the way that food becomes trash as soon as you put it in a sink. Things are made to be consumed in a certain context, and once the moment is gone, they transform into garbage. In the context of technology, those moments pass by in seconds. Over time, we’ve poured more and more refuse into this vast digital landfill that we call the internet. It now vastly outweighs the things that are fresh, untainted and unused. When everything around us is cultural trash, trash becomes the new medium, the lingua franca of the digital age. You could build culture out of trash, but only trash culture. B-games, B-movies, B-music, B-philosophy.”