Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion to use as a weapon against a person you don’t even know?
I will actually read it. I’m probably the only one of the two of us who will.
If it’s convincing I may change my mind. I’m not a radical, like many other people are, and my opinions are subject to change.
They have a conclusion that they’ve come to the conversation with, and anything that challenges it gets downvoted without consideration.
The assumptions you aren’t allowed to challenge, in order: AI is bad; Computer intelligence will never match or compete with human intelligence; computer intelligence isn’t really intelligence at all, it’s this other thing [insert ‘something’ here like statistical inference or whatever].
“AI is bad” is more of a dictum extending from cultural hegemony than anything else. It’s an implicit recognition that in many ways, Silicon Valley culture is an effective looting of the commons, and therefore one should reject all things that extend from that culture. It’s not a logical or rational argument against AI necessarily, but more of an emotional reaction to the culture which developed it. As a self-preservation mechanism this makes some sense, but obviously it’s not slowing down the AI takeover of all things (which really just puts a highlighter on a broader point: Silicon Valley tech companies were already in control of major aspects of our lives).
“Computer intelligence will never match human intelligence” is usually some combination of goalpost moving or redefining intelligence on the fly (which I’ve split out as the third critique, because it warrants addressing). This is an old trope that goes back almost to the beginning of computer intelligence (it’s not clear to me our definitions of machine intelligence are very relevant). It quite literally started with multiplying large numbers. Then, for literally decades, things like chess, strategy, and forward-looking reasoning were held up as something only “intelligent systems” could do. Then, post Deep Blue, that got relegated to very clever programmers, and we changed intelligence to be something about learning. Then systems like AlphaGo came about, which basically learned the game by playing it, and we relegated those systems to “domain-specific” intelligences. So in this critique you are expected to accept and confirm the moving of goalposts around machine intelligence.
Finally, there’s the “what computers do isn’t intelligence, it’s some_other_thing.exe™” critique. In the history of machine intelligence, that some other thing has been counting very quickly, having large-ish memory banks, statistical inference, memorization, etc. The biggest issue with this critique is that when you scratch and sniff it, you very quickly catch an aroma of Chomsky’s leather chair (more so if we’re talking about LLMs), and maybe even a censer from a Catholic church. The idea that humans are fundamentally different and in some way special is, frankly, fundamental to most Western ideologies in a way we don’t really discuss in the context of this conversation. But the concept of spirit, and the notion that there is something “entirely unique” about humans versus “all of the rest of everything”, is at the root of the Abrahamic traditions and therefore also at the root of a significant portion of global culture. In many places in the world, it’s still heretical to imply that human beings are no more special or unique than the oak or the capybara or the flatworm or the dinoflagellate. This assumption, I think, is on great display in Chomsky’s academic work on the concept of the LAD, or language acquisition device.
Chomsky gets a huge amount of credit for shaking up linguistics, but what we don’t often talk about is how, effectively, his entire academic career got consigned to the dustbin, or at least now sits in that pile of papers where we’re not sure whether to “save or throw away”. Specifically, much of Chomsky’s work was predicated on the identification of something in humans that would be called a language acquisition device, or LAD: this LAD would be found as a region of the human brain and would explain how humans acquire language. Just notice the overall shape of this argument. It’s as old as the Egyptians trying to find the “seat of the soul”, and it carries through the Abrahamic traditions as well. What LLMs did that basically shattered this notion was show at least one case where no special device was necessary to acquire language; where in fact no human components at all were necessary other than a large corpus of training data; that maybe language, and the very idea of language acquisition, is not special or unique to humans. LLMs don’t specifically address the issue of a LAD; they go a step further in not needing to. Chomsky spent the last of his verbal days effectively defending this wrong notion (which had already been addressed in the neuroscience and linguistics literature), specifically against LLMs, which is an interesting and bitter irony for a linguist.
To make the point more directly, we lack a good, coherent, testable definition of human intelligence, which makes any comparison to machine intelligence somewhat arbitrary and contrived, often to support the interlocutor’s assumptions. Machine intelligence may get dismissed as statistical inference, sure, but then why can you remember things sometimes but not others? Why do you perform better when you are well rested and well fed versus tired and hungry, if not for there being an underlying distribution of neurons, some of which are ready to go and some of which are a bit spent and maybe need a nap?
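To make that “some are ready to go, some need a nap” picture concrete, here is a toy simulation (purely my own illustration, with made-up numbers): recall succeeds only when enough randomly-available units happen to be ready, so performance varies from attempt to attempt and drops when fewer units are rested.

```python
import random

# Toy illustration (my own, not neuroscience): recall succeeds only when
# enough randomly-available "units" happen to be ready at that moment.
def recall_succeeds(readiness, n_units=20, threshold=0.6):
    ready = sum(random.random() < readiness for _ in range(n_units))
    return ready / n_units > threshold

rested, tired = 0.70, 0.55
print(sum(recall_succeeds(rested) for _ in range(100)), "recalls out of 100 attempts when rested")
print(sum(recall_succeeds(tired) for _ in range(100)), "recalls out of 100 attempts when tired")
```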
And so I would advocate caution about investing heavily in a conversation where these assumptions are being made. It’s probably not going to be a satisfying conversation, because almost assuredly the person making the assumptions hasn’t dived very deeply into these matters. And look at the downvote ratio. It’s rampant on Lemmy; Lemmy is very much a victim of its pack-mentality, dog-piling nature.
Funny to me how defensive you got so quickly, accusing them of not reading the linked paper before even reading it yourself.
The reason OP was so rude is that your very premise of “what is the brain doing if not statistical text prediction” is completely wrong, and you don’t even consider that it could be. You cite a TV show as a source for how it might be. Your concept of what artificial intelligence is comes from media and not science, and is not founded in reality.
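For concreteness, this is the sort of thing “statistical text prediction” literally refers to: a toy bigram predictor (just to pin the term down; it is not a claim about what any real model, let alone a brain, does).

```python
from collections import Counter, defaultdict

corpus = "the dog sat on the mat the cat sat on the dog".split()

# count which word tends to follow which
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    following = counts[word]
    return following.most_common(1)[0][0] if following else None

print(predict_next("the"))  # the statistically most likely continuation in this tiny corpus
```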
The brain uses words to describe thoughts; the words are not actually the thoughts themselves.

https://advances.massgeneral.org/neuro/journal.aspx?id=1096
Think about small children who haven’t learned language yet: do their brains still do “statistical text prediction” despite not having words to predict?
What about dogs and cats and other “less intelligent” creatures? They don’t use any words, but we can still teach them to understand ideas. You don’t need to utter a single word, not even a sound, to train a dog to sit. Are they doing “statistical text prediction”?

So AGI is statistical emotion prediction that we then assign logic to.
Read the other replies I gave on this same subject. I don’t want to repeat myself.
But words DO define thoughts, and I gave several examples, some of them with kids. Precisely in kids you can see how language precedes actual thoughts. I will repeat myself a little here, but you can clearly see how kids repeat a lot of phrases that they just don’t understand, simply because their beautifully plastic brains heard the same phrase in the same context.
Dogs and cats are not proven to be conscious the way a human being is, precisely due to the lack of an articulate language. Or maybe not just language but articulated thoughts. I think there may be a trend to humanize animals, mostly to give them more rights (even though I think a dog doesn’t need to have an intelligent consciousness for it to be bad to hit a dog), but I’m highly doubtful that dogs could develop a chain of thoughts that affects itself without external inputs, and that seems a pretty important part of the consciousness experience.
The article you link is highly irrelevant (did you read it? Because I am also accusing you of not reading it, of it just being the result of a quick google to try to prove your point with an appeal to authority). The fact that spoken words are created by the brain (duh! Obviously. I don’t even know why how the brain creates an articulated spoken word is even relevant here) does not imply that the brain is not also shaped by the words it learns.
To give an easier-to-understand example: for a classical printing press to print books, the words of those books needed to be loaded into the press beforehand, and the press will only ever be able to print the letters that have been loaded into it.
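If it helps, that constraint can be written out as a few lines of purely illustrative code: the “press” can only ever emit characters that were loaded into it beforehand.

```python
def make_press(loaded_letters):
    loaded = set(loaded_letters)
    def print_text(text):
        # anything that was never loaded into the press simply cannot appear in the output
        return "".join(ch for ch in text if ch in loaded)
    return print_text

press = make_press("etaoinshrdlu ")
print(press("the quick brown fox"))  # letters outside the loaded set are silently dropped
```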
The user I replied to had not only read the article, they also kindly summarized it for me. I will still read it. But its arguments on the impossibility of current LLM architectures creating consciousness are actually pretty good, and they have actually put me on the way to being convinced of that, at least within the limitations the article describes.
Your analogy to mechanical systems is exactly where the comparison with the human brain breaks down. Our brains are not like that; we don’t only have blocks of text loaded into us. Sure, we only learn what we get exposed to, but that doesn’t mean we can’t think of things we haven’t learned about.
The article I linked talks about the separation between the formation of thoughts and those thoughts then being translated into words for linguistic expression.
The fact that you “don’t even know why how the brain creates an articulated spoken word is even relevant here” speaks volumes about how much you understand the human brain, particularly in the context of artificial intelligence actually understanding the words it generates, with thoughts implied behind the words, rather than just guessing which word comes next based on other words whose meanings are irrelevant.
I can listen to a song long enough to learn the words, that doesn’t mean I know what the song is about.
but that doesn’t mean we can’t think of things we haven’t learned about.
Can you think of a colour you have never seen? Could you imagine the colour green if you had never seen it?
The creative process is more modification than creation: taking some inputs, mixing them with other inputs, and producing an output that has parts of all of the inputs. Does that sound familiar? But without those inputs it seems impossible to create an output.
And thus the importance of language in an actual intelligent consciousness. Without language the brain could only do direct modifications of natural, external inputs. But with language the brain can take an external input, transform it into a “language output”, immediately take that “language output” back in as an input, process it, and go on. I think that’s the core thing that makes humans different from any other species: this middle layer we can use to dialogue with ourselves and push our minds further. Not every human may have a constant inner monologue, but every human is capable of talking to themselves, and will probably do so when making a decision. Without language (which could take many forms, not just spoken language, though the more complex it is the better it seems to work) I don’t know how this self-influence process could take place.
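A rough sketch of that loop, with a made-up generate() standing in for whatever produces the next “language output” (this only illustrates the output-becomes-input structure, it is not a model of a brain):

```python
def generate(context):
    # hypothetical stand-in: in an LLM this would be next-token generation over the context
    return "a thought about: " + context[-1]

def inner_monologue(external_input, steps=5):
    context = [external_input]
    for _ in range(steps):
        output = generate(context)  # produce a "language output"...
        context.append(output)      # ...and immediately feed it back in as the next input
    return context

for line in inner_monologue("I see a red light"):
    print(line)
```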
It’s a basic argument of generative complexity. I found the article some years ago while trying to find an earlier one (I don’t think by the same author) that argued along the same complexity lines, essentially saying that if we worked the way AI folks think we do, we’d need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching in generation (we don’t have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the “adjust the learning rate” level, but via mechanisms that change the resulting coding, thereby creating different such contexts; at least that’s where I see the connection between the two. In essence: to get to AGI we need AIs which can develop their own topology.
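The way I read the “develop their own topology” point, in very loose toy terms (my own sketch, not anything from the article): a fixed system can only route between the contexts it already has, while a system that can create new contexts is effectively changing its own structure.

```python
class ModularAgent:
    def __init__(self):
        self.modules = {}                  # context name -> skill function

    def learn(self, context, skill):
        self.modules[context] = skill      # creating a new context changes the agent's own structure

    def act(self, context, situation):
        skill = self.modules.get(context)  # only the active context's skill is consulted here
        return skill(situation) if skill else None

agent = ModularAgent()
agent.learn("cooking", lambda dish: f"chop the ingredients for {dish}")
agent.learn("sportsball", lambda play: f"run {play}")
print(agent.act("sportsball", "play 7"))   # the cooking skill is not available while playing sportsball
```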
As to “rudeness”: make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to go on a denial spree… because if they a) are actually into the topic, not just bystanders, and b) did not have some psychological need to believe (including “my retirement savings are in AI stock”), they c) would have come across the general argument themselves during their technological research. Or come up with it themselves; I’ve also seen examples of that: if you have a good intuition about complexity (and many programmers do), it’s not an unlikely shower thought to have. Not as fleshed out as in the article, of course.
That seems a very reasonable argument for the impossibility of achieving AGI with current models…
The first concept I was already kind of thinking about. Current LLMs are incredibly inefficient, and there seems to be some theoretical barrier in efficiency that no model has been able to surpass, which leads to the same answer: with the current architecture they would probably need trillions of parameters just to stop hallucinating. And that’s without giving them the ability to do more things than just answering questions: a supposed AGI, even if it only worked with words, would need to handle more “types of conversation” than just being the answerer in a question-and-answer dialog.
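Just to put rough numbers on that scale (assuming a dense model stored at 2 bytes per parameter, and ignoring activations, optimizer state, KV caches, and so on):

```python
params = 1e12            # a "trillion-parameter" model
bytes_per_param = 2      # fp16/bf16 weights
weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e12:.1f} TB just to store the weights")  # ~2.0 TB
```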
But I had not thought of the need to repurpose the same area of the brain (biological or artificial) for doing different tasks on the fly, if I have understood correctly. And it seems pretty clear that current models are unable to do that.
Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.
Getting a little poetical. I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.
Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.
Does a dog have the Buddha nature?
…meaning to say: just because you happen to have the habit of identifying your consciousness with language (that’s, TBH, where the “stuck in your head” thing came from) doesn’t mean that language is necessary for, or even a component of, consciousness, rather than merely an object of consciousness. And neither is consciousness necessary to do many things; e.g., I’m perfectly able to stop at a pedestrian light while lost in thought.
I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.
What Descartes actually was getting at is “I can’t doubt that I doubt, therefore, at least my doubt exists”. He had a bit of an existential crisis. Unsolicited Advice has a video about it.
It may be because of the habit. But when I think of how to define consciousness and distinguish it from instinct or reactiveness (like stopping at a red light), I think that what makes a consciousness a consciousness must be that it is able to modify itself without external influence.
A dog may be able to react and learn how to react to the external world. But can it modify itself the way a human brain can?
A human being can sit alone in a room and start processing information by itself in a loop, completely transforming that flow of information into something different, even changing the brain in the process.
For this to happen I think some form of language, some form of “speaking to yourself”, is needed: some way for the brain to generate an output that can immediately be taken back in as input.
At this point, of course, this is far more philosophical than technical. And maybe even a matter of the semantics of “what is consciousness”.
A dog may be able to react and learn how to react to the external world. But can it modify itself the way a human brain can?
As per current psychology’s view, yes, even if to a lesser extent. There are problems with how we define consciousness, and right now with LLMs most of the arguments usually relate to the Chinese room and philosophical zombie thought experiments, imo.
How to tell me you’re stuck in your head terminally online without telling me you’re stuck in your head terminally online.
But have something more to read.
Why are you being so rude?