Don’t be fooled into thinking computer neural networks reflect how the brain is structured. Throughout history we’ve always compared the brain to the most advanced technology of the time: from clocks, to computers with short- and long-term memory, and now to neural networks.
That is a good point, though the architecture of computer neural networks is inspired by how we think the brain works, and if I understand correctly there is some definite similarity in the architecture.
I would guess that every statement made is kind of true. It is a clock, a computer, and an LLM…
I would even go as far as to say an LLM is the closest thing to a functioning brain we can produce, from a functional perspective. And even these artificial brains are too complex to understand in detail.
I reckon we can get a lot closer than an LLM in time. For one thing, the mind has a particular understanding of interim steps, whereas, as I understand it, an LLM has no real concept of meaning between the inputs and the output. Some of this interim work is, I think, an important part of how we assess the truthfulness of generated ideas before we put them into words.
Lots of difference though, still!