This is yet another one of those quick "bury that post!" posts. Not that I find anything wrong with what I most recently wrote, but it is rather breathless, and I need to put my thinking back up on its shelf for my private inspiration. Perhaps that's where it belongs. I think lots of people looking for direction have these moments of private inspiration which provide that direction for them. Mine just happened to be in words.
So, that's my theme for today. After which I'll go back to writing about male vibrators and other silly things. My theme is writing, language, and artificial intelligence. My thinking about it relates most to Gödel, Escher, Bach: An Eternal Golden Braid, but only in the sense that the reader is forced to leave the whole big question wide open, where it remains today, riding on top of the fairly complete incompleteness theorem. I know the book is old school, but I can't find anything that changes the basics.
In fact most debates about AI still include this basic difference - it's almost a predilection - between whether in principle this will always be an open question, or if things can lead to closure. I am pretty sure we're not going to see until we see, so it will remain for me a fairly uninteresting matter of the limits to abstraction. And I'm just not sure how the limits to abstraction can represent the limits to reality.
But in the particular, I am fairly certain that humans, as the only example ready to hand, never did think until language developed. Specifically, I mean that what we call intelligence is no property of mind alone, but is rather a property of minds together, in communication. I suppose therefore that intelligence would have to be called an epiphenomenon of the brain, rather than something which inheres in it.
Now I understand fully that there are lots of people much smarter than me who study this stuff every day as their professional, vocational, and passion-filled occupation. They study all these things to the point of tedium, and I respect their studies, I really do. They have what I don't, a collegial (or not) set of fellow workers, all of whom share a common language. They share a common vocabulary and grammar for usage, and they have developed that language collectively so that they may push its envelope of sense.
That gives them the ability, far superior to mine, to throw terms around like "epiphenomenon" and be pretty sure they know what they are talking about. If I have any advantage at all, it is because I am not embedded in any particular school of thought, and so I might see things which can't be seen from the inside. A kind of forest/trees issue. The naked emperor story.
But I do understand that I too must make clear sense, though as I've said, I really pushed that edge myself in my last post.
I am among those people who are fairly certain that we will never find any fairy dust in the brain. No matter how you think intelligence works, there isn't any mysterious substance, structure, or pineal-type organ, yet to be revealed, which gives the brain its spark. Such thinking harks back to talk of "soul," toward which I hope my training in Chinese traditions has hardened me a bit. I prefer to use less mysterious terms like heart or center.
So, therefore, I am also on the side of those who figure sure, why not, we can create an artificial brain. I don't think it will have very much to do with computers as we now use that term, but there is incredibly interesting research being done on neural network style computing. Quantum processing and massively parallel processing will come online and make our current megahertz seem paltry. And these will also begin to break down the limitations of Boolean and digital logic.
Computers, therefore, or machines, or artificial brains, may even be able to deal with metaphor, and therefore start to feel. But not before they develop language, which means that they too must distinguish themselves each from the other, and have something to say to one another.
Right now, we might think of such arrangements as a matter of networks and signals passing back and forth. But no thinking, no intelligence, will come from such communication, which is the thing I want to stake a claim on here and now.
When writing first started, I think it might be fairly clear that it was a matter of keeping score. You know, those four hash marks with a line through them which count to digital "five." Four score and so forth equals, um, 20? No, a score is 20, so four score is 80! Lines on a stick, soon followed, in the case of China, by cracks on shells and attempts to bring some of the constancy of heaven's patterns down to earth where we could make sense of them. Looking for signs.
But let's dwell on the counting for a minute. Scores or counts can only ever refer to undifferentiated and therefore, for memory purposes, identical items. If you want to know how many sheep you bought or sold, the score can pretty much only tell you the count, and nothing more than that. If you need to know the colors, you'd have to split the count into color categories, or add something to the hash marks.
And if you want to remember individual sheep, you'd have to give them a name.
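That progression from bare tally to categories to names can be sketched in code. This is just a toy illustration of the point above; the sheep, colors, and names are all hypothetical:

```python
from collections import Counter

# Stage 1: a bare tally. One hash mark per sheep; nothing else is recorded,
# so all the items are, for memory purposes, identical.
tally = 0
for _ in range(5):
    tally += 1

# Stage 2: split the count into categories. One attribute survives,
# but individuals are still interchangeable within each category.
by_color = Counter(["white", "white", "black", "white", "black"])

# Stage 3: give them names. Now each record is distinguishable,
# and you can remember an individual sheep, not just a count.
flock = {"Bella": "white", "Shaun": "black", "Dolly": "white"}

print(tally)              # 5 — the score tells you how many, nothing more
print(by_color["white"])  # 3 — the category count
print(flock["Dolly"])     # white — an individual, recalled by name
```

The tally throws away everything but quantity; the name is what lets memory attach to a particular thing, which is the whole point of the paragraphs above.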
I've said before that the only kind of memory which endures is ways of knowing. Once you identify a shape, for instance, it's hard to unlearn what that shape means, if it means something to those around you. It's very hard to forget a face if it has a familiar name to go with it. But if you're going to partake in community, then you have to learn to recognize and to name those things which you will use in common. I think in some sense that's what language means.
I have three cats, and I don't think it would be too easy for me to forget what a cat is, or to fail to recognize one at a thousand paces. Maybe in the dark I might mistake a raccoon for one. But I confess that I don't always recognize which is which. They're all from the same litter, after all.
Oh sure, one seems to have a different father, and so Stella's easy to distinguish from the rest. But at a distance, there seem to be many, many Stellas out there. And her two sisters look pretty much alike. But if I pause, and sorry to say I actually have to dredge my memory, I can remember which one has the orange ear tips, and call up their proper names.
I know that makes me a lousy cat lover, and that many of you would distinguish one among a thousand without any thought at all. But I doubt you're using more than just their faces, which, no matter how much you love cats, are nowhere near so expressive as the human face is.
Yeah, sure, I've seen all those cute pictures of cats dressed up and seeming to have expression. I've also read Oliver Sacks on how humans can fail at facial recognition, or even mistake a wife for a hat, and still control their language. But exceptions often prove the rule. If nothing else, the biological human "machine" is immensely redundant and malleable. There are even perfectly conscious humans who were born with only one half of a functioning brain.
But I do think that before we can create our own machines that think, they will have to be distinguished enough, one from the other, so that they actually do have something to say and aren't just part of one big artificial brain. You might wonder if I read too much science fiction, to which I'd have to answer, no, I don't think I read enough. Even when I seem to write too fast. There's only so much time in the day after all, and I don't read as fast as a lot of people seem to, and frankly, the science fiction isn't always the most interesting stuff.
One big brain, no matter how big, really can't think. And machines which are isomorphic - think armies of droids in some Hollywood thrilla - identical down to the level of their design specs, by definition won't have anything to say to one another.
Well, until they build up differential experiences. But here already a time lag is essential. That's what differential experience means. And so there has to be a going apart and a coming back together, rather than the near instantaneous communication we expect from our computer networks.
So, they have to be autonomous, these thinking machines, and they have to be distinguishable each from the other when they return for communication. Perhaps they can wear the scars from their experiences.
The trouble with wearing the scars from their experiences is that they only get them after separation, and so there can be no recognition upon return because, just like Dad's memory now, it's all new!
So, the separation has to move in stages, building difference, building recognition, until you get something like a face emerging on each machine. There has to be something familiar before re-cognition can occur.
Sure, you could serialize all the machines with numbers. That would be a perfectly good way to distinguish each machine. Except numbers, remember, don't do anything but count sameness. Like with my cats, you'd have to look up in your memory banks which is which, and again like with my cats, it raises the question of how much you really care.
Now caring, of course, is not often considered to be a quality of machines. But what else would direct attention inward toward memory, as it were, unless some wired-in compulsion to do so? Which brings us right back to the beginning all over again, with everyone - every machine - now pretty much acting like every other.
You could catalog which machine cares for which other in some kind of giant sorting game, but it would quickly seem pretty far from what we mean by thinking. More like a flock of birds or a school of fish.
I think brain science has already determined pretty convincingly that our inward brain often makes decisions before our conscious self is aware that we "have made" them. We also know that we process much more information than we could possibly be consciously aware of. Emotions seem to play a role here too, sorting among all the inward random brain impressions to bring to the fore those which interest us. Those which might prove to be of some use, pretty much in the same way as those named shapes which we use together with our community.
And the community in the first place only works together because of some sort of caring. Some facial recognition, maybe, tying family members together more easily than a branding system. Sure, plenty of exceptions prove this rule, but I think you get the idea. And collections of families are also bound by familiar behaviors and traditions - things held in collective memory.
Eventually, this collective memory gets written down, and civilizations can endure, some few as long as China has. Although to call China China all through those years might be a matter of splitting some interesting hairs, since about the only thing that's remained constant has been the written language itself. And that took a whole lot of draconian diligence, and an almost obsessive concern with how history gets told and poetry crafted.
So, I'll leave it there, then. I'll hope to come back for the sake of amusing trifles now and then. In the meantime, I really must get to work cleaning out my house for the next guy.