Friday, December 31, 2021

Self-referential Notes While Reading 'Life 3.0' by Max Tegmark

Don't read this. It's sloppy endless railing against the stupidity of AI. I post these things as notes to self, even if I almost never have the energy to read them through. Read The Razor's Edge instead. It's free at Project Gutenberg. I'm tired of reworking this and letting it grow hairy. But if I don't post it, I'll forget that it exists and won't be able to mark any progress I might make. No big deal, since nobody's paying attention.

* * *

So Barack Obama called it a good book, which only confirms his neoliberal cred. He too thinks we're at the end of history. I'll say at the outset that when Tegmark makes the following statement toward the beginning of the book:

"But beauty is in the eye of the beholder, not in the laws of physics, so before our Universe awoke, there was no beauty. This makes our cosmic awakening all the more wonderful and worthy of celebrating: it transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty and hope—and the pursuit of goals, meaning and purpose. Had our Universe never awoken, then, as far as I’m concerned, it would have been completely pointless—merely a gigantic waste of space. Should our Universe permanently go back to sleep due to some cosmic calamity or self-inflicted mishap, it will, alas, become meaningless.

. . . when Tegmark writes that, I immediately want to call out his mind as the meaningless gigantic waste of vacant and vacuous boring meaningless space. 

I don't really wish to rag on Obama. The trouble for me with Obama is that I projected something impossible onto him. I suppose that a lot of others did as well. Our enthusiasm exceeded what the presidency could accomplish, and in that way he created the space for Trump to inhabit. He may even have sealed the fate of both of them with his brilliant take-down payback at the White House Correspondents' Dinner. But how could you blame him after all that birther bullshit?

Spoiler alert: Before reading this over, I have to confess that I did just watch The Matrix Resurrections, which is a better commentary on this book than I can write. I also think that McKenzie Wark, with her update of Marxism she calls "vectoralism," makes a better claim on our future than Max Tegmark does. There will never be the travesty of "artificial life," just as surely as rampant AI-driven vectoralist capitalism will destroy the earth.

Wark's transgender transformation could be seen in the same space as the Wachowskis'; which is, now that I think about it, also the same space as Tegmark's proposed transformation of humanity into data. But the one is innered and acting only on oneself in the now, while the other is outered onto the universe and onto the future.

Tegmark swallows the blue pill whole, and buys the world as it is, while Wark dives beneath her skin and ours - along with our minds - to find reality. The world is already driven by AI. The thing to do is to hack it. 

To go for the essence of what's collectively real. Donna Haraway comes a lot closer to what is real about technology, and her Chthulucene is a lot less apocalyptic than the one-dimensional anthropocene, and a lot more hopeful. We need to focus on getting through and staying with the trouble, not on the "win" of transcending it.

Given my perusal of dystopian fiction, of the likes of The Road, or Harrow, or Mad Max, or Straw Dogs, or a bejillion others, all we can imagine after the collapse of capitalism is ruthlessness in spades. A bleak denuded landscape crawling with psychopaths. This is the capitalist mirror. The dark in our experience of our triumphs.

Why can't we even imagine self-organizing humans, but we can imagine self-creating robots? Jameson: "it's easier to imagine the end of the world than the end of capitalism." Capitalism is the end of the world, stupid!

Tegmark is the quintessential one-dimensional man. Not only hasn't he noticed that the subject/object distinction has already blurred, if not disappeared altogether; he can't even imagine how very different the world would have to be if his vision for AI were ever fulfilled, not to mention how different the course along the way would have to be. He seems only to imagine making better what we already have. As though what we could be is finished.

Next true confession: I just finished reading The Dawn of Everything, which I found to be a convincing debunking of the recent crop of "Big Histories" (most of which I've also read). It turns out they project a set of very parochial narratives onto our sketchy understanding of our past, and especially our prehistoric past, before recorded history, which is subject only to archaeological research.

The Davids' book stays well within the scientific method in its challenge to received wisdom about the stages of human development. As generally narrated, those stages are often thought to lead progressively and then inevitably to our current political and economic arrangements. For me, the great bonus of the book is that you can turn its lens around toward the future and see that our current angst about the anthropocene is part of the same progressive (and Western) narrative: history ends with apocalypse.

In both the Christian and the Geek Rapture sense (that's what Tegmark is embedded in), there is a good apocalypse, where God comes down as man, or Man rises up as Homo Deus. I remember being excited by both Harari books. It's exciting to get a well-read narrative from the altitude of outer space. Even if it is from a space cadet. And I do love good science fiction, to which genre Tegmark should limit himself.

My hope is founded on the promise that we can get beyond private property; and beyond the fetishization of the selfie-self, which is private property's inevitable corollary. 

We can self-organize in ways that maximize actual freedom and absence of tyranny. We have done it many times in our past, the Davids remind us. Neoliberal capitalism has taken on the form of perpetual warfare now; a form of police-state command and control governance which has occurred before and has been overrun many times as well. We are stuck now precisely because we are so impressed with ourselves, and especially with the promises of our technologies.

We remain in the thrall of our technologies because of a kind of artificial fear. We fear that the world will end, just as we fear that we ourselves will end. And we think that we're just about clever enough to fix it all.

There is a homology (?) between neoliberal capitalism and projecting humanity onto the entire cosmos. If we don't cut it out, the world will indeed end.

That doesn't mean, and isn't the same thing as, the end of technology, the end of the self, the end of a free market economy. It only means backing down from totalism of any sort. 

Totalism (I want to hang back from totalitarianism name-calling, by emphasizing only the totalizing nature of Tegmark and his ilk's thinking about what digital can be, and his strange fetish about private property, as though we couldn't have privacy without it, and as if it doesn't constrain autonomy as much as it might give the illusion of it) . . . totalism kills autonomy of thought, and fetishizing the self limits autonomy to the extent that you cut yourself off from all those parts of you that are realized in and by others.

A car can be a prison, limiting the driver to highly controlled and constrained highways and rules, as much as it can represent freedom. Everyone wanting to be a superstar can kill us in the same way. It's all projection; it's all the same deluded fantasy.

Totalizing governance of the sort we practice is only necessary for survival in the face of grand challenges or emergencies, or for the Big Hunt. But now neoliberal technocratic totalism is what's causing the end to loom so near.

We're not on a ship at sea. The only emergency is the one we created. All we have to do is to dismantle the structures which oppress most of us. If we don't, the world we live in will move even further toward making us feel as though even our houses are awash in a storm. Oh wait, it already feels like that.

Tegmark and I agree that there is no need to enslave and degrade some of us for a very few to thrive. But his is a libertarian vision where the only thing sacred is private property. He seems to have no sense of irony.

I do believe that we live in a time of irony, if not paradox, where the seeming solution is the problem. We are beating back the pandemic in ways that fall well short of a police-state, thank the gods. But a little more respect for authority wouldn't hurt. We already know that objective physical science has run its course, but we are having way too much fun pretending that it hasn't. And especially, we know that we can't know everything, while we can't stop having fun projecting the fantasy that we could or might onto our future.

So far there is no asteroid hurtling in our direction, but there certainly is climate change. According to The Dawn of Everything, which is filled with citations, part of what has formed humanity over time has been massive climate change in our past. The dinosaurs' planet was leveled by an asteroid impact, without which we wouldn't be here. In more recent times, it was a glacial ice age, through which the earth and humanity survived. Now it will be sea-level rise and erratic weather events, through which earth and humanity will also survive.

Artificial Intelligence will help with this, but I believe that the fantasy of filling the universe with projections of humanity as we are, in whatever real or artificial form, is what all fantasy is: A denial of reality. Real life does not 'lead up' to intelligence. There is more to humans than just our thinking as that can be modelled by way of logic gates. There is more to life than our science can tell us.

I hope to show you why Life 3.0 is fantasy, and how dangerous such fantasy is when authors lose track of the difference between fantasy and the real. My hope is probably more extravagant even than theirs.

The only question worth answering here is whether AI can gain what Tegmark calls consciousness, which he reduces to subjective experience. It's clear that humans can project subjectivity onto geometric objects and swarms of shapes made to move about on a screen in most any narrative fashion. It's less clear that an AI can experience consciousness.

So further true confession: I'm also reading The Extended Mind, which debunks the brain/computer set of metaphors. If, as I myself know to be true, the technologies which might lead to AI are, on balance, actually extending the human mind, there might be cause for hope. But the counterbalance is that technology has also been acting as an accelerant to the wildfire of unregulated capital.

It may well be that our quest for AI is actually destroying not just human mind, but mind in general. It may be that it is way too early to be making the suppositions made here by tech enthusiasts. I know that my mind and certainly my memories are embedded in the world about me. As the geography and natural order is transformed at an accelerating rate, I am quite literally losing my mind. There are many many memories which I simply can't access without actually being in the familiar space where they happened.

As books on shelves are replaced by machine renderings, I am losing all my referents. I'd say we need to fix search before we move on to AI. If you can't reliably find the same information twice, and know what's not available by way of the web, then search is also destructive of knowledge.

But then my God is Irony. And I won't have convinced anyone of anything before I'm gone. But I won't stop trying. I'm very trying. And pathetically goofy. Merry Christmas! Let's read on . . .

* * *

Tegmark wants to go ahead and impose his vacant subjectivity on the entire universe and make it as dead as his dreams for AI make me feel. This person has lost all wonder, and is himself exhibit A for the reality of AI. He mistakes blockbuster capitalist movers for geniuses?!? Or is he merely pandering for his Institute?

Look, before I post this (still re-reading, apparently, if not quite editing) I have to say that I don't think these people are somehow evil. Certainly Barack Obama isn't, and Max Tegmark seems like a nice and really interesting guy. I just have a very different world-view than either of them do. I would never hug Richard Branson, and especially not to go up under a parasail the way O did, or in some sort of rocket-ship slingshot ride into outer space.

Unfortunately my mind isn't good or powerful enough, and my writing too tortured, for me to find any company in this world-view. Or when I do find company, they don't give me the time of day, mostly because I'm not deep enough into their specialty.

Naturally enough, my world-view pulls the rug out from under nearly everything about how we live now, which makes it ever more unlikely that I'll make any progress. I don't think that thinking which goes against the power structure has ever had a worse possibility of making headway.

One brief way to outline my thinking is that we are already ruled by AI. The infrastructure of our minds, by way of existing communications technology, is organized by the supercomputing power of money. Our collective human desires are skewed in favor of the hollow men who become superstars of one sort or another.

Most of us don't want multiple mansions and yachts. We just want honest work that doesn't belittle us, with enough free time and space and thought to put a smile on our face and a laugh in our belly for enough of our waking time. 

We don't want to be left out of any and all decisions which affect us - as we are now. I mean really, who among us feels that they can make a difference?

The other thing we want is to be secure in our knowledge that the material world isn't everything. We know that there is something to reality that is bigger than our sciences can discover, but it can't be measured or quantified or often even communicated, much of the time. My faith that this knowledge will prevail has never been weaker. Or maybe it's never been stronger.

I suppose my discouragement is because my mind is working about as well as my body. It hurts to do challenging things. Like, for instance, while reading this book is pretty trivial and fast, reading Herbert Marcuse is a freaking chore. But Marcuse is a lot closer to "the truth."

I do not oppose science. My complaint is that within science are already discoveries that lead beyond it. My argument, sadly, is far more subtle than I am. The term "science" is not meant to be limited by detectable physical reality. It's meant to encompass anything that can be known and communicated convincingly to others. 

So yes, I think that God is a canard. Positing the existence of some god or other - even a future AI demigod - is pretty much what limits science as we practice it now.

The other limits are all disciplinary. The division of academic labor. As in, there isn't much you can know as an amateur. And really all of us are amateurs outside our discipline. I have no discipline, so I'm an amateur in everything. So is every one of our politicians. We need to do a better job of selecting them, number one; and number two, there are intelligent, kind, thoughtful people who should have more of a voice. Trouble is that in our society now, you have to be a self-promoter to be heard at all.

And yes of course, there is life elsewhere in our universe. And of course it will never be physically present for us. The laws of physics, especially as invoked in this book, are as complete as they need to be. We just simply aren't paying attention to the life that does exist elsewhere. Likely just because it's so, like, everywhere and all of the time. Even here on earth we've forgotten how to listen to the trees.

* * *

Yes, for sure, what I am saying is that whether silicon based or otherwise based, any artificial intelligence will be, by definition, cut off from cosmic mind. Are we still so stupid that we can't see that? Consciousness is the subjective experience of feeling what the unconscious has assembled. AI has no unconscious. That's why, no matter how complex we make it or imagine it, AI can never be conscious. 

Our feelings are grounded in our bodies and in the existence of myriad similar bodies with whom we emotively connect. Our brains inner our body's perceptions, but our minds are always outside in the objects perceived. A computer of any sort is cut off.

Sure my thinking has echoes of mystical and religious thinking, but I come by it honestly, by way of the scientific method. There is simply no reason to presume or even suggest that the human mind is somehow that special and removed from the rest of the cosmically real.

The "revolutions" which have led to this moment have - quite evidently now if you but open your eyes and look around you - almost nothing to do with intelligence. They have to do with binding us one to the other and taking advantage of what we can do collectively that we could never do alone. Mind has never been and never will be something that can be contained or defined. There is no boundary to it.

It is the collective we which hasn't yet awakened. We are stupider than our primitive progenitors who at least knew that they were contained within, and infinitely smaller than, the godhead. Whatever the fuck the godhead is it's certainly not the apotheosis of humanity.

The issue is not whether artificial minds can be led to take over all of the stuff that we decide about. The issue is whether we shall open our minds before this happens, because when and if AI takes over, we will already have ended. 

Not because some scary machine mind has taken over, but because our human thinking feeling communicating loving artistic and literary mind will already have lost its imagination and become the machine mind that it thinks that it wishes to create.

Or in other words, what a stupid stupid book representing a stupid stupid train of thought. But hey, I'll keep reading. We'll see where this bimbo is going with this.

But before I do, let me remind you, gentle reader, that we cannot know, directly by way of scientific detection, if there is other consciousness in the cosmos. That's because we have yet to disprove the laws of physics as we have construed them up to now.

The onliest way to know is by feeling. Feeling is, by definition, a mutual and simultaneous reality. That's my definition, OK? Physically we can't even be in touch with one another according to the very same laws which limit detection by the canonical speed of light. We can't even exist in the same universe person to person if you follow the logic to its obvious conclusion.

We exclude conceptual reality from our reality, not realizing that without a very real definition for simultaneity which escapes the Laws of Relativity, our subjective reality can't even exist in the first place. There is no consciousness without some other, and if we humans are to be the other which makes machines conscious, we will have had to reduce ourselves to their terms, negating the very possibility of knowing the machines, because they will be without emotion. Their conceptual reality is only mathematical. Beautiful only for rubes like Tegmark. And he himself finds the cosmos void of meaning without us. Ouroboros.

* * *

Already at the opening of chapter 2, I see that this fellow is using good conspiracy-theory rhetoric. And like a 'good' GWB-style liar, he takes himself in along with the attempt on the rest of us. He defines intelligence, taking what he calls "a maximally broad and inclusive view," as the ability to accomplish complex goals.

Along the way he tosses out various competing glosses for intelligence: "capacity for logic, understanding, planning, emotional knowledge, self-awareness, creativity, problem solving and learning."

The glaring ones that he slides past are "emotional knowledge" and "creativity." Does he really think that these are subsumed by his "accomplish complex goals"!?

I, for only one, find it trivial to establish that emotional response is the basis for any and all goals in the first place. But he's going to just slip past that. And does anyone really think that creativity is about goal accomplishment? Creativity is what happens when you either don't know what the goal is, or you don't know how to get there. Machines might look creative just because we couldn't have imagined their solution to a problem we gave them, but I'm saying that by definition they're not creative. Um, that's because, yes, they're not alive.

The brain, for instance, is not some unitary device. Perceptions take time to register and to assemble, and most of the time there isn't time to come up with optimal solutions. To the problem, say, of a tiger about to eat you. Perceptions have to be put in order, which is to say that some narrative structure must be imposed. The narrative structure has to feel right before you can act on it.

Collectively, humans clearly can't decide which goals are important, even in the face of utter catastrophe. Individual humans do stupid things all the time, but mostly stay alive even while exhibiting colossal stupidity, like driving a car with minimal knowledge of its physics and minimal subtlety about the importance of signaling and taking in the biggest possible picture. Not to mention driving while not even quite paying close attention.

The staying alive part is in the design of highways and traffic signals and general experience driving, which leads to pretty good general awareness of what to do when. Not to mention the recent crop of resilient, forgiving, and massive cars.

Sure, an AI driver can become nearly accident-free. It can get from point A to point B, as directed by some decider. But that hardly makes it intelligent. Says me.

The intelligence is built into the overall system, and neither AI nor stupid human could survive apart from that system. Plunk a good American driver down in China and you have an accident on your hands (though the world is so shrunken and reduced, that it wouldn't take too long to learn).

Look, my point is also the main point of the book. But so far, we haven't even defined the boundaries for intelligence, which I claim must extend to the reaches of "the universe," defined here as "The region of space from which light has had time to reach us during the 13.8 billion years since our Big Bang."

Fine, I'll accept that as our universe. What I won't accept is the narrative structure this guy is imposing on the universe, which leaves it dead and void without us. By his own definition, we can know nothing about the universe apart from that which we can connect with within some reasonable facsimile of simultaneity, which I say means we can't really connect with anything other at all, since simultaneity is a really really fuzzy concept, post relativity theory. Not to mention post quantum theory.

And yet we do manage to know something. That's because mind is built on emotion which is mutual and involves no exchange of light-speed-limited particles to be real and true and a goad to, um, action!

Now I would be hurling nasty epithets were I to suggest that people who work on artificial intelligence are emotionally stunted. Or that they're on the spectrum, you know, like Greta Thunberg or Elon Musk are, according to themselves. But no, the epithets I'd like to hurl have much more to do with the lousy narratives they want us all to inhabit.

I really don't have anything against self-driving cars. I have something really big against cars, though. As in if we want cars, then we won't survive, full stop. Our goal can't be to keep driving cars or having them drive us. A better goal would be to open ourselves to the universal mind, which is something that artificial intelligence simply can't do. Again, I'd say that's by definition.

Here's Tegmark's definition of life: "Process that can retain its complexity and replicate." As with his definition of the universe, I think it's a pretty good definition. But he's leaving out anything about the evident fact that life arises by way of evolution, which is in turn dependent on accident, chance, random, whatever you want to call it. No goals allowed, according to our definition of random. And yet this entire book implies that there was a direction to evolution and that it led inevitably to consciousness (defined as "subjective experience").

OK fine, but wouldn't that mean that it led to consciousness everywhere and not just here? And yet somehow the universe is vacant without our particular subjectivity? 

I'm coming from a place of optimism, gotten with great difficulty from the destruction of self-aggrandizing narratives for human history as accomplished in the quite brilliant book The Dawn of Everything. I'm not going to buy this guy's stupid claim that somehow now we're on the verge of solving all problems and reaching all goals that could ever be conceived anywhere. Like life can be finished? Really??

Here's an epithet for you. These guys have no sense of irony! Take that, you idiots!

There is no goal for evolution, but I can tell you its direction. Evolution moves in the direction of love, which is the main component of intelligence by my definition. That's even how we can distinguish artificial from real and good from bad, which he notes his definition can't do. And somehow humans, as keepers of the goals for AI, can? Our track record ain't great on that one.

* * *

Still in chapter 2, on intelligence, I find this language: "if an AI decides that it wants better X skills, it can acquire them." Notice here again that there is no definition of what deciding means. I might want to say that an AI can only decide in the sense of rolling dice or enacting goals already decided elsewhere. By the circular definition above, it can't "want" because it can already have whatever it wants. Want can mean lack or it can mean desire, but at least humans don't consider want to be a lack that can always be fulfilled. Wanting, for humans, generally means desiring, which only sometimes means getting.

So I posit this law, perhaps akin to Moore's law, that no AI no matter how complex can ever make a better decision than a universe-connected emotive creature that has evolved naturally, and which has the ability to interact with that AI. Or in other words, given honest interaction, the human will always be the better, faster, decider than the AI. 

Call it Rick's Law. The corollary to this law is that to the extent that we delegate decision-making, we already live under the aegis of AI. This, by the way, is just about the only thing that the Trumpers are right about. They already know that most of our media runs on automatic. The trouble is that they don't have the sense to tell that their leader would be a criminal in any "good" system we can imagine, and, naturally, that they can't tell the difference between truth and illusion in the first place.

Now of course I have no idea if or whether the universe will ever allow AIs to love, wink wink. We seem to find robots possibly lovable in movies, but those movies leave out the very same complexities that this guy does. Or perhaps they include the complexities that are left out here. 

In any case, as I said above, we already have AI running our show and it's already killing us.

* * *

On to Chapter 3. 

Now I've watched the AlphaGo film, and I get how surprising it was for the AI to beat the human champ. The author quickly moves on to suggest how AI can optimize "for example . . . investment strategy, political strategy and military strategy." Then later he talks about the great sucking sound in AI graduate programs, where students are being siphoned off by lucre.

This is just the serpent of technology eating its tail. If they're doing it for money, they aren't doing it for love, and it would be stupid to think that technology came from anything like what I might mean by intelligence. Technology has always been about commerce and warfare and capital concentration. The good stuff is epiphenomenal. And yes, Virginia, there is good in humanity. It's just not showing right now.

Yeah, and so the human learning from this is that we must quickly change our human strategies so that human needs are met and not the needs of investment, political, and military strategy. Those games already dispossess most of us. The photo of the AI researcher gathering shows a preponderance of white men, with perhaps 10 women and two dark skinned men out of 70 or 75 attendees. We know where machine learned strategy will go. And the argument will be that it's just better than humans. Meaning, mostly, better than blacks and women. Fuck that shit.

How about we ditch the game of Go as a human endeavor? I very recently labored in the salt mines of translation from Chinese to English with a webnovel of 600+ chapters. Google Translate was of zero use. I wasn't quick enough for my gig-work overlords, so they offered as my next "book" one which I didn't care to read, never mind translate. Google Translate did it just fine. Frankly, reading this book is quick, boring and easy too, but I feel some sort of stupid obligation to read it through.

I'm not saying that this AI research is evil or that the results are evil. I'm objecting to unquestioned assumptions of what intelligence is, what life is, how the universe is composed and that intelligence (of the sort they define) is what's most human about us. Intelligence as they define it is not the pinnacle of life, fer Chissakes. 

Or in other words, it's already far too late to prevent AI from infesting those above named strategies, which means that we have to change our ways of living and fast. It's not that the machines will win. It's that the ruthless soulless people will win. It's the system that's the problem, and we don't need AI to tell us that. We just need to pay attention. 

Very little thought is expended in this book about how things will change if and as AI expands. To me, that's a pretty massive flaw in the argument.

For instance, a lot of time is spent on the obsolescence of labor, with lip service given to how some people like to work. What if all people like to work, and the trouble with work is of a piece with what has led to AI? Meaning that the incentives for AI derive from a capitalist economy where there is a desire not necessarily to eliminate labor but to make it cheap and compliant. Furthermore, the economy as it runs now is premised on capitalists wanting more and more to nearly infinity. There is no luxury extravagant enough for those mother-Earth fuckers.

Nobody likes meaningless work directed by others. That feels like slavery, and nobody likes it except the plantation owners.

As money powers our AI machinery now, why wouldn't it continue to? Max assumes it will. I mean, clearly, that to the extent that we are driven by financial incentives and fears, these drive our lives and behaviors. And money is the most calculable element of our lives. It's also the most gamelike, which is the arena where most of the HS (Holy Shit) moments for the AI geeks come from. Go is a game.

Then there is the unquestioned equivalence of computing with whatever goes on in the brain. But, of course, I don't believe that very much of what we call thinking does go on in the brain. Indeed, I don't think anything like cognition can or would happen without a complex perceptual universe and a written language. Most of our mind is, as Riccardo Manzotti would say, spread all about us.

Using FLOPs to measure the brain's thinking capacity is about like using photos to check on the capacity of a place for awe. Like using math to describe a sunset. Like the empty cosmos as Mad Max Tegmark sees it without us.

That might also be the case with artificial intelligence - that FLOPS is a stupid metric - but then the tests for it are almost always in very constrained systems, such as tricky games. 
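Just to spell out the kind of arithmetic I'm objecting to, here's the entire genre in a few lines of Python. Every number in it is an illustrative guess (the brain figure is one commonly cited, hotly contested estimate; the GPU figure is an order of magnitude for circa-2021 hardware), which is rather my point:

    # The whole brain-versus-silicon FLOPS comparison, reduced to its essence.
    # Both numbers below are illustrative guesses, not facts.
    BRAIN_FLOPS_ESTIMATE = 1e16  # one commonly cited (and contested) figure
    GPU_FLOPS = 1e13             # order of magnitude for a circa-2021 GPU

    gpus_needed = BRAIN_FLOPS_ESTIMATE / GPU_FLOPS
    print(f"GPUs 'equal' to one brain: {gpus_needed:.0f}")  # prints 1000

A tidy number that tells you nothing about awe, sunsets, or mind.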

If indeed AI is as much the result of our current arrangements as it is poised to transform those arrangements, then I might propose that the only way it will accomplish that is by a sort of accelerationism. In other words, as technology has done so far, it will push our internal and withheld (held at bay) social contradictions to their eruption and fracture. Technology has already done this by its intense concentration of wealth in the hands of ever fewer and younger titans employing fewer and fewer people.

I suspect that all AI cheerleaders do this, but aren't they supposed to think beyond what is? Are they so entranced by the beyond of AI that they can't see anything but making cars safer (for their occupants, certainly not for the earth) and finance more efficient? We already know how to make travel safer. Nobody wants it, especially not captains of industry.

And how come the examples of technical failure and failure of AI haven't included the AI that drives social networks such as Facebook and YouTube, and search? These technologies drive clicks and maximize eyeball time and therefore profit. They also infect people with bizarre and dangerous beliefs. No wonder these people - the conspiracy believers - think that everything's a conspiracy driven by those who might understand the stuff that's wrecking the conspiracy suckers' lives!!

* * *

Well, I have nothing to add to what I think about the singularity, so I don't need to rehearse it here. That's what the "explosion" chapter is about, and I don't think Max adds much. Kurzweil yada yada, Yudkowsky yada yada. Nice guys, I'm sure, but they aren't helping. Just like Obama didn't help, in the end.

Not much to say about this chapter, except that it makes it obvious that this guy is writing science fiction. It's interesting, in a way, what happens when you're writing fiction while thinking that you're talking about what is real. Isn't that what the Trumpers do?

It's not that he isn't "creative" in a sense, but he's not very clever about anticipating how each of the things that he's effectively holding constant will have to transform along with the transformation of all the rest. (the way the economy works, the notion that physics won't change fundamentally, if it changes at all, and especially that we will never give a shit about all the living things that we are banishing from earth, and which he is banishing from his scifi.)

Clearly, whatever he means by consciousness - and I'm eager to find out at the close of the book - it excludes other life as being part of it. I'll say right out that I think that's just stupid. It's of a piece with the other implicit, and I'd say dangerous, assumption that the mind can be divided from the body (either conceptually or physically, it hardly makes a difference).

Numbers and math are the most iconic version of abstraction. Tabulation and money are probably the earliest counters. Our bodies are one with the extended universe once that universe is conceived more as a waveform, unperceived, than as a mess of causally related particles. Abstraction is a removal from embodied reality. The mind can't be abstracted from the body.

Sure, you could design a subconscious aspect of an AI mind, and limit the subjective awareness of consciousness to a different portion of it, perhaps designated as the higher-order mind. But you still end up with the infinite regress of abstraction on top of abstraction, abandoning the real altogether and ending up with the philosophical zombies that can only exist in the mind. The hard problem of consciousness is only hard if you want to abstract consciousness.

So what I know will change along the way to AGI - Artificial General Intelligence - is that we will find that our mind disappears along with all the flora and fauna. This is, indeed, what is already happening. It's what I mean when I take note that we are already artificially intelligent. We are already machines, or moving rapidly in that direction. Humans can't be cut off from the rest of life, for the same reason that emotion isn't described by the laws of physics and never will be: we are not, never have been, and never can be apart from the rest of life.

Mad Max does an interesting thing when he compares the subordination of the cells in our body to the more collective life, under the control of our consciousness, or in the form of politics. He makes an analogy to politics by way of game theory and nation-states, and has extremely interesting stuff to say about scale in time and distance, and how that might relate to changing minds. In doing this he makes another implicit metaphor between government and the mind (though he really only means the brain).

So for sure he is in that camp decimated by the authors of The Dawn of Everything which believes that our current arrangements have been the inevitable result of history. That there is only one possible endpoint, which he now extends to mean the explosive blossoming of intelligence beyond what's humanly possible. And he therefore indulges in the teleological argument that he declares - using the same misdirection as a magician - to be out-of-bounds. 

If this is the end of history, then it really is the end. Of everything for all time. Department of redundancy department.

I took a quick look at the website where readers express their preferences. From the results, the participants slant heavily toward propeller heads. But utterly absent is the choice that I would pick:

  • We learn why machine AGI is not possible, and why any kind of supercomputing harnessed to life destroys life, if for no other reason than that it amplifies and accelerates all previously repressed or invisible internal contradictions. Not the ones exposed by Marx (though those are not irrelevant) but the ones he couldn't have imagined. We proceed to live and to evolve and never forget the limits of computational intelligence, and its indifference to life.

This fellow is constantly imagining that everything about us stays the same: motivations, economic arrangements, what turns us on or excites us, that we will want to do the same things that we (he) want to do now. He can't seem to imagine that none of these things can or would stay the same in any of his scenarios. Almost any fiction writing would be more true-to-life than this.

Anyhow, all this talk about "superintelligence" hides the evident fact that we already have it. That's what society is. Scientists don't and can't do science individually and alone. They depend on social infrastructure and physical infrastructure, and repositories of information that is beyond their own specialty, but critical for their work. You can't do astronomy (in the old days) without telescopes. And you can't do almost any sort of science now without computers. 

But the notion of something like a newly discrete superintelligence that directs itself and builds new knowledge faster and better than humans ever could is of a piece with our contemporary "genius"-worshipping culture, which Mad Max has already pretty much confessed that he's in thrall to.

It's bizarre to me how Larry Page gets to be called a genius because he happened on a pretty short-lived and pretty simple page-rank algorithm, patented it, and then proceeded to build a monopoly around an almost entirely broken keyterm-auction-based and therefore word-based search paradigm. No one can break that monopoly, because no one else can hoover the entire Internet multiple times a day (a minute? an hour? a second?); only one monopoly power can have that kind of advantage. There are few ads on a Google search page. The ads are planted among the results, destroying local news organs and sending all the keyterm auction revenues back to the juvie center.
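For what it's worth, the celebrated core of that algorithm fits in a few lines. Here's a minimal sketch of PageRank's power iteration in Python, run on a made-up toy link graph (my graph and numbers, obviously nothing like Google's production system):

    # A minimal sketch of the PageRank idea on a toy four-page web.
    import numpy as np

    def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=100):
        """Rank pages by the stationary distribution of a random surfer."""
        n = adjacency.shape[0]
        # Column-stochastic transition matrix: each page splits its vote
        # evenly among the pages it links to.
        out_degree = adjacency.sum(axis=0)
        out_degree[out_degree == 0] = 1  # crude fix for dangling pages
        M = adjacency / out_degree
        rank = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            # With probability `damping` the surfer follows a link;
            # otherwise it jumps to a random page.
            new_rank = damping * M @ rank + (1 - damping) / n
            if np.abs(new_rank - rank).sum() < tol:
                break
            rank = new_rank
        return rank

    # Hypothetical link graph: entry [i, j] = 1 means page j links to page i.
    links = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
    print(pagerank(links))  # higher score = more "important" page

That's it. The moat was never the math; it was the crawling, the indexing, and the auction.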

Plenty of people had plenty of great ideas for search. They just simply got squashed. I honestly don't think it's much more difficult to debunk any kind of so-called superintelligence than it is to debunk genius. And we pretty much conflate genius with getting rich anyhow, as though luck has nothing to do with it. Elon Musk a genius? Oh please, give me a freaking break.

Let's suppose, as I do, that mind is coeval with matter, and didn't require humanity for it to pop into being. Let's further suppose that mind cannot therefore be copied on any arbitrary substrate, and that mind which is as complex as the human mind must be grown and it must take time.

We must clearly also suppose that we are as far away from a good understanding of mind as mathematics will always be from describing it. The computer-based AI metaphor is not only a dead end, but probably dangerous if it prevents us from seeing what mind truly is. It currently does that by taking all the dazzle out of everything else. But the same thing is also happening by a subtle and mostly invisible closing of the scientific minds, as a collective, toward anything that breaks with objectivity and the subject required to know something.

My definition for mind obviates any worry about meltdown a million years from now (or was it a billion? Hardly matters). Not only won't we be the same, but hopefully our consciousness will have expanded somewhat, along the lines of how evolution got us to this place. And we will have a better understanding of what consciousness is and isn't. And we will, perhaps, have learned to communicate emotively with all the other blooming life in our universe, and perhaps even beyond it. We will start believing - not in the religious sense of the term, but in the scientific sense of the term - that miracles do happen and that they relate to cosmic mind, composed of all life everywhere. 

And if you don't think that cosmic mind can intervene locally, then I'm not sure that you're allowed to "believe" in free will either. It's not only physics that drives the world. Not everything requires an exchange of force-carrying "particles" to be present in two minds at once, no matter how far removed from one another. Failure of presence is not proof of absence, especially when we're not even paying any attention. And that mostly because paying attention has been pretty much ruled out.

* * *

Look, I don't really have anything against Elon Musk and all the others riding unicorns into the dark. He's a genius at making money, though I do strongly believe that our society should do infinitely more to redirect money away from such genius to more productive uses. Which is, I suppose, just another way to say that I don't exactly quite buy the apologies for why capitalism is the most likely social structure to get us into our future.

So at least Mad Max isn't motivated that way, no matter how much he swoons after money like a teenager after a movie star. But he sure is obsessed with numbers, which gives him at least a genetic sort of closeness to Ol' Elon.

Now I've already said many many times that our human mind is inseparable from our human body. I would also say that our mind/body is inseparable from the entire living earth (no matter what minuscule proportion of its mass is living - hell, we're mostly water too, right?). You need only consider the superficial - the micro-membrane surface - if you're looking at life on earth.

And furthermore, I am not about to trust some FLOPpy calculation about how much more information we can stow in silicon or its descendants than can be assumed for the mass of the living earth. A quick glance out at the universe is enough to show me how complex and important we collectively are. I think I may be a little less entranced by distances in time and space and their associated numbers than Mad Max is.

If the living earth is our extended body, then our minds are likely not disconnected either. Sure, we're connected by language and media, and I know better than most how difficult - arduous really - it is to cross cultural boundaries even within the living limits of earth. That sort of difficulty impresses me far more than the difficulty of (feeling like you are) comprehending physics. Though "that mysterious dark matter" which composes the bulk of the universe always sets me back.

I'm the guy in the novel who turns his back on a comfortable life (the girl, the riches, the absence of worry after the exciting chase), and I suppose that Mad Max is too. Neither of us would waste our time chasing after money, though only one of us would chase after those who have it. 

But before we reduce our living world to its information content, I do think we should consider what's at stake. We could be very wrong about what counts as information, especially after it's been reduced to what can be "contained by" any sort of universal machine resting on on/off as its substrate. We certainly know and understand very little, given the minuscule information-carrying capacity of the human mind. We are, as Mad Max takes note, in a period of explosive mind expansion. Shouldn't we pause for a bit to gather our breath?

One of my main issues is that on/off by definition cuts off everything that's connected in ways that we don't understand. Either/or is no way to live. 

Given the expanse of our collective ignorance, I think it would be sensible to assume that we are connected even beyond earth to life all over the place. And that we have no clue about how to - how we actually already do - interact with it.

Which drags me back to that place of random chance in life's evolution. Our view of chance is highly culturally relative, especially if you consider the brief moment of atheistic humanism in which we now hold our breaths. 

No, I am NOT a believer in God, but "atheistic" is the only shorthand I can come up with. Call it "life-force" if you're into Star Wars. There's something to it - at least enough to give us pause. 

Sure, if you're struggling in the pursuit of your very life, you aren't going to take much note of the daily miracles which abound. Well, no, maybe you will if you're struggling. If you're not struggling, maybe all you do is notice the miracle of your good luck. Until it goes to your head and you write it all down in a book.

Our conception of "space" and "empty" doesn't impress me very much. Nor does physically mediated information. I'm far more impressed by the life of forests, which makes a better set of metaphors. But hey, that's just me, and I'm just a guy who's barely educated.

I am certainly not nearly as smart as this guy, or those he hangs with. I agree with him on many things, especially his disclaimer about a single measure for general intelligence (that people can be more or less intelligent in different arenas), and in his observation that our goal-directed behaviors are set by feelings and not by optimizing rational choice.

And yet I'm confident that I'm closer to "the truth" than he is. That's not because I'm praying to some God that he'll be proven wrong, because I don't like where he's going with his thinking and find it dangerous (which is true, I do). I'd use his own argument against him.

What I know is that any superintelligence would quickly notice what he can't or won't notice, and what nobody embedded in any highly specific domain for learning can or will notice. But a superintelligence of the sort that he supposes will come from AI would certainly notice the truth I'm talking about, because it won't have blinders, predilections, or the sunk costs of a particular way of thinking.

I simply noticed that physics is incoherent if you limit it to objective measurable facts about the so-called objective world. I noticed that mind is "out there" everywhere, and that it's always subject to feeling. This notice came by way of the paradoxes embedded in physics, that we always brush aside or calculate away, confident that clarity will come in time. I mean, nobody really worries about Zeno's paradox anymore, since nobody believes in precision beyond a certain minimal point.

Mind is a construing of objects without relations of force, which means without perceptual connections. Mind is composed of conceptual relations, and its objects move emotively, not as the result of physical force. 

Or in short, computers will take note before we will that computing is cut off, by definition, from the felt side of life. Feelings are not epiphenomena of the sort of thinking that computers might be able to do. Feelings have been there since the beginning. They're as elemental as the most elementary subatomic whatever. Gravity may be the bridge, but now I'm talking way beyond myself. 

Call it Rick's incompleteness theorem. 

Now, on to consciousness:

* * *

I don't necessarily disagree with Tegmark's definition that consciousness is subjective experience. But since all the cognitive centers of the human brain can be obliterated leaving consciousness intact, I do disagree with his continued focus on intelligence as the basis for subjective experience. In my terms, lizards are already conscious. 

What lizards have that AI lacks is an emotive response to the lizard's environment. It's strategic self-defense that perhaps defines consciousness. Cognitive processes are too slow, and evolved much later for other, more strategic, purposes.

So for me the question is whether AI can feel anything. Tegmark seems to insist that they can, but I think he gets there by wrong assumptions about what he calls "feelings." He calls them 'rules of thumb' which are "perceived" as feelings. He seems to think that these ride on top of intelligence rather than beneath it, or perhaps in the peripheral nodes. He seems to get that these "feelings" help us to survive and to reproduce, but I'm not sure why he thinks that AI would have them. Yet. But he sure is still talking about information processing:
"Evidence suggests that of the roughly 107 bits of information that enter or brain each second from our sensory organs, we can be aware only of a tiny fraction, with estimates ranging from 10 to 50 bits. This suggests that the information processing that we're consciously aware of is merely the tip of the iceberg.

Sure, yes, it's the tip of the iceberg, which is what gets "perceived as feeling." So, the question is whether AI can "perceive feeling." I'm gonna say no, since in my cosmos, feeling is a connection to what's out there that can't be reasoned. Feeling is the seat of agency, which should also mean that it's where goals are formed.
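Since we're trading in numbers anyway, here's the iceberg as trivial arithmetic, using the figures quoted above:

    # Tegmark's iceberg, in arithmetic, using the figures quoted above.
    sensory_bits_per_sec = 1e7    # roughly 10^7 bits/s entering the brain
    conscious_bits_per_sec = 50   # upper end of the quoted 10-50 bits/s

    print(conscious_bits_per_sec / sensory_bits_per_sec)  # prints 5e-06

Five millionths of the input surfaces as awareness. Fine. That ratio still says nothing about whether a machine could perceive any of it as feeling.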
