Showing posts with label AI.

Tuesday, May 20, 2025

Artificial General Intelligence is Not Around the Corner

All the AI gurus have taken to saying that Artificial General Intelligence is just about here. Maybe a year, maybe two, maybe tomorrow. What's really just around the corner is awareness of that in which human intelligence consists. That will be transformational, and maybe even this quest for AGI will have been helpful in our dawning awareness.

Our dawning awareness shall include our awakening to the fact that the weather, the earthly pushback and even the volcanos and earthquakes gather their energy inside of us, if, unlike me, you believe in that insider/outsider distinction. 

Trumpism is, of course, a force of nature. To the extent that we don't know what to do about our complicity in earth's destruction - I just helped my daughter buy a sports car, ferchissakes - we also must celebrate MAGA insanity as the identical force that a tornado has when it tears through a community. Thank you Agent Orange for prodding our awakening! We so innocently sanctioned your spewing.

I'm just guessing, but people involved in AI have probably had the experience of feeling really smart. They're among people who are also really smart, but they just somehow get it better and faster than everyone else. 

I once ran a school for gifted kids and never did figure out how to resolve this conundrum of goodness's self-awareness leading to its self-destruction. If I had, the school would still be alive and promoting and instilling all those humane human tendencies which characterized the innocent direction of the lost and forever departed so-called American Century. We knew not what we did.

What they're good at, these AI titans, is something godawful boring to the rest of us. No, don't include me, I too was mesmerized by my first encounter with networks, and proceeded to spend an unforgiven godforsaken chunk of my life on them.

Naturally, these people, we these people, assume that intelligence is like what they have. But what if they're all really amoral, the way AI is amoral? Meaning here that there's nothing "inside" the AI that generates moral acts and behaviors; morality is rather taken in from the handling it does of language and other artifacts. An AI might be able to project a shadow of human morality, but it wouldn't be able to initiate a moral act on its own. Or if it could, the act would have to be derivative.

One example of amoral behavior is that of the tech titans, and especially the titans of AI, when they go all out for themselves. Would any of them do what they do if there were nothing in it for them? And if they think that their work will make the world a better place, why aren't they willing to accept the opinions of others on that score? Simply because they have no need to. In our world, the economy, stupid, is the proxy for humanity. Our wants are toted up for us, and along we go, merrily or not.

AI is the oldest story in the cosmos. God forsaken.

Well, OK, so the rest of us mostly are cheering them on because we largely think and believe that tech is good and AI could be good as well, especially if it becomes better at problem solving than humans are.

But what if humans are the problem that needs solving? Collectively, we all are supposed to thrive on recognition. On standing out. That's our interpretation of the cipher of evolutionary genetics. AI can do so much better than we can with that or any kind of ciphering. Oh please God turn us over to an AI engineering of our future! Please guide our DNA. Oh.

Maybe there's a difference between a standout artist and a standout money-maker? Adrien Brody plays long-suffering Jewish geniuses in The Pianist and The Brutalist, and wins two Academy Awards. Living life in an antisemitic hell. Can there ever be too much beauty in the world? There can certainly be, and already is, too much technology.

It's not just the surveillance capitalism which Shoshana Zuboff calls out. It's the network effect, where each of us wants and needs to be where the rest of us are. But post-digital, none of it can be called capitalism when only one thing needs to be made, with zero labor for the rest. Now it really has become a zero-sum game, with almost all of us losers when the winners win so big.

It's not even so much the dispiriting effect of immiseration within the dynamics of inflationary costs and expectations. It's the more direct immiseration of the spirit when you know there's nothing all that special about yourself. When it's all a lottery.

Sure, there's a bit of schadenfreude when trollish boring geeks like Musk and Zuckerberg and Cowbezos are haplessly lured into revealing their ghoulish innards by the likes of Trump. But can that compensate for their criminal appropriation of the vectorial commons for themselves?

I myself see a dangerous sub-text in Brody's roles. That some of us are simply superior, smarter, morally better. That some of us are guided by a compulsion of rectitude. But the suffering is human and he never makes a direct argument that his roles are more deserving of a better life than all of his fellow sufferers. He too mainlines heroin.

Anyhow, machine intelligence is a one-way street. It embodies the assumption that more intelligence is a good thing and that we humans can continue to behave as beastly as we do because super-human intelligence will make it all good. When what we really need is a Jesus Christ figure to bring us down from our anti-earth high. The problem to be solved is indeed ourselves. Maybe we can do it once the nature of human intelligence is revealed to enough of us. Only then will the utterly repudiated actual Jesus come back to life. JD Vance alone is sufficient to goad His IHS return to us.

Hell is a lousy concept, though maybe it worked for all those years. Anyhow we're bringing more hell down to earth than we care to admit. Yes, sure, life is better for more people than it has ever been before. But if it all ends in a hellish mess . . . it does none of us any good to externalize this evil.

How can we deter smart individuals from craving recognition and the power that goes along with it? Without personal hell? Without actual hell for the rest of us? I wish I knew. In what humanity consists.

Let us pray.


Wednesday, November 18, 2009

Contact! PoMo AI (Artificial Intelligence)

This is yet another one of those quick bury that post! posts. Not that I find anything wrong with what I most recently wrote, but it is rather breathless, and I need to put my thinking back up on its shelf for my private inspiration. Perhaps that's where it belongs. I think lots of people looking for direction have these moments of private inspiration which provide it for them. Mine just happened to be in words.

So, that's my theme for today. After which I'll go back to writing about male vibrators and other silly things. My theme is writing, language, and artificial intelligence. My thinking about it relates most to Gödel, Escher, Bach: An Eternal Golden Braid, only in the sense that the reader is forced to leave the whole big question wide open, where it remains today, riding on top of the fairly complete incompleteness theorem. I know the book is old school, but I can't find anything to change the basics.

In fact most debates about AI still include this basic difference - it's almost a predilection - between whether in principle this will always be an open question, or if things can lead to closure. I am pretty sure we're not going to see until we see, so it will remain for me a fairly uninteresting matter of the limits to abstraction. And I'm just not sure how the limits to abstraction can represent the limits to reality.

But in the particular, I am fairly certain that humans, as the only example ready to hand, never did think until language developed. Specifically, I mean that what we call intelligence is no property of mind alone, but is rather a property of minds together, in communication. I suppose therefore that intelligence would have to be called an epiphenomenon of the brain, rather than something which inheres in it.

Now I understand fully that there are lots of people much smarter than me who study this stuff every day as their professional, vocational, and passion-filled occupation. They study all these things to the point of tedium, and I respect their studies, I really do. They have what I don't, a collegial (or not) set of fellow workers, all of whom share a common language. They share a common vocabulary and grammar for usage, and they have developed that language collectively so that they may push its envelope of sense.

That gives them the ability, far superior to mine, to throw terms around like "epiphenomena" and be pretty sure they know what they are talking about. If I have any advantage at all, it is because I am not embedded in any particular school of thought, and so I might see things which can't be seen from the inside. A kind of forest/trees issue. The naked emperor story.

But I do understand that I too must make clear sense, and as I've said, I really pushed that edge myself in that last post.

I am among those people who are fairly certain that we will never find any fairy dust in the brain. No matter how you think intelligence works, there isn't any mysterious substance or structure or pineal-type organ which has yet to be revealed which gives the brain its spark. Such thinking harkens back to talk of "soul", toward which I hope my training in Chinese traditions has hardened me a bit. I prefer to use less mysterious terms like heart or center.

So, therefore, I am also on the side of those who figure sure, why not, we can create an artificial brain. I don't think it will have very much to do with computers as we now use that term, but there is incredibly interesting research being done on neural network style computing. Quantum processing and massively parallel processing will come on line to make our current megahertz seem paltry. And these will also begin to break down the limitations of boolean and digital logic.

Computers, therefore, or machines, or artificial brains, may even be able to deal with metaphor, and therefore start to feel. But not before they develop language, which means that they too must distinguish each from the other, and have something to say to one another.

Right now, we might think of such arrangements as a matter of networks and signals passing back and forth. But no thinking, no intelligence, will come from such communication, which is the thing I want to stake a claim on here and now.

When writing first started, I think it might be fairly clear that it was a matter of keeping score. You know, those hash marks with a line through them which count to a digital "five". Four score and so forth equals, um, twenty? No, a score is twenty, so four score is eighty! Lines on a stick, soon followed, in the case of China, by cracks on shells and trying to bring some of the constancy of heaven's patterns down to earth where we could make sense of them. Looking for signs.

But let's dwell on the counting for a minute. Scores or counts can always only refer to undifferentiated and therefore, for memory purposes, identical items. If you want to know how many sheep you bought or sold, the score can pretty much only tell you the count, and nothing more than that. If you need to know the colors, you'd have to split the count into color categories, or add something to the hashmarks.

And if you want to remember individual sheep, you'd have to give them a name.
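
Just to make that concrete, here is a toy sketch in Python, with sheep and names invented purely for illustration: a bare tally can only count, splitting the count into categories means adding structure to the hash marks, and only names let you recover an individual.

# A toy illustration (the flock is made up): what a tally can and cannot remember.

tally = 4                                   # |||| — four sheep, and nothing more

by_color = {"white": 3, "black": 1}         # the count split into color categories

by_name = {                                 # name them, and the count comes for free
    "Daisy": "white",
    "Clover": "white",
    "Bramble": "white",
    "Sooty": "black",
}

print(tally, by_color, len(by_name))        # 4 {'white': 3, 'black': 1} 4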

I've said before that the only kind of memory which endures is ways of knowing. Once you identify a shape, for instance, it's hard to unlearn what that shape means, if it means something to those around you. It's very hard to forget a face if it has a familiar name to go with it. But if you're going to partake in community, then you have to learn to recognize and to name those things which you will use in common. I think in some sense that's what language means.

I have three cats, and I don't think it would be too easy for me to forget what a cat is, or to recognize one at a thousand paces. Maybe in the dark, I might mistake a raccoon for one. But I confess that I don't always recognize which is which. They're all from the same litter, after all.

Oh sure, one seems to have a different father, and so Stella's easy to distinguish from the rest. But at a distance, there seem to be many, many Stellas out there. And her two sisters look pretty much alike. But if I pause, and sorry to say I actually have to dredge my memory, I can remember which one has the orange ear tips, and call up their proper names.

I know that makes me a lousy cat lover, and that many of you would distinguish one among a thousand without any thought at all. But I doubt you're using more than just their faces, which, no matter how much you love cats, are nowhere near so expressive as the human face is.

Yeah, sure I've seen all those cute pictures of cats dressed up and seeming to have expression. I've also read Oliver Sacks about how humans can fail with facial recognition, or even mistake their wife for a hat, and still control their language. But exceptions often prove the rule. If nothing else, the biological human "machine" is immensely redundant and malleable. There are even perfectly conscious humans who were born with only one half of a functioning brain.

But I do think that before we can create our own machines that think, they will have to be distinguished enough, one from the other, so that they actually do have something to say and aren't just part of one big artificial brain. You might wonder if I read too much science fiction, to which I'd have to answer, no, I don't think I read enough. Even when I seem to write too fast. There's only so much time in the day after all, and I don't read as fast as a lot of people seem to, and frankly, the science fiction isn't always the most interesting stuff.

One big brain, no matter how big, really can't think. And machines which are isomorphic - think armies of droids in some Hollywood thriller - identical down to the level of their design specs, by definition, won't have anything to say to one another.

Well, until they build up differential experiences. But here already a time lag is essential. That's what differential experience means. And so there has to be a going apart and a coming back together, rather than the near instantaneous communication we expect from our computer networks.

So, they have to be autonomous, these thinking machines, and they have to be distinguishable each from the other when they return for communication. Perhaps they can wear the scars from their experiences.

The trouble with wearing the scars from their experiences is that they only get them after separation, and so there can be no recognition upon return because, just like Dad's memory now, it's all new!

So, the separation has to move in stages, building difference, building recognition, until you get something like a face emerging on each machine. There has to be something familiar before re-cognition can occur.

Sure, you could serialize all the machines with numbers. That would be a perfectly good way to distinguish each machine. Except numbers, remember, don't do anything but count sameness. Like with my cats, you'd have to look up in your memory banks which is which, and again like with my cats, it begs the question about how much you really care.

Now caring, of course, is not often considered to be a quality of machines. But what else would it be to direct attention inward toward memory, as it were, than some wired-in compulsion to do so? Which brings us right back to the beginning all over again, with everyone - every machine - now pretty much acting like every other.

You could catalog which machine cares for which other in some kind of giant sorting game, but it would pretty quickly come to seem far from what we mean by thinking. More like a flock of birds or a school of fish.

I think brain science has already determined pretty convincingly that our inward brain often makes decisions before our conscious self is aware that we "have" them. We also know that we process much more information than we could possibly be consciously aware of. Emotions seem to play a role here too, sorting among all the inward random brain impressions, to bring to the fore those which interest us. Those which might prove to be of some use, pretty much in the same way as those named shapes which we use together with our community.

And the community in the first place only works together because of some sort of caring. Some facial recognition, maybe, tying family members together more easily than a branding system. Sure, plenty of exceptions prove this rule, but I think you get the idea. And collections of families are also bound by familiar behaviors and traditions - things held in collective memory.

Eventually, this collective memory gets written down, and civilizations can endure, some few as long as China has. Although to call China China all through those years might be a matter of splitting some interesting hairs, since about the only thing that's remained constant has been the written language itself. And that took a whole lot of draconian diligence, and an almost obsessive concern with how history gets told and poetry crafted.

So, I'll leave it there, then. I'll hope to come back for the sake of amusing trifles now and then. In the meantime, I really must get to work cleaning out my house for the next guy.

Cheers!




Friday, November 13, 2009

Hoover Blanket, Inc., Launches Pikk.com

I've always wanted to write a manifesto, so here it is. This is what Hoover Blanket, Inc. sets out to accomplish. Hoover Blanket, by the way, refers to newspapers, which were all that some people had to keep themselves warm with back in the first Great Depression (this one's been "papered" over by newly minted electronic money, just as new Hoovervilles spring up all over again).

I'm lousy at memorizing things, but once you learn how to see something, you never can unlearn it. I think true learning forms a kind of permanent memory, which is helpful for people like me.

I sometimes wonder if a whole nation can learn anything. Surely if you wait long enough among people without historical memory, you can almost always pull the same stunts over and over, especially on folks sent to school for memorizing and training to be economic inputs!

When I was in school myself, I was apparently smart enough to pay attention to almost nothing that was being said. Not too much memory was required for passing multiple choice tests, and in the classes I liked it wasn't the saying that counted anyhow. So, I liked physics and math, and didn't pay a whole lot of attention to history. Literature was just intimidating, and those particular multiple choice questions had no connection whatsoever to what I thought I was reading.

Maybe I don't have all that much to unlearn about how stupid everybody else in the world is who's not American, and maybe I have yet to be convinced that partisan politics means democracy. Unlearning, as any teacher can tell you, is a lot harder than learning.

We got a lot of how stupid everyone else is in the naive anthropology of world culture and religion as it was taught. Somehow, though, I've retained a few principles about greed and robber barons and how rich folks always work things out to their ever more concentrated aggrandizement. Until everything crashes.

I also never did actually read the good stuff until college, because, well frankly the good stuff wasn't being advertised very well at school where facts and names were the focus. In college, the good stuff hit me rather like a ton of bricks, and so I had to drop out and back in several times before I hit my stride studying Chinese. Which involves lots and lots of memorization, so there you go.

And in China now, we like to call their form of market economics "capitalism" too, even though their one-party political system pretty much stretches to the point of breaking whatever we could possibly mean when we use that same term to describe ourselves. Or are they really more similar than different, with our Coke and Pepsi parties who just make it seem like there's real choice being debated? Where the real choices get whittled right down to false alternatives among characterless compromises enforced on the party membership if they want to keep their seats.

Dad's losing his memory now. He reads and rereads the autobiographies that he's compiled. To be reminded of who he was and what he did. I guess you could call it dementia, but there is still a kind of shape to it, as he fades away and anon.

Maybe we should do the same with our own national history before we forget who we once thought we were. Questing for freedom, and away from aristocracy, we always said we wanted to give the least of us the same chances as the most powerful. Well, that's what we said anyhow.

I'm trying to establish a kind of credential here for your belief in what I want to say. I'm trying to whittle myself down to that little boy who points out that the Emperor is naked. I'm trying for something anybody could see, and I haven't got a mature voice yet. But I'll keep trying.

Among other things, Hoover Blanket hopes to bring back the news, so that we can stay warm in ways which don't require a Hooverville. We here at Hoover Blanket believe in ways to promote something other than the bottom-feeding train wreck of corporate press-release copy which passes for news reporting these days.

And we don't think it has to be inevitable that powers like Google can destroy what used to be a somewhat honorable dishonorable profession. Not that Google has any blood on its hands yet, except perhaps in their titanic grudge match against Microsoft. It's we who don't have time or won't pay for news that's fit to read. We're too busy driving, I guess, and it's not Google's job to pay reporters.

I learned about newspaper reporting over in Taiwan during martial law, back when America called them China. My Chinese professor had hooked me up with his sister (not that way - we didn't do that back then, or at least I didn't), the only woman reporter in Taiwan, and she in turn introduced me to these three old guys, who in turn invited me to drink down at the Temple of the City God.

I'd studied Taoism with a guy who converted from Jesuitical to Taoistic understandings right in those temple precincts, so I had a little sense ahead of time about what I would be doing there. Or wait, maybe I studied with him afterward. Hell, I can't remember.

I do know that the spirits we studied were made from sorghum, and tasted somewhere just this side of turpentine. After we were good and happy - me struggling to use Chinese - the cops would join us at our table, and then later on the local mafia.

These were the guys who knew what was really going on, and I think they had to drink hard at night just so they could publish all the happy horseshit which was all that was allowed in a place where telling the truth could get you pretty quickly killed.

Now that we've disgraced poor Dan Rather, our original Million Dollar Man, who must have been radicalized over there in Afghanistan just like Osama, all we seem to get any more is happy horseshit too.

Oh sure, we pay lots of attention when they expose scumbags, and over at Fox they've raised that to an art, where absolutely everybody is a scumbag except the person pointing fingers and shouting.

I think back in kindergarten they taught us about how many fingers point back to yourself when we're calling names. But I wouldn't remember.

The point is that there is too much money in shilling a bunch of corporate orthodoxy, and not nearly enough to staff the smoky back rooms where the real stuff might get written. And plenty of money in providing hits to your logo-ridden heart, which craves its news just as greasy as McDonald's fries. Hits to your insecure ego, which likes to think itself that much better than those scumbags who run things.

The trouble being, of course, that the scumbags being pointed at and shouted about are only diversionary creatures to make sure you don't understand who's really distorting your world-view.

So, how is Hoover Blanket going to accomplish this on our wing and our prayer?

Very simply because we know something the big guys don't know. First of all we take it as an article of faith - it's right in our manifesto (the real one) - that computers make a lousy analog for how minds actually work.

There will be no artificial intelligence, ever, and so it looks to us like a colossal and somewhat dangerous waste of time, money and power for corporate giants like Google to catalog and cache and data-mine the entire Internet. Which, if you were to try it at home using your really, really fast broadband connection, would take maybe 65 years and several petabytes' worth of storage just to list the pages by their URL. (I read that somewhere on the Internet, so it must be true.)

But there is no intelligence in any of that data. There is power, though, for so long as Google is getting most of the clicks when you go searching for stuff. They can hand you up what you want, prearranged according to how everyone else is searching (like someone rearranging your refrigerator the same way they arrange stuff on the shelves over at Wegman's, putting the high-paying stuff right at your sweet spot) and then serving up some nicely targeted ads, in case you weren't aware of something out there you might just like to buy.

I'm not saying, by the way, that they are really doing what Wegman's does (and Wegman's is the nicest supermarket in the nation, so I pick them on purpose here). They have their code of honor. No-one can pay them for search results to make it to the top. And they don't mix in the paid ads with the other search results the way that Yahoo! and Bing do (well, OK, so there's a new distinction without a difference - they've merged, surprise, surprise! - and we might be the collateral damage in those cloud wars).

But there is a kind of simple vicious cycle which happens here, since by promoting what everyone else is looking at they are, indirectly, bringing all the logoware right up front, and letting other things disappear somewhere down in among those bejillions of other hits on pages etcetera. There's lots and lots of money involved in all this indirection.

Let's say you're a crazy guy like me, and you want to find the five or six other people on the planet who think like you do, so that they will buy your book. (I still don't get why people only buy books they agree with, though.)

You pay Google to pop up an ad to those folks who use your words while emailing their sweetheart, say, and it really really works. You get a sale for maybe $20 and Google automagically calibrates the click rate so that it will be worth it for you to keep paying out maybe $19 for every sale. But that's better than nothing, since you aren't even paying for paper. Except that Google's getting a pretty darned good commission. Unless you're the size of the New York Times, in which case your commission is pretty small. You get how this can work.

Well, my buddy, the brains behind Hoover Blanket (I couldn't code to save my life, well OK, I exaggerate, but not by much) got an actual bona-fide Ph.D. (I tried several times, but I just don't have the patience for it) devising ways to model living tissue for virtual reality palpations of the body.

You can't do this by old-fashioned engineering means. Living tissue is just way more complex than bridges and skyscrapers and spaceships and other things at the limits of our computing power to model. And even some of those come crashing down in ways we never could have predicted.

Living tissue - and I have to apologize here because my buddy needs to stay in the closet a little while longer, so I might be getting some of this wrong - living tissue has to be modeled in the way that cellular automata get modeled.

Think of flocks of birds or schools of fish, which seem to move as if they were one mind, but really only relate to the one next to them, in some kind of overall similar context they can all relate to.

Cellular automata are little computer googlies which only know how to react to the ones nearby, and they end up moving around like schools of fish, as if they had a mind to, too. It makes me think of those fractal creations, which look so much like living patterns, except they depend on diminutive and trivial little formulas for their computation.

So, my buddy tells me, that's how you can model living tissue. You give properties to its parts - some being more dense and some more elastic - and then you put those parts into relation with one another, and eventually the doctor at the far end of the wire can actually palpate the virtually real body and find the tumor. Cool!
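
For what it's worth, here is a minimal sketch, in Python, of the local-rules idea itself - not my buddy's model, which I haven't seen, and with numbers invented throughout. A one-dimensional strip of tissue "cells" where each cell knows only its own elasticity and its two neighbors; press one cell down and let the deformation spread by repeated local updates. A stiffer patch, the toy "tumor", gives less.

# A toy, local-rules-only sketch: each cell updates from its immediate neighbors.
# Nothing here is the real tissue model; the elasticity values are made up.

def relax(n, elasticity, pressed_at, depth, steps=500):
    d = [0.0] * n                                   # displacement at each cell
    for _ in range(steps):
        new = d[:]
        for i in range(1, n - 1):                   # the two ends stay fixed at zero
            if i == pressed_at:
                new[i] = depth                      # the probe holds this cell down
            else:
                neighbor_avg = (d[i - 1] + d[i + 1]) / 2.0
                new[i] = elasticity[i] * neighbor_avg   # stiff cells give less
        d = new
    return d

n = 21
elasticity = [0.9] * n
for i in range(12, 16):
    elasticity[i] = 0.3                             # the stiffer patch, our toy "tumor"

profile = relax(n, elasticity, pressed_at=5, depth=1.0)
print(" ".join(f"{x:.2f}" for x in profile))        # deformation falls off, then drops hard at the stiff patch

The point is only that nothing global ever gets computed; each cell reacts to the ones nearby, and the felt shape emerges anyway.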

So that's how the Hoover Blanket brand of search is going to work. You don't need to store the whole darned internet. You only need to store the urls and the connections to the ones nearby, and then you let the people catalog them by their clicks.

I know it sounds real simple, but trust me if it were, all those smart folks over at Google would have come up with it by now. Well, unless they're just making too much money by leaving things alone.

Fact is, I think they just can't see it, because they're so blinded by their power. I think it's pretty much exactly like Columbus' ship on the horizon being invisible to the unschooled natives on their once pristine shores. They didn't have any categories for it, and so it was just an hallucination on the horizon, along with so many others they might have still known how to see.

There is actually no need to catalog the Internet, or any very large data set. You only have to look at things in their contexts, and the magic of degrees of separation will get you very quickly to where you want to be.

So long as the degrees of separation are defined by human discernment and not by machine extractions, you will reliably find what you're looking for not because someone else has been looking for the same thing, but because you will guide the search yourself according to paths which can reveal themselves among the forest of keywords.
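
Since I can't show you the prototype, here is a minimal sketch, in Python, of the shape of the thing - a toy graph, not Pikk.com's actual data or code. Each URL is stored only with its nearby connections, the ones people have put it next to, and a short breadth-first walk fans out a few degrees of separation from wherever you start.

# A toy neighbor-graph and a hop-limited walk; the URLs and links are invented.

from collections import deque

neighbors = {
    "home":           ["gardening", "local-news"],
    "gardening":      ["home", "tomatoes", "compost"],
    "local-news":     ["home", "city-hall"],
    "tomatoes":       ["gardening", "heirloom-seeds"],
    "compost":        ["gardening"],
    "city-hall":      ["local-news"],
    "heirloom-seeds": ["tomatoes"],
}

def within_hops(start, hops):
    """Everything reachable within `hops` degrees of separation of `start`."""
    seen = {start: 0}
    frontier = deque([start])
    while frontier:
        url = frontier.popleft()
        if seen[url] == hops:
            continue
        for nxt in neighbors.get(url, []):
            if nxt not in seen:
                seen[nxt] = seen[url] + 1
                frontier.append(nxt)
    return seen

print(within_hops("home", 2))
# {'home': 0, 'gardening': 1, 'local-news': 1, 'tomatoes': 2, 'compost': 2, 'city-hall': 2}

The human discernment comes in at each hop - which neighbor to follow next - which is exactly what no cache of the whole Internet can supply.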

We have a prototype up, and it works beyond your wildest dreams. But, built on bubblegum and paperclips, it won't exactly take your load, and is depending on someone else's keyword store. But it works. It's trivial.

We also have a way to distinguish people from computers which doesn't depend on fuzzing up letters for you to type. It depends, very simply, on people making things which only other people will recognize, and then letting people type the simple recognition.

Like how you know your friend at 100 paces in Beijing, say, even though you can't believe he's there. But your attention was pulled, first, since he stuck out from all the non-milk drinkers, and second, by the recognition, and third, yep sure is. Wow.

I know homeland security and probably the NSA think that computers can do that better than us, but they also end up profiling my Mom when she crosses the Peace Bridge, just like they tried not to do with that poor Muslim down at Fort Hood. Computers can be really stupid, unless it's the people behind them.

But sure, so called Artificial Intelligence can give you a leg up, just like all those cool spy movies, for sorting among all sorts of noise, or zooming in to the right spot if you have really high resolution cameras. And even just the threat of that can keep people honest if you put cameras up on street corners.

But it doesn't do so well at keeping out all the spam, which now overwhelms legitimate email by something like an order of magnitude. Making it that much harder for Google to give you what you want. Although, for sure, they do a pretty darned good job.

Some of us might be fine with only allowing people who could actually read a CAPTCHA (those squiggly thingies to slow down "bots" on forms) to send us email instead of the mass-emailing machines which do.

Gamers of any system, mortgage derivatives, gambling, spamming, voting, any of that stuff, will always be working a flim-flam against your trust. The Internet makes it trivial now to try it out on millions of people, and you only need one to click. Your grandma, maybe, who doesn't know any better.

Yeah, OK, another quick digression, about my other friend who knew signal noise like no-one's business, working on radar in lead shielded offices over lead shielded wire, who tried to turn his considerable prowess against the stock market, as so many before him have done. "It's all noise, Rick. All noise."

But it's not noise if you can game it. You know, like sending spam around which is guaranteed to spike some stupid little stock, and so the people who get the spam can know that it will be a spike, even though they know it's spam. They just try to buy it quicker than the next guy. Or if you hide authorship and get your news to seem legitimate. Or if you blow into the balloon, and keep quiet about its limits because then you'll get accused as the one who popped it. Even Greenspan had to eat some crow this time.

You're being gamed, folks. There isn't only one way for history to play out. There really aren't technical solutions to all our people problems. There really isn't only one reliable source for all your news.

We here at Hoover Blanket aim to make sure that the little stuff also makes it to your top. If it's real and true and nicely peer reviewed. Because we have it on good authority that there are only maybe 2% of you out there who have no conscience. And that the rest of you, whether you are corporate titans or pornshop lowlifes, would love the chance to make your livings more honestly.

We don't think things have to be organized the way they are. And we're not about ready to allow the planet to run on autopilot.

There, that's my manifesto. I hope I don't get in trouble with the boss over this.