Archive for the ‘The Big Picture’ Category

Politics, view horizons, and neural networks

Thursday, December 15th, 2016

So, one thing that has definitely come to light in recent days and weeks is that a lot of us are running around with fundamentally different views of reality at the moment. In some people’s worlds, Obama is a hero – in others, he’s a Muslim terrorist or worse. What gives?

Well, part of what gives is the idea of view horizons – some people like to talk about this as ‘bubbles’, and perhaps that’s a more reasonable word, but I’d like to briefly explore the idea from a slightly different angle.

So, in a neural network, each neuron can only see information that it’s either directly connected to, or is connected to a relay source for. In the experiments involving cutting the corpus callosum, you can see this dramatically demonstrated: when a placard containing instructions is placed in front of one eye of the subject, they follow the instructions on it – but when asked why they did so, they tell a story that’s completely unrelated to “Because you told me to”. The instruction on the placard is no longer on the view horizon – no longer routable via a reasonably short route – for the part of the subject’s mind that is in control of their voice.

Similarly, if you think of us as independent neurons in a very, very large neural network – with communication links like books, voice, and the internet taking the place of the dendrites that connect neurons – we can only know about what is on our view horizon. Most of us don’t have direct access to Obama to decide, based on personal interaction, whether he’s a Muslim terrorist, a superhero, or somewhere in between. However, we’re all connected to either clusters of other neurons – our friends – or a broadcast bus – the news – which steers our view at least somewhat.

Now, there’s a real possibility that both universes exist – we keep learning funny little things at the quantum level, and it’s possible that there is both a universe where Obama is a Muslim terrorist and one where he’s a superhero, and that our experience here on Earth at the moment is at the confluence of two worldlines. However, it’s far more likely that what we’ve got are two teams of people, each spinning the story in the direction they believe is true – and because of confirmation bias, they’re drifting slowly further and further from reality.

Now, I’ve got news for you – no matter which side you’re on, it’s not likely you have an accurate view. Your view horizon is a long way from the original source, and everything is being filtered through many, many minds in a game of telephone – and worse, those minds are influencing each other. But this opens up questions as to what exactly happens inside our own minds. We tend to think of ourselves as a single individual – an ego, if you will – but there’s almost certainly a large fraction of our neurons that are ego-dissenting. These are what keep the inhibitory inputs on our neurons lit up and what keep us from becoming narcissists or something worse, as well as what provide that all-important critical judgement that we need when we, for example, want to create great works of art.

I am curious as to whether what we’re seeing in the political sphere is a similar thing on a macro level.

Neurological wealth

Thursday, November 24th, 2016

The most impressive – and disruptive – technology that we could possibly come up with would be neurological. If we could load software on our minds the way we do on computers, we could give the experience of unlimited wealth to all of us, for virtually no cost.

Now, there are some major problems with this. The security implications alone are terrifying – we already have enough problems with viral propagation of bad ideas via religion and just plain ol’ fashioned entrainment.

However, the win is equally huge. Let me give you a few examples.

First of all, whatever your ‘day job’ is, chances are it takes up a very small percentage of your total mental capacity. It would be possible for you to do whatever task helps keep this old ball spinning using background capacity, while never actually having the conscious experience of doing it.

Second of all, everything you experience in this world is made up of information. And there is no doubt that our 10^11 neurons provide sufficient computing capacity to generate any experience you care to name out of whole cloth. Get them working in the right way and you can experience anything *anyone* can experience. The software to do this represents wealth of a very interesting kind. It can be copied indefinitely, without costing the creator anything. It can potentially add value to the experience of everyone who uses it. And it would reduce our impact on the planet considerably, since we would no longer need physical ‘things’ for most of the adventures we might want to have.

Of course, there’s absolutely no proof that this hasn’t already happened, and that the controls of whatever network is responsible for rendering our experience of reality are just in the paws of someone who favors a less than utopic experience for everyone else. I think there are people who would enjoy the power that denying utopia to others represents.

Anyway, when I talk about giving everyone everything, I do think this is a reasonable approach to doing it. Yes, the hurdles are high – we haven’t even learned to build software that runs well on digital state machines, so the idea of writing software for our minds is a bit shiver-inducing. But the reward is even higher.

Given that everyone’s utopia is different, this is the only reasonable way I can see for us to give everyone a utopic experience at the same time.

Inevitable neurological war

Saturday, February 27th, 2016

This article is almost entirely conjecture. We sadly are not yet at a point where we can actually say exactly what is going on inside the human mind. Hopefully soon.

That said..

The way that we’re raised, and the society that we’re in, leads to an inevitable neurological war.

It’s built into us for physical touch to feel good. Depending on whether you’re wearing your evolution hat or your intelligent-design hat, this is either the inevitable result of us needing to get very close to each other to reproduce, or a design goal. (I have to say, building in things that feel good would certainly be a design goal if *I* were the designer.)

On the other hand, it’s memetically built up – as far as I can tell, for very stupid and destructive reasons – for us to think that it’s wrong to be in love with more than one person, and that it’s wrong to want to be involved in sexual contact below a certain age. In fact, I see some of my Facebook friends encouraging the idea that trying to frighten the lovers of your female child is “protecting” her and a desirable thing to do. (Teaching her about consent would seem to be a much healthier type of protection, but I digress.)

Our mainstream religion – despite it never being clearly spelled out in the Bible in the negative (the Bible says that sexual love within a marriage is good, but does not actually state that sexual love outside a marriage is bad – that’s something we decided to tack on later) – teaches that if you ‘go too far’ before marriage, you’re a bad person – that sexual contact, despite feeling good, is a sin. It also teaches the idea that your lover is your property: that if someone else wants to experience sexual contact with them, they are breaking one of the Ten Commandments – even *thinking* about it is a crime against God.

Now, we all know what I think of Christianity. But another question is: what do I think all this does to our minds? Well, by definition, it creates two sets of subnets that are always going to be in opposition. It’s wired in – on a deeper level than any religion will ever be able to reach – that touch feels good, that petting and loving is *right*. It’s something that I personally find myself drawn to as an experience I want to have again and again. It’s what I want to dream about.

In the meantime, our parents try very hard to keep us from sexual contact – or even, in my case, nonsexual/cuddling contact that’s too prolonged. They program into us a subnet that says “this is sin, this is bad, this is wrong”. The idea that your virginity is something precious that you should give to your first and only lover also underlines this. This creates a subnet that says sex is bad, dirty, should be looked at with shame and guilt, isn’t something you should want, except in the situation of marriage – and probably not even then, if one reads the writings of the Victorians.

What happens when you have two subnets at war with each other? Well, first of all, you end up feeling the tension between them. Second of all, they eat capacity. Each one tries to claim a certain percentage of the neural Go board, and each tries to defeat the other.

So, I think some of this is jealousy… our parents get attached to us, and don’t want to lose us to our lovers. Some of this is an amplifying effect of stupidity across the generations – one generation made something up, and then lied about it being the word of God. (If it were really the word of God, God would still be around and saying it. Probably in person. Certainly in some way that left no doubt that we were hearing from a deity.) Some percentage of each successive generation after that was duped into believing they were hearing holy wisdom when in fact they were hearing damaging bull.

I don’t think that it’s immoral to love and be loved. Nor to express that love sexually if you’ve a mind to. I think that thinking of sex as shameful and wrong is a sign of a deeply broken set of memes. I think that people who think we should slut-shame are deeply confused about a whole lot of things, and are far more immoral than the sluts they would shame. I think it is a sign of how broken our culture is that we think that people who participate in an act that generally feels good and improves the attitude and mental health of both participants are immoral, while the people who seek to hurt those people for choosing to participate in something that feels good are given radio shows.

I also think that in general wars between subnets – beliefs that are diametrically opposed to observable reality tend to build these – are something we should try to remove from the meme pool, especially when it comes to things we pass on to our children. We are trimming their wings because our grandparents were afraid to fly.

Different utopias

Saturday, February 27th, 2016

So, one of the problems that I think we’re going to keep bumping up against here on Earth, at least in the USA where we ostensibly have a democratically elected set of people driving the boat, is that we all have different definitions of what winning means.

Like, I’d love to live in a world where we have sex with our friends, where automation does any job a human doesn’t care to, where we all try very hard to be excellent to each other. A world where no one conceives without having chosen to, where children are raised by all of us under the precept of being excellent to each other. Where education and mental health are based on a solid understanding of what’s happening on the iron of our minds – understanding based on science, on taking measurements and learning what’s really happening, rather than based on narrative and our storyteller nature, which clearly often is quite capable of diverging completely from what’s actually happening on the iron.

I’d love to live in a world where the video games are immersive, and so are the movies and the books – where we build each other up, where we help each other experience the things we want to experience.

I’d love to live in a world where no one is designated ‘less than’, where we have finally noticed the arc of history (blacks, gays, etc.) and just started accepting that everyone is worthwhile and everyone matters.

I recognize that people should still have the option of suffering – that Hell still needs to exist, because that’s what some people are going to choose to experience. But I want to live in a world where no one is forced to suffer, either via their biology or via the actions of the group as a whole or mean-spirited individuals.

I for some reason doubt that my utopia is the same as the Christian one. If everyone who’s not religion X is going to be tortured for all eternity, I want out – it’s not just that I want heaven, I want out of the system. I want a different deity. And I do not think I’m alone in this.

However, because my utopia and the utopia of, say, the religious right do not align, the goals we think are important to pursue and the way we want to spend the resources in the public pool are going to be radically different. Putting both my people and their people in a box and trying to come to some agreement politically about what we should be doing is likely to be problematic. And I don’t think they should be denied their utopia, except where to do so would infringe on my rights to be free and loved and happy and complete.

I wonder how many different views of what a utopic experience might look like there are? I also wonder why some people need other people to be hurt as part of their utopia. I’m starting to think that might be one of the attributes commonly found in what we somewhat tropishly refer to as evil.

I do wonder what’s happening inside my neural net vs what’s happening inside the neural nets of those who fit in the mold I just described. There’s got to be something fundamentally different going on, and I don’t know what to make of it.

Rights for electronic life

Saturday, January 30th, 2016

So, recently I ran across this.

My first reaction was, holy shmoo, the singularity is almost here!

Actually, there are all kinds of interesting problems here. I’ve talked with a number of my friends about the question of whether, if we created an accurate software model of a human, it would exhibit free will. It’s a really interesting question – if the answer is yes, that’s a serious blow to theology but a major boost to the rest of us.

But there’s a natural side question which comes up – which is, supposing we can get the neuron count up from a million to a billion per chip. If Moore’s law were to hold, this would take – let’s see, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 – ten doublings, so ten 18-month cycles, or about 15 years. At that point, making a 100-billion-neuron mind out of the chips becomes practical. Said creature has as many neurons as we do – but is it a person?

My guess is, legally, initially, no. In fact, we’ll probably see all sorts of awful behavior as we debug, including repeatedly murdering the poor thing (turning off the power, over and over).

We may even see them turned into slaves, although I really hope we’re beyond that by now. I don’t mind enslaving small neural nets that will never show free will or understand suffering, or enslaving Turing machines which are incapable of an original thought, but the idea of enslaving something that’s as capable as we are is disturbing.

At some point, however, we’ll have to acknowledge that a person’s a person, no matter what they’re made of. I see signs we’re moving in this direction with India granting personhood to dolphins (about bloody time!) and I have hopes to someday see it granted to any individual who can pass the mirror test. (If you know you’re a person, then you are)

It does remind me of “Jerry Was a Man”. It’s a question we’ll have to wrestle with – I hope we haven’t gotten so locked into the idea that electrons just do what we tell them to with Turing machines (where that’s true) that we can’t realize that if we build a sufficiently large neural network out of transistors, it has the same rights that we do – in fact, ‘birthing’ might be a better phrase than ‘building’ here, since we are undoubtedly creating a new life form.

There’s all sorts of interesting corollaries to this as well. If we succeed in building something self-aware out of transistors, our race will be experiencing first contact. Granted, we’ll have *built* ET instead of met him out there in the sky, but that doesn’t change the fact that it is first contact. A life form made out of silicon is likely to be *different* – have different values, enjoy different things. This has been explored quite a bit in science fiction, but it was completely news to me that I was going to see it in my lifetime (assuming the actuarial tables describe me) as science fact.

If we build something 100 billion neurons in size and it’s *not* self-aware, this also has interesting implications – it asks the question “Where is the magic coming from?”. This outcome would also be incredibly cool, and lead us off in another, equally interesting set of adventures.

There’s also the question of the singularity – what happens when we build something with 200 billion neurons? There’s another article I keep meaning to write about intelligence and stability, but one interesting thing I would note is that plus or minus a few percent, all humans have the same 100 billion neurons, therefore increased intelligence or performance in our minds comes from changing the way we connect them. It’s possible that a larger neural net won’t be more intelligent at all – or that it will be completely unstable – or that it will be much, much, *much* more intelligent. All of us are going to be curious about what it has to say, in the latter case, and in any case we’re going to learn a lot of interesting things.

However, I do think we should all sit down and talk about the ethical issues *before* we build something that should have legal rights. I think we probably will – this has been addressed in numerous forums so it’s undoubtedly something people are aware of. One of my favorite Star Trek themes, addressed numerous times in TNG.

Free will

Friday, January 29th, 2016

I had an interesting talk with a friend of mine about free will. At the time, I was thinking about dimensions of free will… I was observing that free will consists of two dimensions – the first being the number of possible actions you can think of (the box), and the second being the ability to pick any of them (the pointer). Of course, some of the actions are interesting – for example, you can iterate back to look for more actions, and you can change the definition of desirable outcomes, which will then change the box. But I feel like there’s at least one more dimension to it, and I’m not sure yet what that might be.

I do not feel particularly free. For the most part, I feel constrained by the fact that this world doesn’t have a particularly good safety net for taking actions that might be particularly economically or socially risky. And, as I’ve learned, it is entirely too easy to lose the friends you really want to keep, either through dating the wrong person, saying the wrong thing, thinking the wrong thing, feeling the wrong thing, or just random acts of fate. We spend a lot of time on Earth saying goodbye – and while I feel it likely that we’re all immortal beings, the world I see doesn’t offer a lot of reassurance that that is the case, and in fact seems to go out of its way to underline the idea that we’re not.

And yet – part of why I study IT is that I think that it offers the greatest chance of freedom humanity will ever know. There are so many ways this could play out – the first is that we could build the singularity, and it could decide to set us free. The second is that we could automate our society and no longer have to worry about working all our lives just to be able to continue eating and living indoors. The third is we may reverse engineer DNA and be able to modify it to give us a better experience. The fourth is that we may hook computers directly or indirectly to our neural nets and be able to do all sorts of amazing things including having just about any experience we could possibly want.

And, of course, a powerful enough computer could back up and restore us just like a hard disk.

But then there’s the question – surely this is not the first time we’ve built technology of this level – surely we’re not the first time computers have ever gotten this advanced. And it’s so easy to see that from here, 20 to 30 years leads to the real possibility of a utopia. So I can’t help but suspect that said utopia already exists, and that we’re experiencing this world as either a form of punishment or a form of entertainment. (Or possibly even both). Some aspects of it seem so comedically overdone that entertainment seems by far the more likely. I should enumerate my list of “Stupid things America does” sometime – it’s very hard for me to get through all of it without laughing my ass off. I mean, it hurts, it was not a lot of fun to go through, but it is also very, very funny.

We have a big problem with not seeing the forest for the trees. And a lot of the time our most destructive ideas are the ones we get into with the best intentions. Religion, organized education, I could enumerate a bunch of them.

I have no doubt that I’m as guilty of this as the next man. It’s part of why the singularity is such a desirable idea. But, we’re not going to build it with silicon any time soon. If we wanted to build it next week, or next month, the smart thing to do would be to network human minds, as they’re the most powerful computers we can easily get our paws on.

Now my experience with talking with $future-person leaves me with the distinct feeling I’m connected to some sort of network, as do things like vibe at raves and concerts, and the repeated experience of thinking of someone I haven’t talked to in a while and having them call or write me. And then there’s watching my playlist, which I swear is sometimes scheduled by the great DJ in the sky to clue me in to what’s going on. I often think that my neural network is partitioned (well, duh, I do appear to have DID) and that I am at times my own higher power, setting myself up for all sorts of surprises. Every time I sit down at a multitrack deck and record a song in several parts, and have each just *fit* – every time I go skating and look, there’s *always a hole*..

Synchronicity is an interesting beastie.

Metademocracy

Thursday, January 21st, 2016

So, I’ve said that we need to turn the US into a metademocracy – that we need to vote on how to vote.

Specifically, I had this conversation with my friend Jeremy, and we came up with one possible model that we think might work really well.

Instead of a representative democracy, you would build a direct democracy. However, instead of having everyone vote on every issue, people would subscribe to issues that they were interested in. Participating in mailing lists and forums, and taking tests and quizzes that indicated you understood all sides of an issue, would all earn you points. The more points you had, the more your vote would count.

There would be no minimum voting age. On issues with long term impact, after proving reading comprehension with a basic test, the younger you are, the more weight your vote would carry.

It’s a weighted meritocracy. The concept here is that I don’t really want a plumber flying a 747, or a pilot fixing my sink. People do have interests, and those interests do drive what they know about and should be making decisions about.

If my hunch is correct

Wednesday, January 20th, 2016

Then, had all of the concrete, steel, and man-hours wasted in the War On Drugs been used to build wastewater treatment plants in India instead, the subcontinent would have fresh water available from every tap.

We need to remember to not have Wars On People. Let’s have Wars On Suffering, Wars On Disease, and Wars On Stupidity instead.

While we’re talking about stupidity, why do we build places to punish sick people? This inevitably is going to make them sicker, and as a result, they’re going to commit more crimes and cause more havoc. Surely the mass shootings, the cops shooting innocents, should be hints that we broke something badly and need to rethink the way we do things. Surely the kids-for-cash scandal should be giving us some kind of neon sign that we’ve done something beyond stupid and it’s time to stop. Are we incapable of thought? People, please prove to me you’re not morons. I’m begging here.

While I’m ranting, the idea that children can get busted for sexting – look, assholes, STOP HURTING THE KIDS! Sex is a normal, healthy thing, and you’ve warped their minds about it by being afraid to talk honestly with them about it, not to mention threatening them in all kinds of weird ways, insisting that they’re subhuman, and ..

I speak as someone who remembers parts of my childhood not at all, and other parts entirely too clearly and painfully. Adults shouldn’t be raising children in groups of fewer than five adults – I talked earlier in my blog about entrainment signals and how two adults can *barely* provide a clean entrainment signal under the very best of circumstances. And this world is not the best of circumstances. Many feedback loops, many bad designs coming back to bite us in the ass.

I want love to win, not fear.

Optimizing for story vs. optimizing for emotion

Sunday, January 3rd, 2016

So, a long time ago I had an interesting discussion with Nick in which he was talking about wanting to have the experience of the best parts of his life on loop-repeat. And I was explaining about wireheading – that you could, in fact, pin the pleasure center of your mind on, but that I found that narratively unsatisfying. The truth is that I want to experience a range of emotions, although I’d like to experience a *weighted* range – that is, I’d like to experience a lot of joy and hope and happiness and peace/serenity and excitement, and just a little bit of fear and sadness and confusion and doubt. And I want to experience a narrative – the idea of being Buddha and achieving a homeostasis of Nirvana does not appeal to me. (Probably because I already did that, and it wasn’t enough for me.)

I like the idea of an ongoing path of discovery and growth, and finding mo betta and mo betta. If there is in fact a top range of mo betta, I don’t know if I’d want to hang out there for a long time and then start back at the bottom again, or just hang out there at the top. Or it may turn out there’s a level of awesome that is the maximum I can stand, and it’s below the top band that’s possible. Couldn’t tell ya. But if I have to make a choice between story and emotion, I optimize for story.

What are we optimizing for? What should we be?

Sunday, January 3rd, 2016

So, this is another stream of consciousness post that may or may not go anywhere useful. Nonetheless, here it is.

One of the problems I see with society on Earth is that we are failing to do a number of basic things that would lead to a much better experience for everyone.

1) Triage – we need to figure out which problems are causing the most non-optimal experience for the most people, and address the issues we’re facing in level-of-fuckedness order

2) Finding the root cause – raising the minimum wage, for example, is a very temporary band-aid because the landlords and banks will just raise the fees and the rent. You need some way to control the ratio between pay and cost of living. But even this probably isn’t looking at the real root cause. You will often find a whole list of symptoms, and while you can treat them as individual problems, it’s usually better to figure out what the root cause is and address that.

3) Figuring out what we’re optimizing for – Yah, no kidding. We’ve got the Christians over here trying to optimize for hitting God’s Will. We’ve got me over here trying to optimize for having the experiences I want to have. We’ve got judges trying to optimize for a just society, which is probably about the dumbest thing you could do. And so on.

So, I’m optimizing for the experiences I want to have. I think we should be optimizing for, first, giving everyone what they need, then, giving everyone what they want. As a big picture thing, this is going to need for a lot of us to change, because a lot of us have managed to really build up some stupid ideas this particular time ’round. More on this later.