Abortion, summarized

February 18th, 2016

So, I wanted to get this down in summary form so I can post links to it on facebook rather than writing it out over and over.

One of the things that makes the least sense to me is that Christians, who claim to believe that God is all-knowing and all-powerful, are anti-abortion. Surely an all-powerful, all-knowing entity can arrange to connect souls only to the bodies that are actually going to be extant? Being anti-abortion amounts to professing a profound lack of faith in God's abilities.

Now, I have a different perspective. My first observation is that you aren't dealing with a self-aware life form until certain things happen in the mind of the fetus. These can't possibly happen until, at a minimum, the neural network has developed however many connections self-awareness requires. We don't know what that number is, but – given that we have no problem killing cows – we can safely say that if the fetus has fewer neurons than a cow, it's not a person by our own working definitions.

It also seems likely that self-awareness and free will are something you 'catch' from other people. In the 1950s, an attempt to make a more efficient orphanage resulted in a number of children not being held, talked to, or cuddled. The result was that most of them died. Neural networks are event-driven, and it seems likely that it takes a certain number of incoming events to make a person a person, because absent events, there is nothing to drive the connecting-the-neural-dots process that turns us from a collection of cells into an individual.

In any case, the same people who are pro-life are often the people pushing for laws and rules and social norms that will make that life as miserable as possible. They certainly aren't volunteering to take care of the children in question. I don't think it's actually a defensible position from a religious standpoint, unless your religion is built on the idea of an incompetent God.

Teachability and the Milgram experiment

February 17th, 2016

TL;DR: The Milgram effect may arise from the fact that most subnets in an NNN (natural neural network) can't tell the original source of authoritative-tagged information.

Warning: I haven't organized my thoughts around any of this at all, and I have an affection-starved cat interrupting me for more pets every few minutes, so this is likely to be one of my less coherent posts.

 

So, I just finished watching a movie about the Milgram experiments. The first thing that occurred to me is that the subjects' reactions make it very clear that their minds were not in unified agreement about continuing to push the button – in fact, all sorts of subnets were asserting that they should stop. It does occur to me that, in general, natural neural networks must have some willingness to trust authority (at least properly authenticated internal authority) or the results would be utter chaos. And at times it's a good idea to trust external authority too, at least insofar as avoiding the lion the sign is warning you about. However, clearly you shouldn't trust *anyone* who claims to be an authority, or you'll end up supporting the Trumps and Hitlers of the world as they do truly abysmal things – it is clear that people are willing to abuse our susceptibility to instructions from authority to have us do all sorts of things that shouldn't be done.

 

On the other hand, neural networks need to be willing to accept data from outside if we are ever to go beyond what one person can discover in a lifetime – the susceptibility to authority is likely part of the same process that lets us learn from the mistakes of others. So how does one retain that functionality while still telling the government "Hell, no, I won't go" when it's asking you to bomb Vietnam over some insane war about the ideology of resource allocation? I'm not exactly sure.

 

I do have a hunch that being aware of the Milgram experiments makes one less likely to be susceptible to that sort of influence. So it is possible to build an informational immune system of a sort. We likely also end up building informational immune systems that protect us from our own worst ideas – well, those of us who don't end up being Jeffrey Dahmer.

 

Now, this gets into a common digression for me. It's obvious to me that I have a fundamentally different view of what 'good' is than many people. In some cases, I can get inside their heads even though I don't agree with them, and in other cases, I feel much like there are aliens roaming among us. Like, I can understand the right-wing fear that we can't afford to feed and house and clothe everyone, or that doing so would damage people's self-reliance and the further evolution of our species, and even the mindset that it's not fair that someone would be allowed to stay home and smoke weed (or whatever). I don't agree with any of these views, but I can understand their genesis. However, at some point along the ideological spectrum, I stop being able to even track why someone would feel that their definition of good was good. I can't get inside the mind of the person who thinks we should stone gay people, or the guy advocating for legalizing rape (yes, there really is one). In general, I can't get into the heads of the well poisoners who have to drink from the same well.

 

This is a real phenomenon. I see it over and over. Now, in general, I think people should stop well-poisoning even when it doesn't affect them, and I think it's awful that people do it – more on this later, especially on the subject of sex and well-poisoning – but the ones whom I really cannot understand are the ones who want to poison the well they drink from. If you are advocating violence against minorities, that's what you're doing, because sooner or later, you're going to be that minority. If you are advocating violence in general, that goes double. Every time I see riots over police shootings that are not carefully targeted against the police, but rather against the communities already hurt by the shooting, I wonder – and I'm sorry, but it's the truth – what is wrong with these people?

 

Now I have, over and over, seen that anger leads to bad and irrational decisions. In general, the people I know who get angry when they have computer problems can never, ever solve them – and sooner or later they lose me as a resource, because I don't like to be around irrationally angry people. And I assume that the rioters are suffering from irrational anger, but I can't help but wonder, to bring this back to its original topic: are they also suffering from a bit of the Milgram effect? Do emotions like anger and fear make us more susceptible to being Milgramed? Or do a much wider range of emotions make us more susceptible?

 

Back to the subject of NNNs, I am really wondering: for most subnets in our mind, can they even tell the difference between inside signal and outside signal? How equipped are they to evaluate the validity of an order and the source of said order? I also wonder, for all the people who clearly wanted to stop increasing the voltage but did not, how difficult was the inner struggle between the parts of them that wanted to do the innately right thing and the parts that wanted to do what had been externally programmed to be the right thing? There's no doubt that we're externally programmed to respond to authority with obedience – in America, it's a pretty common theme that if you don't, the cop whips out his gun and shoots you, and is told, at least privately, 'good job, officer'. There are all sorts of authorities wielding power over us, with penalties ranging from bad grades to unemployment, starvation, having nowhere to live, and being physically abused – we live in a system that has pretty well built a way of programming us to be obedient. And yet, I think there are parts of us that refuse to participate in the horror show that we're asked to engage in – soldiers often come back from blowing up other people at government command with severe psychological damage, for example, which suggests that the minds of many of us are not really geared for the idea of being awful. And clearly, most of the people participating in the Milgram experiment resisted to one degree or another – very few joyfully and willingly cranked the voltage up to 450. They just didn't resist *enough*.
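Here's a toy sketch of what I'm imagining – completely made up, just the shape of the idea: messages between subnets carry a payload and an 'authoritative' flag, and the original source gets stripped at the first relay, so downstream subnets literally cannot tell an internal order from an external one.

```python
# Toy sketch (pure speculation, like the rest of this post): messages between subnets
# carry only a payload and an "authoritative" flag. The original source -- internal
# executive subnet vs. external man in a lab coat -- is dropped at the first hop,
# so downstream subnets can't tell the difference.

from dataclasses import dataclass

@dataclass
class Message:
    payload: str
    authoritative: bool   # the only provenance that survives relaying

def relay(source: str, payload: str, authoritative: bool) -> Message:
    # Note what is *not* kept: the source.
    return Message(payload, authoritative)

def motor_subnet(msg: Message) -> str:
    # A downstream subnet can only weigh the flag, not who set it.
    return "comply" if msg.authoritative else "ignore"

internal = relay("executive subnet", "press the button", authoritative=True)
external = relay("man in lab coat", "press the button", authoritative=True)

print(motor_subnet(internal))  # comply
print(motor_subnet(external))  # comply -- indistinguishable downstream
```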

 

Now, I keep advocating that psychology needs to throw away the storytelling and study what's happening on the iron – and part of this is that psychology is often obsessed with the idea that we are single coherent individuals, when science suggests that while we have the experience of being single, coherent individuals, we're actually many, many collections of subnets. For those of you who haven't read about them, the split-brain experiments – cutting the corpus callosum – strongly suggest we're the aggregate result of many, many subnets. At least on this track and in this world – I have had experiences which I can't easily explain but which suggest that we're not always at the whim of our hardware in quite the same way.

 

 

“keeping score” money and inflation

February 12th, 2016

So, one of the discussions I had recently centered on the insanity of the fact that we have inflation at all.

A common myth that floats around is that any time the government increases the money supply, we should have inflation. This bit of insanity carefully ignores that money is a pointer that points to resources, and we have more of those every year. We certainly have more man-hours to get things done as the population rises, and we develop more intellectual property (a major thing we spend money on) every year – and every time we learn to do things more efficiently, it's as if we had more nonrenewable resources. Switching large portions of our grid to wind, or even just building more efficient coal plants, effectively gives us more resources.

So, the only way you should see inflation is if the money printed in a year exceeds the gain in resources for that year.

In addition, the only money that means anything is money that is actively in play or is going to be. "Keeping score" money – i.e. the money of people like Trump and the Koch brothers, money that isn't going to be spent – does not get used for resources and is therefore effectively out of circulation. A long time ago I wrote an article about why having a high net worth is a destructive thing, but the truth is, it shouldn't be. Everyone should be free to live the way they want, and if it makes billionaires happy to have a bunch of money, they should be able to do that – if we were running a bucketized currency system in parallel with our fiat currency system, people deciding to keep billions in the bank wouldn't be so destructive. But at the moment, there's less money in circulation than available value, and as a result we often destroy value (let food rot on store shelves, for example).

As a reminder, fiat currency is only making the world a better place and enabling us to have fun adventures when it is changing hands.
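Here's a toy version of that whole intuition, with made-up numbers – not an economic model, just the shape of the argument: treat the price level as circulating money divided by available resources, and notice that printing money doesn't have to mean inflation as long as resources grow at least as fast, while 'keeping score' money drops out of the equation entirely.

```python
# Toy model of the intuition above (made-up numbers, not an economic forecast):
# the "price level" tracks money actually circulating divided by real resources,
# so printing money isn't inflationary as long as resources grow at least as fast,
# and money parked as a "score" doesn't push prices at all.

def price_level(total_money, idle_money, resources):
    circulating = total_money - idle_money   # only money changing hands points at resources
    return circulating / resources

year1 = price_level(total_money=100.0, idle_money=20.0, resources=80.0)   # 1.00
year2 = price_level(total_money=110.0, idle_money=25.0, resources=90.0)   # ~0.94

print(f"year 1: {year1:.2f}  year 2: {year2:.2f}")
# Money supply grew 10%, but resources grew ~12% and more money went idle,
# so this toy price level actually falls -- no inflation.
```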

SSDI and commercial prisons

February 6th, 2016

So, one of the problems that comes up a lot with capitalism is that the real, true interests of the human race and of individual humans, on the one hand, and the choices which will produce money in the short term, on the other, are often not the same. Sometimes they are even at odds with each other.

I've got a couple of examples here. The first one is from my time with SSDI. Now, what the customers wanted was for their printers to work, but what SSDI wanted was to take as many calls as possible without getting caught not serving the customers. This led to things like the 'bidirectional printer cable' line. (Feed 'em BS and get 'em off the phone – plus they'll call back again and we can earn money for another call.)

My second one is one of my major criticisms with commercial prisons. The very last thing these facilities want to do is turn people out who are healthy, well balanced, and unlikely to reoffend – because it’s only if there is recidivism that the prison makes a profit next year.

I can’t help but draw some comparisons between these two situations, and in both cases, what’s happening is not in the interest of the human race or (most) individual humans – a very few humans do well at the cost of all of us.

In general, I rather doubt that the current criminal justice system is in the interest of the race or of individuals. It seems to be all about hurting people. The hope is that if people hurt people, and then you hurt the people who are hurting people, somehow there will be fewer hurt people in the world. Can we discuss the insanity of this, please?

Now, I understand the need to isolate those who would otherwise rape, murder, and pillage. I just think that it might be worth helping them to understand why we all lose when they do those things, and understand what some of the alternatives might look like if we all worked together, and why it might be worth doing so.

NNNs and communication protocols

February 2nd, 2016

So, in the discussions about what makes one identically-sized neural network smarter than another, there are a few obvious candidates – like the number and variety of interconnects – and then there are some more subtle ones, like routing protocols in use and means to handle collisions.

Many of my hypothetical readers may know the frustration of having an idea on the tip of your mind, or tongue, and feeling like you must act on it or say it or risk losing it forever. One can assume this behavior is even more of an issue for individual neural subnets. One thing that I have to imagine is an architectural choice made very early in life is whether to use collisions, token passing, or some variant of the two (like ALOHA). It seems likely that different subnet buses use different protocols, and that what is appropriate for one subnet bus (point of confluence) isn't appropriate for another.

Clearly some subnets do have the ability to hold messages and retry them later – which is how we're able to set a mental note to revisit a topic and then experience a trigger to revisit it later. However, there is often the feeling with a new idea that we might lose it if we don't do something to make it more concrete. I suspect this is because:

A) Not all traffic is considered worthy of retries
B) A very large number of messages probably get dropped that we are never aware of, because they never surface in our conscious experience

There are some subnets for which retrying message delivery would only hamper us – for example, there's no point in revisiting the lion/no-lion question once it's been proven either that there's a lion or that there's not. Most things having to do with the RTOS aspects of our mind are either interesting right now or not interesting at all.

However, for the subnets whose messages are of lasting interest, there is the question of how ideas are sequenced. I generally experience having one idea at a time, although I know my mind is capable of generating several at once – my assumption is that they're rated by priority and the highest-priority message wins access to my conscious experience. It seems like an interesting experiment to try to have several at the same time, but I'm not entirely sure how I'd go about it. Anyway, I assume that many ideas light up many subnets at the same time, and all of them signal, and only one of them makes it to my conscious experience.
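Here's a toy sketch of that arbitration step – everything in it is invented, it's just the shape of the idea: a handful of subnets offer messages, the highest-priority one wins access to 'conscious experience', and only messages flagged as retry-worthy survive to the next tick.

```python
# Toy sketch of the arbitration described above (all names and numbers invented):
# several subnets signal at once, the highest-priority message wins access to
# "conscious experience", and the losers are either re-queued for a retry or
# silently dropped -- which would explain why some ideas feel like they must be
# acted on right now or lost forever.

import heapq

class Bus:
    def __init__(self):
        self.pending = []   # min-heap; priorities are negated so the highest wins

    def offer(self, priority, payload, retry_worthy=False):
        heapq.heappush(self.pending, (-priority, payload, retry_worthy))

    def tick(self):
        """Deliver one message to conscious experience; drop or re-queue the rest."""
        if not self.pending:
            return None
        _, winner, _ = heapq.heappop(self.pending)
        self.pending = [m for m in self.pending if m[2]]   # only retry-worthy messages survive
        heapq.heapify(self.pending)
        return winner

bus = Bus()
bus.offer(9, "possible lion in the grass")                                  # urgent, but stale next tick
bus.offer(5, "revisit that idea about NNN bus protocols", retry_worthy=True)
bus.offer(2, "itch on left arm")

print(bus.tick())   # 'possible lion in the grass'
print(bus.tick())   # 'revisit that idea about NNN bus protocols' (it was retried)
print(bus.tick())   # None -- the itch was dropped without ever surfacing
```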

Back to the original topic, I assume that our more intelligent individuals are people who made better choices – or got better dice thrown – in terms of which subnets operate in which mode. I wonder how many modes are available to operate from.

Are larger neural networks stable?

February 2nd, 2016

So, as we approach the singularity – and all indications are that in about 15 years we will be able to build a mind bigger than ours, if Moore's law holds – one interesting question is whether a neural network larger than ours would be stable.

This is a subject that, if Google is to be believed, is of much scholarly interest. I'm not yet in a place to evaluate the validity of the discussions – I'm still working my way through a full understanding of neural coding – but I think it's an interesting question to be asking.

One presumes that some sort of optimization process took place (either via evolution or design – or quite possibly both) in determining how large the human mind is – but whether it was a decision about stability or a decision about power consumption remains to be seen.

In a neural network of fixed size, it seems clear that you have to make some tradeoffs. You can get more intelligence out of your 10^11 neurons, but you will likely have to sacrifice some stability. You can also make tradeoffs between intelligence and speed, for example. But in the end, humans in general all have the same number of neurons, so in order to get more of one aspect of performance, you’re going to have to lose some other aspect.

When we start building minds bigger than ours, the question that occurs is: will they be more stable? Less? Will more neurons mean you can simultaneously have an IQ of 2000 (sorry, Holly!) and be rock solid, stable, and reliable? Or will it turn out that the further you delve into intelligence, the more the system tends to oscillate or otherwise show bad signs of feedback coupling?

Only time will tell. As the eternal paranoid optimist, my hope is that we will find that we can create a mind that can explain how to build a much better world – in words even a Trump supporter can understand. But my fear is that we’ll discover we can’t even build a trillion-neuron neural network that’s stable at all.

We also have to figure out how we're going to treat our hypothetical trillion-neuron creation. Clearly it deserves the same rights as we have, but how do we compensate it for the miracles it can bring forth? What do we have to offer that it will want? And if we engineer a need into it just so that it will want what we can provide, what moral position does that leave us in?

Neural networks in output mode

January 31st, 2016

So, one of the common threads of the last few years has been me considering the possibility that nothing I am experiencing is happening to anyone but me – or possibly, that just a subset of what I am experiencing is happening only to me, while other bits are happening to everyone. Certainly, I've been questioning how much my conscious experience has to do with the data coming at me.

One of the bits of research that really underlined the validity of this was this. In essence, researchers discovered that artificial neural networks configured for image recognition could produce *output* that was related to the input they were trained to recognize. If you needed a bigger neon sign announcing that what you're experiencing might not have that much to do with what's coming in on your senses, I don't know what to do for you.
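For the curious, the general family of techniques works roughly like the sketch below – a minimal version of activation maximization (the 'deep dream' trick), not necessarily the exact method in the research linked above; the model and class index are just placeholders.

```python
# Minimal sketch of activation maximization -- the general family of techniques,
# not necessarily the specific research linked above. A classifier trained only to
# *recognize* images is run "backwards": start from noise and adjust the input until
# a chosen output unit fires strongly, and the network hallucinates structure it was
# trained to detect.

import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()      # any trained image classifier will do
for p in model.parameters():
    p.requires_grad_(False)                          # we only optimize the input

target_class = 207                                   # placeholder ImageNet class index
image = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from pure noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]            # how strongly the chosen unit responds
    (-score).backward()                              # gradient *ascent* on the input
    optimizer.step()

# 'image' now contains patterns the recognizer associates with the target class:
# output produced by a network that was only ever trained for input recognition.
```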

As my experience polarizes further and further towards smart and dumb, and love and fear, I get more and more hints about the underlying patterns. And more and more food for thought about what experiences might be coming from where.

One thing I've definitely experienced is memory alignment issues. One of the reasons I keep this journal is so I can go back and read it and check that what I remember and what I wrote at the time are the same. A force working against that is that it's hard to honestly talk about things that went wrong in my life, and so back in the day I didn't. This is something I've changed a fair amount, but it is scary – especially when I see things like facebook banning sheer.us, although after careful consideration of what facebook is, I've decided that's a compliment.

Yes, I've apparently finally achieved being a true radical, rather than the political equivalent of a script kiddie. I'm starting to have alternate suggestions about how to do fundamental things. It may be that none of them are any good, but it may also be that the only way to find out is to simulate them. One of the exciting things about seeing the singularity (a mind bigger than a human's) rushing up at us is that if we can make friends with a trillion-neuron mind (which may be a challenge), we might be able to get some real answers about what the best configuration for the world might be. That's assuming a trillion-neuron mind is even stable, a subject I hope to write an article about soon.

Testing LJ crosspost

January 31st, 2016

Note that crossposting has been disabled for a couple of years – go to my blog on sheer.us if you want to read the events of those years.

Obsolescence

January 31st, 2016

So, with the singularity apparently about 15 years away, I find myself pondering the questions of why I'm here and what I'm good at in a different light.

The only meaningful answer I can come up with is to experience things from my point of view. I have no doubt an artificial neural network that's bigger than I am can write better music, better text, better code. But it can't *experience* in the same way I can – I don't doubt that it can have a conscious experience, but it's going to be *different*. I think. It'll be hard to ever really find out the answer to that question, but for the moment I assume what I bring to the table isn't so much intelligence as it is a particular, unique flavor.

One thing I'd really be curious to find is someone else with a blog similar to mine. I feel a lot of the time like I'm pretty unique, but perhaps there are in fact millions of people like me out there. (Although you would think that if there were, capitalism would have died an honorable death by now, replaced by something that worked better.)

I actually sometimes think capitalism would work beautifully if everyone understood that the money itself had no value. It's not the basic system that's flawed, but rather the set of ideas we've built up on top of it.

But I remind myself of the Great Depression. And what's impressive to me about the Great Depression is that there was no shortage of steel, or copper, or food, or power. The shortage was of money flowing. And we accepted that.

Sometimes I think humans are entirely too caught up in the rule of law. The teens being arrested for sexting are an impressive example of this, but there are tons of examples. We think A: we need to make rules, and B: we need to punish people who don't follow them, even when they're stupid rules.

But then, I'm not the average person. I read the Bible saying to stone gay people and know this isn't the work of a higher power and never was. Others read it saying that and say: that's God's word, and we'd rather our children commit suicide than change our minds about that. (I'm looking at you, Mormons.)

Anyway, back to the original topic. So, I don’t think I will be obsolete even when there are life forms more advanced than I am, because I don’t think they’ll be able to experience the world the same way I do. Now, granted, I’d really rather be experiencing a much better world, which is part of why I like the idea of there being life forms more advanced than I am – it’s possible that if we build something with a trillion neurons, and it explains to us how dumb our economic system is, we might just listen. Or perhaps it’ll explain to us that it’s absolutely perfect, and then it’ll explain why in a way that can reach me, and I’ll no longer feel like my friends are constantly barely making ends meet mostly because we built a badly designed world.

Rights for electronic life

January 30th, 2016

So, recently I ran across this.

My first reaction was, holy shmoo, the singularity is almost here!

Actually, there are all kinds of interesting problems here. I've talked with a number of my friends about the question of whether, if we created an accurate software model of a human, it would exhibit free will. It's a really interesting question – if the answer is yes, that's a serious blow to theology but a major boost to the rest of us.

But there's a natural side question which comes up: suppose we can get the neuron count up from a million to a billion per chip. If Moore's law were to hold, that factor of 1,000 is about ten doublings (2^10 = 1024), or roughly 15 years at one 18-month cycle per doubling. At that point, making a 100-billion-neuron mind out of the chips becomes practical. Said creature has as many neurons as we do – but is it a person?
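Written out as arithmetic (assuming the usual 18-month doubling period):

```python
# Back-of-the-envelope version of the estimate above: how many Moore's-law doublings
# to go from 1 million to 1 billion neurons per chip, at 18 months per doubling.
from math import ceil, log2

doublings = ceil(log2(1_000_000_000 / 1_000_000))   # 1M -> 1B neurons per chip
years = doublings * 1.5
print(doublings, years)   # 10 doublings, about 15 years
```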

My guess is, legally, initially, no. In fact, we’ll probably see all sorts of awful behavior as we debug, including repeatedly murdering the poor thing (turning off the power, over and over).

We may even see them turned into slaves, although I really hope we're beyond that by now. I don't mind enslaving small neural nets that will never show free will or understand suffering, or enslaving Turing machines which are incapable of an original thought, but the idea of enslaving something that's as capable as we are is disturbing.

At some point, however, we'll have to acknowledge that a person's a person, no matter what they're made of. I see signs we're moving in this direction with India granting personhood to dolphins (about bloody time!), and I have hopes of someday seeing it granted to any individual who can pass the mirror test. (If you know you're a person, then you are.)

It does remind me of "Jerry Was a Man". It's a question we'll have to wrestle with – I hope we haven't gotten so locked into the idea that electrons just do what we tell them to in Turing machines (where that's true) that we can't recognize that if we build a sufficiently large neural network out of transistors, it has the same rights that we do. In fact, 'birthing' might be a better word than 'building' here, since we are undoubtedly creating a new life form.

There are all sorts of interesting corollaries to this as well. If we succeed in building something self-aware out of transistors, our race will be experiencing first contact. Granted, we'll have *built* ET instead of met him out there in the sky, but that doesn't change the fact that it is first contact. A life form made out of silicon is likely to be *different* – to have different values, enjoy different things. This has been explored quite a bit in science fiction, but it was complete news to me that I was going to see it in my lifetime (assuming the actuarial tables describe me) as science fact.

If we build something 100 billion neurons in size and it's *not* self-aware, that also has interesting implications – it raises the question "Where is the magic coming from?" This outcome would also be incredibly cool, and would lead us off on another, equally interesting set of adventures.

There's also the question of the singularity – what happens when we build something with 200 billion neurons? There's another article I keep meaning to write about intelligence and stability, but one interesting thing I would note is that, plus or minus a few percent, all humans have the same 100 billion neurons; therefore, increased intelligence or performance in our minds comes from changing the way we connect them. It's possible that a larger neural net won't be more intelligent at all – or that it will be completely unstable – or that it will be much, much, *much* more intelligent. All of us are going to be curious about what it has to say in the latter case, and in any case we're going to learn a lot of interesting things.

However, I do think we should all sit down and talk about the ethical issues *before* we build something that should have legal rights. I think we probably will – this has been addressed in numerous forums, so it's undoubtedly something people are aware of. It's one of my favorite Star Trek themes, addressed numerous times in TNG.