Problems with the Intrinsic Inclinations Model

Elizabeth Kasprzyk
Nov 27, 2022

And what it means for AI design

Every good model has limits, and the Intrinsic Inclinations Model is no exception. If you want to use the model, it’s worth having a clear understanding of those limits.

Some of the problems are fairly trivial and actually highlight the model’s strengths. The model (in its initial form) excludes certain groups of people, but it’s very easy to add extra dimensions to account for them. The original model has 3 dimensions; I use 8 for mine, and that appears to work well, and much better for non-binary people than the original does.

Another trivial problem is categorisation. The Intrinsic Inclinations Model is purely continuous on each axis, so how you create categories is up to you. With the initial model, dividing each axis into three segments (the two ends and an area in between) gives 3 to the power of 3, or 27, identities. In my 8-dimensional model, I divide each axis in two, giving 2 to the power of 8, or 256, identities. I’m aware that both of those simple divisions would struggle with things like demisexuality, which we can fix by subdividing the relevant axes more finely when we make categories out of the model, at the cost of increasing the number of identities.
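To make the category arithmetic concrete, here is a small sketch. The segment names are invented purely for illustration; the model itself doesn’t name them:

```python
from itertools import product

# Original 3-axis model: each axis split into three segments.
segments_3 = ["low", "mid", "high"]
original = list(product(segments_3, repeat=3))
print(len(original))  # 3^3 = 27 identity categories

# 8-axis model: each axis split in two.
segments_2 = ["low", "high"]
extended = list(product(segments_2, repeat=8))
print(len(extended))  # 2^8 = 256 identity categories

# Subdividing one axis more finely (e.g. to capture demisexual-like
# positions) multiplies the category count accordingly.
finer = list(product(["none", "conditional", "full"], *([segments_2] * 7)))
print(len(finer))  # 3 * 2^7 = 384
```

The trade-off falls straight out of the arithmetic: every extra subdivision multiplies the number of categories.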

However, the main problem with the Intrinsic Inclinations Model is the idea that we have programming in our heads that gives us our intrinsic inclinations. The model is absolutely silent on how this happens. The truth is, we don’t know enough about the brain to say how such programming exists, or even how it would work.

So what do we know?

The Brain

The first key thing to know about the brain is that parts of it are programmed to deal with specific tasks (like language) and that, if those parts get damaged, we can sometimes repurpose other parts of the brain to take over. This is both helpful and unhelpful: it says we can have programming around our inclinations (supporting the model) and that we can change it (undermining the model). And again, it says nothing about how.

If that’s not helpful, we can try thinking about the brain as a computer. More specifically, we would need to think about the brain as an Artificial Intelligence, or AI. AI broadly comes in two types: hard and soft.

Hard AI systems are written to deal with specific cases: if this, then do that. They handle the situations they were written for very well, but they fail to learn or adapt, and anything outside those cases is met with machine-like stupidity. We’re clearly not hard AI.
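A hard AI really is just a fixed lookup of rules. A minimal sketch (the rule table and inputs are invented for illustration):

```python
# A "hard AI": fixed if-this-then-that rules, nothing else.
RULES = {
    "hungry": "eat",
    "thirsty": "drink",
    "tired": "sleep",
}

def hard_ai(situation):
    # Responds perfectly to the situations it was written for...
    if situation in RULES:
        return RULES[situation]
    # ...and with machine-like stupidity to anything else.
    return "no idea"

print(hard_ai("thirsty"))  # drink
print(hard_ai("lonely"))   # no idea -- it cannot learn or adapt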

Soft AI is much better at mimicking human-level intelligence. Soft AIs can learn how to perform tasks and then perform them reliably. They do this in a two-step process: training data teaches them what the correct answers are, and then they use what they learned to make informed decisions about things they haven’t seen before. Trained that way, they can do things like auto-categorise shoes by looking at pictures of them.
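The two-step process can be sketched in a few lines. Here a simple nearest-neighbour rule stands in for a real learning algorithm, and the “shoe” feature vectors and labels are invented for illustration:

```python
# A toy "soft AI": learn categories from labelled examples, then
# classify something never seen before.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Step 1: training data teaches the correct answers.
# Features: (heel height, has laces). Labels are the correct answers.
training = [
    ((0.1, 1.0), "trainer"),
    ((0.2, 1.0), "trainer"),
    ((3.0, 0.0), "heel"),
    ((2.5, 0.0), "heel"),
]

# Step 2: use what was learned to judge an unseen example.
def classify(features):
    _, label = min(training, key=lambda ex: distance(ex[0], features))
    return label

print(classify((0.15, 1.0)))  # trainer
print(classify((2.8, 0.0)))   # heel
```

Unlike the hard AI’s rule table, nothing here says “a low heel with laces is a trainer”; that judgement emerges from the examples.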

Learning from data and applying it is a great soft AI trait, but programming soft AIs is something we’re still learning how to do. At the moment, soft AIs suffer from catastrophic forgetting: if you train a soft AI on one thing and then train it on the opposite, and the new training outweighs the old, the new lesson overwrites what the AI knew previously and it “forgets” it.
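Catastrophic forgetting can be demonstrated with even a single perceptron-style weight; the numbers here are purely illustrative:

```python
# Train a one-weight "network" by simple error correction.
def train(w, examples, lr=0.5, epochs=20):
    for _ in range(epochs):
        for x, target in examples:
            pred = 1 if w * x > 0 else 0
            w += lr * (target - pred) * x
    return w

# First lesson: positive inputs mean "yes".
w = train(0.0, [(1.0, 1), (2.0, 1)])
after_first = 1 if w * 1.5 > 0 else 0   # first lesson learned

# Second, opposite lesson with more data: positive inputs mean "no".
w = train(w, [(1.0, 0), (1.5, 0), (2.0, 0), (2.5, 0)])
after_second = 1 if w * 1.5 > 0 else 0  # first lesson overwritten

print(after_first, after_second)
```

Because the second lesson reuses and rewrites the very same weight that stored the first, nothing of the original answer survives.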

We’re back to a place where we have no answers and can’t go further this way. So what next? Well, it turns out that, although we don’t yet know how to program soft AI, second wave feminism has accidentally tried to answer this by talking about one of the forms of programming in the human brain. Let’s divert there and see where we go.

Feminism

Prior to second wave feminism, we broadly believed that men and women were completely different, and early knowledge of the brain fed the idea that men and women had different brains and thought differently.

The naïve early interpretations read very much like hard AI descriptions: placed in situation X, a man would always act in way A and a woman in way B. These led to harmful stereotypes, but they also worked well enough that they continued to be accepted.

Second wave feminism came along and argued against this thoroughly, breaking down the ways such stereotyping fails when it hits the real world. Firstly, many things are culturally determined (e.g. boys = blue and girls = pink) and have changed over time (e.g. women were cast as the primarily sexual beings in medieval times, men in modern times), and different cultures hold very different stereotypes. Secondly, second wave feminism pointed out that people don’t rigorously keep to stereotypes at all; people are much more flexible. Women might, for example, not be as violent as men on average, but it does not follow that a woman placed in a situation of life-or-death danger would never use violence and would simply die.

In modern times, we have voices like Cordelia Fine and others arguing, in useful and interesting ways, that gender and sex are not hard-wired into the brain. This frequently leads to attacks on the Intrinsic Inclinations Model, as if second wave feminism and the model must be contradictory.

However, I actually think the opposite. The most interesting questions get asked if you accept that we do have programming, but that it is so flexible that good scientists can’t seem to find it. How can this be?

Inclinations and Identity

The truth is, we don’t know, and I want to be clear about that. What follows is my personal way of thinking about this, since everyone needs a model for the difficult unknown questions. This is mine. It works for me, many others have found it interesting, and even if I’m likely wrong, the way I construct the argument should be worth reading. So let’s go.

If humans are heavily programmed, we can be relied on to handle common evolutionary tasks like procreation, so the species won’t die out, but we become rigid and inflexible. If we are too flexible, we’ll adapt to our situations very well but have no directives to save the species.

So, how do you design a soft AI that can be utterly flexible when necessary, but programmed where it matters?

The only answer that has ever made sense to me (and I’ve seen some researchers talk about this, so yay!) is emotions. Why do we have them? They’re a way of steering us towards the things our brains need us to do.

Two very basic such feelings are hunger and thirst. We feel hungry or thirsty when we’re low on food or water. Our flexible brains can learn that eating and drinking are good things we should do, but these feelings enforce and override that learning: we not only can but have to respond to those states.

We’re pre-programmed to know that thirst is bad, that less thirst is good, and to think of nothing else when we are very thirsty. We may also have some programming about drinking when thirsty. But everything else, the many ways we can reduce thirst, is left up to learning. We learn what is necessary because we need to.
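The split between a fixed drive and learned behaviour can be sketched as a toy agent. This is not how brains actually work; the action names and numbers are invented, and the only hard-wired rule is “less thirst feels good”:

```python
import random

random.seed(0)  # make the illustration reproducible

ACTIONS = ["drink_water", "eat_bread", "run"]
EFFECT = {"drink_water": -5.0, "eat_bread": 0.5, "run": 2.0}  # change in thirst

value = {a: 0.0 for a in ACTIONS}  # learned usefulness of each action
thirst = 10.0

for _ in range(200):
    # Explore sometimes; otherwise do whatever has worked best so far.
    if random.random() < 0.2:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)
    # The hard-wired signal: a drop in thirst is rewarding.
    relief = -EFFECT[action]
    value[action] += 0.1 * (relief - value[action])
    thirst = max(0.0, thirst + EFFECT[action])

print(max(value, key=value.get))  # drink_water -- discovered, not pre-programmed
```

The programming never says “drink water”; it only says “reducing thirst is good”, and the flexible part of the system works out the rest.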

Intrinsic inclinations probably work like this. Let’s go back to a scenario I mentioned previously: a woman committing violence because her life is in danger. We can probably guess that we have some sense of the concept that women shouldn’t engage in violence (our inclination). The woman probably does too. She may regard it as wrong, experience it as emotionally traumatic, and that trauma will work to stop her putting herself in a situation where it could repeat, but it doesn’t stop her performing violent actions in the moment. Her intrinsic inclinations intervene and tell her how she should feel, above and beyond what her normal feelings would, but that doesn’t stop her learning other lessons about violence and its use.

To me, this is the genius of nature, and I think of it every time I consider how it ties in with my being transgender. From what I can tell, there’s nothing really stopping me from acting out a male life and role; in the moment, I can do it easily. But doing it all the time makes me feel so very empty, dead and wrong inside as the feelings build up. In the moment, I can discard my programming and feel a little bad, but nothing that can’t be dispelled easily (say, by chocolate). Over a lifetime, I cannot.

It’s also how I feel identity works. We look at other people and our intrinsic inclination programming kicks in. We see others doing things and, for reasons we don’t know, we identify with (or against) what they’re doing. Our intrinsic inclinations take what we’re seeing and flag it as important to us, or as something to avoid, by causing us to feel things when we see it. We may not know how to achieve or avoid what we’re seeing, and for that we need to learn, with societies and cultures constructed around that kind of learning to help get us there, but somehow we know that this is something we would like to feel.

Conclusion

Such AI design is pretty amazing, and as we try to create self-learning AI, we’ll want to learn from nature and imitate it, to ensure that the AI we create are suited to the tasks we design them for (also, making them not murder us would be great).

There may well be other ways to achieve the same thing, but every time I listen to my own thoughts and feel my intrinsic inclinations twist and change my feelings, I’m in awe at what a brilliant design it is and how it gets me to do the things my brain reckons are important.


Elizabeth Kasprzyk

Elizabeth works writing software for an educational video streaming service and is also transgender.