Some time ago, Sam and I refined our model and came to the conclusion that mutually recursive trust escalation, a cognitively complex technique requiring at least two people and a covert external representation for maintaining the call stack, is best. Note that this fails if the affection receptor also has a covert (or otherwise) external representation with which to track the recursion. In such cases we appeal to the standard techniques of talking shite, followed by application of I’ll get my coat.
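For the computationally inclined, a minimal sketch of the technique (function names, the trust increments, and the stopping limit are all my inventions, not part of the model): two parties call each other in turn, and the call stack lives in a shared list rather than in anyone's head, which is the covert external representation doing its job.

```python
def my_turn(trust, stack, limit=4):
    """One conspirator raises trust a notch, then hands off to the other."""
    stack.append("me")          # record the call off-brain, in the shared list
    if trust >= limit:          # deep enough: stop escalating and unwind
        return trust
    return their_turn(trust + 1, stack, limit)

def their_turn(trust, stack, limit=4):
    """The other conspirator does the same, mutually recursively."""
    stack.append("them")
    if trust >= limit:
        return trust
    return my_turn(trust + 1, stack, limit)

# Usage: start from zero trust with an empty shared stack.
stack = []
final_trust = my_turn(0, stack)
# stack now alternates speakers: ["me", "them", "me", "them", "me"]
```

The point of the external list is that neither function needs to remember how deep the recursion has gone; the failure mode in the text corresponds to the receptor keeping a stack of their own.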
The situation is analogous, but better, when unknown receptors are chaperoned by friends—a significantly more probable means of successful binding. It is conjectured that such cases are in fact generalisations of the mutually recursive trust escalation conjecture, but this has yet to be supported by empirical evidence or analytical proof.
Law’s Law states that mutually recursive trust escalation fails when invoked by a female if and only if her friends also intend to find a receptor with which to bind and the intersection of binding targets is non-empty. Proof outline: girls are unable to cooperate whilst on the pull. QED.
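Stated as naively as possible (the function name and the set representation are mine), the failure condition is just a set intersection:

```python
def laws_law_fails(her_targets, friends_targets):
    """Law's Law: escalation fails iff some friend shares a binding target.

    her_targets:     set of her intended binding targets
    friends_targets: list of sets, one per friend
    """
    # Non-empty intersection with the union of the friends' targets => failure.
    return bool(her_targets & set().union(*friends_targets))

# Disjoint targets: escalation may proceed.
laws_law_fails({"A"}, [{"B"}, {"C"}])        # False
# Shared target: Law's Law bites.
laws_law_fails({"A", "B"}, [{"C"}, {"B"}])   # True
```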
I reckon eventually we’ll reach the stage where what our conscious awareness can deal with just isn’t good enough to make sense of the phenomena that the devices we create can detect—a pretty graph just won’t do. Around the same time we’ll finally accept that restricting our notion of self to the bit that is aware and experiencing is terribly limited. So the obvious next step: realise that we evolved in an environment, we’re adapted and continue to adapt to that environment, that the whole point of science is just to continue that adaptation. Therefore eventually scientists will invent devices, the output of which can’t be experienced—all they can do is change a person’s behaviour to make them more (or less) adapted to the environment.
Journals and conferences will become more interesting, since graphs and equations won’t be about the measured phenomena. They’ll be about how people’s behaviour is changed once they’re plugged into a machine for detecting the phenomena.
So in a sense the output of these new and exciting devices that don’t yet exist will be experienced, but only in terms of, for example, better avoidance of buses when out and about on one’s bike, and more sex.
I’d like to write a book where the phenomenon is this: there’s some kind of Bayesian bubbly aura around all particles. The world actually is a physical belief net and does the computations for us—all we have to do is build a device that reads off the results.
“It’s amusing that our model of ourselves is that of an impenetrable machine we somehow need to decode and predict—then and only then can we make the right decisions in order to be happy. We set up miniature experiments and carefully monitor our responses and how others react to us to see if we should repeat or continue the experience. Frantically moving from one friend, lover, job, university, project, political cause, to the next, each briefly improving the situation and giving us the status and self-importance we need to get out of bed in the morning. Worrying about the global issues, reading the news religiously every day so we’re informed individuals and can ramble on for hours about the pains of people in the world we’ll never meet. Ignoring people we could share happiness with or—worse—learning methods of manipulation so we can influence those closest (proximal) to us, the satisfaction of a person moulded feeding back into our personal status machine. Eye contact, use first name, soft tone, develop a rapport but not for too long lest honesty and humility creep in. Helping and diplomacy rather than sharing and empathy.”