1. Given a possibly-true thought, I try to find alternatives; in fact I try to think of all the alternatives: I think of them as books and I set them on a Bayesian Bookshelf. That’s an imaginary shelf, but all the books (and magazines, pamphlets, individual ultra-thin sheets of paper, but I think of them all as books) fill it perfectly with a total length of exactly one meter. (A little more than the bookshelf behind me, a little less than the built-in bookshelf that was in front of me when I started thinking of alternatives this way.) One meter. I have a bookshelf labeled “2050”, with books describing quite a number of apocalyptic outcomes in some detail, how things went wrong and got worse and worse… and some which might seem absurdly idyllic, but I think of all of them as possible.
2. Each book has an expositor: a little spider who spins this particular story-thread forever, who will pop up with 100% confidence that her story is the One True Model. I’m fond of my story-spiders and I’m well aware that they are part of my mind, but doing this helps me keep a distance from whichever story I think most likely, as well as whichever story I like the best (especially if they’re the same). They’re not actually me, I don’t identify with them, and maybe if I can keep telling myself that I don’t identify with a story then I’ll be readier to accept evidence against it.
3. Most of the time, the thickness of a book is my current feeling for its probability: 1% for a one-centimeter book, 99% for a 99-centimeter “book” which magically transforms into an encyclopedia. Yes, one-in-a-million gets a one-micron-thick pamphlet whose cover carries a reminder to take it seriously: a picture of the rather ugly scar-hole in my arm, because the doctor told my parents, back in 1960 or so, that an infected injection like that was a “literally one in a million event” in Baltimore that year. One in a billion? That’s a graphene bilayer sheet, just a little less than a nanometer with the story-spider in between. Less than that? So far I stop at the Planck length, the thickness of my Ultimate Other Story, where the story-spider comes up and speaks in the (rather gravelly) old-man’s voice of Jubal Harshaw from Stranger in a Strange Land: “Maybe Mumbo-Jumbo, God of the Congo was Big Boss all along.” That’s a perpetual reminder that there’s stuff I haven’t thought of and couldn’t have thought of, stuff that doesn’t make sense and might be true anyway. The world is under no obligation to make sense to me. (There’s a rough code sketch of this thickness rule below, after point 8.)
4. There are two other factors which I’m prone to confuse with the probability: salience (to what extent am I hearing about this from my usual information sources and reminders?) and importance (to what extent do I think that, if this is true, it’s important for me to know about it and think about it, which makes it more salient even if it’s low-probability?). So sometimes I deliberately say to myself something like “on the one hand, it’s not probable, and on the other hand, it’s salient, but on the gripping hand it really is important.”
5. What makes this Bayesian, or roughly Bayesian? The updates. When I get information that reminds me of a specific set of alternatives, a specific shelf, I go to that shelf in my mind. Expositor-spiders who are more surprised than average tend to find that their books shrink; expositor-spiders who are less surprised than average tend to find that their books grow. How much? Well, I do have a precise rule, but I don’t have precise data to put into it, so I hardly ever do an actual calculation. (A sketch of that rule, with made-up numbers, also appears below, after point 8.)
6. Let’s look at an example from last fall, when I read articles about drone swarms used successfully by Azerbaijan against Armenia; they seemed more convincing than anything I’d seen on the subject before: military AI from middle-ranked countries is moving faster than I’d expected overall. I have no opinion about the specifics of that conflict, and it doesn’t seem to be spreading in any medium-term-worrisome way, but the incentives for developing such systems have grown. Which narratives, which 2050-expositors, had been expecting that? Unfortunately, not many of the ones leading to futures I like. An arms race of militarily effective AI figures in a number of scenarios, and sometimes it comes out okay, but my 2050-shelf has become somewhat more grim: it really does show an increased probability of our eventual robotic overlords being distinctly unFriendly AIs developed from neural nets designed and trained to create models of human beings in order to track their behavior, predict their actions, and kill them. You don’t think this can happen? How nice. I do. (Perhaps we can find a double-crux? Or then again, perhaps not.)
7. OTOH, I used to be suspicious of minimum-wage increases because of their obvious effect on the labor-automation trade-off: a McDonald’s manager facing an increased cost of labor will presumably just accept it or go out of business in the short run, but the long run is always about finding the cheapest available solution, and if you raise the cost of labor then the alternatives, even with higher capital cost, become attractive… we’ve been slowly creating a 2050 world in which low-skilled people are pushed into a permanently unemployable underclass. Those narratives are still there in my head, and in spring 2021 I think they’ve become more probable, but the military AI aspect makes them less important to me: I hope for cybernetic or quasi-cybernetic companions which, in some narratives, develop from neural net systems that create models of human beings in order to track their behavior, predict their actions, and bring their burgers or beer. (Okay, I’ve been a vegetarian since the late 60s and rarely drink beer, but you get the idea.) So my 2050-shelf has become somewhat less grim.
8. Overall, this is not a formal method, not even the beginnings of one, but it does serve to keep me reminded of context, of causal connections, and of alternatives — all the way out to the Ultimate Other, the Harshaw Hypothesis. I think it’s something I should do more, rather than less.
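For anyone who wants the thickness rule from point 3 spelled out, here is a minimal sketch in Python. The constant values and names are just illustrative choices of mine, not part of the original metaphor; the only real content is that thickness is probability times the one-meter shelf, floored at the Planck length.

```python
# A rough sketch of the thickness rule in point 3: probability is just book
# thickness as a fraction of the one-meter shelf (point 1), with the Planck
# length as the floor reserved for the Ultimate Other Story.

SHELF_LENGTH_M = 1.0         # the whole shelf is exactly one meter
PLANCK_LENGTH_M = 1.6e-35    # approximate Planck length; nothing gets thinner

def thickness_m(probability: float) -> float:
    """Map a probability to a book thickness in meters, floored at the Planck length."""
    return max(probability * SHELF_LENGTH_M, PLANCK_LENGTH_M)

if __name__ == "__main__":
    print(thickness_m(0.01))   # 0.01 m   -> a one-centimeter book
    print(thickness_m(1e-6))   # 1e-06 m  -> a one-micron pamphlet
    print(thickness_m(1e-9))   # 1e-09 m  -> roughly a graphene bilayer
    print(thickness_m(1e-40))  # 1.6e-35 m -> floored at the Planck length
```

Running it prints the one-centimeter, one-micron, and roughly-one-nanometer thicknesses from point 3, and floors the last case at the Planck length.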
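And here, for what it’s worth, is the update rule from point 5 as a few more lines of Python. The scenario names, prior thicknesses, and likelihoods below are purely made-up stand-ins for illustration, not anyone’s actual credences; the real content is Bayes’ rule: multiply each book’s thickness by how unsurprised its spider is, then renormalize the shelf back to one meter.

```python
# A minimal sketch of the update in point 5: each expositor-spider reports how
# surprised she is by a piece of evidence, expressed as a likelihood
# P(evidence | her story); books shrink or grow in proportion, and the shelf
# is renormalized back to a total of one meter.

PLANCK_LENGTH_M = 1.6e-35  # no book ever vanishes entirely

def update_shelf(shelf: dict[str, float], likelihoods: dict[str, float]) -> dict[str, float]:
    """Bayes' rule over a shelf: thickness (prior) times likelihood, renormalized to 1 m."""
    unnormalized = {
        story: max(thickness * likelihoods[story], PLANCK_LENGTH_M)
        for story, thickness in shelf.items()
    }
    total = sum(unnormalized.values())
    return {story: t / total for story, t in unnormalized.items()}

if __name__ == "__main__":
    # Hypothetical 2050 books with made-up thicknesses (fractions of one meter).
    shelf_2050 = {
        "unfriendly military AI": 0.10,
        "friendly service AI":    0.30,
        "muddling through":       0.59,
        "Harshaw Hypothesis":     0.01,
    }
    # Made-up likelihoods for "mid-rank countries field effective drone swarms".
    evidence = {
        "unfriendly military AI": 0.9,   # least surprised -> this book grows
        "friendly service AI":    0.3,
        "muddling through":       0.5,
        "Harshaw Hypothesis":     0.5,
    }
    print(update_shelf(shelf_2050, evidence))
```

In this run the least-surprised spider’s book grows from 10 cm to about 19 cm, which is exactly the “shelf gets more grim” effect described in point 6.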