LessWrongness#1 — Utilitarianism

Thoughts on LessWrongness

I want to have a small list of fundamental thoughts about which I’m probably wrong and need to update. Let’s start with 

Utilitarianism 

  Once upon a time I did think of myself as a utilitarian, trying clumsily to judge my actions by their contributions toward the greatest good for the greatest number and all that. Nowadays I think of utilitarianism as the classical mechanics of ethical philosophy: immensely useful (pun unavoidable) but fundamentally wrong, in that it assumes there is a reference frame from which the aggregate good can be evaluated; it leaves it to quantum mechanics and general relativity to deal with the fact that information is always local. Classical mechanics lets you talk about rigid bodies — intuitive and helpful, but if there were rigid bodies in the real world then we could use them to send information faster than light, and we could use them for arbitrarily small manipulations. In ethics, it leads to statements like

WebMD, And The Tragedy Of Legible Expertise – by Scott Alexander – Astral Codex Ten

 “What’s the best form of government? Benevolent dictatorship, obviously, just get the best person in the country and let her fix everything. But everyone realizes this is easier said than done; the procedure to pick the best person is corruptible. At one point we tried a very simple best-person-picking procedure that really should have worked and ended up choosing Donald Trump as the best person.”

I think that the “obviously” proceeds from a view of an aggregate utility calculation, one getting frequent Bayesian updates, performed by whoever is best at doing it, so that all we need is a less-corruptible person-picking process. But… no, that’s not the problem. The problem is that there is no such best person in the country for fixing “everything,” there never will be such a person, there cannot be such a person, and the attempts by left and right and others to elect such a person and let them fix everything are inherently corrupt. So it is not the case that “everyone realizes” any part of this. In particular, I don’t.

   I now think of myself as a hierarchically local Hayekian consensualist, and isn’t that a mouthful? Don’t try to maximize other people’s utilities, don’t try to outguess their “revealed preferences” to see what they really value; just see what they consent to. If a decision can be made by an individual, it should be. If it needs a group decision procedure, then let the group be as small as possible, in order to maximize the extent to which those affected consent, not only to the decision procedure (voting? Well, sometimes) but to the actual decision.

 Another way to say it is that I now value autonomy, autonomy of person or group within group, somewhat more than “being right.” 

  Of course this immediately raises issues such as — who counts as consenting, anyway?  Who is a Who, and can Horton really hear them? What if nobody consents to anything except “To crush your enemies. See them driven before you. And to hear the lamentations of their women.” Apart from Conan’s gender stereotyping there, that does sound like a description of the desires of self-described progressives and conservatives alike.  And maybe my consensualism doesn’t actually get anywhere anyway…. But actually, I think it might. 

  In the specific context of Scott’s quote above, think about the power of Dr. Fauci to “make decisions that will affect billions of dollars in wealth, Senate seats, Twitter likes, and other extremely valuable resources.” That’s an Inadequate Equilibrium, and we can go on thinking about incentives, but those incentives are necessary (I think) consequences of saying that the Officially Approved Experts should choose for everybody. In my world, the CDC and FDA and so on would not exist in their current form. Instead, there would be a category of Officially Calibrated Experts with score-cards, and they would not be in the business of making requirements or even recommendations, because that doesn’t respect people’s (or groups’) autonomy; they’d just be making models to imply structures of conditional predictions, so that maximally local groups could set recommendations or even requirements for themselves. Not power to the people, but autonomy (as far as possible) to each person.

  Or then again, maybe not.

(Update: Perhaps I should mention that there would therefore be no such thing as an FDA “approval”; I’d expect individuals to choose what they pay for themselves, insurance companies to choose what specific policies should include, and governmental units to choose what they require taxpayers to subsidize…but the issue would not be one of approval or not. And yes, it would get complicated. Or then again, maybe not.)

Bookshelf 2050 — update

1. The last few days of June 2021 have prompted a significant update in my Bayesian bookshelf for 2050. That’s the imaginary 100-centimeter bookshelf on which I keep mental models of what I expect for that year, with each book getting shelf space proportional to my current guess as to its probability, and the basic rule that models shrink when their narrators are more surprised than the alternatives. The surprise in this case was the “heat dome” in the Pacific Northwest (specifically 116°F in Portland, OR and 121°F in Lytton, BC). It was announced by CBS News as a “once in a millennium” event, quoted by Slashdot (news for nerds) as

Pacific Northwest Bakes Under Once-In-a-Millennium Heat Dome – Slashdot

In our historical record of North America’s Pacific Northwest this heat dome registers a statistical standard deviation from the average of greater than 4. In layman terms, that means it falls more than 4 deviations to the right of the center of a typical bell curve (shown below) and that equates to values with less than a 99.99% chance of happening.

In other words, statistically speaking, there is a 1 in 10,000 chance of experiencing this value. So, if you could possibly live in that spot for 10,000 years, you’d likely only experience that kind of heat dome once, if ever.…. 

2. As some of the commenters note, that’s at least somewhat bogus: it assumes a normally distributed variable over many centuries, which is not very likely. In fact I believe it conflicts fatally with the standard stuff about the Medieval Warm Period, the Little Ice Age (ending in 1850 or so, just as our records were about to begin?), and all the rest. Climate is clumpy. On the other hand, we do see a reasonably normal distribution within the current clump. So — how to think about it? As I see it, there are three basic ways to look at this: wobble, slide, and hop. (Really, that’s “just a wobble” vs. “just wobble and slide” vs. all three.) Each is a narrative with a narrator.

Wobbler: Take it at face value, guys: we’re within a climate clump. That means that values do wobble, but the distribution is reasonably normal through the clump (over a century now). So the heat dome stasis is really enormously surprising, but it’s possible and it happened and we don’t need to adjust any priors: it will be equally, or almost equally, unlikely next year. The improbability shrinks when you remember that “Portland” and “Lytton” weren’t pre-specified: if one in ten thousand places has a one-in-ten-thousand event, that’s okay, right? Right? Well, that doesn’t quite fit: this is one-in-ten-thousand improbability for the occurrence of a (North Pacific) heat dome like this, and there are fewer than a dozen substitute places, not ten thousand of them. It is not the case that we have independent random variables for each small town or even each substantial city. Still, the point is that this event could just be a random wobble in the data. (“Statistics Means Never Having To Say You’re Certain.”)

I didn’t give the wobbler much shelf-space before, maybe 5%, but now he rather suddenly shrank to around 2%. 

Slider: We’re not just within a clump, we’re within a clump in which the temperature has risen a couple of degrees over the past century and it’s still going up. When people talk about once-in-a-millennium becoming once-in-a-century or once-in-a-decade, surely that’s what they’re usually doing, and until just now that has seemed pretty much adequate; indeed I dominated the shelf, like so:

  1. Think of the distribution of temperatures in 1880 as a normal bell-curve sort of graph; think of the distribution of temperatures in 2021 as another which is similar but the mean is two degrees up. 
  2. Now, think about a large college classroom (mine, teaching at Colgate in 1990) with a lot of young men and young women. Graph the male heights, graph the female heights: very similar bell curves, but with the means separated by about two inches. Oh, looky here! Almost all the very tall are male. 
  3. In just the same way, if you could look at two distributions of temperatures, from long ago and now, almost all the very high temperatures would be now: that’s the way normal curves normally work. So the kind of graph that the Economist shows here, where extreme highs seem to behave more dramatically than it sounds to say “the mean is two degrees up”, is not a surprise.
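The slider’s three-step argument can be checked numerically; a minimal sketch, with a half-sigma mean shift standing in for “two degrees up” (an illustrative number, not fitted to any real temperature record):

```python
import math

def tail_prob(threshold_sigma):
    """One-tailed probability that a normal variable exceeds threshold_sigma."""
    return 0.5 * math.erfc(threshold_sigma / math.sqrt(2))

# Call "extreme heat" anything 4 standard deviations above the old mean,
# and suppose the mean has since shifted up by half a standard deviation.
p_then = tail_prob(4.0)        # extreme-day probability, old climate
p_now = tail_prob(4.0 - 0.5)   # same absolute threshold, shifted mean
print(f"then: 1 in {1 / p_then:,.0f}; now: 1 in {1 / p_now:,.0f}")
print(f"extreme days ~{p_now / p_then:.1f}x more frequent")
```

A small shift in the mean multiplies the far tail by a large factor — the “almost all the very tall are male” effect in numbers.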

I’m reasonably sure slide is part of the problem — in fact I’d give that a 99.9%. But just slide? (Or rather, just wobble and slide together?) Does slide lead me to expect events like this? No, even though I don’t have the data to work out the odds for myself: the event was described as a major surprise to weather forecasters who are certainly well aware of what I just said, but who initially thought that the weather models predicting the crisis must be wrong. This was simply not within their zone of plausible outcomes. In fact that one-in-ten-thousand was based on current data within the models, i.e. based on how far we’ve been sliding. The slide is real and certainly makes static heat domes slightly more probable each year than the year before, but as an explanation for What’s Going On Overall it has to shrink as long as there’s an alternative narrator who is less surprised and does less shrinking. And there is, and we call him

Hopper: Yes, climate is clumpy, we’ve been within a clump, but this isn’t within the reasonably probable values of that clump, so just consider that maybe we’ve hopped out of it into a new clump, a clump in which things like “heat domes” of stalled weather happen with drastically increased frequency. 

That’s disturbing, and I tend to downrate it on the grounds that most of the time things keep on keeping on, and predictions of phase change turn out to have been motivated reasoning. Still, if I have to downrate wobbler as very improbable and slider as inadequate, then some sort of a hop needs an uptick, because they have to add up to very nearly 100%. (I’m always trying to allow 0.00…01% for various extreme improbabilities which I won’t start listing except in desperation.)

3. So — what models are out there that fit with a hop? The main hop model, the only plausible one I’ve got, is that the (polar) jet stream is getting more wiggly, less reliable as a weather-pusher. This model is almost trivial: the jet stream is powered by a temperature/pressure gradient, climate change has been warming the poles more than the equator, so the jet stream weakens, fails in its usual task of guiding the prevailing west-to-east temperate-zone winds. A century ago, says hopper, the heat dome would have been pushed along and scattered by those prevailing winds, but this time they did not prevail. Specifically, that may have to do with the warming of the western Pacific and other incidentals, but generically we can expect this to happen more and more often, even beyond the merely exponential (!) rise that’s due to the part of a normal curve that we’re in.

4. On the failing-jet-stream hypothesis, Hopper is not talking about the temperature shift in itself, but simply about weather (temperature, humidity, pressure, wind) that doesn’t move along as much as it used to. The local and global averages may stay put as time goes on, and the local and global standard deviations may stay put, but the local correlation of each day with the next, and with the next week, will tend to rise, because the weather formations themselves will tend to stay put geographically. (If I’m visualizing correctly; at any rate, hopper is expecting an altered distribution of the daily, weekly, and monthly rates of change more than an altered distribution of actual values.)

5.  I’ve personally read and talked about the wiggly jet stream notion in the past few years, but I’ve mostly held with Scott Johnson’s view from last February:

     Blaming a wiggly jet stream on climate change? Not so fast | Ars Technica: “The hypothesis is easy to understand, but it’s far from a consensus.” And there are plenty who don’t buy it, and they’re not change-deniers. Sure — it’s a “maybe so, maybe no: plausibility high but data insufficient… probability significant but not so high.” I was giving it about 25%… but that’s when slide seemed sufficient for what was happening.

6. And of course it’s not going to be a consensus even now, but on my Bayesian bookshelf it now gets quite a bit more space than it did. Overall, in advance of the heat wave I’d have been at 5:70:25, and now? Hmmm… I’m tempted to say 2:45:53, with hop being slightly dominant in my world-view. Why only barely dominant? Looking at my reactions when I try to set it higher, I think that it’s partially an issue of well-earned modesty: I know that I don’t know much, so I’m not prepared to pull hard against what I see as the prevailing trend. Ouch.

7. Yes, that means I should probably split off that factor somehow to consider it explicitly; after all, my general confidence in Official Professional Expertise went down significantly during the pandemic, after going down from reading Yudkowsky’s “Inadequate Equilibria”, after going down from certain financial events in 2008. For now, I was just going to say 2:45:53. However, I’m in the process of reading Julia Galef’s “Scout Mindset” book and she has various mental exercises, hypothetical questions/statements you can work through to get a better sense of what you really think about something, and in the spirit of her exercises I just asked myself: what if I were talking with a few of the experts who seem convinced that it’s not true, and then with others who seem convinced that it is? I find that I can’t imagine being willing to move hop below 50% no matter what the first group said, but I’d be barely willing to move it up to 90% when talking with the others. So, I’m going to split that difference (50+90)/2 = 70, and go to 2:28:70, more or less swapping slide with hop while maintaining a low epistemic confidence.

8. That leaves me two tasks: one is to try to reduce the level of my ignorance, so I’ve been writing a post about jet streams and other “geostrophic flows”. The other is to put down at least part of my sense of consequences. What trends in future agriculture, forestry, fishing, industry, politics, and war do I now give more weight to, and how does technological change interact with each? 

9. Basically, hopper thinks open-ground agriculture is going to be even more difficult than slider has been expecting. That will push several technological options, already in progress: Fast Company notes, in Is indoor farming about to have its moment?, that “it’s getting harder to grow food outside as climate change makes it more likely that farms face droughts, heat waves, flooding, and other disasters. Corn and wheat and other crops that likely don’t make sense to grow inside will have to find other solutions—such as new varieties that can better resist drought, for example—but for some foods, vertical farming could help fill a gap.” Forbes, in Is The Future Of Farming Indoors? has the same attitude but more of a focus on: “One of Square Roots’ indoor farms, for example, produces the same amount of food as a two- or three-acre farm annually, just from 340 square feet. This yield is achieved by growing plants at 90 degrees, and by using artificial intelligence (AI) to ensure the environment is optimal for each specific plant, including the day and night temperatures and amount of CO2 needed.”

10. And where does that trend stop? Does it stop? In ScienceDirect I see  Cellular agriculture — industrial biotechnology for food and materials: “It is somehow surprising that the explicit use of entire plant cells as food has only recently been suggested despite the established practice to exploit tissue and organ cultures of ginseng for food supplement production in Asia.

Plant cell culture medium is chemically fully defined and consists mostly of inorganic ingredients, that is, salts, sugar (usually sucrose) as carbon source and some low concentration vitamins and phytohormones.” So, can we turn sunlight & carbon dioxide into sugar more efficiently than green leaves do? That’s a pretty low bar, and the fact that we don’t want most of the plant makes it lower. At this point, my 2050 shelf has expanded sections on a variety of possible technologies that could be involved in generating corn meal and wheat flour, without corn or wheat fields… and the biggie is AI/robotics/InternetOfThings, with genetic engineering having expanded space too. And that need not be bad in itself, but the road from here to there is kinda bumpy. 

Very Basic Bayes # 2

   Dr. X: Hi, Joe! Here are your test results — notice that the first two are not actually tests you took, but you can use them the same way — and here’s Jack, a nurse who’s learning to explain Bayesian reasoning because that really wasn’t covered well in nursing school. He has paperwork or rather computer work to do, but he’ll answer questions, and I’ll see you in a little while — Bye-for-now! [Turns away, turns back.] I promise that if I think things are getting really urgent I’ll tell you right away. [Goes.]

   Joe: but… but…

   Jack: It’s okay, I’ll work in here. [sits.] Most of your questions you’ll probably want to address to the duck, though, so just say “Jack!” before you ask anything of me because otherwise I probably won’t be listening.

    Joe: the duck? 

    Jack: Yes, this perfectly ordinary plastic bath-toy duck. [QUACK-QUACK!] It’s an old trick — to get your thoughts in order and answer most of your own questions, just ask the duck, then think — did that question even make sense? Often you’ll realize that to ask the right question you need to figure out something else first, and once you finally do ask the right question it answers itself. But you usually have to ask out loud. So a helpful rule is sometimes to have somebody like me that might be able to answer an actual question once you’ve figured out the question that you really need to ask, but every such question has to be asked twice: you ask the duck, and only if you’re sure you can’t improve that question on your own do you repeat it with your helper’s name in front. 

     Joe: I don’t understand what’s going on here. Do I have Ickitis or not? [waits]. Oh, right. Jack, do I have Ickitis or not?

     Jack: That’s a very good question, and the answer to that question is the same as it is for a whole lot of questions: we don’t know for 100%-certain-sure, but we can say something about   

   [1 finger] how likely it is that you have it, and about 

   [2 fingers] what will likely happen if you have it and don’t treat it, and about 

   [3 fingers] how likely it is that you’ll have bad effects from treating it even if you don’t have it. 

Then you have to choose what comes next — or you can go back to asking Dr. X what to do. He said you came in after getting a test for yourself, which a lot of people do, and that you were willing and able to think about it, which a few people are, and so maybe you’re the kind of patient that can learn to do this kind of thinking. If so, he’d really like to encourage that, but if not, we’ll go back to traditional practice. Okay?

   Joe: Umm, yeah, I guess. I mean, yeah! I’ll try. So what’s this handout here? No, I mean, never mind. Duck, what’s this handout? I guess you don’t know, so I’d better read it…written specially for me? Umm… Jack, is this written specially for me?

   Jack: Well, I filled in some blanks for you from Dr. X’s note, with the “ickitis” name and the odds ratios for the ickitis test you got for yourself, and for the next test which you can read aloud to the duck — I think it might make sense that way.

   Joe: Oh. Okay. [clears throat]. Duck, “You started with yes-to-no odds of one to 999 in favor of having ickitis, and took a test that said you had it. The test had right-to-wrong odds of 99 to 1, and that gave you new yes-to-no odds of (1/999) * (99/1) or 99 to 999 that you had it, which was 11 to 111 or 11 chances out of 122 which is about 9%. And you know how to interpret that by visualizing four bunches of people: the infected who test positive or negative, and the uninfected who test positive or negative. There were 1000 infected with 990 of them testing positive, and 999,000 uninfected with 9,990 testing positive anyway.” Okay, duckie, I remember that. 

   Joe: Duck, “Test #2 is not a medical test. You’re asymptomatic. Most infected people are not asymptomatic, and most uninfected people are. With ickitis, 60% of infected people have symptoms, 40% don’t, and we call that a sensitivity of 60%. 5% of uninfected people have symptoms anyway, presumably from being infected by something else that could be confused with ickitis, and 95% don’t. We call that a specificity of 95%.”

   Joe: Duck, that actually makes sense. But I have no idea what to do with it except to go on reading. “You can use those proportions to update your probability. Imagine four groups of people arranged in rows and columns: 122 columns of 100 rows. The first 11 columns are of infected people, the remaining 111 are the uninfected, because a random person in that array starts out just as likely as you are to be infected. Among the infected, the first 60 rows do show symptoms and the rest don’t: that’s the sensitivity. Among the uninfected, the first 95 rows don’t show symptoms and the rest do: that’s the specificity. If you don’t show symptoms, what is the probability that you’re actually infected?”

Joe: Duck, I think maybe I can do this. Let me think.
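For the record, the counting Joe is about to attempt can be sketched in a few lines of Python, using only the numbers from the hypothetical handout:

```python
# The 122 x 100 array from the handout: 11 columns infected, 111 not.
infected = 11 * 100
uninfected = 111 * 100

sensitivity = 0.60   # 60% of infected people show symptoms
specificity = 0.95   # 95% of uninfected people show none

# Joe is asymptomatic, so count the no-symptom people in each group.
infected_no_sym = infected * (1 - sensitivity)    # 440 people
uninfected_no_sym = uninfected * specificity      # 10,545 people

p = infected_no_sym / (infected_no_sym + uninfected_no_sym)
print(f"P(infected | positive test, no symptoms) = {p:.1%}")  # about 4%
```

Being asymptomatic roughly halves the 9% from the first test, because most uninfected people are asymptomatic too — the evidence is real but weak.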

Very Basic Bayes

1. Joe: Doctor X! I have Ickitis, I need a Pill! Quick! 

  Dr. X: Hold on a minute. Take a deep breath — how do you know? How sure are you?

  Joe: (breathes) I read there was an epidemic with about 1000 cases in town so I bought the test and it’s 99% accurate, so okay I guess there’s a 1% chance that I don’t really have it and I don’t have any symptoms yet but they say people often don’t get symptoms until really late and I’ve got it and I need a cure.

   Dr. X: You do know that there are a million people in this city, right? And we estimate that 1000 of them have Ickitis.

   Joe: Yes, and I’m one of them — just my luck.

   Dr. X: So before you took the test, if you didn’t know anything else, the odds of your having Ickitis were 1 yes to 999 no, okay? That’s an odds ratio, 1 to 999, right?

   Joe: Yeah — actually I was really surprised.

   Dr. X: And the odds of the test being right, if you didn’t know anything else, were 99 yes to 1 no, okay? That’s an odds ratio too.

   Joe:  Umm…yeah, I guess. 

   Dr. X: So when you get new evidence, you don’t forget the old evidence, you combine them. 

   Joe: Uhhh… What?

   Dr. X: Actually you multiply them, like fractions: 1 / 999 * 99 / 1 = 99 / 999 = 11 / 111 and that’s 11 yes to 111 no, or you can call it 11 yes out of 122 total and that’s about 9% of the total. So you probably don’t have Ickitis, but you might and we’ll do some more tests.
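Dr. X’s multiplication is the odds form of Bayes’ rule, and it is short enough to check mechanically; a minimal sketch using exact fractions:

```python
from fractions import Fraction

prior_odds = Fraction(1, 999)       # 1 yes to 999 no, before the test
likelihood_ratio = Fraction(99, 1)  # right-to-wrong odds of the test

posterior_odds = prior_odds * likelihood_ratio       # reduces to 11/111
probability = posterior_odds / (posterior_odds + 1)  # odds -> probability
print(posterior_odds, float(probability))  # 11/111, about 0.09
```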

  2. Joe: Doctor, I’m really not a numbers guy and I don’t understand. Can you make that a little bit simpler? 

    Dr. X: Probably. Imagine we give everybody this test — that’s one million tests, right?

    Joe: Well, sure.

    Dr. X: And we know that about ten thousand of them will be wrong. That’s one percent.

    Joe:  It sounds pretty bad when you say it like that.

    Dr. X: Not really, it’s better than most tests. Very few tests are conclusive all by themselves, but they help, and a 99%-accurate test helps a lot. Anyway, there are ten thousand wrong tests in this situation, some positive and some negative. And there are one thousand who really are infected, with ten of them having negative tests.

    Joe: Okay, and you’re saying there are nine-hundred ninety-nine thousand who are not infected, but one percent of them will have positive tests?

    Dr. X: Yes, and that’s nine thousand, nine hundred and ninety with positive tests that are wrong. That’s four groups of people — okay so far?

    Joe:  I think so… there’s a very big bunch of people who are not infected and they know it; that’s almost a million. There’s a big bunch of people who are not infected but think they are, that’s almost ten thousand. There’s a small bunch of people who are infected and know it, that’s almost a thousand, and there’s a very small bunch — ten — who are infected but think they aren’t. Right?

    Dr. X: Right. And what you know right now is that you have a positive test, which means that EITHER you’re one of the small bunch — 990 — who are infected and “know” it, OR you’re one of the big bunch — 9,990 — who aren’t infected but think they are. So your odds ratio for actually being infected is 990 yes to 9990 no, which is the same 11 yes to 111 no that we had before: it’s your chance of being in the 990 true positives out of a total of, umm, let me write this down, a total of 10,980 positives true and false, and it’s just about nine percent. See?

3.   Joe: Yeah, so I guess it was pretty silly to take the test….

   Dr. X: No — not at all. You were worried about a one-in-a-thousand chance, and that’s not silly. Now you have a one-in-eleven risk, and that’s definitely reason for more thorough testing. See the vampire down the hall, I’m writing a prescription for two blood tests and I’ll see you — probably very briefly — tomorrow, same time.
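The whole dialogue’s arithmetic — the four bunches and the resulting nine percent — fits in a few lines; a minimal sketch:

```python
population = 1_000_000
infected = 1_000          # the estimated epidemic size in the city
accuracy = 0.99           # the test is right 99% of the time

true_pos = infected * accuracy                        # 990: infected, test right
false_neg = infected - true_pos                       # 10: infected, test wrong
false_pos = (population - infected) * (1 - accuracy)  # 9,990: uninfected, test wrong
true_neg = (population - infected) - false_pos        # 989,010: uninfected, test right

# A positive test puts Joe in one of the two positive bunches:
p = true_pos / (true_pos + false_pos)
print(f"{true_pos:.0f} yes to {false_pos:.0f} no -> {p:.1%}")  # about 9%
```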

Bayesian Bookshelf — 2050

1.  Given a possibly-true thought, I try to find alternatives; in fact I try to think of all the alternatives: I think of them as books and I set them on a Bayesian Bookshelf. That’s an imaginary shelf, but all the books (and magazines, pamphlets, individual ultra-thin sheets of paper, but I think of them all as books) fill it perfectly with a total length of exactly one meter. (A little more than the bookshelf behind me, a little less than the built-in bookshelf that was in front of me when I started thinking of alternatives this way.) One meter. I have a bookshelf labeled “2050”, with books describing quite a number of apocalyptic outcomes in some detail, how things went wrong and got worse and worse… and some which might seem absurdly idyllic, but I think of all of them as possible. 

2.  Each book has an expositor: a little spider who spins this particular story-thread forever, who will pop up with 100% confidence that her story is the One True Model. I’m fond of my story-spiders and I’m well aware that they are part of my mind, but doing this helps me keep a distance from whichever story I think most likely, as well as whichever story I like the best (especially if they’re the same). They’re not actually me, I don’t identify with them, and maybe if I can keep telling myself that I don’t identify with a story then I’ll be readier to accept evidence against it.

3.  Most of the time, the thickness of a book is my current feeling for its probability: 1% for a one-centimeter book, 99% for a 99-centimeter “book” which magically transforms into an encyclopedia. Yes, one-in-a-million gets a one-micron-thick pamphlet whose cover is marked with a reminder to take it seriously, specifically marked with a picture of the rather ugly scar-hole in my arm: the doctor told my parents, back in 1960 or so, that an infected injection like that was a “literally one in a million event” in Baltimore that year. One in a billion? That’s a graphene bilayer sheet, just a little less than a nanometer, with the story-spider in between. Less than that? So far I stop at the Planck length, which is never less than the thickness of my Ultimate Other Story, where the story-spider comes up and speaks in the (rather gravelly) old-man’s voice of Jubal Harshaw from Stranger in a Strange Land: “Maybe Mumbo-Jumbo, God of the Congo was Big Boss all along.” That’s a perpetual reminder that there’s stuff I haven’t thought of and couldn’t have thought of, stuff that doesn’t make sense and might be true anyway. The world is under no obligation to make sense to me.

4. There are two other factors, which I’m prone to confuse with the probability: salience (to what extent am I hearing about this from my usual information sources and reminders?) and importance (to what extent am I thinking that if this is true, then it’s important that I know about it and think about it, which makes it more salient even if it’s low-probability). So sometimes I deliberately say to myself something like “on the one hand, it’s not probable, and on the other hand, it’s salient, but on the gripping hand it really is important.”

5.  What makes this Bayesian, or roughly Bayesian? The updates. When I get information that reminds me of a specific set of alternatives, a specific shelf, I go to that shelf in my mind. Expositor-spiders who are more surprised than average will tend to find that their books shrink, expositor-spiders who are less surprised than average will tend to find that their books grow. How much? Well, I do have a precise rule, but I don’t have precise data to put into it so I hardly ever do an actual calculation.
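The “precise rule” is just Bayes again: each book’s new width is its old width times how strongly its narrator predicted the evidence, renormalized so the books still fill the meter. A minimal sketch, with entirely made-up likelihood numbers for the wobble/slide/hop shelf:

```python
def update_shelf(widths_cm, likelihoods):
    """Grow or shrink each book in proportion to how well its narrator
    predicted the evidence, then renormalize to the one-meter shelf."""
    weighted = [w * l for w, l in zip(widths_cm, likelihoods)]
    total = sum(weighted)
    return [100 * w / total for w in weighted]

# Prior shelf in cm (wobble, slide, hop), and hypothetical probabilities
# each narrator would have assigned to a stalled heat dome.
prior = [5, 70, 25]
likelihood = [0.0001, 0.001, 0.01]   # illustrative, not measured
posterior = update_shelf(prior, likelihood)
print([round(w, 1) for w in posterior])
```

The least-surprised narrator grows at the others’ expense, and the shelf always sums back to one meter — which is why the books “have to add up to very nearly 100%.”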

6. Let’s look at an example from last fall, when I read articles about drone swarms used successfully by Azerbaijan against Armenia, and they seemed to me to be more convincing than what I’d seen about that before: military AI from middle-ranked countries is moving faster than I’d expected overall. I have no opinion about the specifics of that conflict, and it doesn’t seem to be spreading in any medium-term-worrisome way, but the incentives for developing such systems have grown. Which narratives, which 2050-expositors, had been expecting that? Unfortunately, this didn’t tend to happen much in the expositions leading to futures I like. An arms race of militarily-effective AI figures in a number of scenarios, and sometimes it comes out okay, but my 2050-shelf has become somewhat more grim: it really does have an increased probability of our eventual robotic overlords being distinctly unFriendly AIs developed from neural nets designed and trained to create models of human beings in order to track their behavior, predict their actions, and kill them. You don’t think this can happen? How nice. I do. (Perhaps we can find a double-crux? Or then again, perhaps not.)

7. OTOH, I used to be suspicious of minimum-wage increases because of the obvious effect on the labor-automation trade-off: a McDonald’s manager facing an increased cost of labor will presumably just accept it or go out of business in the short run, but the long run is always looking for the cheapest available solution, and if you raise the cost of labor then the alternatives, even with higher capital cost, become attractive… we’ve been slowly creating a 2050 world in which low-skilled people are pushed into a permanently unemployable underclass. Those narratives are still there in my head; in spring 2021 I think they’ve become more probable, but the military-AI aspect makes them less important to me: I hope for cybernetic or quasi-cybernetic companions, which in some narratives develop from neural-net systems that create models of human beings in order to track their behavior, predict their actions, and bring their burgers or beer. (Okay, I’ve been a vegetarian since the late 60s and rarely drink beer, but you get the idea.) So my 2050-shelf has become somewhat less grim.

8. Overall, this is not a formal method, it’s not even the beginnings of a formal method, but it does serve to keep me reminded of context, of causal connections, and of alternatives — all the way out to the Ultimate Other, the Harshaw Hypothesis. I think it’s something I should do more, rather than less.