gjm

Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated in about the last four years. I live near Cambridge (UK) and work for a small technology company in Cambridge. My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.

If you're wondering why some of my old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.

Comments

Do you mean that you expect OpenAI deliberately wrote training examples for GPT based on Gary Marcus's questions, or only that because Marcus's examples are on the internet and any sort of "scrape the whole web" process will have pulled them in?

The former would surely lead to GPT-4 doing better on those examples. I'm not sure the latter would. Scott's and Marcus's blog posts, for instance, contain GPT-3's continuations for those examples; they don't contain better continuations. Maybe a blog post saying "ha ha, given prompt X GPT continued it to make X Y; how stupid" is enough for the training process to make GPT give better answers when prompted with X, but it's not obvious that it would be. (On the face of it, it would e.g. mean that GPT is learning more intelligently from its training data than would be implied by the sort of stochastic-parrot model some have advocated. My reading of what Marcus wrote is that he takes basically that view: "What it does is something like a massive act of cutting and pasting, stitching variations on text that it has seen", "GPT-3 continues with the phrase “You are now dead” because that phrase (or something like it) often follows phrases like “… so you can’t smell anything. You are very thirsty. So you drink it.”", "It learns correlations between words, and nothing more".)

I don't know anything about how OpenAI actually select their training data, and in particular don't know whether they deliberately seed it with things that they hope will fix specific flaws identified by their critics. So the first scenario is very possible, and so I agree that testing different-but-similar examples would give more trustworthy evidence about whether GPT-4 is really smarter in the relevant ways than GPT-3. But if I had to guess, I would guess that they don't deliberately seed their training data with their critics' examples, and that GPT-4 will do about equally well on other examples of difficulty similar to the ones Marcus posted.

(I don't have access to GPT-4 myself, so can't test this myself.)

I'm not claiming "low probability implies expectation is negligible", and I apologize if what I wrote gave the impression that I was. The thing that seems intuitively clear to me is pretty much "expectation is negligible".

What is the actual effect on Microsoft of a small decrease in their share price? Before trying to answer that in a principled way, let's put an upper bound on it. The effect on MS of a transaction of size X cannot be bigger than X, because if e.g. every purchase of $1000 of MS stock made them $1000 richer then it would be to their advantage to buy stock; as they bought more the size of this effect would go down, and they would keep buying until the benefit from buying $1000 of MS stock became <= $1000.

(MS does on net buy stock every year, or at least has for the last several years, but my argument isn't "this can't happen because if it did then MS would buy their own stock" but "if this happened MS would buy enough of their own stock to make the effect go away".)

My intuition says that this sort of effect should be substantially less than the actual dollar value of the transaction, but if there's some way to prove this it isn't known to me. This intuition is the reason why I expect little effect on MS from you buying or selling MS shares. But let's set that intuition aside and see if we can make some concrete estimates (they will definitely require a lot of guesswork and handwaving). How does a higher share price actually help them?

  • If they choose to raise money by selling shares, they can get a bit more money by doing so.
  • If they choose to use shares for money-like purposes (e.g., buying another company with MS shares rather than cash, incentivizing employees by giving them shares), any fixed fraction of the company they are willing to part with is worth a bit more.
  • If they choose to borrow money, they can probably get slightly better terms because their company is seen as more valuable, hence better able to raise money, hence less likely to default.

To estimate the impact of the first two of these, we could e.g. look at how the number of outstanding Microsoft shares changes each year; if we (crudely?) suppose that this doesn't depend strongly on small changes in share price, then a decrease of x in the Microsoft share price lasting a year costs Microsoft (number of MS shares issued that year) times x, because that's how much less money, or perceived money-equivalent benefit, they got from issuing those shares.

It turns out, as I mentioned above, that for the last several years the number of outstanding Microsoft shares has decreased every year, so that crude model suggests that a decrease in their share price actually helps them, because as they (on net) buy back shares they are paying less money to do it. Oops.

The third effect seems like it's definitely of the right sign, since whatever the net change in MS's debt over time, it isn't literally taking out any anti-debt. Do we have any plausible way to estimate its size? We could look at e.g. Microsoft bond coupon figures and try to correlate them with the MSFT share price after correcting somehow for generally-prevailing interest rates, but I don't have the expertise to do this in a meaningful way and also don't have access to any information to speak of about Microsoft bond sales. Let's try an incredibly crude model and see what we get: suppose that right now MS can borrow money at 2% interest, and that if their stock price dropped 3x then they would be seen as a much bigger risk and the figure would go up to 4%, and that what happens in between is linear. The current stock price is about $300, so this is saying that a $200 fall in stock price would mean a 2% rise in annual interest rate, so each cent of stock price change means a 1/10000% change in annual interest rate.

How much borrowing does MS do? Hard to tell. (At least for me; perhaps others with more info or more business expertise can do better.) Over the last several years their total debt has consistently decreased, but not by very much; it sits at about $60B. With total debt roughly steady, that total debt should be the number of "debt dollar-years incurred per year". So one year of 1c-higher stock prices would mean a reduction of $60B x 1/10000 x 1% in debt interest payments; that is, of $60K.

There are also the other two effects, but crudely those appear to point in the other direction given MS's decision to buy back more stock than it issues. I'll assume they're zero. Maybe there are other mechanisms too (e.g., some sort of intangible thing where MS is a more attractive employer if its stock price is high) but I expect these to be smaller than the three concrete ones listed above.

What should we assume about the lasting effect of your buying or selling MS stock? If we assume the price is a pure random walk then a 1c decrease in price lasts for ever, but that seems incredibly implausible (at least in the case where, as here, your reasons for selling don't have anything to do with your opinion about the future value of the company). On the basis of pure handwaving I'm going to suppose that the effect of your selling your stock is to depress the share price by one cent for one month. (Both figures feel like big overestimates to me.) That would suggest that selling $300K of MS stock costs the company, via this particular mechanism, about 1/12 of $60K, or about $5K.

That doesn't mean $5K less for AI work, of course. I think MS spends about $50B per year. MS's total investment in OpenAI is $10B but that isn't happening every year, and presumably they do some internal AI work too. Let's suppose they spend $5B on AI per year, which feels like a substantial overestimate to me. Then giving them an extra $5K means an extra $500 spent on AI.
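
Here, for concreteness, is the whole back-of-the-envelope chain as a tiny Python sketch. Every input is one of the guesses made above (debt, interest rates, duration of the price impact, AI spending fraction), not a real figure, and the variable names are mine:

    # A minimal sketch of the estimate above; every number is a guess from the text.
    debt = 60e9                           # guessed total MS debt, roughly steady year to year
    rate_now, rate_if_low = 0.02, 0.04    # guessed borrowing rates at today's price and at 1/3 of it
    price_now = 300.0                     # rough MSFT share price, in dollars
    price_low = price_now / 3

    # Linear interpolation: extra interest per dollar of share-price drop, then per cent.
    rate_per_dollar = (rate_if_low - rate_now) / (price_now - price_low)   # 0.02 / 200
    rate_per_cent = rate_per_dollar * 0.01

    # Cost to MS of the share price sitting 1 cent lower for a whole year.
    cost_per_cent_year = debt * rate_per_cent             # ~ $60K

    # Guess: selling ~$300K of stock depresses the price by 1 cent for one month.
    cost_of_sale = cost_per_cent_year / 12                # ~ $5K

    # Guess: MS spends ~$5B of its ~$50B/year on AI, i.e. ~10% of marginal money.
    ai_impact = cost_of_sale * (5e9 / 50e9)               # ~ $500

    print(f"per cent-year: ${cost_per_cent_year:,.0f}; sale: ${cost_of_sale:,.0f}; AI: ${ai_impact:,.0f}")

Running it just reproduces the ~$60K, ~$5K and ~$500 figures; the point is only that the chain of guesses is short enough to see at a glance.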

So, after all this very rigorous and precise analysis, my crude estimate is that by selling a $300K stake in Microsoft you might effectively reduce their AI spending by $500. I'm pretty comfortable calling that negligible. But, of course, the error bars are rather larger than the number itself :-).

(Counter-argument: "Duh, the value of a company as measured by the market, which knows best, is just its total market capitalization. So if something you do changes the share price by x and there are y outstanding shares, then you changed the value of the company by exactly xy. This value is much larger than your estimate, so your estimate is bullshit." But I claim that this argument is bullshit: if, as I believe, transactions that are independent of any sort of estimation of the actual future value of the company have only transient effects, then it is just not true that you have changed any actually meaningful measure of the value of the company by exactly xy.)

I think that unless you are investing a very large amount of money it's reasonable to round the effect of your investment choices to zero. You presumably didn't buy shares directly from Google, Microsoft, Nvidia and Meta, so the only way your choice to invest in them or not can affect those companies' ability to work on AI is via changes in the share price. When you sold those shares, did the price change detectably between the first share you sold and the last? If not, the transaction probably wasn't large enough to alter the price by as much as a cent per share. A sub-1c-per-share difference in the amount of money MS or whoever can raise by selling shares doesn't seem likely to have much impact on their ability to do whatever they might want to do.

One that gives good reason for someone hearing it who wasn't previously aware of it to increase their credence in the thing it's an argument for. (And, since really we should be talking about better and worse arguments rather than good and bad ones, a better argument is one that justifies a bigger increase.)

For instance, consider the arguments about how COVID-19 started infecting humans. "It was probably a leak from the Wuhan Institute of Virology, because you can't trust the Chinese" is a very bad argument. It makes no contact with anything actually related to COVID-19. "It was probably a natural zoonosis, because blaming things on the Chinese is racist" is an equally bad argument for the same reason. "It was probably a leak from the WIV, because such-and-such features of the COVID-19 genome are easier to explain as consequences of things that commonly happen in virology research labs than as natural occurrences" is a much better argument than either of those, though my non-expert impression is that experts don't generally find it very convincing. "It was probably a natural zoonosis, because if you look at the pattern of early spread it looks much more like it's centred on the wet market than like it's centred on the WIV" is also a much better argument than either of those; I'm not sure what the experts make of it. In the absence of more cooperation from the Chinese authorities (and perhaps even with it) I would not expect any argument to be very convincing, because finding this sort of thing out is really difficult.

I think it's completely wrong that

if you disagree on the point at issue, you must believe that there are no good arguments for the point.

There absolutely can be good arguments for something that's actually false. What there can't be is conclusive arguments for something that's actually false.

(Also, if I had been more precise I would have said "... prefer better arguments to worse ones"; even in a situation where there are no arguments for something that rise to the level of good, there may still be better and worse ones, and I may be disappointed that I'm being presented with a particularly bad one.)

I think "you'll never persuade people like that" means several different things on different occasions, and usually it doesn't mean what Zack says it always means.

(In what follows, A is making some argument and B is saying "you'll never persuade people like that".)

It can, in principle, mean (or, more precisely, indicate; I don't think it's exactly the meaning even when this is what is implicitly going on) "I am finding this convincing and don't want to, so I need to find a diversion". I think two other bad-faith usages are actually more common: "I am on some level aware that evidence and arguments favour your position over mine, and am seeking a distraction" (this differs from Zack's in that the thing that triggers the response is not that A's arguments specifically are persuasive to B) and "I fear that your arguments will be effective, and hope to guilt you into using weaker and less effective ones".

It can mean "I at-least-somewhat agree with you on the actual point at issue, and I think your arguments are bad and/or offputting and will push people away from agreeing with both of us, and I don't like that".

It can mean "I disagree with you on the actual point at issue, but prefer good arguments to bad ones, and I am disappointed that you're putting forward this argument that's no good".

It can mean "I disagree with you on the actual point at issue, and it's hard to tell whether your actual argument is any good because you're being needlessly obnoxious about it and that's distracting".

Zack suggests that "you'll never persuade people like that" is an obvious bad-faith argument because A isn't trying to persuade "people", they're trying to persuade B, and it's weird for B to complain about "people" rather than saying that/why B in particular isn't persuaded. But I don't buy it. 1. "You'll never persuade people like that" does in fact imply "you aren't persuading me like that". (Maybe sometimes dishonestly, but that is part of what is being claimed when someone says that.) 2. If A is being honest, they aren't only trying to persuade B. (Most of the time, if someone says something designed to be persuasive to you rather than generally valid, that's manipulation rather than honest argument.) So it's of some relevance if B reckons A's argument is not only unhelpful to B but unhelpful generally.

Whether it matters what other broadly similar groups do depends on what you're concerned with and why.

If you're, say, a staff member at an EA organization, then presumably you are trying to do the best you could plausibly do, and in that case the only significance of those other groups would be that if you have some idea how hard they are trying to do the best they can, it may give you some idea of what you can realistically hope to achieve. ("Group X has such-and-such a rate of sexual misconduct incidents, but I know they aren't really trying hard; we've got to do much better than that." "Group Y has such-and-such a rate of sexual misconduct incidents, and I know that the people in charge are making heroic efforts; we probably can't do better.")

So for people in that situation, I think your point of view is just right. But:

If you're someone wondering whether you should avoid associating with rationalists or EAs for fear of being sexually harassed or assaulted, then you probably have some idea of how reluctant you are to associate with other groups (academics, Silicon Valley software engineers, ...) for similar reasons. If it turns out that rationalists or EAs are pretty much like those, then you should be about as scared of rationalists as you are of them, regardless of whether rationalists should or could have done better.

If you're a Less Wrong reader wondering whether these are Awful People that you've been associating with and you should be questioning your judgement in thinking otherwise, then again you probably have some idea of how Awful some other similar groups are. If it turns out that rationalists are pretty much like academics or software engineers, then you should feel about as bad for failing to shun them as you would for failing to shun academics or software engineers.

If you're a random person reading a Bloomberg News article, and wondering whether you should start thinking of "rationalist" and "effective altruist" as warning signs in the same way as you might think of some other terms that I won't specify for fear of irrelevant controversy, then once again you should be calibrating your outrage against how you feel about other groups.

For the avoidance of doubt, I should say that I don't know how the rate of sexual misconduct among rationalists / EAs / Silicon Valley rationalists in particular / ... compares with the rate in other groups, nor do I have a very good idea of how high it is in other similar groups. It could be that the rate among rationalists is exceptionally high (as the Bloomberg News article is clearly trying to make us think). It could be that it's comparable to the rate among, say, Silicon Valley software engineers and that that rate is horrifyingly high (as plenty of other news articles would have us think). It could be that actually rationalists aren't much different from any other group with a lot of geeky men in it, and that groups with a lot of geeky men in them are much less bad than journalists would have us believe. That last one is the way my prejudices lean ... but they would, wouldn't they?, so I wouldn't put much weight on them.

[EDITED to add:] Oh, another specific situation one could be in that's relevant here: If you are contemplating Reasons Why Rationalists Are So Bad (cf. the final paragraph quoted in the OP here, which offers an explanation for that), it is highly relevant whether rationalists are in fact unusually bad. If rationalists or EAs are just like whatever population they're mostly drawn from, then it doesn't make sense to look for explanations of their badness in rationalist/EA-specific causes like alleged tunnel vision about AI.

[EDITED again to add:] To whatever extent the EA community and/or the rationalist community claims to be better than others, of course it is fair to hold them to a higher standard, and take any failure to meet it as evidence against that claim. (Suppose it turns out that the rate of child sex abuse among Roman Catholic clergy is exactly the same as that in some reasonably chosen comparison group. Then you probably shouldn't see Roman Catholic clergy as super-bad, but you should take that as evidence against any claim that the Roman Catholic Church is the earthly manifestation of a divine being who is the source of all goodness and moral value, or that its clergy are particularly good people to look to for moral advice.) How far either EAs or rationalists can reasonably be held to be making such a claim seems like a complicated question.

It is. But if someone is saying "this group of people is notably bad" then it's worth asking whether they're actually worse than other broadly similar groups of people or not.

I think the article, at least to judge from the parts of it posted here, is arguing that rationalists and/or EAs are unusually bad. See e.g. the final paragraph about paperclip-maximizers.

Thanks for the feedback on your original downvote.

I was not offering "feedback on [my] original downvote"; there was no original downvote; as I already said, I did not downvote your earlier post. (Nor this one, nor any of your comments.)

You said you downvoted the original post because "It presents as new something that's not new at all".

No, I neither downvoted that post nor said I had. But I do think that that post presents as new something that isn't new, and I stand by that.

My original point was that we don't need to artificially limit ourselves to non-philosophical, universe-centric thinking

Sure. But when you post something saying "Isn't it a mistake to think X?", the implication even if you don't say it explicitly is that people generally think X. No one is posting articles saying "Isn't it a mistake to think that the sky is green?".

You describe my writing as “complaining,” and my discussion of risks as “complaints.” These words have an obvious negative connotation. [...] Why do you choose to use demeaning words to describe my concerns, but not the concerns of others?

I didn't particularly intend any negative connotation. (If you wanted to describe some of the things I said about your post as "complaints", that would be pretty reasonable too.) The term certainly isn't "demeaning". Out of curiosity I put my own username and "complaint" into the LW search bar, and the first thing it found (which admittedly is from years back) is a comment of mine in which I describe myself as "whining" and having a "complaint". Other search results also show me calling things "complaints" without any derogatory meaning. I think your, er, complaint here is just off-base; the demeaning you think you see is not real.

My original post had only one sentence about reddit.

Two, I think (the one after the one that explicitly contains the word "reddit" is surely continuing the thought of the previous one), out of only 12 sentences in the whole post, and the Reddit physics moderators are one of only two specific groups of people you call out as having a "theocratic" attitude. I don't think my description of what you wrote is unreasonable.

Again, the word "annoyed", is like the word "complained".

Again, I didn't intend it derogatorily. I get annoyed all the time. There is nothing wrong with getting annoyed (other than the fact that it's an unpleasant experience).

someone asked if Brian Greene's perspectives were "BS" and there was a pile-on of people criticizing Brian's ideas.

It sounds as if you're talking about this reddit discussion, ... except that I had a look at it and I didn't see a pile-on of people criticizing Brian Greene's ideas.

I don't think moderators are "evil"

The grandiose language and capital letters were intended as a signal that I wasn't being terribly serious, and was neither stating my own opinion nor making actual claims about yours. I'm sorry if that came across as dismissive. (Also, I think that when you begin by calling something "theocratic" you lose the right to complain that others are presenting your position as extremist.) Anyway, the point is that discussions like e.g. the "is what Greene and Kaku say BS?" one do happen, despite being somewhat philosophical and speculative, which to my mind indicates that whatever bias you might perceive it doesn't prevent such discussions taking place.

your choice of wording seems a little passive-aggressive when you say "I am not claiming that you are a crank; I’m not in a position to judge. But the evidence we have available at present doesn't give us much grounds for confidence that you're not."

I'm not sure how I could have expressed that in a way you would find less "passive-aggressive". Since the point I am making is "one reason why your post wasn't well received may have been that it gives the impression that you might be a crank", there's no way to say it without using the term "crank" or something broadly equivalent. I can't say "but of course it's obvious you aren't in fact a crank" because I've got no way to tell whether you are or not, because you've said very little about your opinions and ideas. I appreciate that it isn't pleasant to be told "what you wrote sounds like you might be a crank", but I think that is actually one of the reasons why you got downvoted.

DM me if you'd like a free PDF copy of one of the books [...] then you can have more evidence [...] I'm not going to engage further

It doesn't seem like there's much point in my having more evidence, if you've decided there isn't value in further engagement.

I did not downvote your original post, but it did not make a positive impression on me. I'll explain some reasons why, in case they are helpful.

  • It presents as new something that's not new at all. (Maybe your specific take on the idea of multiple universes and the like is new, but you don't say anything about what that specific take is.) E.g., there's the "string theory landscape" (example book by famous author with major publisher for popular audience: Leonard Susskind's "The cosmic landscape") and "eternal inflation" (example book by famous author with major publisher for popular audience: Michio Kaku's "Parallel worlds", though I admit much of the book is about other things) and Tegmark's "mathematical universe" (example book by famous author with major publisher for popular audience: Tegmark's own "Our mathematical universe").
  • Its complaints seem exaggerated. "Theocracy", really? You complain here that you only said some scientists are "theocratic" and everyone wrongly assumed you meant science as a whole. But what "theocratic" (as opposed to, say, "religious") means is that the people in charge, specifically, are driven by religion and probably suppressing all dissent. It makes no sense at all to say "these few people, a subset of a subset, are theocratic". So of course you were taken to be making a claim about The Scientific Establishment more generally. Anyway, it seems very untrue that there's anything like a theocratic prohibition on multiverse theories; the nearest thing to a theocracy in fundamental physics at present is the priesthood of string theory, and I think on the whole string theorists are for multiverses.
  • Calling the "shut up and calculate" approach "theocratic" seems to me to be simply a misunderstanding of what that phrase means. (Possibly also relevant: the person who first used it did so to criticize the Copenhagen interpretation's supposedly dismissive attitude towards more philosophical questions, and furthermore has since then come to think that the criticism wasn't really fair.) Physicists of the SUAC school don't, so far as I can tell, generally hold that talk of philosophical issues should be suppressed; they just don't themselves find it helpful and think physicists can make more progress by focusing on other things.
  • The focus on "moderators of the physics subreddits" seems weird. Who cares about the moderators of the physics subreddits? (And why would Less Wrong be a suitable place to discuss their alleged deficiencies?)
  • In fact, that focus strongly suggests a specific scenario: you have some theory of physics, you've been trying to promote it on Reddit, it hasn't gone down well, and you're annoyed about that. And, unfortunately for you, in the great majority of such cases the problem is not primarily with the Reddit moderators. Of course you might be the exception! But the expectation you're setting up is not a good one.
  • Taking a look at e.g. /r/physics, there is discussion of philosophy, multiverses, etc., from time to time. So if the Evil Theocratic Moderators are suppressing such discussion, they aren't doing it very carefully.

Also:

  • I don't think shminux's assumption that you could stand to learn some fundamental physics rather than handwaving about it was unreasonable, whether or not it was correct. Your original post doesn't read like it's written by someone with substantial expertise.
  • Having self-published books about relativity is not strong evidence about whether a person has expertise in fundamental physics. (I actually suspect it may be evidence against.) Again, I am not claiming that you are a crank; I'm not in a position to judge. But the evidence we have available at present doesn't give us much grounds for confidence that you're not.

[EDITED to add:] Oh, one other thing: your profile gives a URL for your web page, but it is the wrong URL. The real one has .org on the end, not .com.
