All of David Hugh-Jones's Comments + Replies

A little meta-advice... you're in a weird community – not necessarily bad, but quite unusual – and one that is far from guaranteed to have a high level of experience and expertise with childrearing. And yet also one that is quite likely to confidently express numerous opinions on the topic.

I understand your point. Probably I'm overestimating. Which quotes were hard? I'm guessing that e.g. “the commandement, or example of our superiours” and “grant [sin] but her little, and this little will quickly come to a great deale” are relatively clear.

C17 English isn't hard to learn to read: it's modern English (not Middle or Old English), just in an antiquated style, and sometimes words have different shades of meaning. By "not hard" I mean you can teach yourself, simply by reading stuff and picking it up as you go along - I did Shakespeare at school, then …

There are definitely differences. One is that NNs are trained on training data and then let loose on real world (or testing) data. Markets are always training online. Another is that NNs (are supposed to) approximate a true hidden function, whereas markets are adapting to changing conditions not necessarily to a single underlying truth. But markets do adapt to inputs they haven't seen before, and there are economic theories describing that process, like adaptive expectations and tatonnement. I suspect that markets are more likely to adjust quite quickly, and also to "forget" old data quite quickly.
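As a toy illustration of that last point, adaptive expectations can be written as an exponentially weighted online update, which forgets old data geometrically. (A minimal sketch; the adjustment speed `lam = 0.3` and the price path are made-up values for illustration, not anything from the thread.)

```python
# Adaptive expectations as an online, exponentially weighted update:
#   E_t = E_{t-1} + lam * (p_t - E_{t-1})
# Old observations decay geometrically, so the "market" both adjusts
# to new inputs quickly and forgets old data quickly.

def adaptive_expectation(prices, lam=0.3, initial=0.0):
    """Return the sequence of adaptively updated expectations."""
    expectations = []
    e = initial
    for p in prices:
        e = e + lam * (p - e)  # partial adjustment toward the new price
        expectations.append(e)
    return expectations

# A price that jumps from 10 to 20: within ten periods the expectation
# is close to 20 and the pre-jump history is almost entirely forgotten.
path = adaptive_expectation([10] * 5 + [20] * 10)
```

The same update rule is the discrete-time version of tatonnement-style partial adjustment; the forgetting rate is controlled entirely by `lam`.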

Thank you! Yes, this guy adds details.

Also, the nuclear family is absurd and unnatural, and we evolved to be raised by entire communities, not just by one mother and maybe one father. With too few adult caretakers, of course outcomes will be far worse.

Citation needed. Parents are biologically related to their children. Entire communities are not. Is there any evidence that humans have any evolved characteristics which make the nuclear family worse than community upbringing?

Historically, the "community" did not refer to strangers who by accident live on the same street, but to people of the same village or tribe, which typically included cousins, aunts, uncles.

I think almost everyone (who isn't daft) accepts IQ is partly genetic - and the author does too. But the question is whether there's a gene-environment interaction in parenting styles, which is slightly different.

So the argument is based on substitutability. If you don't do (global good thing X) - and if it is truly important - someone else probably will.

That is true, but I also think journal editors will internalize that. And it's easy to fetishize this stuff – electronic formats die out, so let's engrave all our journals on stone tablets! – but arguably, any important article exists in 50 versions on the web, and will eventually be preserved, so long as anyone cares about it. At least, that seems to have happened so far.

I think that's a weird take. A cooperation game typically has actions where you lose, but others gain more (whatever actions others take). Prisoner's Dilemmas and public goods games are simple examples. The only wrinkle is "what counts as more" if you take seriously the idea that utility is non-comparable across persons. But a weaker criterion is just "everyone would be better off if everyone cooperated", which again the PD and public goods games satisfy.
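To make the weaker criterion concrete, here's a minimal sketch using the textbook Prisoner's Dilemma payoffs (the usual illustrative values T=5 > R=3 > P=1 > S=0, chosen here for the example): defection is dominant for each player, yet both players do better under mutual cooperation than under mutual defection.

```python
# One-shot Prisoner's Dilemma with textbook payoffs T=5 > R=3 > P=1 > S=0.
# Keys are (my move, their move); values are my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Defection strictly dominates cooperation for each player...
assert PAYOFFS[("D", "C")] > PAYOFFS[("C", "C")]
assert PAYOFFS[("D", "D")] > PAYOFFS[("C", "D")]

# ...yet everyone would be better off if everyone cooperated:
assert PAYOFFS[("C", "C")] > PAYOFFS[("D", "D")]
```

The same structure carries over to an n-player public goods game: contributing is individually dominated, but universal contribution beats universal free-riding.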

You're right. I didn't distinguish between the two concepts, because I think cooperation in the colloquial sense – working together for a shared goal – typically involves elements of both. 

At its simplest, the internet makes communication easier, especially public communication. That should certainly help to solve coordination problems. It'll also help solve cooperation problems insofar as (1) communication shapes preferences; (2) people are susceptible to social norms, and communication helps to spread norms, clarify them and make them salient; (3) p…

For social science, here are some I'd throw in:

  • Are there questions nobody is asking?
  • Is there a real world phenomenon that nothing in the field addresses?

The best social science follows George Orwell's dictum: it takes huge effort to see what is in front of your face.

The Scheidel review is worth reading, though it's rather hurried. Their argument is quite complex and sometimes a bit fuzzy, and the book has a lot of detail; I think it'll take some time for academics to chew through it and understand exactly what's at stake.

 Other academic reviews I've read haven't been great - the typical stuff has been "how can they ignore [my narrow narrow subfield]".

You might be interested in the concept of "license", which was widely used until about the 18th century. License was like liberty, but license was bad, liberty was good, and the difference was that liberty presupposed self-restraint. So, liberty would be in the middle of your line, license on one end of it, and on the other maybe "tyranny".

Sure, I find that take on moral intuitions plausible. But if society has to make a real choice of the order of "how much to tax carbon", I think that collectively we would not want to make the decision based on people saying "meh, no strong opinions here, future world X just seems kinda prettier". We need some kind of principled framework, and for that... well, I guess you need moral philosophy! 

Sorry, missed this somehow. I don't think it's plausible that there'll ever be widespread agreement on any philosophical framework to be used to make policy decisions. In fact, I think that it's much easier to make public policy decisions without trying to have a framework, precisely because the intuitions tend to be more shared than the systematizations. I've never seen an actual political process that spent much time on a specific framework, and I've surely never heard of a constitution or other fundamental law or political consensus, anywhere, that said, let alone enforced, anything like "we're a utilitarian society and will choose policies accordingly" or "we're a virtue ethics society and will choose policies accordingly" or whatever.
M. Y. Zuo (1y):
The curious thing about your wording is that you go from ‘we would not want to make’ to ‘we need some kind of principled framework’. The former does not automatically imply the latter. Additionally, you presuppose the possibility of discovering a ‘principled framework’ without first establishing that such a thing even exists. I think the parent comment was trying to get at this core issue.

The point is whether they exist conditional on us taking a particular action. If we do X a set of people will exist. If we do Y, a different set of people will exist. There's not usually a reason to privilege X vs. Y as being "what will happen if we do nothing", making the people in X somehow less conditional. The argument is "if we do X, then these people will exist and their rights (or welfare or whatever) will be satisfied/violated and that would be good/bad to some degree; if we do Y then these other people will exist, etc., and that would be good/bad …

Now I hear the Life of Brian playing in my head: "Always look on the bright side of life! De-duh, de-duh de-duh de-duh!" 

Hume didn't always take his own rhetoric or ideas too seriously. He said he couldn't prove that his friends even existed, but when he played billiards with them, these doubts vanished.

Here's another thought experiment for those convinced by this gloomy view... suppose you find a large red switch marked "Universe: on/off". Flipping it will cause the immediate painless non-existence of everyone everywhere. Do you flip it? Think how mu…

Yes, I wouldn't say suicide is the be-all and end-all indicator, though it is quite suggestive. I'd also lay weight on simple common sense and intuition here. Most people today like life. If you read about ordinary people from 200 years ago or before, it doesn't seem like unremitting misery. (Piers Plowman, the "rude mechanicals" in Shakespeare, the peasants in the Georgics or in medieval Books of Hours, the ordinary people in the Old and New Testaments. Maybe these were just elites idealizing peasants? Hmm... up to a point.) Reporters and anthropologists who live with peasants and the poor today similarly paint a picture with light as well as shade.

Saying "it's good to be alive" is not the same as saying people have a moral imperative to bring children into the world. It would probably improve human welfare if I gave all my assets to the poor and starved to death, but I don't have a moral imperative to do it. Judgments of overall welfare are ways of deciding what to do collectively, but no individual has an absolute duty to maximize overall welfare at the expense of his own basic desires and life choices. 

(This is my personal view, not especially carefully thought-out. Some people probably do th…

Well, he says he does. I think it would be very sad if he acted on the idea, and I bet you agree.

I don't know about population sizes either. Maddison (cited above) says that in 1500, the Americas held only about 4% of global population, with 20% in Europe and 68% in Asia.

There is substantial disagreement about New World population levels. Maddison seems to have written before 1491 publicized evidence that smallpox killed most people there before Europeans made much contact with them. I used the estimate of 100 million that I got from 1491's Wikipedia page, but that's likely near the high end of expert guesses.

Yup, definitely true that I haven't considered the effect on non-humans. I think you'd have to be very pessimistic to say that agriculture was a mistake from this perspective. That might be true (1) if agriculture involved so much animal suffering that it outweighed the human good involved. Or (2) if from agriculture on, humans were set on a path that inevitably led to the destruction of life on earth, e.g. by runaway global warming or nuclear war. I think (1) might be true of modern factory farming, but is less likely true of traditional farming. (2) is as yet unknown, but I hope that it will not be so.

I don't think this argument makes sense. Of course the people who will be born are "imaginary". If I choose between marrying Jane and Judith, then any future children in either event are at present "imaginary". That would not be a good excuse for marrying Jane, a psychopath with three previous convictions for child murder. More generally, any choice involves two or more different hypothetical ("imaginary") outcomes, of which all but one will not happen. Obviously, we have to think "what would happen if I do X"? It would be silly to say that this question i…

If they will come into existence later, they have moral weight now. If I may butcher the concept of time, they already exist in some sense, being part of the weave of spacetime. But if they will never exist, it is an error to leap to their defense -- there are no rights being denied. Does that make more sense?

As a basis for purely personal morality that may be fine, but as a way of evaluating policy choices or comparing societies it won't be enough. Consider the question "how much should we reduce global warming"? Any decision involves alternative futures involving billions of people who haven't been born yet. We have to consider their welfare. Put another way, the word "imaginary" is bearing a lot of weight in your argument: people who are imaginary in one scenario become real in another.

This logic holds if it is an unassailable given that they will be born. If you remove that presupposition and make it optional, then these people can be counted as imaginary as jbash says. They become a real part of the future, and thus of reality, only once we decide they shall be. We might not. Maybe we opt for the alternative of just allowing the currently alive human beings to live forever, and decline to make more. PS: Anyone know a technical term for the cognitive heuristic that results in treating hypothetical entities that don't exist yet as real things with moral weight, just because we use the same neural circuitry to comprehend them, that we use to comprehend a real entity?

Well, that's true, but I think it's less a problem for me than it is for a lot of people here, because I don't think there's any respectable moral/ethical metric that you can maximize to begin with.

Ethics as a philosophical subject is on very shaky ground because it basically deals with creating pretty, consistent frameworks to systematize intuitions... but nobody ever told the intuitions that they had to be amenable to that. All forms of utilitarianism, specifically, have horrible problems with the lack of any defensible way to aggregate utilities. There …

The argument isn't that simply having more people alive is better. That's why I spend time arguing that people's lives are worthwhile.

I mention two intuitions. The intuition that it's good to be alive is quite widely shared, no? Even people who claim to disagree often act as if they agree. (My uncle repeatedly said he didn't want to live any more, yet he carefully avoided Covid.) 

The intuition that people's lives have value in themselves, and not in relation to what else is going on, isn't just a gut feeling. It relates to the idea that what has value…

Actually, the worldview "it's NOT good to be alive; the fact that almost everyone thinks it's good to be alive is just a failure of human reflectivity" is pretty consistent. I don't endorse it, but my best friend does.
There are also ethical (even utilitarian) frameworks that consider hypothetical people to be fundamentally different from real current people. I can say that I think we should maximize the average utility of all current people going into the future, while also thinking that I should choose the future where the hypothetical people have the highest average happiness. How you weigh current people versus future hypothetical people is complex, but beyond the scope of this post I think.

That is, if there are ten people alive today and I'm choosing between an option where the ten people each have 10 utils or 100 utils, obviously I should choose the 100 utils. But if I'm choosing between a future where 100 people will exist with 5 utils each or a future where 10 people will exist with 10 utils each, there is no person who is worse off in the second future compared to the first, so no person is harmed by choosing the second future.

Frankly, I don't think that people have a moral intuition that actually matches your suggestions. Almost any couple in the developed world could probably support raising ten children, and all of those children would be happy to exist, but it just seems wrong to say that couples have a moral imperative to have as many children as possible. (I think that would still hold true even if pregnancy and childbirth were painless and free.)

The thing is that I don't give imaginary people equal weight to real ones. It seems obvious to me that somebody who doesn't exist anywhere in space or time doesn't get any consideration. And that means that I am under no obligation to bring them into existence or to care whether anybody else does.

As for aggression, all I can say is that I processed it that way.

It sounds as if you have a number of political positions you want to get across, but I'm not sure how they relate to my argument, which is about how discursive conflict takes place. If you don't like my examples, of course you can just replace them in your mind with examples you prefer.

I was mostly just disagreeing with this: there isn't a single, all-encompassing consensus. Discursive conflict doesn't just involve everyone agreeing on stuff. It also includes:

  • Widespread narratives that a lot of people will, at a minimum, pay lip service to
  • But also people disagreeing with those, fundamentally

This means that there is still fighting over (a) which values are important, and (b) narratives. These can be won, but not completely. There are also some positions here and there that are compromises, like:

  • 'Copyright is fine, but it shouldn't keep getting extended just so Disney can hold onto its intellectual property forever.'

It's not that 'I have a number of political positions' - it's that they exist. The fight over narratives is not entirely over in a lot of areas - it's just strong, broadly. And this happens in politics because people still care, and stuff is still going on. At this point, do a lot of people care who invented calculus? I would guess, not a lot.

Sure. But the most interesting dependent variable isn't usually "how many standard deviations of Y will I gain", it's e.g. "how many years of education will I gain". In any case, on either scale, is there a PGS where a 1 s.d. change does something big? You might say the most recent EA is a candidate. In one dataset a 1 s.d. increase causes (i.e. within-siblings) about a 4.5 percentage point increase in the probability of university attendance. 

I agree that SD units are strictly speaking meaningless and something like this is relevant. However, I'm just saying that R2 does not help over R with this, and in fact makes it worse, because R2 is nonlinearly related to the meaningful quantities while R is linearly related to them. I do not know how the EA PGS relates to meaningful quantities, and to be honest I would not recommend selecting for EA PGS because (to paraphrase one of gwern's articles) EA measures an input rather than an output (unlike intelligence PGS), and so it is more likely to contain bad stuff too. (IIRC EA PGS contributes to a bunch of mental illnesses, whereas intelligence PGS only contributes to autism and anorexia. And realistically GD too, but I haven't seen explicit data on it yet.)

I am not sure that "earlier is better". It's true that the biology favours early parenthood. But the sociology goes the other way: it's better to have children when you're high-income and worldly-wise. So there might be a trade-off between e.g. health and wealth. 

There's a big literature on this, you could start with e.g. Powell, B., Steelman, L.C. and Carini, R.M., 2006. Advancing age, advantaged youth: Parental age and the transmission of resources to children. Social Forces, 84(3), pp.1359-1390; or for health, Myrskylä, M. and Fenelon, A., 2012. Ma…

I suspect "it depends" is going to dominate here. I'd argue that for many people, parental health (including sleep resiliency) is most important for young kids, and income/wisdom is more important for tween/teen kids. I'm also very unwilling to have opinions generally about "earlier" or "later" without reference points - specifics matter. For people in the Western intellectual class (for whom a university degree and a white-collar job is the default), I'd recommend 23-27 as a good target, with delays of up to 5 years being quite reasonable, but not first-best for most. Please do discount my opinion - my wife and I chose not to have kids. This is based on observations and discussions with quite a few people in my extended friends circle, some who had kids "early" (only one at 19, but lots in early 20s), and a lot who had kids "late" (29-35, and one at 43!). Each has specific joys and frustrations, and on balance I hear minimal "I wish I'd..." from the earlier crowd. Also, and importantly, there are almost always overriding factors that make optimizing the parents' age at birth at best a secondary concern. Finding and agreeing with a partner, and having a stable, trusted situation in which you feel ABLE to commit to having kids, matters far more than the statistical benefits of an age band. Thus, my advice: "when you've decided to have kids with this person, my recommendation is not to delay much".

I'm not sure what you mean by selective power. I suppose the natural question is "how many extra (e.g.) IQ points do I get for an extra standard deviation of a PGS?" In other words, you want the regression coefficient, where the dependent variable is on some meaningful scale. I stand by my comment, unless you can show a PGS where a 1 s.d. change currently does something big.

Genetic correlations: maybe, but we haven't looked at genetic correlations for many things, and indeed we don't have other polygenic scores to correlate them with for many things, and indeed we haven't collected questions on big enough samples to create those polygenic scores for many things, so again, we aren't there yet.

The R value is equivalent to the standardized version of the regression coefficient (modulo some statistical details that don't make a difference here). Therefore it will be linearly related to the regression coefficient, in whichever scale you choose. Meanwhile, the R2 will be nonlinearly related to the regression coefficient, due to being a nonlinear function of R. See also Marco Del Giudice's paper on the same topic: "Are we comparing apples or apples squared? The proportion of explained variance exaggerates differences between effects".

I stand by my comment, unless you can show a PGS where a 1 s.d. change currently does something big.

A 1SD change on a latent variable can have a big absolute risk effect for liability-threshold traits like schizophrenia depending on the pre-existing absolute risk / where one is on the latent spectrum.

(This is the nonlinearity of normal distributions and thin tails again - if the risk is ~0 SD, perhaps because there are 2 schizophrenic parents carrying a very high risk burden, then shifting a fraction of a SD drops the absolute risk down from 40% to 11% …
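A quick sketch of that liability-threshold arithmetic, using Python's standard-library NormalDist. The 40% baseline figure is from the comment above; the 1% baseline is an assumed population-level contrast added for illustration.

```python
from statistics import NormalDist

def risk_after_shift(baseline_risk, sd_shift):
    """Absolute risk under a liability-threshold model, after lowering
    latent liability by sd_shift standard deviations."""
    nd = NormalDist()
    threshold = nd.inv_cdf(1 - baseline_risk)  # liability cutoff implied by baseline risk
    return 1 - nd.cdf(threshold + sd_shift)    # mass still above the cutoff after the shift

# High-risk family, 40% baseline: a 1 SD drop in liability takes
# absolute risk down to roughly 11%, as in the comment.
high = risk_after_shift(0.40, 1.0)

# Near-average baseline of 1%: the same 1 SD shift leaves risk well
# under 0.5% -- the absolute effect depends on where you start.
low = risk_after_shift(0.01, 1.0)
```

The point of the two calls is that an identical shift on the latent scale produces very different absolute-risk changes depending on how close the starting point is to the threshold.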

In my opinion, as of 2021, no.

  1. Our polygenic scores are predictive, not causal. That is, we estimate them by maximizing predictive power in a non-causal setting. They do have some causal effects. For example, only about half of an educational-attainment score's correlation with EA is causal. The other half is correlated environments, genetic nurture, etc.
    In future, sibling-based designs may lead to truly causal scores. I don't know how long that will take. 
  2. Scores typically have low predictive power. The R2 of scores for EA is IIRC about 10%. By definition, t
…

Scores typically have low predictive power. The R2 of scores for EA is IIRC about 10%. By definition, they will never go above the heritability of EA which is only about 40%. Unless you're prepared to pump out tons of eggs, test them all, and pick the best, you are probably not going to do much to change the phenotype. Again, with larger sample sizes this will eventually change.

The power of a PGS is more strongly related to the R than to the R2. So a PGS with an R2 of 10% corresponds to sqrt(10%)=0.32, which is sqrt(10%)/sqrt(40%)=half of the total selecti…
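The arithmetic behind "half of the total selective power", sketched out (figures as in the thread: R² = 10% for the score, a 40% heritability ceiling):

```python
import math

# Selection response scales with the correlation r = sqrt(R^2), not with
# R^2 itself. With R^2 = 10% for the PGS and a heritability ceiling of 40%:
r_pgs = math.sqrt(0.10)                  # ~0.32
r_ceiling = math.sqrt(0.40)              # ~0.63
fraction_of_ceiling = r_pgs / r_ceiling  # = 0.5: half the total selective power

# The nonlinearity: doubling R^2 multiplies r by only sqrt(2), about 1.41x.
gain_in_r = math.sqrt(0.20) / math.sqrt(0.10)
```

This is why comparing scores by R² understates a weak score relative to a strong one: the quantity that feeds into the selection response is r, and r shrinks only as the square root of R².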

Re your first paragraph: polygenic scores that directly predict cognitive ability are also being selected against. Polygenic scores designed to predict very high intelligence also turn out to be good at predicting ordinary intelligence, so it doesn't seem likely that "Einsteins" work in some fundamentally different way [1].

I agree that ironing out "errors" could be risky, especially given the current state of our knowledge. But equally, that does not imply that it's no big deal if people's genetics are getting less healthy or smart. There are two risks her…

So instead of pointing in different directions, the other indicators point in the same direction. A belief that "humanity stays extant because of our intelligence" might be common, but it might have ideological roots. Say for reference there were the properties of being tall, being able to derive calories from food, and being smart. A society that was fearful and took precautions to avoid evolving tall would seem silly. Being able to derive calories from food does seem connected to thriving, and the extinction chances of pandas suggest that it is possible to go extinct via that route.

If we were following singularity narratives, we might argue that intelligence without alignment would be dangerous, and if we found that kindness (or any alignment analog) was being selected for at the cost of intelligence, we could use this to argue that "even nature agrees with us". If we condemn societies that do not treat becoming/staying kind as a problem, while being ambivalent about whether they guard against stupidity, that would still be more an expression of our values than an application of fact. And that basic situation doesn't change if we condemn based on intelligence upkeep.

On average, features that are being selected for tend to ward off extinction, even though every extinct species evolved its way to that dead end. Because most species can only directly think about the survival of individuals, family units and herds, there is no "artificial selection" for the direction of evolution. However, if we become able to see where the direction is going, then we can choose to consciously take the helm and steer, or not. We are already enduring unconscious evolution, so I would be very careful about beliefs that think they can one-up that. But let's be clear that if we steer, we will be going where we are steering, and not necessarily where it would be good for us to go. On that level, handing out free cash is equally suspect as murder sprees if the goal

I'm not sure I understood all of your points. But overall, yes, we might just get rid of rare mutations, but I wonder if realistically people will stop there. (That is indeed a slippery slope argument.) 

I think that's basically correct. Or maybe put another way: they act as if finding such genetic differences would plausibly legitimize racial discrimination.

That may not be nuts. Suppose there is real racial discrimination (not a big ask). Then if we discover substantively large differences between ethnic groups, it might be easier to "get away with" racial discrimination because someone can just claim "oh well, ethnic groups are different and that's why we see different outcomes". Similarly, non-deliberate (e.g. unconscious or "structural") discrimination might be harder to spot, if everyone just assumes that different outcomes between groups are the result of different genetics. 

I should add that I use images to help make my point. The Teach A Man To Fish theory of argumentation: if someone sees something themselves, they understand it better than if you hold their hand through it. I'm guessing that Lesswrong readers can appreciate who Ataturk and Erdogan are and why they're relevant to the topic. Not sure that justifies Peepshow clips, though....

I did say "ultimately". I know about the possibility of horizontal cultural transmission, and I discuss it later in the article. I should read TMM, maybe there are great examples of horizontal cultural transmission beating out vertical transmission. In the case we're discussing here, I doubt it. I think the West's cultural infectivity will weaken as its economic dominance slips.

Daniel Kokotajlo (1y):
TMM IIRC has an a priori argument that we should expect horizontal transmission to beat vertical transmission often: It happens faster. Replication rate of memes is orders of magnitude faster than replication rate of genes. It is surprisingly analogous to viruses/parasites in that regard. Genes do evolve over time to be more resistant to viruses/parasites; this is why the native Americans were disproportionately wiped out by disease compared to Europeans, who were disproportionately wiped out compared to Africans. However, despite this, viruses/parasites remain prevalent in the world, even (especially) in Africa. The solution to the puzzle is that the diseases are evolving too.

I'm not sure Kaufmann is making that mistake. He focuses on extreme sects within each religion, not on Islam as a whole, and mostly on Western countries rather than the Middle East. You could say I'm making the mistake, because I discuss the probability of non-Westerners buying into Western values. Yeah, that could be. But I would also distinguish between secularization (and other kinds of modernization) and Westernization. (Japan did the one but not the other, for example.)

You're right that marriage and family structure are "deep". A friend of mine sugg…

Where your argument is concerned, it's a distinction without a difference. Secularization absolutely destroys birthrates. When Japan secularized, its total fertility rate (TFR) dropped to 1.4. China's TFR was 6.32. It is 1.6 today. India's TFR has dropped from 5.9 to 2.2. The decline in the Arab world, while not as severe as that in Asia, is still pronounced. Egypt's TFR has dropped from 6.7 to 3.3. Jordan dropped from 8.0 (!) to 2.8. Morocco dropped from 7.0 to 2.4. I think secularization is a nontrivial cause of these declines.

I think shared is too broad. You like Coke, I like Coke - we share that. But it's shared because we both have sugar-loving taste buds. To be cultural, you need something more. Hence the biologists' emphasis on the transmission mechanism via learning.

Does it matter? My argument is that a lot of what gets called "Western culture" is really just "stuff that is appealing to human taste buds", in a broad sense. So yes, it is spreading, but no cultural learning is required. Coca Cola sells Coke, people in India like it and buy it; but this doesn't have implications for things that are actually cultural, such as attitudes to gender, political values, etc.

I think there are two phenomena: 

(1) General Westernization. That certainly still takes place, as you point out. The question is how deep that Westernization is - to put it crudely, is it at "Magna Carta" or "Magna Mac" level? 

(2) The emergence of "hardened" subcultures which are resistant to Westernization and which have high birth rates. The evidence from Kaufmann is pretty persuasive about (2).

I am still thoroughly unpersuaded. Birth rates are one thing. Retention rates are quite another. As we've seen from the evidence of Quiverfull and other Evangelical Christian communities in the US, most children do not remain in the community and continue its practices. The Middle East is experiencing high population growth but is also the most rapidly secularizing region in the world. Kaufmann seems to be making the mistake of assuming that because many Middle Eastern countries mandate Islam as a state religion, the people residing in those countries are necessarily devout. Finally, with regard to the "depth" of Westernization, I would argue that changes to marriage practices and family structure are an even deeper form of Westernization than the adoption of particular political values.

It's a hopeful story, but again I think this is a version of "in the best of all possible worlds". Sure, if everybody is in a long-run repeated game, then anything can be an equilibrium, including all possible efficient outcomes. That might be possible sometimes, but we don't see many firms pursuing a strategy of recommending their rivals' products.

I am frequently allowed to leave shops without buying a product, so at least some baseline non-loyalty is around. "Necessarily" is a possibility claim, so at that level inconvenient worlds are relevant. If the situation is a long iteration game, then the relevance of short-iteration analyses can be questioned. In theory a hyper-loyal seller might be tempted to give wrong change to a customer paying in excess of the agreed price. However, in practice the PR fallout of trying anything like this is so great that they are forbidden from doing so on multiple levels. There are lots of situations where tribalism would be so abhorrent that we don't even register it as a relevant possibility.

So, if there are zero per-individual fixed costs from hiring, then it doesn't matter how many sales any salesperson makes. It seems reasonable to assume that fixed costs are non-zero, so that there is a breakeven below which hiring someone wouldn't be worthwhile. Here's some evidence on that which suggests that indeed fixed costs are large.

Right, if both salespeople agreed to swap customers they could cooperate and improve the equilibrium. Standard Coase theorem reasoning applies. But as in many other real-world cases, that kind of enforceable agreement may not be feasible. (What if the salesmen don't know each other? Or there's 1000 firms instead of 2? Note that the salesmen have to know each other. They can't just recommend the other firm's products, because then they're not worth hiring. They're only worth hiring if they are recommending your own products when appropriate, and also being …

I don't fully get why they need to know each other. The kinds of norms that keep this behaviour up run on "you would have done the same for me", which upholds the situation if there are indeed others following similar principles; but following the principles doesn't check for their existence. Should the firm choose to replace a recommender with a loyal seller, then it is likely also to destroy other firms' recommending of customers to it. Then the caused sales can be more directly attributed to the seller, but the total output remains the same. I think you are implicitly arguing that firms should always split, i.e. firms should fire people who are not known to be linked to profit generation. But this runs the risk of cutting down and destroying profit-generating processes that can't be well attributed to single actors. Part of the reason for the firm is that the employees can cooperate instead of competing against each other. So there are scenarios where competition is destructive. If the effect were truly mandatory, then two departments of the same company would be forced to play only for their own benefit, and the larger company trying to force them to cooperate would necessarily fail. Splitting might be ineffective for other reasons, so it is not an automatic recommendation.
While the salespeople cannot unilaterally recommend other firms' products, the firm as a whole can have a strategy of recommending the best product, and use that reputation to land more customers (the Miracle on 34th Street / Progressive insurance model).

Can't believe nobody's mentioned Pascal's wager. Surely this is the simplest reason not to sell your soul.

The other reasons seem to me like the irrational tail wagging the rational dog. If you are sure you don't have a soul, then selling it for $10 is not a big deal - just as it wouldn't be if someone offered to buy my Thetan and I'm not a Scientologist.