Recent Ph.D. in physics from MIT, Complex Systems enthusiast, AI researcher, digital nomad. http://pchvykov.com


So yes, I agree that intolerance can also be contagious - and it's sort of a quantitative question of which one outweighs the other. I don't personally believe in "evil" (as you sort of hint there, I believe that if we are sufficiently eager to understand, we can always find common humanity with anyone) - but all kinds of neurodivergences, such as a biological lack of empathy, do exist, and while we need not stigmatize them, they may be socially disruptive (like torching a city). Again, I think the question of whether an absolutely tolerant society can be stable in the face of psychopaths torching cities once in a while is a quantitative one.

But what I'm excited about here is that if those quantities are sufficient (tolerance is sufficiently contagious, psychopaths are sufficiently rare, etc.), then we could have an absolutely tolerant society - even in that pacifist way you don't quite like. And that possibility in itself I find exciting. It is also a possibility that I think Popper did not see.

While these are relevant elaborations on the paradox of tolerance, I'd also be curious to hear your opinion on the proposal I'm making here - could tolerance be contagious, without any intentional action to make it so (violent or otherwise)? If so, could that make the existence of an absolutely tolerant society conceivable? 

I think your perspective also relies on an implicit assumption which may be flawed. I'm not quite sure what it is exactly - but it's something around assuming that agents are primarily goal-directed entities. This is the game-theoretic context - and in that case, you may be quite right.

But here I'm trying to point out precisely that people have qualities beyond the assumptions of a game-theoretic setup. Most of the time we don't actually know what our goals are or where those goals came from. So I guess here I'm thinking of people more as dynamical systems.

For what it's worth, let me just reply to your specific concern here: I think the value of anthropomorphization I tried to explain is somehow independent of whether we expect God to intervene or not. If you are saying that this "expectation" may be an undesirable side-effect, then that may be so for some people, but that does not directly contradict my argument. What do you think?

Just updated the post to add this clarification about "too perfect" - thanks for your question!

I like the idea of agency being some sweet spot between being too simple and too complex, yes. Though I'm not sure I agree that if we can fully understand the algorithm, then we won't view it as an agent. I think the algorithm for this point particle is simple enough for us to fully understand, but due to the stochastic nature of the optimization algorithm, we can never fully predict it. So I guess I'd say agency isn't a sweet spot in the amount of computation needed, but rather, perhaps, in the amount of stochasticity?
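To make the point concrete, here is a minimal sketch (hypothetical, not the actual code from the experiments I mention) of a point particle whose update rule is fully understandable - deterministic drift toward a goal plus Gaussian noise - yet whose trajectory cannot be predicted exactly in any given run:

```python
import random

def noisy_descent(start, goal, noise=0.5, steps=200, seed=None):
    """Point particle drifting toward a goal with added Gaussian noise.

    The rule is fully known (step downhill, plus noise), yet each
    trajectory differs from run to run.
    """
    rng = random.Random(seed)
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        # Deterministic part: unit step toward the goal.
        dx, dy = goal[0] - x, goal[1] - y
        norm = max((dx**2 + dy**2) ** 0.5, 1e-9)
        # Stochastic part: isotropic Gaussian noise on top of the drift.
        x += 0.1 * dx / norm + rng.gauss(0, noise)
        y += 0.1 * dy / norm + rng.gauss(0, noise)
        path.append((x, y))
    return path

# Two runs of the *same* fully-understood algorithm from the same start:
p1 = noisy_descent((0, 0), (10, 10), seed=1)
p2 = noisy_descent((0, 0), (10, 10), seed=2)
print(p1[-1] != p2[-1])  # trajectories diverge despite identical rules
```

The algorithm is transparent, so "amount of computation" is not what separates the runs - only the noise is, which is the sense in which stochasticity, not complexity, might be the relevant sweet spot.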

As for other examples of "doing something so well we get a strange feeling," the chess example wouldn't be my go-to, since the action space there is somehow "small" - it is discrete and finite. I'm more thinking of the difference between a human ballet dancer and an ideal robotic ballet dancer - that slight imperfection makes the human somehow relatable for us. E.g., in CGI you have to make your animated characters make some unnecessary movements, each step must be different from any other, etc. We often admire hand-crafted art more than perfect machine-generated decorations for the same sort of minute asymmetry that makes it relatable, and thus admirable. In audio recording, you often record the song twice for the L and R channels, rather than just copying one take (see 'double tracking') - the slight differences make the sound "bigger" and "more alive." And so on.

Does this make sense?

Ah, yes! Good point - so something like the presence of "unseen causes"?
The other hypothesis the lab I worked with looked into was the presence of some 'internally generated forces' - sort of like an 'unmoved mover' - which feels similar to what you're suggesting?
In some ways, this feels not really more general than "mistakes," but like a different route. Namely, I can imagine some internal forces guiding a particle perfectly through a maze in a way that will still look like an automaton.

Just posted it. It feels like the post came out fairly basic, but I'm still curious about your opinion: https://www.lesswrong.com/posts/aMrhJbvEbXiX2zjJg/mistakes-as-agency

Yeah, I thought so too - but I only had very preliminary results, not enough for a publication. Perhaps I could write up a post based on what I had.

Thanks for the support! And yes, this is definitely closely related to questions around agency. With agency, I feel there are two parallel, related questions: 1) can we give a mathematical definition of agency (and here I think of info-theoretic measures, abilities to compute, predict, etc.), and 2) can we explain why we humans view some things as more agent-like than others (this is a cognitive science question that I worked on a bit some years ago with these guys: http://web.mit.edu/cocosci/archive/Papers/secret-agent-05.pdf ). I never got around to publishing my results - but I was discovering something very much like what you write. I was testing the hypothesis that if a thing seems to "plan" further ahead, we view it as an agent - but instead I was finding that the number of mistakes it makes in the planning is actually more important.
