Kaj_Sotala

Sequences

Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind
Concept Safety
Multiagent Models of Mind
Keith Stanovich: What Intelligence Tests Miss

Wiki Contributions

Comments

Casus Belli:

A casus belli is a justification for declaring war, right? Does using that term here mean that you are viewing this post as an act of war against someone? I'm confused.

I haven't read the posts that you're referencing, but I would assume that GPT would exhibit learned modularity - modules that reflect the underlying structure of its training data - rather than innately encoded modularity. E.g. CLIP also ends up having a "Spiderman neuron" that activates when it sees features associated with Spiderman, so you could kind of say that there's a "Spiderman module", but nobody ever sat down to specifically write code that would ensure the emergence of a Spiderman module in CLIP. 

Likewise, experimental results like the Wason Selection Task seem to me explainable as outcomes of within-lifetime learning that does end up creating a modular structure out of the data - without there needing to be any particular evolutionary hardwiring for it.
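To make the "learned modularity" point a bit more concrete: one way to look for something like a Spiderman module is to check whether some individual unit inside the network fires selectively on Spiderman-related inputs. Below is a minimal sketch of how one might probe for such a unit in CLIP's vision model - the layer and unit indices are placeholders rather than the actual published "Spiderman neuron" (which was found in a different CLIP variant), so treat this as an illustration of the method, not a reproduction of the result.

```python
# Sketch: probing CLIP for a concept-selective unit ("Spiderman neuron"-style).
# The layer/unit indices below are placeholders, not the published neuron.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def unit_activation(image: Image.Image, layer: int = 6, unit: int = 123) -> float:
    """Mean activation of a single hidden unit in one vision-transformer layer."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = model.vision_model(**inputs, output_hidden_states=True)
    # hidden_states[layer] has shape (batch, tokens, hidden_dim)
    return out.hidden_states[layer][0, :, unit].mean().item()

# If some unit fires strongly on Spiderman photos, drawings, *and* an image of the
# rendered word "Spiderman", but stays quiet on unrelated images, it is behaving like
# a learned "Spiderman module" - one that emerged from the data, not from hand-written code.
```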

Deep learning systems require huge amounts of data to approach human-level generalization. This suggests that what's learned from any single example is relatively "shallow". Perhaps that could be seen as closer to plagiarism.

The lawsuit against Stable Diffusion argues that SD works by amassing a huge library of images that the system then interpolates between in order to generate the desired kinds of images, but struggles to create the kinds of image combinations that don't appear in the training data and thus can't be interpolated between. Some of my friends have also remarked on this, e.g. that there are many contexts where it's a struggle to get the system to draw women in a non-sexualized way. (See also Scott Alexander on the way that DALL-E conflates style and content.) 

This is then different from the kind of learning that a human artist does - humans don't merely store a huge library of reference photos in their mind and interpolate between them, but actually acquire a conceptual understanding of the world as well. Because of that, they can easily draw pictures even of things they've never seen before ("a dog wearing a baseball cap while eating ice cream" is the example used in the complaint). In contrast, systems like Stable Diffusion are limited to drawing things that are a sufficiently close match to images they've already seen. In that sense, a human artist who draws the kind of picture that would otherwise not have existed in SD's training set is much more directly enabling the system to draw those kinds of pictures than they would be enabling another human artist to do the same. (Or so the argument goes.)

From the complaint:

Ho showed how a latent image could be interpolated—meaning, blended mathematically—to produce new derivative images. Rather than combine two images pixel by pixel—which gives unappealing results—Ho showed how Training Images can be stored in the diffusion model as latent images and then interpolated as a new latent image. This interpolated latent image can then be converted back into a standard pixel-based image.

The diagram below, taken from Ho’s paper, shows how this process works, and demonstrates the difference in results between interpolating pixels and interpolating latent images.

In the diagram, two photos are being blended: the photo on the left labeled “Source x0,” and the photo on the right labeled “Source x'0.”

The image in the red frame has been interpolated pixel by pixel, and is thus labeled “pixel-space interpolation.” This pixel-space interpolation simply looks like two translucent face images stacked on top of each other, not a single convincing face.

The image in the green frame, labeled “denoised interpolation”, has been generated differently. In that case, the two source images have been converted into latent images (illustrated by the crooked black arrows pointing upward toward the label “Diffused source”). Once these latent images have been interpolated (represented by the green dotted line), the newly interpolated latent image (represented by the smaller green dot) has been reconstructed into pixels (a process represented by the crooked green arrow pointing downward to a larger green dot). This process yields the image in the green frame. Compared to the pixel-space interpolation, the difference is apparent: the denoised blended interpolation looks like a single convincing human face, not an overlay or combination of images of two faces. [...]

Despite the difference in results, these two modes of interpolation are equivalent: they both generate derivative works from the source images. In the pixel-space interpolation (the red-framed image), the source images themselves are being directly interpolated to make a derivative image. In the denoised interpolation (the green-framed image), (1) the source images are being converted to latent images, which are lossy-compressed copies; (2) those latent images are being interpolated to make a derivative latent image; and then (3) this derivative latent image is decompressed back into a pixel-based image.

In April 2022, the diffusion technique was further improved by a team of researchers led by Robin Rombach at Ludwig Maximilian University of Munich. These ideas were introduced in his paper “High-Resolution Image Synthesis with Latent Diffusion Models.”

Rombach is also employed by Stability as one of the primary developers of Stable Diffusion, which is a software implementation of the ideas in his paper.

Rombach’s diffusion technique offered one key improvement over previous efforts. Rombach devised a way to supplement the denoising process by using extra information, so that latent images could be interpolated in more complex ways. This process is called conditioning. The most common tool for conditioning is short text descriptions, previously introduced as Text Prompts, that might describe elements of the image, e.g.—“a dog wearing a baseball cap while eating ice cream”. This method uses Text Prompts as conditioning data to select latent images that are already associated with text captions indicating they contain “dog,” “baseball cap,” and “ice cream.” The text captions are part of the Training Images, and were scraped from the websites where the images themselves were found.

The resulting image is necessarily a derivative work, because it is generated exclusively from a combination of the conditioning data and the latent images, all of which are copies of copyrighted images. It is, in short, a 21st-century collage tool.

The result of this conditioning process may or may not be a satisfying or accurate depiction of the Text Prompt. Below is an example of output images from Stable Diffusion (via the DreamStudio app) using this Text Prompt—“a dog wearing a baseball cap while eating ice cream”. All these dogs in the resulting images seem to be wearing baseball hats. Only the one in the lower left seems to be eating ice cream. The two on the right seem to be eating meat, not ice cream.

In general, none of the Stable Diffusion output images provided in response to a particular Text Prompt is likely to be a close match for any specific image in the training data. This stands to reason: the use of conditioning data to interpolate multiple latent images means that the resulting hybrid image will not look exactly like any of the Training Images that have been copied into those latent images.

But it is also true that the only thing a latent-diffusion system can do is interpolate latent images into hybrid images. There is no other source of visual information entering the system.

Every output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work.

A latent-diffusion system can never achieve a broader human-like understanding of terms like “dog,” “baseball hat,” or “ice cream.” Hence, the use of the term “artificial intelligence” in this context is inaccurate.

A latent-diffusion system can only copy from latent images that are tagged with those terms. The system struggles with a Text Prompt like “a dog wearing a baseball cap while eating ice cream” because, though there are many photos of dogs, baseball caps, and ice cream among the Training Images (and the latent images derived from them) there are unlikely to be any Training Images that combine all three.

A human artist could illustrate this combination of items with ease. But a latent-diffusion system cannot because it can never exceed the limitations of its Training Images.

In practice, the quality of the latent-diffusion images depends entirely on the breadth and quality of the Training Images used to generate the latent images. If that weren’t true, then it wouldn’t matter where Stable Diffusion (or any other AI-Image Product) got its Training Images.

In actuality, the provenance of an AI-Image-Product’s Training Images matters very much. According to Emad Mostaque, CEO of Stability, Stable Diffusion has “compress[ed] the knowledge of over 100 terabytes of images.” Though the rapid success of Stable Diffusion has been partly reliant on a great leap forward in computer science, it has been even more reliant on a great leap forward in appropriating copyrighted images.
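For concreteness, here's a minimal sketch of the two operations the complaint describes - blending two images in pixel space versus in latent space, and generating an image from a Text Prompt - using the open-source diffusers library. The model name and details are illustrative assumptions on my part, not taken from the complaint, and the latent blend here goes directly through Stable Diffusion's autoencoder rather than reproducing Ho's "denoised interpolation" exactly.

```python
# Sketch: pixel-space vs. latent-space interpolation, plus text-conditioned
# generation with Stable Diffusion via the diffusers library. Illustrative only.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def to_tensor(img: Image.Image) -> torch.Tensor:
    """PIL image -> (1, 3, 512, 512) tensor in [-1, 1], the range the VAE expects."""
    arr = np.asarray(img.convert("RGB").resize((512, 512)), dtype=np.float32)
    return torch.from_numpy(arr / 127.5 - 1.0).permute(2, 0, 1).unsqueeze(0)

def pixel_interpolation(img_a, img_b, t=0.5):
    # Naive pixel blend: looks like two translucent images stacked on top of each other.
    return (1 - t) * to_tensor(img_a) + t * to_tensor(img_b)

@torch.no_grad()
def latent_interpolation(img_a, img_b, t=0.5):
    # Encode both images into the autoencoder's latent space, blend there, decode back.
    z_a = pipe.vae.encode(to_tensor(img_a)).latent_dist.mean
    z_b = pipe.vae.encode(to_tensor(img_b)).latent_dist.mean
    return pipe.vae.decode((1 - t) * z_a + t * z_b).sample

# Text-conditioned generation, using the complaint's example prompt:
image = pipe("a dog wearing a baseball cap while eating ice cream").images[0]
```

In this sketch, pixel_interpolation corresponds to the red-framed image in the complaint's diagram and latent_interpolation to the green-framed one; the pipe() call at the end is the conditioning step that the complaint characterizes as a "21st-century collage tool".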

I was a bit surprised to see Eliezer invoke the Wason Selection Task. I'll admit that I haven't actually thought this through rigorously, but my sense was that modern machine learning had basically disproven the evpsych argument that those experimental results require the existence of a separate cheating-detection module, as well as generally calling the whole massive modularity thesis into severe question, since the kinds of results that evpsych used to explain using dedicated innate modules now look a lot more like something that could be produced by something like GPT.

... but again I never really thought this through explicitly, it was just a general shift of intuitions that happened over several years and maybe it's wrong.

Scott Alexander makes and rates predictions yearly; this post has the results of his 2021 predictions (with links to earlier years). He has also held a prediction contest in which some community members got high scores.

Various community members have accounts on PredictionBook and post predictions there - e.g., here's gwern's profile.

gwern also discusses his success trading on various prediction markets in this article.

I recall other users also participating in various forecasting competitions and prediction markets and doing well, but I don't remember the details. 

The first three core claims reminded me of an argument that I once saw, which claimed that one of the primary functions of philosophy is to make conceptual distinctions (I don't know if I necessarily agree with that argument, but I thought it was interesting anyway):

Philosophy may be viewed as a science, on the one hand, or as an art, on the other. Philosophy is, indeed, uniquely difficult to classify, and resembles both the arts and the sciences.

On the one hand, philosophy seems to be like a science in that the philosopher is in pursuit of truth. Discoveries, it seems, are made in philosophy, and so the philosopher like the scientist has the excitement of belonging to an ongoing, cooperative, cumulative intellectual venture. If so, the philosopher must be familiar with current writing, and keep abreast of the state of the art. On this view, we twenty-first-century philosophers have an advantage over earlier practitioners of the discipline. We stand, no doubt, on the shoulders of other and greater philosophers, but we do stand above them. We have superannuated Plato and Kant.

On the other hand, in the arts, classic works do not date. If we want to learn physics or chemistry, as opposed to their history, we do not nowadays read Newton or Faraday. But we read the literature of Homer and Shakespeare not merely to learn about the quaint things that passed through people’s minds in far-off days of long ago. Surely, it may well be argued, the same is true of philosophy. It is not merely in a spirit of antiquarian curiosity that we read Aristotle today. Philosophy is essentially the work of individual genius, and Kant does not supersede Plato any more than Shakespeare supersedes Homer.

There is truth in each of these accounts, but neither is wholly true and neither contains the whole truth. Philosophy is not a science, and there is no state of the art in philosophy. Philosophy is not a matter of expanding knowledge, of acquiring new truths about the world; the philosopher is not in possession of information that is denied to others. Philosophy is not a matter of knowledge; it is a matter of understanding, that is to say, of organizing what is known. [...]

The most visible form of philosophical progress is progress in philosophical analysis. Philosophy does not progress by making regular additions to a quantum of information; as has been said, what philosophy offers is not information but understanding. Contemporary philosophers, of course, know some things that the greatest philosophers of the past did not know; but the things they know are not philosophical matters but the truths that have been discovered by the sciences begotten of philosophy. But there are also some things that philosophers of the present day understand that even the greatest philosophers of earlier generations failed to understand. For instance, philosophers clarify language by distinguishing between different senses of words; and, once a distinction has been made, future philosophers have to take account of it in their deliberations.

Take, as an example, the issue of free will. At a certain point in the history of philosophy a distinction was made between two kinds of human freedom: liberty of indifference (ability to do otherwise) and liberty of spontaneity (ability to do what you want). Once this distinction has been made the question ‘Do human beings enjoy freedom of the will?’ has to be answered in a way that takes account of the distinction. Even someone who believes that the two kinds of liberty in fact coincide has to provide arguments to show this; he cannot simply ignore the distinction and hope to be taken seriously on the topic.

-- Anthony Kenny: A New History of Western Philosophy

Larks has done some AI charity comparisons, e.g. here's the one for 2021.

(It was also relevant to the entire "does God exist?" debate – a point that eventually became cruxy for me is that you totally can build up simulations of people in your head, and I'd expect that to be hard to distinguish from God speaking to you)

There's in fact at least one book about this. From one of the reviews:

Luhrmann specifically examines how evangelicals come to experience God as a close, intimate, and invisible but very real friend and confidant with whom they can communicate on a daily basis through prayer and visualization, clearly recognizing His voice. [...]

Luhrmann investigated the new evangelical movement as a participant-observer. She attended services and small group meetings for several years at local branches of the Vineyard, an evangelical church with hundreds of congregations throughout the country and the world, and had hundreds of conversations with evangelicals, learning how they believed themselves able to communicate with God, not just through one-sided prayers but with discernible feedback--some seeing visions, others claiming to hear the voice of God Himself.

After countless interviews with Vineyard members reporting either isolated or on-going supernatural experiences with God, Luhrmann concluded that the practice of prayer could train a person to hear God's voice--to use their mind differently and focus on God's voice until it became clear. A subsequent experiment conducted between people who were and weren't practiced in prayer further confirmed and illuminated her conclusion. For those who have trained themselves on their inner experiences, she found, God is experienced in their brains as an actual personal social relationship: His voice was identified, and felt to be real and interactive.
 

From a friend to whom I linked this post (reshared with permission):

I have a friend who just recently learned about ChatGPT (we showed it to her for LARP generation purposes :D) and she got really excited over it, having never played with any AI generation tools before. I sent this post to her, kinda jokingly "warning" her not to get too immersed.

She told me that during the last few weeks ChatGPT has become a sort of "member" of their group of friends - people are speaking about it as if it was a human person, saying things like "yeah I talked about this with ChatGPT and it said", talking to it while eating (at the same table with other people), wishing it good night, etc. I asked what people talk about with it, and apparently many seem to have two ongoing chats: one for work (emails, programming etc.) and one for random free-time talk.

She said at least one addictive thing about it is the same thing mentioned in the post: that it never gets tired of talking to you and is always supportive.

While I'm not particularly optimistic about BCI solutions either, I don't think this story is strong evidence against them. Suppose that the BCI took the form of an exocortex that expanded the person's brain functions and also significantly increased their introspective awareness to the level of an inhumanly good meditator. This would effectively allow for constant monitoring of what subagents within the person's mind were getting activated in conversation, flagging those to the person's awareness in real time and letting the person notice when they were getting manipulated in ways that the rest of their mind-system didn't endorse. That kind of awareness tends to also allow defending against manipulation attempts since one does not blend with the subagents to a similar degree and can then better integrate them with the rest of the system after the issue has been noticed.

Ordinary humans can learn to get higher introspective awareness through practices such as meditation, but it's very hard if not impossible to get to a point where you'd never be emotionally triggered since sufficiently strong emotions seem to trigger some kind of biological override. But an exocortex might be built to remain unaffected by that override and allow one to maintain high introspective awareness regardless. In that case, one might be able to more directly communicate with untrusted entities without getting hacked by them.
