# Wiki Contributions

FWIW this matches my own experience with one as a company car pretty exactly.

(On mine (UK, right-hand drive) the wipers can be activated manually by pressing the button on the end of the left stalk. This also brings up an on-screen menu for selecting constant wiping.)

Get out of our houses before we are driven to expend effort killing them, and similarly for all the other places ants conflict with humans (stinging, eating crops, ...)

Ant mafia: "Lovely house you've got there, wouldn't it be a shame if it got all filled up with ants?"

I can't tell you whether this is right or not but this is what ChatGPT thinks:

"Is it possible to put a latex table in a footnote?"

"Yes, it is possible to include a LaTeX table in a footnote. You can do this by using the footnote or footnotetext commands provided by the footmisc package.

Here is an example of how to use these commands to include a table in a footnote:

```latex
\usepackage{footmisc}

...

\begin{table}[h]
\centering
\begin{tabular}{c c c}
A & B & C \\
1 & 2 & 3 \\
4 & 5 & 6
\end{tabular}
\caption{Table caption}
\end{table}

...

\footnote{
\begin{table}[h]
\centering
\begin{tabular}{c c c}
D & E & F \\
7 & 8 & 9 \\
10 & 11 & 12
\end{tabular}
\caption{Table caption in footnote}
\end{table}
}
```

Note that when including a table in a footnote, you should use the table environment as you would normally, but place it inside the footnote or footnotetext command. You should also make sure to use the [h] option for the table environment to prevent the table from floating.

I hope this helps! Let me know if you have any other questions."
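For what it's worth, ChatGPT's answer looks wrong here: `\footnote` is built into LaTeX (no `footmisc` needed), and float environments like `table` aren't allowed inside footnotes, so the second example should fail with a "Not in outer par mode" error. A minimal sketch of an approach that does compile is to put a bare `tabular` (no float) inside the footnote:

```latex
\documentclass{article}
\begin{document}
Some text.\footnote{%
  A non-floating table in a footnote:\par
  % A bare tabular works where a table float would not
  \begin{tabular}{c c c}
    D & E & F \\
    7 & 8 & 9 \\
    10 & 11 & 12
  \end{tabular}%
}
\end{document}
```

If a caption is needed, the `caption` package's `\captionof{table}{...}` command works outside float environments.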

I think the article undersells the problems of ChatGPT's hallucinations. One example from the article where ChatGPT is said to win is a recipe for risotto. However, I wouldn't follow a risotto recipe from ChatGPT, because I can't be confident it hasn't hallucinated some portion of the recipe, but I would happily follow one from Google, even if the format is a bit more annoying. Same issue with calculating the load-bearing capacity of a beam, only more serious!

Having said that, it does seem like there are definitely specific areas where ChatGPT will be more useful. Coding is a good example, as verifying the code is usually straightforward and/or would need to be done anyway. In many cases, ChatGPT for an overview followed by Google for more detail/verification is probably a good way to go - I think this would be a good approach for the load-bearing wall example.

One thing I've found useful is to tell the supplier specifically what I need from the product I'm ordering - sometimes they have something similar in stock which meets my requirements.

One thing I think makes a big difference to me is whether I feel like the provider is taking a collaborative or adversarial stance.

1. I don't usually skip ads on YouTube content, but if the channel is often clickbaity or misrepresents its content then I will.
2. The printer/ink model feels very much like it's out to get me. The alternative model of a printer subscription (e.g. HP's) feels a lot more collaborative, so I don't feel the need to ensure that every page I print is as filled with ink as possible so as to get the "best" deal.
3. If the premium charged on food in an amusement park/movie theatre feels excessive, then I will feel less bad about eating my own food.

For the six/man thing, my first association was six-pack. Obviously the prototypical image would be topless, but my guess is topless images aren't in the training set (or DALL-E is otherwise prevented from producing them).

I realised something a few weeks back which I feel like I should have realised a long time ago.

The size of the human brain isn't the thing which makes us smart; rather, it is an indicator that we are smart.

A trebling of brain size vs. a chimp is impressive, but trebling a neural network's size doesn't give that much of an improvement in performance.

A more sensible story is that humans started using their brains more usefully (evolutionarily speaking) so it made sense for us to devote more of our resources to bigger brains for the marginal gains that would give.

As I said, I feel like I should have known this for ages. I had a cached thought that humans' big brains (and other things) cause us to be smarter, and had never re-examined it. Now I think that the "and other things" is doing almost all of the heavy lifting and the size is more incidental to the process.

Thanks for publishing this. I’ve been around the rationality community for a few years and heard TAPs mentioned positively a lot without knowing much about them. This roughly matches my best guess as to what they were but the extra detail is super useful, especially in the implementation.

This suggests a different question: for non-participants who are given the program which creates the data, what probability/timeframe should they assign to success?

On this one I think that I would have put a high probability on its being solved, but would have anticipated a longer timeframe.