• 5 Posts
  • 430 Comments
Joined 2 years ago
Cake day: June 30th, 2023

  • To me the main thing is that this is about the utility of tools for acquiring general domain knowledge in a one-off event. The effects on overall intelligence, which is separate from knowledge or from the ability to give effective advice on a topic, are a totally different scope.

    What it’s actually testing doesn’t seem to turn up anything surprising, because the information the subjects are getting from ChatGPT is likely lower quality to begin with. So it could just be that the people reading blog posts or wikiHow articles about starting a garden learned more, and more accurate, things about it, rather than that researching with AI negatively affects the way you think, a claim that would make more sense to test over a longer period of time and with a greater variety of topics and tasks.



  • I don’t like the idea of recurring payments, especially for something I’m not actively using, because then I have to remember to shut it off at some point.

    Maybe you decide $10/mo is such a small number (the price of two coffees in any country where I’ve lived over the past 15 years) that you’re happy to keep on donating at the end of one year?

    What, like I’m going to remember a year from now to look into this? I’m willing to donate a little to things on rare occasions, but I don’t think I would do it this way, because I don’t want an accumulation of little monthly payments I’ve stopped thinking about draining my finances.



  • What you confuse here is doing something that can benefit from applying logical thinking with doing science.

    I’m not confusing that. Effective programming requires, and largely consists of, small-scale application of the scientific method to the systems you work with.

    the argument has become “but it seems to be thinking to me”

    I wasn’t making that argument, so I don’t know what you’re getting at with this. For the purposes of this discussion I think it doesn’t matter at all how the code was written or whether the thing that wrote it is truly intelligent. What matters is the code that is the end result: whether it does what it is intended to do and nothing harmful, and whether the programmer working with it can accurately determine that it does.

    The central point of it is that, by the very nature of LLMs to produce statistically plausible output, self-experimenting with them subjects one to very strong psychological biases because of the Barnum effect, and therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful because these effects lead to self-reinforcing and harmful beliefs.

    I feel like “not even possible to assess their usefulness for programming by self-experimentation(!)” is necessarily a claim that reading and testing code is something no one can do, which is absurd. If the output is often correct, then the means of creating it is likely useful, and you can tell whether the output is correct by evaluating it the same way you evaluate any other computer program, without needing to directly evaluate the LLM itself; a sketch of what that looks like is below. It should be obvious that this is possible to do. Saying not to do it seems kind of like some “don’t look up” stuff.
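
    As a minimal sketch of what that means, assuming a made-up slugify function standing in for whatever an LLM produced: the checks exercise the output’s behaviour directly and pass or fail the same way no matter who or what wrote the implementation.

    ```python
    # test_llm_output.py: hypothetical example. The function under test could have
    # been written by a person or generated by an LLM; the checks don't care.

    def slugify(title: str) -> str:
        """Candidate implementation (imagine this was pasted in from an LLM)."""
        return "-".join(title.lower().split())

    def test_basic_title():
        assert slugify("Hello World") == "hello-world"

    def test_collapses_whitespace():
        assert slugify("  spaced   out  ") == "spaced-out"

    def test_empty_string():
        assert slugify("") == ""

    if __name__ == "__main__":
        # Run directly, without a test framework, so the sketch is self-contained.
        for check in (test_basic_title, test_collapses_whitespace, test_empty_string):
            check()
        print("all checks passed")
    ```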