

I run local models. The other day I was writing some code and needed to implement simplex noise, and LLMs are great for that kind of boilerplate. I asked the model to write it, and it did alright, although I had to modify the output to get it actually working because it hallucinated a few things. Then I looked the algorithm up online, and what it gave me was practically an exact copy of this, down to identical comments and everything.
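For anyone curious, this is roughly the shape of the routine I mean: a quick Python sketch of the usual Gustavson-style 2D simplex noise. To be clear, this is not the exact code the model spat out and not the source it matched, just the standard approach.

```python
import math
import random

# Permutation table, doubled so indexing never wraps out of range.
rng = random.Random(0)
perm = list(range(256))
rng.shuffle(perm)
perm = perm + perm

# Eight gradient directions are enough for 2D.
grad2 = [(1, 1), (-1, 1), (1, -1), (-1, -1), (1, 0), (-1, 0), (0, 1), (0, -1)]

F2 = 0.5 * (math.sqrt(3.0) - 1.0)   # skew factor
G2 = (3.0 - math.sqrt(3.0)) / 6.0   # unskew factor

def simplex2(x, y):
    # Skew the input space to find which simplex cell we're in.
    s = (x + y) * F2
    i = math.floor(x + s)
    j = math.floor(y + s)
    t = (i + j) * G2
    x0 = x - (i - t)   # distance from the cell origin, unskewed
    y0 = y - (j - t)

    # Which of the cell's two triangles contains the point?
    i1, j1 = (1, 0) if x0 > y0 else (0, 1)

    # Offsets for the middle and last corners in unskewed space.
    x1, y1 = x0 - i1 + G2, y0 - j1 + G2
    x2, y2 = x0 - 1.0 + 2.0 * G2, y0 - 1.0 + 2.0 * G2

    ii, jj = i & 255, j & 255

    total = 0.0
    for xd, yd, gi in (
        (x0, y0, perm[ii + perm[jj]] % 8),
        (x1, y1, perm[ii + i1 + perm[jj + j1]] % 8),
        (x2, y2, perm[ii + 1 + perm[jj + 1]] % 8),
    ):
        # Radial falloff times the gradient dot product at each corner.
        falloff = 0.5 - xd * xd - yd * yd
        if falloff > 0:
            gx, gy = grad2[gi]
            total += (falloff ** 4) * (gx * xd + gy * yd)

    # Scale to roughly [-1, 1].
    return 70.0 * total
```

Output is roughly in [-1, 1]; sum a few calls at different frequencies if you want octave/terrain-style noise.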
It is not too diluted to matter. You just don’t have the knowledge to recognize what it copies.
Maybe online models can, but a local model has no internet access, so it can't. More likely it's just generating a predictable-looking response that happens to include a citation, and it could totally make that citation up. Hopefully people would double check that the source actually exists and says what the model claims, but we both know most won't. Citing a source is just a way to make the output look intelligent while it still generates bullshit.
You're saying this like they're equal. People put thought into what they write; LLMs do not. Yes, con men exist, but not everyone is a con man. You can follow authors who are known to be accurate. You could try to do the same with LLMs, but the problem is consistency. A con man will always be a con man. With an LLM you have no way to know whether it's bullshitting this time or not, so you should always assume it's bullshit. In which case, what's the point? Yet most people assume it's always honest, because that's what the marketing leads you to believe.