• 1 Post
  • 18 Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • I’d just have to ignore most “user-generated” content.

    Dead Internet hypothesis is only applicable to user-generated content platforms.

    AI will affect far more than just social media shit posts, though.

    All news (local, national, global). Educational sites. Medical information. Historical data. Almanacs/encyclopedias. Political information. Information about local services (i.e. outages). Interviews. Documentaries.

    I mean, all of these can be easily manipulated now in any medium. It’s only a matter of how quickly AI saturates these spaces.

    Trustworthy sources will be few and far between, drowned out by content that can be generated thousands of times faster than real humans can produce it.

    What then?

    I even worry about things like printed encyclopedias being manipulated, too. We stand to lose real human knowledge if AI content continues to accelerate.

    There really aren’t viable alternatives to those things, unless they are created (again), like before the internet was used for everything.


  • Gen ai has been around for a while. It’s made things worse, but it’s not like there aren’t real users anymore. I don’t see why that would suddenly change now

    For context, we went from AI-generated images and videos (i.e. Will Smith eating spaghetti) being laughably bad and only good for memes to video content that is convincing in every way - in under two years!

    The accessibility, scale, quality, and power of AI have changed things, and will RAPIDLY improve even further in a much shorter period of time.

    That’s the difference. AI from 2023 couldn’t fool your grandma. AI from 2025 and beyond will fool entire populations.


  • I think there are going to be tools to identify networks of people and content you don’t want to interact with. This website is pushed by that social media account, which is boosted by these 2000 accounts that all exhibit bot-like behavior? Well let’s block the website, of course, but also let’s see who else those 2000 bots promote; let’s see who else promotes that website.

    In an ethical, human-first world, that would be the case.

    Do you think that social media platforms, who run on stealing attention from users so they can steal their private data and behaviour history, would want to block content that’s doing exactly that? No way. Not ever.

    And the incentive to make easy money drives users who otherwise wouldn’t have the skill or talent to create and present content to simply type in a prompt and post the result… over and over, automated so that no effort needs to be made at all. Do this a million times over, and there’s no way to avoid it.

    And once we get to the point where AI content can be generated on-the-fly for each doom-scrolling user based on their behaviour on the platform, it’s game over. It’ll be like digital meth, but disguised to look funny/sexy/informative/cute/trustworthy.

    I’m using tools to blacklist AI sites in search, but the lists aren’t keeping up, and they don’t extend beyond search.

    There will come a point, probably very soon, where companies will figure out how to deliver ads and AI content as if it were from the original source content, which will make it impossible to block or filter out. It’s a horrific thought, TBH.



  • Thank you for your thoughtful reply.

    I grew up when the internet was still dial-up, so I think I could adapt to going back to the “old way” of doing things.

    But unless society moves in that same direction, it would seem that things would become more and more difficult. We can’t rely on old books and already-created content to move us forward.

    I’ve been finding more value in IRL contact with other people these days. But I don’t think everyone has that luxury, I’m afraid.


  • There will always be a place like Lemmy, where AI-generated content will be filtered through real, intelligent, moral, empathetic people. So we’ll continue to block and analyze and filter as much of the churn as we can…

    As much as I appreciate the optimism, that’s not realistic seeing how fast technology is going.

    If you amped up indistinguishable-from-real bot activity by 1000 or a million times, there would be no effective filter. That’s what I know is coming, to every corner of the internet.

    Other options such as paywalling, invite-only, and other such barriers only serve to fragment and minimize the good that the internet can do for humanity.


  • It’s probably taking my content more seriously than necessary, but I take pride in what I post and I want to be seen as a trusted person in the community.

    Plot twist: How do I know you aren’t a bot? /s

    As information multiplies, and people have less time to apply critical thinking or skepticism to what they see, we’ll have an extremely hard time sorting through it all.

    Even if we had a crowdsourced system that generates a whitelist of trusted sites, bots could easily overwhelm such a system, skewing the results. If Wikipedia, for example, had bots tampering with the data at a million times the rate they do now, would anyone still want to use it?

    One option might be an invite-only system, and even that would create new problems with fragmentation and exploitation.

    Trust is almost becoming a thing of the past because of unprecedented digital threats.




  • I’ve been pretty reserved on my opinion about AI ruining the internet until a few days ago.

    We’re now seeing videos with spoken dialogue that looks incredibly convincing. No more “uncanny valley” to act as a mental safeguard.

    We can only whitelist websites and services for so long before greed consumes them all.

    I mean, shit, you might already have friends or family using AI generated replies in their text messages and email… not even our real connections are “real”.



  • And we’ve had fake news forever.

    Yes, limited in their scope.

    The fake news of yesterday still needed real people to spread disinformation. Fake news of tomorrow will have convincing videos, photos, “verified sources”, interviews, article upon article expanding on the disinformation, and millions of bots to promote the content through social media and search.

    “Fake” will be real enough to have catastrophic effects on actual people.

    It’s like going from spitting wads of tissue out the end of a straw to dropping hydrogen bombs. We aren’t prepared for what’s to come in the landscape of fake news.


  • I appreciate the thoughtfulness of your answer.

    To expand on a few points:

    Let’s start with the attempt to define “usefulness” as the degree to which connection to humans happens. Human connection on the internet has always been illusory. Yet we still find utility in it.

    While “usefulness” and human connection can be linked, you can also separate them.

    For instance, if the majority of websites become content farms filled with information that (likely) isn’t accurate because an LLM hallucinated it, can you still find them useful compared to when an expert wrote the content?

    This could even apply to how-to content, where now you might have someone with actual experience showing you how to fix or operate something. But with AI content farms, you might get a mishmash of information that may not be right, and you’d never be able to ask a real person for clarity.

    What about a travel site that fakes photos, generates convincing videos of your destination, and features stories from other travellers (AI bots) without you knowing the difference? This might have been hard to pull off five years ago, but now you can generate 1000 such websites in a few days. At what point does the usefulness of such a site diminish?

    As for human connection: I disagree that it has always been illusory. When you chatted with strangers online 10 years ago, you knew for a fact that they were a real person. Sure, they could have been deceptive or have had an “online personality”, but they were real.

    A step up from that would be people using a fake identity, but there was still a person on the other end.

    But in the near future, every stranger you connect with online might end up being a bot. You’d never know. At what point would you consider not spending time or energy interacting on a platform?

    This planet has been a soulless hellscape longer than any of us have been alive, and LLMs are more likely to improve the situation than make it worse.

    I’ve been around long enough to say that’s not true in the slightest. Being online and consuming content online was very, very different 10+ years ago from what it will be in the next 10 years.

    The internet of old was mostly a force for good. The internet of tomorrow will be weaponized, monetized, and made to be unrecognizable from what we’ve had.


  • I watch Youtube

    Then you’ve noticed a lot of fake videos, too.

    Fake product reviews (fake script, fake voice, fake video footage). Videos with AI hosts (that you wouldn’t even realize are AI). Low-effort video production that’s been handed off to an app to do all the work. Scripted content based on AI-generated text (Youtube now offers creators AI-generated ideas and scripts, btw).

    You can sift through some of it now, but what will you do when you can’t tell the difference? Will you invest time watching fake content?

    I have channels that I watch. Real people (who I’ve met), and others who are verifiably real. But those creators will be few and far between in the near future.


  • Depends what you read. Blogs are still a thing, and on many there is not the slightest hint of AI; on some there is not even a single ad to be seen. It’s still people talking about what they truly care about. Not people trying to farm likes or views by spreading some low-effort shit content, be it videos, pictures or text.

    To illustrate my point: Say, five years from now, you come across a “blog”. It’s got photos of a friendly person, she shares images and video of her and her family, and talks about homesteading.

    What if that entire “person” was just AI generated, and the “blog” was just fake AI stories? How would you even know? Would you want to spend time even reading blogs, knowing that it may not even be written by an actual person?

    We will be at that point very soon. You will never know who’s real and who’s fake (online), because their entire life can be faked.

    The corporate/marketing-owned Web is filled to the brim with utter crap but that’s not new, and it has been so well before AI became a thing.

    While true, and I agree, at least it was people being evil/greedy. And the speed at which they could be evil/greedy was capped.

    With AI, you could generate a lifetime of greedy/evil corporate/marketing-owned web in a matter of hours, and just flood every corner of the internet with it.

    It’s a very different threat.

    But the human-made web is still a thing. It’s just not promoted as much as it once was and certainly it’s not promoted where nowadays crowds gather to get spoon-fed content.

    Per my point above, you’ll never know what’s human-made in the very near future. At some point, bots with human identities will flood websites, then what?


  • Yes, of course. I’m not talking about that.

    Even here, Lemmy. How long before the replies you get are from bots, and you’re posting for bot users? Will there even be a point to continue wasting time on that?

    When you see news being reported, at some point, you’ll have no idea what’s real or fake. And it will be so ubiquitous that you’ll need to spend a considerable amount of time to even attempt to verify whether it’s true or trustworthy. At what point will you simply stop paying attention to it?


  • Maybe (?) that’s controversial, but “human connection” is not the first thing that comes to my mind when I consider why I’m currently online.

    What I mean by that is when you looked back at content from 5+ years ago, you know that a real person wrote, drew, recorded, thought of, put effort into it.

    We had interconnectedness, and as human beings, we really should do what it takes to not lose that.

    There will be no more looking at photography, artwork, music, or movies as a marvel of human effort, skill, and talent. To me, that’s a huge loss.

    When you read a blog years ago, you were reading another person’s experience, and that had value.

    Information from a resource was researched and had input from an expert human being, and/or a team of them. That had value.

    So losing the humanity of the internet sucks, but I can find a way to work around it.

    Online? If so, how long do you think you can sustain it? If the majority of the internet or digital content you see becomes AI generated, with no way of knowing, what then? Will you invest time to use a future Lemmy where your interactions are probably all with bots?