Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.

Her take is very interesting: what if we could actually use AI against that?

Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.

Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?

How could this be achieved?

  • moseschrute@lemmy.ml · +109/-2 · 1 day ago (edited)

    I feel like I woke up in the stupidest timeline: climate change is about to kill us, we stupidly decide to 10x our power needs by shoving LLMs down everyone’s throats, and the only way to stay private is to 10x our personal LLM usage by generating tons of noise about ourselves. So now we’re 100x-ing everyone’s power usage and we’re going to die even sooner.

    I think your idea is interesting – I was thinking the same thing a while back – but how tf did we get here.

    • octobob@lemmy.ml · +6/-2 · 1 day ago (edited)

      Yeah, agreed. Here in my state of Pennsylvania, they’re reopening the Three Mile Island nuclear plant near Harrisburg for the sole purpose of powering Microsoft’s AI data centers. This will be Unit 1, which was closed in 2019. Unit 2 was the one permanently closed after the meltdown in 1979.

      I’m all for nuclear power. I think it’s our best option for an alternative energy source. But the only reason they’re opening the plant again is that our grid can’t keep up with AI. I believe the data centers are the only thing the nuke plant will power.

      I’ve also seen the scale of things in my work in terms of power demands. I’m an industrial electrical technician, and part of our business is the control panels for cooling the server racks in Amazon data centers. They just keep buying more and more of them, projected until at least 2035 right now. All the big tech companies are totally revamping everything for AI. Before, a typical rack section might have drawn, say, 1,000 watts; now it’s more like 10,000 watts. Again, just for AI.

      • moseschrute@lemmy.ml · +4/-2 · 1 day ago

        Totally agree nuclear is a great tool, but it’s totally being used for the wrong purpose here. Use those power plants to solve our existing energy crisis before you create an even bigger one.

    • blargh513@sh.itjust.works · +4 · 1 day ago

      There are AIs that can detect the use of AI. This is a losing strategy as we burn resources playing cat and mouse.

      As with all things, greed is at the root of this problem. Until privacy has legislative teeth, it will remain a notion for the few, and an elusive one at that.

  • upstroke4448@lemmy.dbzer0.com · +7 · 21 hours ago (edited)

    This strategy of generating fake data just doesn’t work well. It requires a ton of resources to generate fake data that can’t be easily filtered, which makes the strategy nonviable in most situations. Look at Mullvad’s DAITA and how constantly it has to be improved to fight this, and that’s just for basic protection.

    There is a bit of cognitive dissonance going on: people seem to understand that they are tracked constantly, online and offline, through all sorts of complex means, yet still think relatively mundane solutions could break that system.
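    A toy sketch of why naive cover traffic is so easy to filter (the numbers, names, and threshold here are entirely made up; this has nothing to do with DAITA’s actual internals): generated events tend to be far too regular in time, while real human activity is bursty.

```python
# Toy classifier: flag a session whose request inter-arrival times are
# suspiciously regular. Humans are bursty (high coefficient of variation);
# a naive noise generator firing on a timer has near-zero variance.
# All values and the threshold are invented for illustration.
import statistics

def looks_generated(inter_arrival_secs, cv_threshold=0.3):
    """Return True if stdev/mean of gaps is below the threshold."""
    mean = statistics.mean(inter_arrival_secs)
    stdev = statistics.stdev(inter_arrival_secs)
    return (stdev / mean) < cv_threshold

human = [0.4, 7.1, 0.9, 33.0, 2.2, 0.5, 12.8]   # bursty, irregular gaps
bot   = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.0]     # timer-driven, regular gaps

print(looks_generated(human))  # False
print(looks_generated(bot))    # True
```

    Real filtering is far more sophisticated than a single timing statistic, which is exactly why the fake traffic has to keep getting smarter too.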

  • rumba@lemmy.zip · +9 · 4 hours ago (edited)

    This is a dangerous proposition.

    When the dictatorship comes after you, they’re not concerned with the whole of every article that was written about you. All they care about are the things they see as incriminating.

    You could literally take a spell-check dictionary, pull three words out of the list at random, and feed them into Ollama, asking for a story with your name that includes the three words as major plot points.

    Even on a relatively old video card, you could probably crap out three stories a minute. Have it write them in HTML and publish the sitemap to major search engines on a regular basis.

    EDIT: OK this was too fun not to do it real quick!

    ~ cat generate.py

    import random
    import requests
    import json
    from datetime import datetime
    
    ollama_url = "http://127.0.0.1:11434/api/generate"
    wordlist_file = "words.txt"
    
    with open(wordlist_file, 'r') as file:
        words = [line.strip() for line in file if line.strip()]
    
    selected_words = random.sample(words, 3)
    theme = ", ".join(selected_words)
    
    prompt = f"Write a short, imaginative story about a person named Rumba using these three theme words: {theme}. The first word is their super power, the second word is their kryptonite, the third word is the name of their adversary. Return only the story as HTML content ready to be saved and viewed in a browser."
    
    # Ollama streams its reply as one JSON object per line by default,
    # so accumulate the "response" field from each chunk.
    response = requests.post(
        ollama_url,
        headers={"Content-Type": "application/json"},
        data=json.dumps({"model": "llama3.2", "prompt": prompt}),
        stream=True,
    )
    
    story_html = ""
    for line in response.iter_lines(decode_unicode=True):
        if line.strip():
            try:
                chunk = json.loads(line)
                story_html += chunk.get("response", "")
            except json.JSONDecodeError as e:
                print(f"JSON decode error: {e}")
    
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"story_{timestamp}.html"
    
    with open(filename, "w", encoding="utf-8") as file:
        file.write(story_html)
    
    print(f"Story saved as {filename}")

    ~ cat story_20250630_130846.html

    <!DOCTYPE html>
    <html>
    <head>
    <title>Rumba's Urban Adventure</title>
    <meta charset="UTF-8">
    <style>
    body {font-family: Arial, sans-serif;}
    </style>
    </head>
    <body>
    
    <h1>Rumba's Urban Adventure</h1>
    
    <p>Rumba was a master of <b>slangs</b>, able to effortlessly weave in and out of conversations with ease. Her superpower allowed her to manipulate language itself, bending words to her will. With a flick of her wrist, she could turn a phrase into a spell.</p>
    
    <p>But Rumba's greatest weakness was her love of <b>bungos</b>. The more she indulged in these sweet treats, the more her powers wavered. She would often find herself lost in thought, her mind clouded by the sugary rush of bungos. Her enemies knew this vulnerability all too well.</p>
    
    <p>Enter <b>Carbarn</b>, a villainous mastermind with a personal vendetta against Rumba. Carbarn had spent years studying the art of linguistic manipulation, and he was determined to exploit Rumba's weakness for his own gain. With a wave of his hand, he summoned a cloud of bungos, sending Rumba stumbling.</p>
    
    <p>But Rumba refused to give up. She focused her mind, channeling the power of slangs into a counterattack. The air was filled with words, swirling and eddying as she battled Carbarn's minions. In the end, it was just Rumba and Carbarn face-to-face.</p>
    
    <p>The two enemies clashed in a spectacular display of linguistic fury. Words flew back and forth, each one landing with precision and deadliness. But Rumba had one final trick up her sleeve - a bungo-free zone.</p>
    
    <p>With a burst of creative energy, Rumba created a bubble of pure slangs around herself, shielding her from Carbarn's attacks. The villain let out a defeated sigh as his plan was foiled once again. And Rumba walked away, victorious, with a bag of bungos stashed safely in her pocket.</p>
    
    </body>
    </html>
    
    

    Interesting that it chose female rather than male or gender neutral. Not that I’m complaining, but I expected it to be biased :)
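    The sitemap part mentioned above could be sketched like this (hypothetical companion to generate.py; the base URL is a placeholder for wherever the stories would actually be hosted):

```python
# Collect the generated story_*.html files into a sitemap.xml that a
# search engine could crawl. BASE_URL is a made-up placeholder.
import glob
from xml.sax.saxutils import escape

BASE_URL = "https://example.com/stories"

def build_sitemap(html_files):
    """Return a sitemap.xml document listing each story page."""
    urls = "\n".join(
        f"  <url><loc>{escape(BASE_URL + '/' + name)}</loc></url>"
        for name in sorted(html_files)
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{urls}\n</urlset>\n")

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(build_sitemap(glob.glob("story_*.html")))
```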

    • Eyedust@lemmy.dbzer0.com · +3 · 18 hours ago

      Yup, you’d be surprised what you can accomplish with 10 GB of VRAM and a 12B model. Hell, my profile pic (which isn’t very good, tbf) was made on that 10 GB card using locally hosted Stable Diffusion. I hate big-corp AI, but I absolutely love open-market and open-source local models. It’s gonna be a shame when they start to police them.

      To OP: The problem is that they’re looking for keywords. With the amount of people under surveillance these days, they don’t give a rat’s ass if you went to your favorite coffee roasting site, they want to find the stuff they don’t want you to do.

      Piracy? You’re on a list. Any cleaning chemical that can be related to the construction of explosives? You’re on a list. These lists will then tack on more keywords that pertain to that list. For example, the explosives list will then search for matching components bought within a close span of time that would indicate you’re making them. Even searching for ways to enforce your privacy just makes them more interested.

      So then you put out a bunch of fake data. This data happens to say you viewed a page pertaining to that matching component. Whelp, that list just got hotter, and now there are even more eyes on you, being slightly more attentive this time. It’s a bad idea. The only way you’re getting out of surveillance, at least online, is to never go online.

      In reality, they probably won’t even do anything about the above. What they really want is money. Money for your info; money to sell more things to you. They want the average home filled with advertisements tailored from your information, because those adverts make those companies money, which they then use to buy more information to monetize your existence. It’s the largest pyramid scheme known to humanity, and we’re the unpaid grunts.

      The moment the world became connected through telephones, cable TV, and then the internet, this scheme was already in motion. Let’s be honest, smartphones were the mother lode. A TV, phone, and computer you always keep on you? They were salivating that day.

  • SendMePhotos@lemmy.world · +20/-1 · 1 day ago

    Obscuration is what you’re thinking of, and it works with things like AdNauseam (a Firefox add-on that will click all ads in the background to obscure preference data). It’s a nice way to smear the data, and probably better to do sooner (while the data collection is in its infancy) rather than later (when the companies may be able to filter obscuration attempts).

    I like it. I’m really not a fan of being profiled, collected, and categorized. I agree with the others: I hate this timeline. It’s so uncanny.

    • HelloRoot@lemy.lol · +2 · 1 day ago

      I still don’t really understand AdNauseam. What is the difference in privacy compared to clicking on none of the ads?

      • SendMePhotos@lemmy.world · +2 · 1 day ago

        Whatever data profile they already have on you can be obscured to make it useless, versus them probably trickling in data.

        Think of it like um…

        Having a picture of you with a moderate amount of notes that are accurate, vs having a picture of you with so much irrelevant/inaccurate data that you can’t be certain of anything.
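        A toy sketch of that picture analogy, with invented topics and counts: measure how dominant your top interest looks before and after random noise is mixed in.

```python
# How confident can a profiler be about your top interest before and
# after random noise events are injected? Topics and counts are made up.
import random
from collections import Counter

def top_interest_share(events):
    """Fraction of all events belonging to the single most common topic."""
    counts = Counter(events)
    return counts.most_common(1)[0][1] / len(events)

real = ["coffee"] * 40 + ["hiking"] * 10      # clear signal: 80% coffee

random.seed(0)
topics = ["knitting", "forex", "vampires", "llamas", "opera"]
noise = [random.choice(topics) for _ in range(200)]

print(top_interest_share(real))          # 0.8 -> confident profile
print(top_interest_share(real + noise))  # far lower -> smeared picture
```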

        • HelloRoot@lemy.lol · +5 · 1 day ago (edited)

          But the picture of me they have is: doesn’t click ads, like all the other adblocker people (which is accurate).

          Why would I want to change it to: clicks ALL the ads, like all the other AdNauseam people (which is also accurate)?

          • JustinTheGM@ttrpg.network · +2 · 1 day ago

            They build this picture from many other sources besides ad clicks, so the point is to obscure that. Problem is, if you’re only obscuring your ad click behavior, it should be relatively easy to filter out of the model.

            • HelloRoot@lemy.lol · +2/-1 · 1 day ago (edited)

              You are just moving the problem one step further, but that doesn’t change anything (if I am wrong, please correct me).

              You say it is ad behaviour + other data points.

              So the picture of me they have is: [other data] + doesn’t click ads like all the other adblocker people (which is accurate)

              Why would I want to change it to: [other data] + clicks ALL the ads like all the other adnauseum people (which is also accurate)

              How does AdNauseam or no AdNauseam matter? I genuinely don’t get it. It’s the same [other data] in both cases. Whether you click on none of the ads or all of the ads can be detected.


              As a bonus, if AdNauseam clicked just a couple of random ads, they would have a wrong picture of my ad-clicking behaviour.

              But if I click none of the ads they have no accurate assumption of my ad clicking behaviour either.

              Judging by incidents like the Cambridge Analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests (collected via your other browsing behaviour) from your ad-clicking behaviour when the two contradict each other or one of them seems random.

              • Ulrich@feddit.org · +4 · 1 day ago

                [other data] + clicks ALL the ads like all the other adnauseum people

                AdNauseam does not click all the ads, it just clicks some of them, like normal people do. Only those ads are not relevant to your interests; they’re just random, so it obscures your online profile by filling it with a bunch of random information.

                Judging by incidents like the cambridge analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests

                Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

                • HelloRoot@lemy.lol · +1 · 1 day ago (edited)

                  AdNauseam (Firefox add-on that will click all ads in the background to obscure preference data)

                  is what the top-level comment said, so I went off that info. Thanks for explaining.

                  Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

                  I didn’t mean it like that.

                  I meant it in an illustrative manner: the results of their mass tracking and psychological profiling were so dystopian that filtering out random false data seems trivial in comparison. I feel like a bachelor’s or master’s thesis would be enough to come up with a sufficiently precise method.

                  In comparison, it seems extremely complicated to algorithmically figure out what exact customized lie you have to tell every single individual to manipulate them into behaving a certain way. That probably needed a large team of smart people working together for many years.

                  But ofc I may be wrong. Cheers

  • fubbernuckin@lemmy.dbzer0.com · +11 · 1 day ago (edited)

    I don’t know if there’s a clean way to do this right now, but I’d love to see a software project dedicated to doing this. Once a data set is poisoned it becomes very difficult to un-poison. The companies would probably implement some semi-effective but heavy-handed means of defending against it if it actually affected them, but I’m all for making them pay for that arms race.

  • Ulrich@feddit.org · +8 · 1 day ago

    I have been a longtime advocate of data poisoning, especially in the case of surveillance pricing. Unfortunately, there don’t seem to be many tools for this outside of AdNauseam.

  • a14o@feddit.org · +12 · 1 day ago

    It’s a good idea in theory, but it’s a challenging concept to have to explain to immigration officials at the airport.

    • WalnutLum@lemmy.ml · +3 · 24 hours ago (edited)

      “it says here you clicked ‘sign me up for ISIS’ 10000 times?”

      “Haha no officer, you see it was my social chaff AI that clicked it”

  • relic4322@lemmy.ml · +10 · 1 day ago

    This is like chaff, and I think it would work. But you would have to own the fact that whatever patterns it generated, as far as anyone can tell, “you” were doing.

    I think there are other ways that AI can be used for privacy.

    For example, did you know that you can be identified by how you type and speak online? What if you filtered everything you said through an LLM first, normalizing it? That takes away a fingerprinting option. You could use a pretty small local LLM that would run on a modest desktop…
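    A rough sketch of that idea, assuming a local Ollama on the default port (the model name and prompt wording are just examples, not a recommendation):

```python
# Pipe outgoing text through a local LLM so stylometric fingerprinting
# sees the model's style instead of yours. Stdlib only; assumes an
# Ollama server is running locally.
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(text, model="llama3.2"):
    """Build the /api/generate payload asking for a neutral rewrite."""
    prompt = ("Rewrite the following text with the same meaning but in a "
              "plain, neutral style. Return only the rewritten text.\n\n" + text)
    return {"model": model, "prompt": prompt, "stream": False}

def normalize(text):
    """Send the rewrite request and return the model's single response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"].strip()

# normalize("ngl that patch slaps, merging it rn")  # needs Ollama running
```

    Of course the model’s output then has its own detectable style; the point is only that it’s a shared style rather than yours.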

  • stupid_asshole69 [none/use name]@hexbear.net · +5 · 1 day ago

    This isn’t a very smart idea.

    People trying to obfuscate their actions would suddenly have massive associated datasets of actions to sift through and it would be trivial to distinguish between the browsing behaviors of a person and a bot.

    Someone else said this is like chaff-and-flare anti-missile defense, and that’s a good analogy. Defenses like that are deployed when the target recognizes a danger and sees an opportunity to confuse it temporarily. They’re used in conjunction with maneuvering and other flight techniques to maximize the chance of avoiding certain death, not constantly from the moment the operator comes in contact with an opponent.

    On a more philosophical tip, the master’s tools cannot be turned against him.

      • stupid_asshole69 [none/use name]@hexbear.net · +3 · 1 day ago

        spray-bottle

        No, you can’t.

        You are not the hero, effortlessly weaving down the highway between minivans on your 1300cc motorcycle, katana strapped across your back, using dual handlebar mounted twiddler boards to hack the multiverse.

        If AI-driven agentic systems were used to obfuscate a person’s interactions online, then the fact that they were using those systems would become incredibly obvious and provide a trove of information that could easily be used to locate and document what that person was doing.

        But let’s assume what the OP proposed worked, and no one could tell the difference.

        That would be worse! Suddenly there are hundreds of thousands of data points that could be linked to you, and all that’s needed for a warrant are two or three that could be interpreted as probable cause of a crime!

        You thought you were helping yourself out by turning the fuzzer on before reading trot pamphlets hosted on marxists.org, but now they have an expressed interest in drain cleaner and glitter bombs and, best-case scenario, you gotta adopt a new pit mix from the humane society.

  • Ardens@lemmy.ml · +5 · 1 day ago

    So, she is talking about an AI war? Where those who don’t want us to be private control the weapons? Anyone else see a problem with that logic?

    Thousands of “you” browsing different sites will use an obscene amount of power and bandwidth. Imagine a million people doing that, let alone a billion… That’s just stupid in all kinds of ways.

  • chonkyninja@lemmy.world · +2 · 23 hours ago

    There are plenty of tools already that can create many profiles of you, each with completely different personalities and posts.

  • wise_pancake@lemmy.ca · +9/-1 · 1 day ago

    In a different direction: now is a good time to start looking at how local AI can liberate us from big tech.

    • dodgeflailimpose@lemmy.zip OP · +2 · 1 day ago

      Local AI requires investments in local compute power, which sadly is not affordable for private users. We would need some entity that we can trust to host it. I am happy to pay for that.

  • Ænima@feddit.online · +6 · 1 day ago

    I did this with period trackers. I’m male and my wife and I would always chuckle when my period was about to start.

  • relic4322@lemmy.ml · +2 · 1 day ago

    Ok, got another one for ya, based on some comments below. You have all the usual add-ons to block ads and such, but you create a sock-puppet identity and use AI to “click” ads in the background (stolen from a comment) that align with that identity. You don’t see the ads, but the traffic pattern supports the identity you are wearing.

    So rather than random, it’s aligned with a fake identity.
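    A toy sketch of that persona-weighted clicking (ad categories, weights, and IDs are all invented): instead of clicking uniformly at random, bias the choice toward the fake persona’s interests so the traffic tells a coherent but false story.

```python
# Pick ads to "click" with probability weighted toward a fake persona's
# interests. Everything here is a made-up illustration of the idea.
import random

# The sock-puppet's fake interest profile: category -> relative weight.
PERSONA = {"golf": 5, "sailing": 4, "watches": 3, "coding": 0.1}

def pick_ads_to_click(ads, k=3, rng=random):
    """ads: list of (ad_id, category) pairs. Returns k weighted picks."""
    weights = [PERSONA.get(cat, 0.5) for _, cat in ads]  # 0.5 = default
    return rng.choices(ads, weights=weights, k=k)

ads = [("a1", "golf"), ("a2", "coding"), ("a3", "sailing"), ("a4", "diapers")]
random.seed(1)
print(pick_ads_to_click(ads))  # mostly golf/sailing ads, rarely coding
```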