Google on Wednesday began inviting Gemini users to let its chatbot read their Gmail, Photos, Search history, and YouTube data in exchange for potentially more personalized responses.

Josh Woodward, VP of Google Labs, Gemini and AI Studio, announced the beta availability of Personal Intelligence in the US. Access will roll out over the next week to US-based Google AI Pro and AI Ultra subscribers.

  • MangoCats@feddit.it · 2 points · 24 hours ago

    The amount of corporate-personalized shit I want in my life, unsolicited, is zero. Don’t send me SMS, don’t send me e-mails, NEVER have my home speaker announce things I didn’t explicitly ask for.

    However, when I search for things or make requests of “the cloud” to bring me information, I do appreciate having my personal history influence those results. I don’t want to sift through all the NFL, NBA, NHL, etc. score results and commentary just to get a weather forecast. I don’t want to see all the “big celebrity / entertainment news” mixed in with my local news. And this means that some degree of customization of my feeds and search results is necessary to steer those results toward my preferences.

    Would I appreciate having more direct, intuitive, transparent control of the filtering? Hell yes. Is anybody offering anything better than Google out there right now? Very few, and mostly of very limited capability. Please prove me wrong with links to examples in your responses.

    • MagicShel@lemmy.zip · 4 points · 20 hours ago

      Lemmy doesn’t have an algorithm that feeds me just the things I want to see. I have to shape it. I have to block people and subscribe to boards. And I have largely deterministic control over what I see.

      But look at Facebook. Look at Twitter. Look at YouTube. Look at … gestures at everything. It’s obvious that personalized services manipulate people to their detriment. They make people hate one another. They make people hate themselves.

      But that’s not even my personal objection, really. I’m an AI enthusiast. I’ll have entire conversations just to see how it will react. I’ve jailbroken them. I’ve run identical scenarios over and over for countless hours just to tweak prompts to be slightly better. And I want a blank slate when I talk to AI. I want to tell it exactly what it needs to know about me to answer a given question, and no more.

      Because as we can see, an algorithm that really understands what we want to see and tweaks every single response to match — is manipulating us. And I don’t want to be manipulated. I want my thoughts, such as they are, to be my own.

      Please prove me wrong with links to examples in your responses.

      I can’t prove you wrong. If you are happy with a machine picking what you get exposed to, then you’ll do that and be happy. But I know how thoughts can be manipulated, and I know I’m not immune, so yeah, I don’t want AI that I don’t strictly control the context of. I don’t want my thoughts shaped by how the AI believes someone like me could most effectively be steered in a desired direction. Because I look around me and I know it can. If not to me, then to thousands of others.

      But you do you. I wouldn’t presume to tell anyone my opinion is the only correct one.

      • MangoCats@feddit.it · 2 points · 42 minutes ago

        It’s obvious that personalized services manipulate people to their detriment. They make people hate one another. They make people hate themselves.

        I’d say that depends on who is in control of those services. The “big ones” like FB and X - sure, obviously. Others like BlueSky… less so. Reddit? Depends on how you use it. New Digg? Too early to tell.

        And I want a blank slate when I talk to AI

        In theory, yes, that’s what I want. In practice, I get the best, most productive results from AI by running a continuing conversation which the tool periodically “compacts” as its context window gets overloaded. That retained context almost always helps me get what I want out of the AI better than trying to re-state everything for every interaction. Some of that is laziness; sure, I could build my own context descriptions and “control” the LLM better, and I do create a body of specification documents as I go in an AI project for the LLM to refer back to as needed. But for the main “conversation,” I think it maintains the context window automatically better than I am capable of doing manually.
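        The “compaction” described here can be sketched roughly as follows. This is a hypothetical illustration, not any particular tool’s implementation: `estimate_tokens`, `summarize`, and `compact` are made-up names, and the summarizer is a stub where a real agent would ask the model itself to write the summary.

```python
# Rough sketch of context-window "compaction": once the running
# conversation exceeds a token budget, older turns are collapsed into a
# single summary message so the most recent context survives intact.

def estimate_tokens(messages):
    # Crude proxy: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Placeholder: a real agent would call the LLM here to summarize.
    return "Summary of earlier turns: " + "; ".join(
        m["content"][:40] for m in messages)

def compact(messages, budget=1000, keep_recent=4):
    """Collapse all but the last `keep_recent` messages into one summary."""
    if estimate_tokens(messages) <= budget or len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = {"role": "system", "content": summarize(old)}
    return [summary] + recent

history = [{"role": "user", "content": "x" * 800} for _ in range(10)]
compacted = compact(history, budget=1000)
print(len(compacted))  # 5: one summary message plus the 4 most recent turns
```

        The trade-off the commenter describes lives in `keep_recent` and `budget`: larger values preserve more verbatim context, smaller ones lean harder on the (lossy) summary.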

        an algorithm that really understands what we want to see and tweaks every single response to match — is manipulating us. And I don’t want to be manipulated.

        Some days, Google feels “in control”: I tell it what I like and what I don’t, and content is shaped accordingly. But in the past month or so I’ve felt a massive shift in what Google News presents me: tons of crap from X, much of it “aligned” to my point of view. I don’t want “introductions to X,” thank you very much; I’d switch it all off, but they don’t let me. And other news stories are quite a bit more “diverse” in their viewpoints than I was seeing several months ago. I really don’t want to read the Proud Boys’ take on current events, no matter how elegantly it’s dressed up.

        If you are happy with a machine picking what you get exposed to, then you’ll do that and be happy.

        It’s not that I’m happy; it’s that I really don’t have a choice. I can’t travel the whole world and make my own observations daily, and even if I did, I wouldn’t have access to most of what matters. So some form of curation in the news that reaches me is inevitable. I would like my sources to be as unfiltered and unbiased as possible (with the exception of filtering out sports and “entertainment”), but that’s always going to be an illusion. Cronkite and Brokaw were filtered and biased; they just did a good job of looking like they might not be.

        I don’t want AI that I don’t strictly control the context of.

        Good luck with that. Proto-AIs that you don’t control have been shaping the information that reaches you and everyone you know for decades now.