Google on Wednesday began inviting Gemini users to let its chatbot read their Gmail, Photos, Search history, and YouTube data in exchange for possibly more personalized responses.
Josh Woodward, VP of Google Labs, Gemini and AI Studio, announced the beta availability of Personal Intelligence in the US. Access will roll out over the next week to US-based Google AI Pro and AI Ultra subscribers.



Mates, I’m positively effusive about AI compared to your average Lemmster. But I can’t for the life of me figure out why I would want personalized AI any more than I want personalized ads. Which is zero — that’s the amount of corporate-personalized shit I want in my life.
I would love an AI that was personalized to me and could answer the questions I actually have rather than shitting out generic shit that barely applies to my question. Something that knows me well enough to help me with all the shit I don’t remember or have the energy to make myself start. That’s something I’ve dreamed about ever since I saw it in scifi media.
But I want to OWN and CONTROL the data and the way the AI handles it. Unfortunately I don’t have the hardware to run local models well, so I don’t really bother. Corpo AI is just opening your shirt and telling them where to swing the golf club.
I can see that, but also if I don’t own the AI, then the knowledge it has about me could be used to manipulate me, maybe in ways too subtle for me to notice.
For me it would be zero when unsolicited. Don’t send me SMS, don’t send me e-mails, NEVER have my home speaker announce things I didn’t explicitly ask for.
However, when I search for things or make requests of “the cloud” to bring me information, I do appreciate having my personal history influence those results. I don’t want to sift through all the NFL, NBA, NHL, etc. score results and commentary just to get a weather forecast. I don’t want to see all the “big celebrity / entertainment news” mixed in with my local news. And this means that some degree of customization of my feeds and search results is necessary to steer those results to my preferences.
Would I appreciate having more direct, intuitive, transparent control of the filtering? Hell yes. Is anybody offering anything better than Google out there right now? Very few are, and mostly with very limited capability. Please prove me wrong with links to examples in your responses.
Lemmy doesn’t have an algorithm that feeds me just the things I want to see. I have to shape it. I have to block people and subscribe to boards. And I have largely deterministic control over what I see.
But look at Facebook. Look at Twitter. Look at YouTube. Look at … gestures at everything. It’s obvious that personalized services manipulate people to their detriment. They make people hate one another. They make people hate themselves.
But that’s not even my personal objection, really. I’m an AI enthusiast. I’ll have entire conversations just to see how it will react. I’ve jailbroken them. I’ve run identical scenarios over and over for countless hours just to tweak prompts to be slightly better. And I want a blank slate when I talk to AI. I want to tell it exactly what it needs to know about me to answer a given question, and no more.
Because as we can see, an algorithm that really understands what we want to see and tweaks every single response to match is manipulating us. And I don’t want to be manipulated. I want my thoughts, such as they are, to be my own.
I can’t prove you wrong. If you are happy with a machine picking what you get exposed to, then you’ll do that and be happy. But I know how thoughts can be manipulated, and I know I’m not immune, so yeah, I don’t want AI that I don’t strictly control the context of. I don’t want my thoughts shaped by how the AI believes someone like me could most effectively be steered in a desired direction. Because I look around me and I know it can. If not to me, then to thousands of others.
But you do you. I wouldn’t presume to tell anyone my opinion is the only correct one.
I’d say that depends on who is in control of those services. The “big ones” like FB and X - sure, obviously. Others like BlueSky… less so. Reddit? Depends on how you use it. New Digg? Too early to tell.
In theory, yes, that’s what I want. In practice, I find I get the best, most productive results from AI when I just run a continuing conversation that it periodically “compacts” as its context window gets overloaded, but that remaining context almost always helps me get what I want out of the AI better than trying to re-state exactly everything I want for every interaction. Some of that is laziness; sure, I could build my own context descriptions and “control” the LLM better, and I do create a body of specification documents as I go in an AI project for the LLM to refer back to as needed. But for the main “conversation” I think it maintains the context window automatically better than I am capable of doing manually.
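To be concrete about what I mean by “compacts”: the rough idea (a sketch of the general technique, not any particular vendor’s actual implementation) is to keep recent turns verbatim and fold older ones into a running summary once a size budget is exceeded. Here’s a minimal Python sketch; the character budget and the summarize() placeholder are just stand-ins I made up, and a real setup would have the LLM itself write the summary:

```python
# Rough sketch of conversation "compaction": keep recent turns verbatim,
# fold older turns into a running summary once the history gets too long.
# summarize() is a placeholder; in practice the LLM would write the summary.

from dataclasses import dataclass, field


def summarize(previous_summary: str, older_turns: list[str]) -> str:
    # Placeholder: a real implementation would prompt the model to condense
    # these turns; here we just truncate to keep the sketch self-contained.
    combined = " ".join([previous_summary, *older_turns]).strip()
    return combined[:500]


@dataclass
class Conversation:
    max_chars: int = 8000                      # crude stand-in for a token budget
    summary: str = ""                          # compacted memory of older turns
    turns: list[str] = field(default_factory=list)

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        if sum(len(t) for t in self.turns) > self.max_chars:
            self._compact()

    def _compact(self) -> None:
        # Fold the older half of the conversation into the running summary,
        # keeping the most recent turns verbatim.
        cutoff = len(self.turns) // 2
        older, recent = self.turns[:cutoff], self.turns[cutoff:]
        self.summary = summarize(self.summary, older)
        self.turns = recent

    def context(self) -> str:
        # What would be sent to the model: summary first, then recent turns.
        return "\n".join(filter(None, [self.summary, *self.turns]))


# Tiny budget just to show compaction kicking in.
convo = Conversation(max_chars=200)
for i in range(10):
    convo.add(f"user: question {i}\nassistant: answer {i}")
print(convo.context())
```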
Some days, Google feels “in control”: I tell it what I like and what I don’t like, and content is shaped accordingly. But in the past month or so I have felt a massive shift in what Google News is presenting me: tons of crap from X, much of it “aligned” to my point of view. I don’t want “introductions to X”, thank you very much, just switch it all off, but they don’t. And other news stories are quite a bit more “diverse” in their viewpoints than I was seeing several months ago, and I really don’t want to read the Proud Boys’ take on current events, thanks, no matter how elegantly it’s dressed up.
It’s not that I’m happy; it’s that I really don’t have a choice. I can’t travel the whole world and make my own observations daily, and even if I did I wouldn’t have access to most of what matters… so, some form of curation in the news that reaches me is inevitable. I would like my sources to be as unfiltered and unbiased as possible (with the exception of filtering out sports and “entertainment”), but that’s always going to be an illusion. Cronkite and Brokaw were filtered and biased; they just did a good job of looking like they might not be.
Good luck with that. Proto-AIs that you don’t control have been shaping the information that reaches you and everyone you know for decades now.
I would be ok with it being like a person, more like an acquaintance at work, maybe. Specifically, I mean I would be ok with the AI knowing about me only based on what I’ve said to it.
But none of this surveillance economy stuff. And the AI model can be no snitch to big ad tech.
Even that is just confusing. I sometimes use Perplexity (because Pro comes with my bank account - neobanks have zero focus). And by default it remembers things you say, so when I ask a question it will sometimes randomly decide to bring in something else I asked about before. For example, I sometimes use it to look up programming-related stuff, and then when I ask something else it will randomly research whatever language it thinks I like now in that context too and do things like suggest an anime based on my recent interest in Rust for no good reason.
That’s true, these supposedly intelligent systems are still really stupid about this stuff, especially with limited room to store that additional, ever-growing context about you.
I wasn’t accounting for quality, and right now it’s bad for this. And I’m skeptical it will get better. The models need to be tactful about using accumulated knowledge about the person driving them.
I can’t help but feel like my descriptions are getting more and more similar to just describing a competent person. And I’m aware I’m being idealistic and that what I’m OK with won’t be a product any time soon.
I guess it would be fully on-device, encrypted at rest, with a perfectly good memory of our conversations, and it would be tactful about bringing knowledge into them. I dunno, I’m just describing the ideal personal assistant AI. And many people would make it a companion. And… yeah, anyway. Surveillance capitalism and pervasive advertising are bad.
I’m sure that’ll be a tiered pay option soon enough. $10 a month gets you an ai that’s just an acquaintance. Doesn’t “care”, forgets your name sometimes, doesn’t remember that thing you talked about last week. $50 a month gets you your ai best friend. It “cares”, remembers everything about you, makes suggestions based on what it knows about you, even goes out of its way to prompt you first and ask you how your dentist appt went.
In my opinion ai is in the “drug dealer wandering around the club giving a free bump” phase. Once people get addicted and sew it into the fabric of everyday life, these companies will up the price, make the cheaper tiers too frustrating to use, and charge up the ass. “Oh, you’ve got an ai boyfriend? Let’s see how much you’ll pay to keep it or have it lobotomized.”
Yeah, I described this more in another reply. And the more I described what I would want from an ai assistant, the more it made me realize how bad it would be for society.
No, it was implemented long ago.