A few enterprising hackers have started projects to do counter surveillance against ICE, and hopefully protect their communities through clever use of technology.
Thank you, but I do disagree. You cannot know whether the LLM's result includes all the required context, and you won't ask it to clarify, because you don't know what's missing from the output. In the end you miss the knowledge and waste the time, too.
How can you be sure the output includes what's relevant? Would you ever re-submit the question to an algorithm without knowing that re-submitting is even necessary, when there's no indication of it? The LLM may simply have left out what you needed, left out the important context surrounding it, and not even told you which authors to question further - no attribution, no accountability, no sense, sorry.
I’m not sure we disagree. I agree that LLMs are not a good source for raw knowledge, and it’s definitely foolish to use them as if they’re some sort of oracle. I already mentioned that they are not good at providing answers, especially in a technical context.
What they are good at is gathering sources and recontextualizing your queries based on those sources, so that you can pose your query to human experts in a way that will make more sense to them.
You're of course well within your rights to avoid the tech entirely, as it comes with many pitfalls. Many of these models are damn good at gathering info from real human sources, though, if you can be concise with your prompts and resist the temptation to swallow their "analysis".