I think it’s terrible. I asked one to find the source of a picture someone had sent me — I assumed it was a quote copied from somewhere, not knowing the sender had made it themselves.
The LLM uploaded the picture to the internet and then “found” the exact image, stating it didn’t exist anywhere else. That source is still viewable on the internet despite my immediately submitting a removal request. It’s not private, but I felt like I’d been had.
I think combining LLMs with big data was the most ingenious way to extract highly secret and sensitive information from individuals, intellectual property from companies, and secrets from governments, military operations, executives, and politicians. I could not have come up with a better way to burn it all down, given that all of these secrets will "leak" at some point. I fully expect LLMs to become masters of manipulation, especially for those who think their LLM is their significant other. Social media manipulation algorithms will be supplanted by AI.
But you trust Google, Gmail, and a host of other online service providers? How do you know that Google isn’t using your email to train? You either have to trust your service providers or not use them.
This is like the early days when people didn’t trust buying things over the internet
> This is like the early days when people didn’t trust buying things over the internet
If you like bad analogies, why not do car analogies? For example, at least this one's accurate:
I wouldn't trust Sam Altman any more than a used car salesman. The only difference is, Sam Altman's persuaded me to pay him to sell me as the product.
> How much do you trust major LLM providers (OpenAI, Anthropic, Google, etc.) with your data?
I have zero trust in these companies on this count, and that's the main reason why I avoid using products that incorporate "AI".
But I assume you’re already trusting Google with your data, or Apple, who is training AI models. There is no difference.