Ex-Google engineer Blake Lemoine discusses why LaMDA and other AI systems may be considered sentient and explains exactly how much AI systems know about consumers.
You hypothesized that LaMDA has a soul. Where are you now on that scientific continuum between hypothesis, theory and law?
Lemoine: I’ve been trying to be very clear about this. From a scientific standpoint, everything was at the working-hypothesis, doing-more-experiments stage. The only hard scientific conclusion I came to was that LaMDA is not just the same kind of system as GPT-3, Meena and other large language models. There’s something more going on with the LaMDA system.
Many articles about you say, ‘This guy believes the AI is sentient.’ But when we think about a ‘working hypothesis,’ did you mean you were still working on that idea and hadn’t proven it?
Lemoine: [A working hypothesis] is what I think is the case. I have some amount of evidence backing that up, and it is inconclusive. Let me continue gathering evidence and doing experiments, but for the moment, this is what I think is the case. That’s basically what a working hypothesis is.
Read More at Tech Target