- cross-posted to:
- aicompanions@lemmy.world
Although they have a working prototype, I wouldn't be surprised if some of this video is CGI. Regardless, ChatGPT already talks and functions like this today, so we're probably a lot closer to this reality than we think.
This looks pre-programmed. There's no way this wasn't done multiple times to ensure its success. The cameras are out of frame of the robot, and there's nothing to indicate optical sensors on the humanoid itself.
This is really amazing and a little scary at the same time.
What kind of bothers me is the "uncertain" way the robot touched the drying rack after putting the dish inside, slightly moving it, or how it seems to almost stumble when explaining its action: "I… I think I did pretty well." Is the AI trained to do that? I don't know, maybe I'm overthinking it.