This would have actually been a great thing to not only acknowledge but promote if they weren’t so caught up in their own hype.
Not that I will ever get into one of those death traps, but if you tell the average consumer that any failure in autonomy immediately engages a tele-operator “to keep you moving on your way,” they would probably feel better about riding.
I’ve done tele-driving before and it’s remarkably good, even if latency is a concern.
It’s the facade of it all, the need to appear to live up to the hype. It’s going to get more people killed.
I work near downtown Austin, where both Waymo and Robotaxi operate.
Waymo cars are some of the best drivers on the road because they actually tested their product, use multiple lidar sensors instead of just cameras, and have remote driver backups for unusual situations.
Teslas drive like maniacs and will end a ride and tell the rider to get out in the middle of a lane.
I’ve ridden in a few Waymos before; in SF they can be more dependable and easier to get than other ride options. I never felt like I was in danger in one.
In my handful of experiences with them I’ve never had to use the help button to request assistance from a tele-operator, but it was clear they weren’t trying to hide the function from passengers: the feature was explained and clearly labeled.
A friend who uses them often told me about the one time he needed to ask for assistance, when his Waymo was stuck behind a DoorDash scooter with its hazard lights on that was either delivering or picking up and was blocking a turn lane in downtown SF. The Waymo didn’t know how to get around it, my friend hit the button for assistance, a voice came over the speakers asking how they could help, my friend explained the situation, and the tele-operator drove the car to safely navigate around it. He said it was probably 1.5–2 minutes of total inconvenience, with 75% of that time spent wondering whether he should hit the help button at all.
I understand a lot of AI implementations, such as Amazon Fresh and other business models, have been hiding offshored human assistance within their “AI” features, which I agree with you is deceitful, but my experience with Waymo was not that. They did not hide or obfuscate that function of the service; they actively informed the passenger of its existence.
Granted, I haven’t ridden in one for almost a year at this point and I only did so in the SF market so things may have changed since or are different elsewhere.
Also, I can’t say I follow the news about Waymo intently; I know they have run over a couple of cats, but I haven’t heard anything about them killing people. Has that happened?