How not to question the ethics of Tesla’s failed Autopilot tech

On Thursday, The New York Times Magazine published a big reported feature on Tesla’s erratic, unreliable, and sometimes deadly self-driving technology, and on what its success or failure might reveal about Elon Musk. Which, right up front, I have to say strikes me as a very odd framing: on any list of subjects that might be illuminated by fleets of not-actually-autonomous vehicles loose on public roads, mowing down pedestrians and plowing into stationary objects, “Elon Musk’s personality and morals” is surely the least important and certainly the least interesting.

Still, it’s mostly an entertaining read. Journalist Christopher Cox rides along with some sweaty Tesla enthusiasts in California on a pair of surreal and darkly comic journeys, in which their hype about the revolutionary, life-saving potential of self-driving cars is periodically interrupted by the urgent need to stop a self-driving car from spontaneously killing somebody.

A minute later, the car warned Key to keep his hands on the steering wheel and his eyes on the road. “Tesla is kind of like a nanny about that now,” he complained. If Autopilot had once been dangerously permissive of inattentive drivers, even letting them doze off behind the wheel, that flaw, like the stationary-object bug, had been fixed. “Between the steering wheel and the eye tracking, it’s just a solved problem,” Key said.

[…]

Finally, Key asked FSD to take us back to the cafe. But as we started the left turn, the steering wheel jerked and the brake pedal shuddered. “Okay …,” Key muttered nervously.

A moment later, the car stopped in the middle of the road, with a line of oncoming cars bearing down on us. Key hesitated for a second, then quickly took over and completed the turn. “It might have picked up speed at that point, but I wasn’t willing to cut it that close,” he said. If he was wrong, of course, there was a good chance of a second AI-caused crash on the same mile-long stretch of road.

There’s also some good reporting in there on how habitually slippery Tesla and Musk are when it comes to communicating what the company’s cars are capable of and what those cars actually do, with Musk light-years out in front as the most reckless over-promiser on the planet. For the company’s part, Tesla compares its Autopilot technology’s crash statistics to those of human-driven vehicles in ways that seem designed, at minimum, to obscure the context in favor of its claim that its AI drives better and more safely than humans do. A false claim. That’s bad.

The Times story frames this subject in terms of utilitarianism and risk-reward calculus. You can understand the impulse: the one reliable thing Elon Musk does, when confronted with his own malice toward others or with the fact that his cars kill people, is retreat into a simultaneously half-baked and messianic long-termism. And the fact that his cars do kill people lends that crap some regrettable weight. So in comes Peter Singer, every burnt-out internet guy’s favorite utilitarian philosopher, to parse the ethics of Musk’s willingness to flood the roads with cars that sometimes spontaneously decide it’s time to squash a toddler:

Even if Autopilot were only as deadly as a human driver, Singer told me, we should prefer the AI, on the premise that the next software update, built on crash reports and near-miss data, will make the system safer. “It’s a bit like surgeons doing experimental surgery,” he said. “They might lose patients the first few times they perform the operation, but in the long run they save more patients.” Singer added, however, that it’s important for surgeons to obtain informed consent from their patients.

Here, at the top of the very next paragraph, is the exact moment my hair catches fire:

Does Tesla obtain informed consent from its drivers?

In the abstract, this is a fine question: Tesla overpromises what its cars can do and fudges its safety record; many of Tesla’s human drivers (or, like, operators?) may not know what they are, and are not, buying. It is also entirely the wrong question.

The salient informed-consent question about unpredictable autonomous vehicles on public roads is not whether Tesla drivers have enough information to consent to the risk to their own safety. In Singer’s analogy, whether he or Cox realizes it or not, the “patient” is not the Tesla driver but everyone else out there using public roads, sidewalks, and crosswalks: anyone who could be killed or maimed at any moment by an unproven technology being tested on them without their knowledge or consent. No sane experimental surgical technique carries a chance of randomly killing innocent bystanders minding their own business elsewhere in the building; a self-driving car, even one carrying the most informed and enthusiastically consenting Tesla superfan, can spontaneously kill anyone else it comes across. In fact, this has happened, many times. The Times story itself later recounts one such case, in which a self-driving Tesla ran a red light at a Los Angeles intersection and slammed into a human-driven Honda, killing the Honda’s occupants. They got no say in whether the Tesla’s driver would take over from his Autopilot system in time.

Which is to say that in Singer’s analogy, the Tesla owner using their car’s Autopilot system is the surgeon. And not even an ordinary surgeon, but basically the Human Centipede guy, performing surgery on randomly selected, unsuspecting strangers. Who cares how fully and informedly the one person consents to the risks he puts on other people, when those other people get no choice in the matter at all?

It’s like reporting on the dangers of assault weapons by focusing on the risk to the blissfully ignorant AR-15 owner whose gun might blow up in his hands while he fires bullets at schoolchildren. It’s like wrestling with anti-vaxxers’ right to dictate what goes into their own bodies while ignoring that this amounts to granting them unilateral power over what goes into everybody else’s. You keep thinking the writer might get around to considering the public’s consent, that the public might want to at least be consulted on its willingness to share the morning commute with two tons of killer robot technology, exempted from normal safety checks and road-tested in live highway traffic by amateur fanatics. Somehow, he never gets there.

In this way Cox, knowingly or not, adopts the libertarian’s, or the sociopath’s (granting, generously, that these are not synonyms), framing of Musk and Tesla: that public roads, open to an uninformed public, are a legitimate place to test a car’s inherently dangerous and unproven experimental technology, and that building a data set through trial and sometimes fatal error might, at some hypothetical future point, allow that technology to live up to its manufacturer’s marketing boasts. Further: that an unfalsifiable claim about what some idealized future version of self-driving AI might do is an unexamined, self-evident justification for turning over public infrastructure for use as a crash-test site. It’s like making the Big Mac a mandatory part of all elementary-school lunches because the CEO of McDonald’s says he dreams of a day when the Big Mac prevents cancer.

Lurking behind all of it, unscrutinized by the Times story, is the question of who or what bears the ultimate responsibility to reasonably protect the public from the dangers that an unaccountable clique of brain-poisoned hypercapitalists and their deplorable personality cult pose to us all. Maybe that fight has already been lost: after all, incompetently self-driving Teslas are already out there on the roads, spontaneously flipping over, ramming pedestrians, plowing into hapless human drivers at intersections, catching fire over nothing, and there appears to be neither the civic nor the political will to get them off of those roads. There isn’t even any broad consensus that doing so is something any institutional authority is capable of accomplishing.

But still. Imagine the ideal world implied by the Times writer’s concern for the informed consent of Tesla drivers whose cars kill other people: In it, those drivers would know the dangers they face when they engage the Full Self-Driving technology in their new cars. Great. The rest of us, presumably, would just have to accept that the price of getting from here to there out in the world is the possibility of being smeared down a mile of sidewalk by a rogue robot car. Is that how things ought to be? At least we would have been notified!

Dressing up that bleak capitulation as a painful but possibly necessary trade-off, fit for examination against a coherent ethical philosophy, however well-meaning the examination, is giving the game away. Quite simply, these things are broken death machines with no business on the road, and in any minimally functional society the choice of whether to put them there would not belong to a shitposting doofus like Elon Musk. For the press to dance around any part of that stark truth seems… well, let’s just settle for morally unsound.