Walter Isaacson’s Elon Musk biography is set to be published on Tuesday, and a new preview of the book offers fresh details about Tesla’s development of the upcoming Full Self-Driving (FSD) version 12.
In an additional preview of the biography published by CNBC, Isaacson describes how Tesla shifted FSD v12’s development toward AI within the last several months, moving away from the “rules-based” approach of previous versions.
Notably, FSD v12 is expected to train its neural network on billions of video frames from real-world driving rather than relying on thousands of lines of hand-written code like previous versions. In a conversation with Musk last December, Tesla Autopilot employee Dhaval Shroff likened the concept to the popular chatbot ChatGPT, but applied to driving.
“It’s like ChatGPT, but for cars,” Shroff said. “We process an enormous amount of data on how real human drivers acted in a complex driving situation, and then we train a computer’s neural network to mimic that.”
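The approach Shroff describes is commonly called behavior cloning or imitation learning. The toy sketch below is purely illustrative and is not Tesla’s implementation: it invents a three-feature “observation” per frame and fits a simple least-squares model (standing in for a deep neural network) to imitate a synthetic “human driver’s” steering from recorded demonstrations.

```python
import numpy as np

# Illustrative only: behavior cloning in miniature. Real FSD training uses
# billions of video frames and deep networks; the features and scale here
# are hypothetical.
rng = np.random.default_rng(0)

# Pretend each observation summarizes a camera frame with three numbers
# (e.g. lane offset, heading error, road curvature) -- invented features.
observations = rng.normal(size=(1000, 3))

# The "human driver" policy we want to mimic: steering as a fixed function
# of the observation, plus a little human noise.
true_weights = np.array([0.8, -0.5, 0.3])
steering = observations @ true_weights + rng.normal(scale=0.01, size=1000)

# Fit a model to imitate the demonstrations (least squares stands in for
# gradient-descent training of a neural network).
learned_weights, *_ = np.linalg.lstsq(observations, steering, rcond=None)

# The cloned policy now steers nearly identically to the demonstrator.
print(np.allclose(learned_weights, true_weights, atol=0.01))
```

The key property this illustrates is that no driving rule is ever written down; the policy is recovered entirely from examples of what human drivers did.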
Surprisingly enough, Tesla only shifted toward this “neural network planner” approach recently. By the beginning of this year, however, the neural network had already analyzed 10 million video clips, selected from the best drivers whose footage the system had access to. Musk instructed the employees at the company’s Buffalo, New York facility who analyze the footage to train the AI only on things “a five-star Uber driver would do.”
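The curation step Isaacson describes amounts to filtering the training set by driver quality before any learning happens. The snippet below is a hypothetical sketch of that idea; the clip records, rating field, and threshold are all invented for illustration.

```python
# Hypothetical data-curation step: keep only clips from highly rated
# drivers before they reach the training set. Field names are invented.
clips = [
    {"id": "clip-001", "driver_rating": 4.9},
    {"id": "clip-002", "driver_rating": 3.2},
    {"id": "clip-003", "driver_rating": 5.0},
]

RATING_THRESHOLD = 4.8  # the "five-star Uber driver" standard, per Musk

training_set = [c for c in clips if c["driver_rating"] >= RATING_THRESHOLD]
print([c["id"] for c in training_set])  # ['clip-001', 'clip-003']
```

Because the network imitates whatever it is shown, filtering out mediocre driving up front is what keeps the learned behavior close to the best human drivers rather than the average one.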
Moving from a rules-based approach to a network-path-based one allowed FSD to draw on human driving data to avoid obstacles, even when doing so meant bending some rules. Shroff demonstrated the idea to Musk with a course featuring trash bins, debris, and upturned traffic cones, all of which the car handled surprisingly well.
“Here’s what happens when we move from rules-based to network-path-based,” Shroff explained. “The car will never get into a collision if you turn this thing on, even in unstructured environments.”
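The distinction Shroff draws can be caricatured in a few lines. This is a deliberately simplified, hypothetical contrast, not Tesla’s code: a hand-written rule can leave the car stuck in a situation its authors never anticipated, while a policy trained on human demonstrations reproduces what careful drivers actually did.

```python
# Hypothetical contrast between the two approaches described above.

def rules_based_policy(obstacle_ahead: bool, adjacent_lane_clear: bool) -> str:
    # Hand-written rule: never cross a solid lane line, period.
    if obstacle_ahead and adjacent_lane_clear:
        return "stop"  # the rule forbids crossing, so the car stalls
    return "continue"

def learned_policy(obstacle_ahead: bool, adjacent_lane_clear: bool) -> str:
    # A policy trained on demonstrations has seen human drivers briefly
    # cross the line to get around debris when the next lane is clear.
    if obstacle_ahead and adjacent_lane_clear:
        return "nudge_around"  # imitates what careful human drivers did
    return "continue"

print(rules_based_policy(True, True))  # stop
print(learned_policy(True, True))      # nudge_around
```

In the real system the learned behavior is not an explicit `if` branch at all; it emerges from the training data, which is what lets it handle unstructured scenes like the bins-and-cones demo.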
Musk quickly took to the idea, as seen in a recent livestream of Tesla’s FSD v12 software in Palo Alto with Autopilot software director Ashok Elluswamy. Musk has repeatedly praised the upcoming version’s driving performance, despite one moment in the drive when the car nearly ran a red light.
In any case, Musk could argue that the red-light moment illustrates why self-driving software needs to keep learning. Because the system will be continually trained on camera footage from real-world drivers, it should, according to Musk, theoretically become safer over time.
During development, Musk also reportedly latched onto the fact that it took over a million video clips for the neural network to begin performing well, though he looks forward to what significantly more data will do for FSD.
Still, critics and regulators have expressed concerns about the faults of human drivers training AI-based driving systems, and Tesla has repeatedly been questioned by the National Highway Traffic Safety Administration (NHTSA) about its Autopilot and FSD beta systems.
According to Isaacson, Tesla plans to release FSD v12 as soon as regulators approve it. Meanwhile, an ongoing study by the National Transportation Safety Board (NTSB) is looking to determine whether self-driving cars should be permitted to imitate human driving behaviors that bend traffic rules, such as creeping forward at stop signs.
Musk said in April that he expects Tesla to reach full autonomy within a year, though he has also been known to share ambitious targets for the software in the past.
You can read Walter Isaacson’s full account of the development of Tesla FSD v12 here, in a CNBC preview of the upcoming Elon Musk biography.
What are your thoughts? Let me know at zach@teslarati.com, find me on X at @zacharyvisconti, or send your tips to us at tips@teslarati.com.
The post Tesla FSD v12 shifts away from ‘rules-based’ approach appeared first on TESLARATI.