[Source: Forbes]
The car we were riding in was a white Lexus RX450h outfitted with a $65,000 laser sensor on the roof, and other gear that included radar sensors in the front and rear bumpers, a high-def camera looking out from the windshield, and another looking inward at the passengers – about $100,000 worth of extra technology in all. It’s all pulling in massive amounts of data. The laser, for instance, takes 1.5 million range measurements per second.
On the instrument panel, a graphic depicted each of the cars around us as a white rectangle and tracked its movement relative to ours. It even picked up a motorcycle weaving its way between cars, even though it wasn’t traveling in a marked lane. It also sent an alert that the car behind us was following too closely.
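One simple way a tailgater alert like the one above could work is a headway check: divide the gap reported by the rear radar by the car's own speed and compare it to a time threshold. This is a minimal illustrative sketch, not Google's actual logic; the function name, parameters, and the two-second threshold are all assumptions.

```python
def tailgater_alert(gap_m, own_speed_mps, min_headway_s=2.0):
    """Return True if the car behind is closer than min_headway_s seconds.

    gap_m:         distance to the following car, from the rear radar (meters)
    own_speed_mps: our own speed (meters per second)
    """
    if own_speed_mps <= 0:
        return False  # no meaningful headway when stopped
    # Headway = time the follower would take to cover the gap at our speed.
    return gap_m / own_speed_mps < min_headway_s

# A 15 m gap at 25 m/s (about 56 mph) is a 0.6-second headway, so this alerts.
alert = tailgater_alert(gap_m=15.0, own_speed_mps=25.0)
```

Expressing the gap in seconds rather than meters keeps the threshold sensible at any speed, which is why following-distance rules are usually stated in seconds.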
Before the car can drive itself, Google engineers have to drive the route themselves to gather data about the environment, and then add it to highly detailed maps of the roads and terrain. (Luckily, this is something that Google happens to be very good at.) When it’s the autonomous vehicle’s turn to drive, it compares the data it is acquiring from all those sensors and cameras to the previously recorded data. That helps it differentiate a pedestrian from a light pole.
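The comparison described above is, in essence, change detection against a prior map: laser returns that line up with the pre-recorded map are static background (a light pole), while returns with no counterpart in the map are flagged as new objects (a pedestrian). A minimal sketch of that idea, assuming 2-D point positions and a 0.5 m matching tolerance (the names and numbers here are illustrative assumptions, not Google's implementation):

```python
def classify_returns(live_points, prior_map, tolerance=0.5):
    """Split live range measurements into static background vs. new objects.

    live_points: list of (x, y) positions measured by the laser this frame.
    prior_map:   list of (x, y) positions recorded on the earlier mapping drive.
    tolerance:   max distance (meters) for a live point to match the prior map.
    """
    static, new_objects = [], []
    for px, py in live_points:
        # A live return "matches" the map if any recorded point is nearby.
        near_prior = any(
            (px - mx) ** 2 + (py - my) ** 2 <= tolerance ** 2
            for mx, my in prior_map
        )
        (static if near_prior else new_objects).append((px, py))
    return static, new_objects

prior = [(10.0, 2.0)]               # a light pole, recorded on the mapping drive
live = [(10.1, 2.05), (6.0, -1.0)]  # the pole again, plus something new
background, objects = classify_returns(live, prior)
```

A real system would work over millions of 3-D returns per second with spatial indexing and motion tracking, but the underlying subtraction of "what was already here" from "what the sensors see now" is the same.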
There are limitations, though. Urmson says the driverless car can’t handle heavy rain and can’t drive on snow-covered roads “because the appearance and shape of the world changes. It can’t figure out where to go or what to do.” And engineers are still working on how to program the car to handle “rare events” like encountering a stalled vehicle over the crest of a hill or identifying debris, like a tire carcass, in the middle of the road.