We're pretty
familiar with autonomous cars around here, and we've even been treated to a
ride in one of Stanford's robots at their automotive innovation lab, which they
launched in partnership with Volkswagen. You might also remember Shelley, their
autonomous Audi TTS, which raced itself to the top of Pikes Peak last
year. Volkswagen's thinking behind all of this high performance autonomous car
stuff is that at some point, they'll be able to program your car to be a far,
far better driver than you could ever be, and it'll have the ability to pull
some crazy maneuvers to save you from potential accidents.
Google,
who's just down the road from Stanford, seems to understand this, and they've
turned their autonomous cars up to "aggressive" in this driving demo
that they gave to some lucky sods in a parking lot at the TED conference in
Long Beach. It's pretty impressive. This might seem dangerous, but arguably
the demo is safer than a human driving around the parking area at normal
speeds, assuming the car's sensors are all switched on and it's
not just playing back a preset path. The fact is that a car equipped with radar
and LIDAR and such can take in much more information, process it much more
quickly and reliably, make a correct decision about a complex situation, and
then implement that decision far better than a human can.
This is
especially true if we consider the type of research that is being done with
Shelley to teach cars how to make extreme maneuvers, safely. So why aren't we
all driving autonomous cars already? It's not a technical problem; there are several
cars on the road right now with lane sensing, blind spot detection and adaptive
cruise control, which could be combined to allow for autonomous highway
driving. Largely, the reasons seem to be legal: there's no real framework or
precedent for yielding control of a vehicle to an autonomous system, and nobody
knows exactly who to blame or sue if something goes wrong.
And
furthermore, the first time something does go wrong, it's going to be like a
baseball bat to the face of the entire robotics industry. Anyway, enough of the
depressing stuff, here's an outside view of Google's robot car squealing around
that parking lot. For what it's worth, "aggressive" is apparently one
of four driving personalities that you can choose from every time you start up
one of their robot cars.
LIDAR (Light Detection and Ranging):
LIDAR (also written LADAR) is an optical remote sensing technology that
can measure the distance to, or other properties of, a target by illuminating
the target with light, often using pulses from a laser. LIDAR technology has
application in geomatics, archaeology, geography, geology, geomorphology,
seismology, forestry, remote sensing and atmospheric physics, as well as in
airborne laser swath mapping (ALSM), laser altimetry and LIDAR Contour Mapping.
The acronym LADAR (Laser Detection and Ranging) is often used in military
contexts. The term "laser radar" is sometimes used, even though LIDAR
does not employ microwaves or radio waves and is therefore not, strictly
speaking, a form of radar.
LIDAR uses
ultraviolet, visible, or near infrared light to image objects and can be used
with a wide range of targets, including non-metallic objects, rocks, rain,
chemical compounds, aerosols, clouds and even single molecules. A narrow laser
beam can be used to map physical features with very high resolution. LIDAR has
been used extensively for atmospheric research and meteorology.
Downward-looking LIDAR instruments fitted to aircraft and satellites are used
for surveying and mapping; a recent example is the NASA Experimental
Advanced Research Lidar. In addition, LIDAR has been identified by NASA as a key
technology for enabling autonomous precision safe landing of future robotic and
crewed lunar landing vehicles. Wavelengths ranging from about 10 micrometers
down to the ultraviolet (ca. 250 nm) are used to suit the target. Typically,
light is reflected via backscattering.
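The pulsed-laser ranging described above reduces to a simple time-of-flight calculation: the pulse travels out to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is illustrative, not taken from any particular LIDAR API):

```python
# Time-of-flight ranging: the core calculation behind a pulsed-laser
# LIDAR distance measurement. Illustrative sketch only.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_pulse(round_trip_seconds):
    """Distance to the target: the pulse travels out and back, so halve it."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 333 nanoseconds came from a target roughly
# 50 m away -- about the stated reach of a short-range laser scanner:
distance_m = range_from_pulse(333e-9)  # ≈ 49.9 m
```

A real scanner repeats this measurement at a high rate across a sweep of angles to build up a point cloud of its surroundings.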
Google Street View
Google
Street View is a technology featured in Google Maps and Google Earth that
provides panoramic views from various positions along many streets in the
world. It was launched on May 25, 2007, originally only in several cities in
the United States, and has since gradually expanded to include more cities and
rural areas worldwide. Google Street View displays images taken from a fleet of
specially adapted cars. Areas not accessible by car, like pedestrian areas,
narrow streets, alleys and ski resorts, are sometimes covered by Google Trikes
(tricycles) or a snowmobile. Each of these vehicles carries nine
directional cameras for 360° views at a height of about 8.2 feet (2.5
meters), GPS units for positioning, and three laser range scanners that
measure up to 50 meters across 180° in front of the vehicle.
There are
also 3G/GSM/Wi-Fi antennas for scanning 3G/GSM and Wi-Fi hotspots. More
recently, 'high quality' images have been captured with open-source hardware
cameras from Elphel.
Where available, street view images appear after zooming in beyond the highest
zooming level in maps and satellite images, and also by dragging a
"pegman" icon onto a location on a map. The horizontal and vertical viewing
direction and the zoom level can be selected using the keyboard or mouse.
A solid or broken line in the photo shows the approximate path
followed by the camera car, and arrows link to the next photo in each
direction. At junctions and crossings of camera car routes, more arrows are
shown.
Interactive
algorithms for path following involve direct communication with external
sources such as receiving navigation data from the leader or consulting GPS
coordinates. The Follow-the-Past algorithm is one such example; it involves
receiving and interpreting position data, orientation data, and steering angle
data from a leader vehicle. The objective is to mimic these three navigational
properties in order to accurately follow the path set by the leader. As
orientation and steering angle are associated with GPS positional data, the
following vehicle can update its navigational state to match that of the leader
vehicle at the appropriate moment in time. One developed algorithm is best
described as placing a trail of breadcrumbs based on the leading vehicle's
position. A cubic spline fit is applied to the generated breadcrumbs to
establish a smooth path along which to travel. In testing, this algorithm
followed a desired path with centimeter-level precision.
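The breadcrumb-and-spline idea described above can be sketched in a few lines: log the leader's positions as breadcrumbs, then interpolate a cubic spline through them to give the follower a smooth path. This is an illustrative sketch using a Catmull-Rom cubic spline; all names are hypothetical, and the published algorithm's details may differ:

```python
# Sketch of a "breadcrumb" path follower: the leader's logged positions are
# joined by a Catmull-Rom cubic spline, which passes exactly through every
# breadcrumb while smoothing the path between them. Illustrative only.

def catmull_rom_point(p0, p1, p2, p3, t):
    """Cubic spline point between p1 and p2 at parameter t in [0, 1]."""
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t ** 2
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def smooth_path(breadcrumbs, samples_per_segment=10):
    """Interpolate a smooth path through the recorded breadcrumb positions."""
    # Duplicate the end points so the spline covers the first and last segments.
    pts = [breadcrumbs[0]] + list(breadcrumbs) + [breadcrumbs[-1]]
    path = []
    for i in range(1, len(pts) - 2):
        for s in range(samples_per_segment):
            path.append(catmull_rom_point(pts[i - 1], pts[i],
                                          pts[i + 1], pts[i + 2],
                                          s / samples_per_segment))
    path.append(breadcrumbs[-1])
    return path

# Leader positions (x, y) logged periodically:
crumbs = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.4), (3.0, 1.2)]
path = smooth_path(crumbs)
# The interpolated path passes through every breadcrumb.
```

The follower would then steer along the densely sampled `path` rather than jumping between the sparse GPS fixes, which is what gives the smooth, continuous trajectory the algorithm relies on.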