In the city of the future, no one will drive a car. Instead, you’ll be ferried around by a fleet of autonomous electric cars that will whiz through the city 24-7, their speed and direction entirely guided by on-board and cloud-based computer systems. These cars will never idle, never park. Unless they’re picking up or dropping off passengers or packages, or charging their batteries, they will barely need to pause, their flow along the streets as perfectly calibrated as a flock of starlings in the sky. When they need to be maintained, cleaned or stored, they will be sent to underground lots, or to dedicated zones outside the city—no need to use valuable surface real estate. Ordering a car will be as routine as streaming a movie on Netflix, and these on-demand vehicles will take you to work, to friends’ homes, on dates or on road trips. You’ll be able to read, nap, watch TV or have sex until you get to your destination. Your groceries, dry cleaning and the lumber to build a new deck will arrive at your home via other dedicated robo-vehicles. Gas stations will no longer exist. Parking lots will become parkland. Carbon emissions will drop. Congestion will ease, with more passengers sharing rides and vehicles routed efficiently. Because these cars will be dramatically safer, vehicular accidents and deaths will be a thing of the past. The cars will look different, too; without steering wheels or gas pedals or the need for drivers to sit in a front seat, roomier new designs will emerge.

Raquel Urtasun is making the dream of self-driving cars a reality. She’s the chief scientist at Uber Advanced Technologies Group, the company’s self-driving transportation lab in Toronto. Currently headquartered on the seventh floor of the MaRS building at College and University, it’s Uber’s only self-driving lab outside the U.S. and the only one dedicated expressly to building the “brain” for the company’s self-driving fleet. Uber started the division in 2015, betting that its ride-hailing business could bring in 70 per cent more revenue without human drivers.

Two years later, Uber hired Urtasun, a 41-year-old U of T computer science professor and a luminary of the artificial intelligence world. It was a match made in start-up heaven: Urtasun had already been working on self-driving for eight years. She considered it the “killer app” of AI, an immensely complex problem that, when finally solved, would transform how we move, work, live and play. And Uber’s deep pockets and appetite for growth would give her access to resources and data collection impossible to obtain in academia. “This is the best AI research lab for self-driving in the world,” Urtasun tells me.

The all-female senior leadership team: technology program manager Olga Palatnik, chief scientist Raquel Urtasun and senior engineering manager Inmar Givoni

The Toronto lab employs 60 people, and that number will double over the next year. This summer, they’ll move from their current home in the MaRS building into a larger lab near Bathurst and College. Last August, Toyota, the world’s largest automaker, and not one to make risky investments, struck a half-billion-dollar deal with Uber to develop self-driving cars. The Japanese multinational SoftBank and a consortium of other investors are rumoured to be buying a $1-billion stake in the company’s autonomous vehicles division.

The self-driving mission also offers Uber a chance to rehabilitate its image. Under its pugnacious founder and former CEO Travis Kalanick, Uber became synonymous with Silicon Valley hubris, prizing fierce expansion over just about everything else. It ran afoul of municipal governments and regulators from Seattle to Seoul. Allegations of harassment in the workplace, sexual and otherwise, were common. Google’s self-driving car offshoot, Waymo, sued Uber for stealing trade secrets, eventually settling for hundreds of millions in Uber equity (a telling vote of confidence). The new CEO, Dara Khosrowshahi, has overhauled the company’s frat-bro culture, implemented more rigorous driver screening processes and created additional safety features in the app.

Uber needs self-driving and the vast amount of money it represents. Ride-hailing is replacing private car ownership, particularly in cities, and Uber controls almost 70 per cent of the market. But it’s losing money—$865 million (U.S.) in the last quarter of 2018—and its recent IPO was a disappointment. Autonomous driving tech is expected to add $7 trillion (U.S.) to the global economy by 2050. According to Urtasun, self-driving at scale—that is, where the tech is affordable and safe for everyone to use—could be just a decade away. If all goes to plan, the company will gradually integrate driverless cars into its enormous, ambitious network long before anyone else can get there. “If we can’t do this at Uber,” Urtasun says, “nobody else is going to be able to do it.”

A self-driving car is essentially a robot on wheels. A robot on wheels with the compound eyes of a dragonfly, able to see in all directions at the same time. But before it can do that, it needs a lot of information. Uber’s prototype cars bristle with sensors, dozens of them, including cameras, radar and lidar—a radar-like scanner that uses laser light instead of radio waves. These sensors suck up as much information around the car as possible: where lanes and curbs are, crosswalks, traffic lights, other vehicles, pedestrians, bicycles, anything and everything on or near the route that the car is travelling. All these data and images are fed into a central computer that controls the steering system, acceleration and braking. The computer then uses this information to generate high-definition 3-D maps that the car uses for navigation.

The car must not only know what and where these objects are, but also how they’re going to behave. So self-driving car companies have spent years tooling around roads in various cities, gathering data to develop models for identifying objects and predicting what they will do. It’s impossible to write algorithms for all the scenarios a car and driver could encounter, so Uber’s AI system teaches itself through the complex interplay of previously gathered examples. The computation this requires must occur in milliseconds.
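That perception-to-action cycle can be sketched in code. This is a toy illustration, not Uber’s software: the four stage functions below are hypothetical stand-ins for the real detection, tracking, prediction and planning systems. What it does convey is the structure of the loop and the hard real-time deadline every stage must fit inside.

```python
# Illustrative sketch only: the per-frame loop a self-driving stack runs.
# The stage functions (detect, track, predict, plan) are placeholders,
# not Uber's actual code. The point is the millisecond budget: every
# stage must finish before the next sensor frame arrives.
import time

FRAME_BUDGET_MS = 100  # e.g. a 10 Hz lidar sweep leaves ~100 ms per frame


def detect(sensor_frame):
    # Stand-in: find objects (cars, pedestrians, bikes) in the raw data.
    return [{"id": i, "kind": kind} for i, kind in enumerate(sensor_frame)]


def track(objects):
    # Stand-in: associate detections with objects seen in earlier frames.
    return objects


def predict(tracks):
    # Stand-in: estimate where each tracked object will move next.
    return [{**t, "next_pos": "ahead"} for t in tracks]


def plan(predictions):
    # Stand-in: choose a driving action given the predicted motion.
    if any(p["kind"] == "pedestrian" for p in predictions):
        return "slow_down"
    return "proceed"


def process_frame(sensor_frame):
    start = time.monotonic()
    action = plan(predict(track(detect(sensor_frame))))
    elapsed_ms = (time.monotonic() - start) * 1000
    assert elapsed_ms < FRAME_BUDGET_MS, "missed the real-time deadline"
    return action


print(process_frame(["car", "pedestrian", "bicycle"]))  # → slow_down
```

In a real stack each placeholder is a large machine-learned model, but the deadline logic is the essential constraint: a decision that arrives after the next frame is useless.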

The first self-driving car appeared in 1986. A German aerospace engineer named Ernst Dickmanns outfitted a Mercedes van with cameras, sensors and computers, and tested it on the grounds of the Munich university where he worked. By the early 1990s, the German automaker Daimler-Benz was bankrolling Dickmanns’s research in the hopes of employing his technology in passenger cars. In 1994, Dickmanns let loose a pair of driverless Mercedes 500s, with passengers, on a highway in France. The cars reached speeds of 130 kilometres per hour, changed lanes and reacted to other cars. Dickmanns’s invention proved to be ahead of its time, or at least ahead of the power of computers.

His cars caught the attention of the Defense Advanced Research Projects Agency, the U.S. defence research agency that’s credited with inventing the Internet. For many years, the Pentagon had been pushing defence contractors to build autonomous tech that would help reduce casualties. When they proved too slow, DARPA decided in 2004 to hold a competition open to anyone in the world: build a robot car, get it to drive 228 kilometres across the Mojave Desert, and win a million dollars. The teams that entered in subsequent years—brainiacs from Stanford and Carnegie Mellon—became the engineers who launched the self-driving efforts at Tesla, Google and Uber.

Over the next couple of years, the companies vying for self-driving supremacy would become legion. The titans of Silicon Valley: Google, Apple, Tesla. The automobile manufacturers: Ford, GM, Honda, Daimler. The start-ups, with their peppy names: Zoox, Aurora and Voyage. The prodigy: George Hotz, the 26-year-old San Francisco hacker who built a self-driving car in his garage in a month. Just this February, the federal government gave BlackBerry—BlackBerry!—$40 million for software development and skills training in its autonomous vehicle division.

When Uber decided to create a self-driving lab for research and development, their first and only choice to run it was Urtasun. She agreed, on the condition that the lab was based in Toronto. Urtasun and Uber were drawn to Toronto for much the same reasons. Under Geoff Hinton, the so-called godfather of deep learning, U of T has become an epicentre of AI research. The talent is here, with almost nine per cent of the city’s labour force working in tech-related jobs and a government keen to entice foreign talent and fund research. On a practical level, Toronto is good for data collection because of its complex traffic patterns, streetcar lines and endless construction. The city is a perpetual-motion machine. Uber’s arrival would help expand the tech sector, which would in turn lead to more construction, which would make for even better data collection.

In 2016, Ontario launched a 10-year pilot program that allowed the testing of driverless cars on our roads. It was the first such program in the country. Seven companies and institutions—including Uber, the University of Waterloo, Magna and the Erwin Hymer Group, which is developing autonomous RVs—were selected as participants. In the initial years of the program, that testing took place largely on closed courses, but in January of this year, participants in the program were at last given access to public roadways. The robo-cars were suddenly among us.

In February and March, I visited Uber’s lab, where Urtasun walked me through some of her research and introduced me to members of her team. The office is modern, bright and busy, with banks of workstations topped with enormous computer monitors and staffed by dozens of whiz kids from all over the world. I never saw anyone over the age of 45. Floor-to-ceiling windows look out onto Queen’s Park Circle. Conference rooms are named for scientific pioneers—Bell, Banting—and the kitchen is well stocked with teas, coconut water, protein powder, potato chips and overflowing bowls of fruit. It could be a computer lab at any major university in North America. Which is, essentially, what it is. Uber’s relationship with U of T remains exceptionally tight, and a number of the staff are Urtasun’s PhD students who are working full time at the company while completing their theses.

Urtasun doesn’t have an office. She works at a standing desk in a corner, alongside the rest of the staff. She’s 43 but looks much younger, and possesses the self-contained swagger of someone who knows a lot of things that the rest of us don’t. She’s slim and athletic—she played competitive basketball for 15 years—with a frequent, off-kilter smile that makes her look like Charlotte Gainsbourg’s bookish sister. Born and raised in Pamplona, Spain, she speaks English—her third language after Spanish and French—rapidly and formally, with a pronounced accent (when she says “Volvo,” for instance, it sounds like “bolbo”). She loved math and video games as a kid, and when she attended university in her hometown, she initially studied electrical engineering. But image processing and machine learning fascinated her—algorithms, she says, were fun, like solving puzzles—and she wound up doing a PhD in computer science. “I was very interested in how to teach computers how to understand the world as we do,” she says. Over the years, she’s built an algorithm that can create Christmas carols and another that analyzes whether or not certain clothes are fashionable.

After postdocs at MIT and Berkeley, she became a professor at the Toyota Technology Institute, affiliated with the University of Chicago. In 2014, she moved to U of T, and a year later became Canada Research Chair in Machine Learning and Computer Vision, one of the most prestigious academic fellowships in the country. Along with Hinton, she co-founded the Vector Institute, a $135-million think tank just down the hall from Uber, which has led the boom in AI research in Canada. Since joining Uber in 2017, Urtasun’s renown, and the demand for her expertise, has soared. In 2017, Wired magazine called her an “AI superstar.”

Part of her fame comes, predictably, from being a woman in a field that still skews male. Not coincidentally, Urtasun’s senior leadership team is currently all women. Her two main lieutenants are Inmar Givoni, a lavender-haired Israeli who runs the Toronto engineering team, and the Ukrainian-born Olga Palatnik, who previously worked for Kobo and is now the lab’s technology program manager. “It’s easier to build an inclusive, diverse environment,” says Urtasun, “if you have leadership that’s been subject to bias.”

Another condition of Urtasun taking the job at Uber—“my value proposition,” she calls it—was that she and her staff be able to freely publish their research. That seemed bonkers to me given the apparent intensity of the competition, but Urtasun is playing a long game. None of the algorithms her lab is developing now, she says, is sophisticated enough to fully solve the enormous technical problems of self-driving. The only way for self-driving to become a reality, therefore, is for researchers to share knowledge. That sharing, in turn, attracts more and better talent, who will take the research even further.

In one of our meetings, Urtasun ran through a presentation that illustrates how the tech works. She showed me a series of images, 3-D and 2-D maps, simulations and other digitizations. Sometimes I could follow; at other times, I felt like I was watching someone play the world’s most inscrutable video game. She demonstrated how her system could generate maps on the fly. On one screen, computer-generated buildings and trees and pedestrians flew past, each object fixed in colour-coded bounding boxes. Another video depicted a traffic jam and trajectories for all the vehicles ahead of and behind us.

Just about every company in the self-driving game makes 3-D maps of the cities in which their vehicles operate. But these maps take a lot of time and money to produce. They require vehicles to pass several times, in both directions, through city streets, gathering data about every object in their vicinity. Those objects then have to be labelled by a crew of human operators. According to industry estimates, it would cost $2 billion to map the entire United States just once. Uber’s AI, by contrast, can extrapolate and build a map from fewer sweeps of the environment. According to Urtasun, Uber’s mapping techniques will cut the cost of building self-driving technology by millions of dollars.

Other companies’ systems rely on a complex software pipeline in which each task the car needs to accomplish—detection, tracking, prediction, motion planning—requires a separate, interlocking component. And each of these components requires hundreds of people to engineer it. This, Urtasun says, is what’s slowing down the development of self-driving. Uber’s system will do away with all that. It will use a single, holistic AI system that, she says, is designed to focus entirely on its end goal, and which can train itself to get better at pursuing that goal. “It actually learns to drive,” she says of the system.
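The contrast Urtasun draws can be made concrete with a small sketch. This is conceptual, not Uber’s code: the component names are illustrative placeholders. A modular pipeline hands results between separately engineered stages, while an end-to-end system is one learned function from raw sensor input to a driving decision.

```python
# A conceptual sketch, not Uber's code, of the modular-vs-end-to-end
# contrast. Every function name here is an illustrative placeholder.


def modular_drive(sensors, detector, tracker, predictor, planner):
    # Each stage is built and tuned by its own engineering team; its
    # output feeds the next, so errors propagate down the chain.
    return planner(predictor(tracker(detector(sensors))))


def end_to_end_drive(sensors, model):
    # One learned function maps raw sensor input straight to a driving
    # decision, and is trained as a whole toward that single end goal.
    return model(sensors)


# Toy usage: each "component" just tags the data it has processed, to
# make the hand-offs in the modular pipeline visible.
result = modular_drive(
    "frame",
    detector=lambda x: x + "->detected",
    tracker=lambda x: x + "->tracked",
    predictor=lambda x: x + "->predicted",
    planner=lambda x: x + "->planned",
)
print(result)  # → frame->detected->tracked->predicted->planned
```

The engineering argument follows from the shapes of the two functions: improving `modular_drive` means improving four hand-tuned interfaces, while `end_to_end_drive` has a single trainable surface.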

“So why then would other companies invest in those software components?” I ask. “Why wouldn’t they just wait for you to finish what you’re doing, then use your work?”

“By the time they see what we’ve published, we’ll already be on to the next generation. And when this is ready, it’ll kick ass over everybody else.”

Uber is fond of declaring its positive effect on the Toronto tech sector. There has been an influx of workers from around the world—in 2017, the GTA generated more tech jobs than the Bay Area, Seattle and Washington, D.C., combined. In addition to the expansion of its research lab, Uber plans to invest another $200 million in a new engineering hub here, eventually employing 500 people. “There is no longer a brain drain,” Urtasun says. “It’s a brain gain.”

But what effect will self-driving have on Toronto’s streets? In 2015, David Ticoll, a fellow at the Munk School of Global Affairs, produced a rhapsodic report predicting that autonomous vehicles will be the prevailing mode of transportation by the 2030s. Assuming 90 per cent market penetration, Ticoll estimates there will be 12,000 fewer road accidents here each year, 38 fewer fatalities, a lot fewer injuries and savings of $1.2 billion in collision costs. Self-driving cars will save us money in a bunch of other ways, too: $2.7 billion in congestion costs, $1.6 billion in car insurance, half a billion in parking fees and fines. And autonomous vehicles—assuming they’re electric, which is Uber’s plan—will eventually reduce traffic emissions by 87 to 94 per cent.

The report identifies three likely scenarios to emerge in the next decade or so, the most “transformative and advantageous” closely resembling Urtasun’s vision. Torontonians will give up their personal cars in favour of walking, biking, mass transit, pod-like automated taxis for one or two people, and automated minibuses. There will be new jobs—for people who can build autonomous vehicles, people who can redesign roads, and people who sell stuff, make food and run courier companies—and delivery robots will make getting stuff around the city easier, cheaper and faster. But automation will also, of course, eliminate jobs. Task-specific robots that pick up garbage, remove snow and clean streets will be as common as cornflakes. So long, bus drivers, taxi drivers, truck drivers. Oh, and so long, streetcars: automated taxis and minibuses will replace those. The city will also have to make up the revenue it collects from the federal gas tax, about $270 million, as well as from parking fees and tickets.

At street level, Ticoll says, autonomous vehicles picking up and dropping off people and goods will vie for curb space. Streets will need dedicated lanes for bikes and scooters as well as autonomous vehicles. And while self-driving might reduce the number of vehicles in Toronto, it won’t reduce the number of vehicles on the roads. Right now, most cars spend about 95 per cent of their time parked. Autonomous vehicles will allow cars to stay on the road almost perpetually. Though self-driving could alleviate the causes of congestion, it will also likely stimulate additional demand for car travel—if you can spend your time binge-watching Game of Thrones for the 15th time, you might not balk too much at a daily commute from Hamilton. And if you don’t actually need to steer or reach the pedals, if there is in fact no steering wheel or accelerator, how soon before your tween starts borrowing autonomous vehicles for their next slumber party? Ticoll’s report didn’t really touch on these questions, but academics have other concerns about self-driving cars: that they’ll need more technology to mitigate safety risks, that they’ll be hacked and weaponized.

In 2016, the city created a staff position focused on preparing Toronto for autonomous vehicles—the first such position at any government body in Canada. That staffer, Ryan Lanyon, has produced his own report, which details the impact of self-driving on everything from employment to emergency vehicles. Only one pilot project, set to start late next year, has been announced: a new transit route, location to be determined, that will use driverless shuttle vehicles carrying eight to 12 passengers. (Lanyon would not comment on any of this.)

It’s difficult to imagine a city like Toronto, notoriously slow to implement even rudimentary bike lanes or to complete the Union Station revitalization, ever catching up with the futuristic visions of Silicon Valley. Uber was not named explicitly in any of Toronto’s planning, but one of the recommendations that council voted on was researching collaborative opportunities with Google’s Sidewalk Labs, which had proposed using autonomous vehicles in its Quayside development. Sidewalk Labs promised urban revolution too, but after a year and a half, amid privacy concerns, governmental squabbling and competing ideas over public space, that revolution seems to have stalled.

Last March, in Tempe, Arizona, disaster struck. One of Uber’s autonomous vehicles, operating in automated driving mode, killed a pedestrian, the first time such an accident had occurred. Uber was deemed not criminally responsible, but the company pulled all its self-driving cars off the road and considered pulling the plug on the program altogether. Instead, Uber put self-driving cars back on the road. More or less. In Toronto and San Francisco, they’re operated by two human drivers, exclusively in manual mode. Their only job is to collect data. The cars are confined to very small areas downtown and can operate only at low speeds. (In Pittsburgh, they’re again running autonomously.)

Uber seemed to be slowing down right when it needed to speed up. In December of that year, Waymo, Google’s self-driving car project, launched a self-driving taxi service in Phoenix. It was small-scale, operating only in a 160-kilometre zone around the city, catering to a curated customer base. A human driver had to be in the car in case of problems. But still, it seemed like a great leap forward, and a commercial one at that. A 15-minute, five-kilometre trip cost just a few cents more than a regular Uber trip of the same distance (and will cost much less once there’s no need for humans in the car).

Thanks to such public displays of progress, Waymo appears to be leading the self-driving race. But it can be difficult to accurately assess the competition. Most companies are highly secretive about their tech, and the data they do report—the number of times a human driver has to take the wheel, the distance driven—aren’t measured consistently. A kilometre driven in snowy Pittsburgh, for example, is not the same as a kilometre driven in temperate Phoenix.

I wanted to get in an autonomous vehicle and see for myself, but after the death in Tempe, such media access is no longer possible. Uber did, however, allow me to look at one, bringing it up to University Avenue from a garage below MaRS—a gleaming silver Volvo XC90 SUV. Two safety drivers sat in the front and two in the back, not doing much of anything. Urtasun pointed out a few features: the periscope-like lidar device mounted to the roof, seven recessed cameras just below it and radar sensors at the front of the vehicle. But I wasn’t allowed to sit inside or take pictures of the car’s interior, or really even look inside the car. Tinted rear windows hid the computer and cooling systems that filled the trunk. After hearing about all the wondrous technology that Urtasun’s team had dreamed up and was on the verge of deploying, staring at a parked car with its windows rolled up was anticlimactic and frustrating—like going to the Louvre and not getting to see the Mona Lisa.

Urtasun shrugged. “Getting into a car,” she says, “you’re not going to see anything. They’re comfortable, there’s no intervention, they look great. But that’s not proving anything. Those metrics are not the real metrics of leadership.” For her, the real proof is in her papers, in the math, in her mind—not on the road.

Urtasun walks to work. She has never owned a car. She’s more into motorcycles, she says, though she doesn’t currently own one of those, either. With all the transportation options now available, she doesn’t understand why anyone owns a car in this city. The future of self-driving, for her, is almost socialist in nature: shared, cheap transportation, where you’d no more own a car than you’d own your own jetliner.

And yet it’s hard to square that vision with the capitalistic realpolitik that has traditionally governed Uber. Khosrowshahi has called Uber the “Amazon of transportation,” a comparison that would chill any idealist’s heart. In our last meeting, I asked Urtasun how much pressure there was on her. What if Uber runs out of money before self-driving becomes the reality she dreams of? Urtasun spends her days baking uncertainty into her algorithms, but uncertainty doesn’t seem to trouble her outside the lab. “For me, this is not necessarily a concern.”

Maybe my questions were too earthbound. In one conversation, when Urtasun was describing her fantasy of a self-driving future, she talked briefly about another Uber project. The world, she explained, is three-dimensional, but roads are only two-dimensional—one-dimensional, sometimes—and it’s only a matter of time before we’re using the space above those roads to get around the city. I thought, for a minute, that she might be joking. But no. “We will have self-driving flying things,” she said, smiling. Urtasun’s optimism is contagious. For the moment, I forgot about contested curbs, dedicated lanes, weaponized cars. It was like someone else had taken the wheel.


This story originally appeared in the May 2019 issue of Toronto Life magazine.