DISCUSSION
SPEAKER E1: And then, of course, there is an influence on how the systems are operated. So the combination of machine and human is something that we need to better understand. Then, of course, machine-machine communication, which somebody mentioned before. And I think what is relevant there are notions like string stability. If we have all these ACC (adaptive cruise control) systems and, from a local perspective, everything is fine, but from a global perspective we get traffic jams, ghost traffic jams, that is something that is fundamentally problematic.
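The ghost-jam effect SPEAKER E1 describes can be illustrated with a toy car-following simulation. This is only a sketch: the gains, time gaps, speed profile, and platoon size below are illustrative assumptions, not any real ACC design, and the vehicles are point masses with no collision handling.

```python
def simulate_platoon(h, n=10, kp=0.3, kv=0.5, s0=2.0, dt=0.05, T=80.0):
    """Toy point-mass platoon under a constant-time-headway ACC law.

    Each follower accelerates with
        a_i = kp * (gap_i - s0 - h * v_i) + kv * (v_{i-1} - v_i).
    Linear analysis of this law gives string stability only when
    2 * kv * h + kp * h**2 >= 2, so a short time gap h lets a lead
    vehicle's speed dip amplify down the platoon (a ghost-jam seed).
    Returns the worst speed drop (from 25 m/s) seen by each vehicle.
    """
    v = [25.0] * n
    x = [0.0] * n
    for i in range(1, n):                  # start at equilibrium spacing
        x[i] = x[i - 1] - (s0 + h * 25.0)
    dips = [0.0] * n
    t = 0.0
    for _ in range(int(T / dt)):
        # Lead vehicle: cruise at 25 m/s, brake to 15 m/s, hold, recover.
        if t < 5.0:
            vl = 25.0
        elif t < 5.0 + 10.0 / 3.0:
            vl = 25.0 - 3.0 * (t - 5.0)
        elif t < 25.0:
            vl = 15.0
        elif t < 30.0:
            vl = 15.0 + 2.0 * (t - 25.0)
        else:
            vl = 25.0
        v[0] = vl
        new_v = v[:]
        for i in range(1, n):
            gap = x[i - 1] - x[i]
            a = kp * (gap - s0 - h * v[i]) + kv * (v[i - 1] - v[i])
            new_v[i] = max(0.0, v[i] + a * dt)   # no reversing in this toy
        for i in range(n):                        # Euler position update
            x[i] += v[i] * dt
        v = new_v
        for i in range(n):
            dips[i] = max(dips[i], 25.0 - v[i])
        t += dt
    return dips

dips_short = simulate_platoon(h=0.5)   # short time gap: string unstable
dips_long = simulate_platoon(h=2.0)    # long time gap: string stable
amp_short = dips_short[-1] / dips_short[1]
amp_long = dips_long[-1] / dips_long[1]
print(f"dip amplification, h=0.5: {amp_short:.2f}; h=2.0: {amp_long:.2f}")
```

With the short time gap, the lead vehicle's 10 m/s dip grows as it propagates down the platoon; with the long time gap it attenuates. That local-versus-global distinction is exactly what string stability captures.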
And I think we need to do a lot of work in that area, and that brings me to my last point. If we then combine machine-machine interactions and machine-human interactions, we need to think about incentive engineering: how do we make these socio-technical systems go where we want them to go? Thank you.
SPEAKER E2: Awesome. Just when I thought there was nothing more to cover, you have opened the room for even more. How about you, John, do you have even more to add?
JOHN: Oh, yeah. I'd actually just like to start by thanking you for inviting me to join you. We've had some work funded with Intel as of about two days ago, so I've had a lot of time to think about this. Mario and Alexander have said a lot of really important things, and I'd really just like to stress some of them.
I mean, I think first of all understanding how you assure mean learning– machine learning is certainly one of the really difficult technical problems. I think if we can’t do that we’re not going to get regulatory authorities to set this sort of technology. I’m popular with people who like to design cars because I actually don’t think we’re going to do level 4, level 5, levels autonomy without very substantial infrastructure support.
And the carmakers want to make ego vehicles. The fact that it's called the ego vehicle is, I think, actually quite interesting. But without the traffic lights talking to the cars and telling them, hey, I'm red now, I don't actually think we're going to be able to make that work. Which also links to something Mario or Alexander said, I forget which: we need to think about how traffic works, not just individual cars, and about how to manage traffic in a city or on a motorway.
That links to challenges to safety. I think we're going to have to work out how to solve dynamic safety problems, and actually have vehicles and systems understand safety as a first-class object and reason about risk at runtime. Mario and I hopefully have a project to do some of that. But I think that's really tough.
On human-system interaction, I fully agree. It's not clear to me that we'll ever really succeed in building level 3 autonomy. The expectation is that the driver is playing solitaire on his laptop or listening to the news, but then can instantly be back in the loop, when in reality it takes at least 10 seconds, or 30 seconds if you're unlucky. How far would you go in 30 seconds at 100 kilometers an hour? I'm just not convinced that's a realistic model.
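John's question is straightforward to check: at a constant 100 km/h, the handover distances work out to roughly 280 m for a 10-second takeover and 830 m for a 30-second one. A quick sketch, using only the speed and times stated above:

```python
def distance_m(speed_kmh: float, seconds: float) -> float:
    """Distance covered at a constant speed during a takeover delay."""
    return speed_kmh / 3.6 * seconds  # km/h -> m/s, then multiply by time

for t in (10, 30):
    print(f"{t:2d} s at 100 km/h -> {distance_m(100, t):.0f} m")
```

So even the optimistic 10-second takeover happens almost 300 metres downstream of wherever the driver was asked to re-engage.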
I think we need to understand the commercial model. In the systems I know of that I think are anywhere near being able to do autonomous driving, you have a 20,000-pound vehicle with a 200,000-pound supercomputer and sensor system. There's no way we'll sell that to private individuals. I think we're going to move to things like mobility as a service before the economics work.
In the UK, the average car is used 53 minutes a day (less now, I'm sure, during the coronavirus lockdown). If the car was on the road 16 hours a day, then the economics change completely, and we can afford to put more technology into the vehicle. Linked to that, I would also say there is an insurance perspective.
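A rough amortization using John's figures shows why utilization dominates the economics. The 5-year lifespan and straight-line amortization here are my own illustrative assumptions, and fuel, maintenance, and financing are ignored:

```python
def cost_per_hour(total_cost_gbp: float, minutes_per_day: float,
                  years: float = 5) -> float:
    """Hardware cost amortized per operating hour (straight-line, toy)."""
    hours = years * 365 * minutes_per_day / 60
    return total_cost_gbp / hours

total = 20_000 + 200_000                   # vehicle + compute/sensor stack
private = cost_per_hour(total, 53)         # UK average: 53 min/day
fleet = cost_per_hour(total, 16 * 60)      # mobility-as-a-service: 16 h/day
print(f"private: £{private:.0f}/h, fleet: £{fleet:.0f}/h")
```

At 53 minutes a day the hardware works out to roughly £136 per operating hour; at 16 hours a day it drops to about £8, an 18x difference from utilization alone.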
There's been a consultation in the UK about adopting the Automated Lane Keeping System (ALKS) standard and licensing cars with it from next year. Because we want to prove we're different now that we're about to be separate from the EU, we said, hey, why not do it at 70 miles an hour, not the 60 kilometers an hour the standard says.
And everybody that I know of has said no, including a body called Thatcham, who do the NCAP testing in the UK. They've basically said you can't do this. They are funded by the insurance companies in the UK, and if Thatcham says you can't do it, then you won't get the insurers on board. I think we need a much bigger ecosystem than we have. And I'd also like to stress one last point.
I think we need to find some better ways to get the safety and machine learning communities to work together. Actually, I think the emphasis has to be more on the safety guys: we have to show how we can add value to machine learning processes. Systems are going to be built using machine learning, so we have to be able to show we can add value, rather than stand back and criticize and say, actually, that's not good enough.
So we have to find a way of working together, and that's really tough. I'm involved in two series of workshops which have safety and AI in the title. One is led by safety people with a few AI guys; the other is the other way around. We still don't speak a common enough language, so there's a real cultural problem to overcome. And actually, where a company like Intel can really make a big difference is by acting as a forcing function to bring those groups together.
Perhaps, as Alex was saying, if we actually have common data sets that both communities can work on and start to interact with one another, all the better. I'm sure there are more things to say, but like Alex, I kept it down to only 100 points. Hope that is of some value anyway. Thanks.
SPEAKER E2: That's awesome. Maybe [INAUDIBLE] can bring [INAUDIBLE] to the US and give us a perspective. I think you can follow on very nicely from what John said, based on your work on different areas of safety and security.
SPEAKER E3: Yeah. I think both John's and Alexander's input are really interesting. I will talk about things in terms of two perspectives. One is a technology-centric perspective on how we develop safe autonomous systems, even beyond AVs. In my team we use the AV, metaphorically and literally, as a vehicle for achieving safety frameworks in autonomous systems in general.
We spent almost a decade looking at the AV safety problem, and we found that the problem is too wide. If you solve this part of the problem, then people say, well, what about other factors? So the community started to say, well, let's break it down into scenarios, treat scenarios as templates, and build scenario-based solutions scoped to an ODD (operational design domain).
But the issue with civilian driving is that there's very little way of looking at the trade-off between safety and making progress. If I drive from Philadelphia to Manhattan and arrive five minutes late, people say, OK, no big deal, it's probably traffic. But they don't say, oh, it's good that you're actually alive, and it's OK that you arrived five minutes late. We just overlook that.
So this trade-off is not very well defined. The safety community says, oh, we need to be as safe as possible, and only then make progress. But if you put a car like that on the road, the product guys will say, no way, we're not going to do that. And like driving in Jerusalem, the whole point was that if you can drive there, you can drive anywhere else. But I come from Bangalore; if you can drive there, you can drive anywhere else.
[LAUGHING]
[INAUDIBLE] then [INAUDIBLE] especially in the last 20 years. So what we started thinking about five years ago was how we still come up with safety concepts, but with a really clear scope for how this trade-off works between safety and aggressive, or assertive, driving. And so we said, well, our ODD of choice is going to be racing, because in racing you are pushing the limits of perception, planning, and control. You are always operating at those limits.
And there you have a very clear trade-off: you want to be safe enough, but as aggressive or assertive as possible. If you are half a second too slow, that's a career choice over there. In racing the track is also known, so we remove those other aspects. And then we focus on how you go from sim to real. So we build simulators, and we build perception, planning, and control pipelines.
And then we also bridge that to real vehicles. But we don't go straight to a full-scale real vehicle; we go to 1/10-scale vehicles. That's why we built the F1TENTH community, which now has over 80 universities, and we just had a front-end competition in [INAUDIBLE] with over 63 participants.
So that's the technology aspect. Obviously, racing as an ODD is the path we are taking; I'm not saying that's the path [INAUDIBLE] Intel should take. But I think that path is giving us very good insights into how to balance safety and performance dynamically in all the different contexts we encounter as we are driving.
The second aspect is totally non-technical: how do you build your technology and talent pipeline? To be able to answer these questions, we say, oh, well, the safety community should meet the controls and formal methods people, who should talk to the classic transportation folks. But these communities are very, very happy staying inside their own communities.
There are a few rogues like me who will hop around these communities, and then everyone says, hey, you don't belong here, how come you are giving an invited talk here? And the other communities all tell me the same thing. So my conclusion was that those of us trying to solve these cross-disciplinary problems are always going to be minorities in those communities.
So we have to start building a talent pool where people don't just focus on the CV aspect, or only on a certain functional safety aspect. You have to be trained to look at system design as a whole. So, going back to that point, that's why we have started this entire training program across these [INAUDIBLE] universities, where people look across perception, planning, and control,