I already gave one answer to this, but I'd like to add another from a broader perspective.
It seems the core problem is that each driver bases his actions on mental predictions of what the other drivers will do. For example, when I drive I can tell when a car is about to pull in front of me, even before it signals, from how it is positioned relative to the gap between me and the car ahead. That driver, in turn, can tell that I have seen him because I am dropping back to make room, so it is safe to pull in. A good driver picks up a lot of these subtle cues, and is therefore very difficult to model.
So the first step is to find out which aspects of real driving the failing models leave out, and work out how to put them in.
(Hint: all models are wrong, but some models are useful.)
I suspect the answer will involve giving each simulated driver one or more mental models of what every other driver will do. That means running Driver 1's planning algorithm under several different assumptions about Driver 2's intentions, while Driver 2 simultaneously does the same about Driver 1.
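A minimal sketch of this nested-model idea, in Python for concreteness. The two-driver scenario, the action names, and the depth-limited "he thinks that I think..." recursion (a so-called level-k scheme) are all invented for illustration; the answer above does not prescribe any particular scheme:

```python
# Signalled intentions each driver can read off the road at level 0.
SIGNALLED = {"merging": "merge", "trailing": "maintain_speed"}

def default_action(driver):
    # Level 0: take the other driver's signalled intention at face value.
    return SIGNALLED[driver]

def best_response(driver, predicted_other):
    if driver == "trailing":
        # Make a gap only if I predict the other car will pull in.
        return "make_gap" if predicted_other == "merge" else "maintain_speed"
    else:  # "merging"
        # Pull in only if I predict the trailing car will make room.
        return "merge" if predicted_other == "make_gap" else "wait"

def plan(driver, other, depth):
    """Choose an action by recursively planning on the other driver's
    behalf, one level of mutual modelling at a time."""
    if depth == 0:
        prediction = default_action(other)
    else:
        prediction = plan(other, driver, depth - 1)
    return best_response(driver, prediction)
```

For example, at depth 1 the merging driver predicts that the trailing driver (planning at depth 0) will make a gap, and so decides to merge.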
This could be very difficult to retrofit onto an existing simulator, especially one written in a conventional imperative language, because the planning algorithm may have side effects, even if only in how it updates the data structures it is passed. A functional language might fare better.
In addition, the interdependence between the drivers probably means there is a fixed point lurking in there somewhere, which lazy languages tend to be good at.
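To make the fixed-point remark concrete: the mutual dependence can be resolved by iterating each driver's best response to the others' current plan until the joint plan stops changing. This is sketched below in plain Python (a lazy language could instead express it as a recursive definition); the scenario and function names are again hypothetical:

```python
def solve_joint_plan(best_responses, initial, max_iter=100):
    """Iterate every driver's best response to the others' current plan
    until nothing changes: a fixed point of the mutual predictions."""
    plan = dict(initial)
    for _ in range(max_iter):
        new_plan = {d: br(plan) for d, br in best_responses.items()}
        if new_plan == plan:
            return plan  # fixed point reached
        plan = new_plan
    raise RuntimeError("drivers' predictions never stabilised")

# Toy scenario: one car wants to merge, the other may make a gap.
responses = {
    "merging":  lambda p: "merge" if p["trailing"] == "make_gap" else "signal",
    "trailing": lambda p: ("make_gap"
                           if p["merging"] in ("merge", "signal")
                           else "maintain_speed"),
}
start = {"merging": "signal", "trailing": "maintain_speed"}
```

Starting from `start`, the iteration settles on the merging car merging and the trailing car making a gap, a joint plan consistent with both drivers' predictions of each other.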