In the hypothetical scenario you're developing, with roughly 10 in-flight requests per core, it's perfectly fine for each request to live out its whole life cycle on a single core, as long as requests are dispatched to cores reasonably (perhaps even the simplest round-robin load balancing).
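For concreteness, here's a minimal single-machine sketch of that setup (my own illustration, not anything from the question): one worker process per core, the simplest round-robin dispatch, and every request born, served, and answered inside the same process. The body of handle_request is a hypothetical stand-in for real work.

    import itertools
    import multiprocessing as mp

    def handle_request(req):
        # Stand-in for the real per-request work; the whole request is
        # served inside whichever worker process it was dispatched to.
        return req * req

    def worker(inbox, outbox):
        # Pull requests until the None sentinel arrives.
        for req in iter(inbox.get, None):
            outbox.put(handle_request(req))

    if __name__ == "__main__":
        n_cores = mp.cpu_count()
        inboxes = [mp.Queue() for _ in range(n_cores)]
        results = mp.Queue()
        procs = [mp.Process(target=worker, args=(q, results))
                 for q in inboxes]
        for p in procs:
            p.start()

        # The simplest cyclic (round-robin) dispatch across cores.
        dispatch = itertools.cycle(inboxes)
        n_requests = 10 * n_cores      # ~10 in-flight requests per core
        for req in range(n_requests):
            next(dispatch).put(req)

        print(sorted(results.get() for _ in range(n_requests)))

        for q in inboxes:
            q.put(None)                # shut each worker down
        for p in procs:
            p.join()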
Point is, this scenario is only ONE possibility: heavy requests that can really benefit (in terms of lower latency) from marshaling multiple cores per request are certainly an alternative. I suspect your scenario is the more common one on today's web, but it would be nice to handle both kinds, plus "batch-like" background processing... especially since the number of cores (as opposed to each core's speed) is what is increasing these days, and will continue to increase.
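By way of contrast, here's a minimal sketch of that "heavy request" alternative (again my own illustration, with hypothetical names like handle_heavy_request): the work of a single request is split across several cores to shave latency, rather than one core owning the whole request.

    import multiprocessing as mp

    def square_sum(chunk):
        # One slice of a single heavy request's work.
        return sum(x * x for x in chunk)

    def handle_heavy_request(data, n_cores):
        # Marshal several cores for ONE request: split the input,
        # fan the pieces out, and combine the partial results.
        chunks = [data[i::n_cores] for i in range(n_cores)]
        with mp.Pool(n_cores) as pool:
            return sum(pool.map(square_sum, chunks))

    if __name__ == "__main__":
        print(handle_heavy_request(list(range(1_000_000)), mp.cpu_count()))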
Far be it from me to dispute the wisdom of Jacob Kaplan-Moss, but I'm used to getting very good concurrency at my employer in nicer, more explicit and transparent ways than what he seems to be advocating: mapreduce for batch-like jobs, hash-based distribution for enrolling N backends to work in parallel on a single request, and so on.
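To make that concrete, here's a toy, single-process sketch of the mapreduce-plus-hash-distribution pattern I mean; all names are illustrative, not my employer's actual infrastructure. Mapped (key, value) pairs are hash-partitioned across N "backends", and each backend reduces its own disjoint slice of the key space.

    from collections import defaultdict

    def map_phase(lines):
        # Classic word-count mapper: emit (word, 1) pairs.
        for line in lines:
            for word in line.split():
                yield word, 1

    def partition(pairs, n_backends):
        # Hash distribution: a key always lands on the same backend, so
        # each of the N reducers owns a disjoint slice of the key space.
        # (Across real processes you'd want a stable hash such as
        # hashlib.md5, since Python's hash() is randomized per run.)
        shards = [defaultdict(int) for _ in range(n_backends)]
        for key, value in pairs:
            shards[hash(key) % n_backends][key] += value
        return shards

    def reduce_phase(shards):
        # Each shard could run on its own core or backend; merging is
        # trivial because the key spaces never overlap.
        merged = {}
        for shard in shards:
            merged.update(shard)
        return merged

    lines = ["the quick brown fox", "the lazy dog", "the quick fox"]
    print(reduce_phase(partition(map_phase(lines), 4)))
    # {'the': 3, 'quick': 2, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}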
Maybe I just don't have enough real-world experience with (say) Erlang, Scala, or Haskell's newer software transactional memory to see how wonderfully they scale to keeping tens or hundreds of thousands of cores highly utilized on low-QPS, high-work-per-query workloads... but it seems to me that the silver bullet for this scenario (minus the relatively limited subset of cases where you can turn to mapreduce, Pregel, sharding, and the like) has not yet been invented in ANY language. With an explicit, carefully thought-out architecture, Python is certainly no worse than Java, C#, or C++ at handling such scenarios, at least in my experience.