Current hot topics in concurrent programming?

I hope this is the right place to ask [1], but I have read a lot of good comments on other topics here, so I'll just ask. I'm currently looking for a topic for my dissertation (the equivalent of a Ph.D. outside Germany, I believe) that should have something to do with parallelism or concurrency, but otherwise I'm completely free to choose whatever interests me. Anything GPU-related is out, though, because a colleague of mine is already working on that topic, and we would like me to do something else :)

So, the magic question: what would you say are interesting topics in this area? Personally, I am interested in parallel functional programming languages and virtual machines in general, but I would say that a lot of work has already been done there or is being actively researched (for example, in the Haskell community).

I would really appreciate any pointers to other interesting topics.

Regards, Michael

PS: I already looked at https://stackoverflow.com/questions/212253/what-are-the-developments-going-on-in-all-languages-in-parallel-programming-area , but there were no answers.

[1] I already asked on http://lambda-the-ultimate.org , but unfortunately the answers were not what I had hoped for.

+4
8 answers

Another area to look into is automatic parallelization. That is, given a sequence of instructions S0..Sn, derive several instruction sequences that perform the same work in fewer steps by executing independent parts in parallel.
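
To make that concrete, here is a minimal Haskell sketch (my own toy example, assuming the `parallel` package; it is not output of any real auto-paralleliser) of the kind of schedule such a tool might emit for two statements with no data dependence between them:

    -- Toy sketch: S1 and S2 do not depend on each other, so a parallelising
    -- compiler could spark one while evaluating the other.
    -- Requires the `parallel` package; compile with
    --   ghc -threaded -rtsopts -O2
    -- and run with +RTS -N2.
    import Control.Parallel (par, pseq)

    -- Two independent "instructions": expensive pure computations.
    s1, s2 :: Integer
    s1 = sum [1 .. 5000000]
    s2 = sum [n * n | n <- [1 .. 3000000]]

    main :: IO ()
    main =
      -- Sequential schedule: evaluate s1, then s2, then combine.
      -- Parallel schedule: spark s1, force s2, then combine the results.
      let combined = s1 `par` (s2 `pseq` (s1 + s2))
      in print combined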

+6

Erlang programming!

+5

Off the top of my head:

  • Load balancing and ways to achieve the best degree of parallelization. I think this can be a very good starting point for a PhD, because you can propose a new methodology and compare it against concrete measurements (number of steps, as already mentioned, CPU usage, memory usage, etc.), either in general or for a specific algorithm or class of tasks (for example, image processing). A small sketch of the main tuning knob involved is shown after this list.

  • Parallel garbage collection. There are many collection algorithms and many ways of representing objects in memory. For example, there is recent work from the Haskell community on parallel GC: http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel-gc/index.htm Again, there is a good way to present your results and compare them with others, and it leaves you flexibility in the end: you can later focus on parallel data structures, synchronization primitives, algorithms, and so on.
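
As a small illustration of the first point (load balancing), here is a toy Haskell sketch using the Strategies from the `parallel` package; everything in it is my own made-up example, and the chunk size is the load-balancing knob being tuned:

    -- Toy illustration of the load-balancing trade-off: smaller chunks spread
    -- uneven work more fairly across cores, but add scheduling overhead.
    -- Requires the `parallel` package; compile with ghc -threaded -rtsopts -O2.
    import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

    -- Deliberately uneven per-item cost, so a naive even split is unbalanced.
    cost :: Int -> Int
    cost n = sum [1 .. n * n] `mod` 1000003

    -- Evaluate the work items in parallel chunks of the given size.
    balanced :: Int -> [Int] -> [Int]
    balanced chunkSize xs = map cost xs `using` parListChunk chunkSize rdeepseq

    main :: IO ()
    main = print (sum (balanced 50 [1 .. 2000]))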

+3

You probably already have your Ph.D. by now ;). In any case: fault tolerance in massively parallel systems comes to mind.

+2

Parallel processing engines and rule engines are highly visible topics in the commercial / industrial computing world. So what about looking at parallel implementations of the Rete algorithm (introductory descriptions here and here), the foundation of many commercial business-rule engines? Are there variants of the Rete network that are better suited to parallelization? Can a vanilla Rete network be partitioned into multiple networks that can be executed more efficiently in parallel? And so on.
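
Just to make those research questions tangible, here is a toy Haskell sketch. It is not a real Rete network (the rule names, facts, and structure are invented for illustration); it only shows the simplest kind of match-phase parallelism, where each rule's condition is tested against the working memory independently:

    -- Toy sketch, not Rete: each rule's condition is independent, so the
    -- match phase can run in parallel over the rules.  A real Rete engine
    -- shares partial matches between rules, which is exactly what makes
    -- parallelising it an interesting problem.
    -- Requires the `parallel` package.
    import Control.Parallel.Strategies (parMap, rdeepseq)

    type Fact = (String, Int)             -- e.g. ("temperature", 80)
    type Rule = (String, [Fact] -> Bool)  -- rule name and its condition

    workingMemory :: [Fact]
    workingMemory = [("temperature", 80), ("pressure", 12)]

    rules :: [Rule]
    rules =
      [ ("overheat",      \wm -> any (\(k, v) -> k == "temperature" && v > 75) wm)
      , ("underpressure", \wm -> any (\(k, v) -> k == "pressure"    && v < 20) wm)
      ]

    -- Run the match phase for all rules in parallel and keep the names of
    -- the rules whose conditions hold.
    matches :: [String]
    matches = map fst (filter snd results)
      where results = parMap rdeepseq (\(name, cond) -> (name, cond workingMemory)) rules

    main :: IO ()
    main = mapM_ putStrLn matches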

+1

Parallelism-friendly general-purpose applications. Right now, parallelism work has largely focused on scientific computing and programming languages, and not so much on consumer applications, or on the features, data structures, and design patterns that would suit them, and this will be very important in a multi-core world.

+1

You mentioned Haskell, so you have probably already come across Data Parallel Haskell. Since big-data analysis has been a buzzword lately, and given that the Map/Reduce niche is crowded, I think DPH is a good area of research.
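
For readers who have not seen it: the pattern DPH generalises is the flat "parallel map, then reduce" style. The sketch below is not DPH itself (DPH has its own parallel-array type and vectoriser); it is an ordinary Strategies-based approximation of my own, just to show the starting point that DPH extends to nested data parallelism:

    -- Flat data-parallel "map, then reduce" using plain Strategies.
    -- Requires the `parallel` package; compile with ghc -threaded -rtsopts -O2.
    import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

    -- Map phase: score every input element (a stand-in for real analysis work).
    score :: Int -> Int
    score x = (x * x) `mod` 97

    -- Reduce phase: combine the per-element results.
    mapReduce :: [Int] -> Int
    mapReduce xs = sum scored
      where scored = map score xs `using` parListChunk 100 rdeepseq

    main :: IO ()
    main = print (mapReduce [1 .. 100000])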

+1
