Concurrent Network Client in Cocoa

I am trying to think through a better way to structure a Cocoa application that is essentially a parallel download manager. The user builds a large list of items to be downloaded, and the application works through that list. (It does not use HTTP or FTP, so I cannot use the URL Loading System; it talks through socket connections.)

This is basically a classic producer-consumer model. The twist is that the number of consumers is fixed, and they are long-lived. The server sets a strict limit on the number of simultaneous connections that can be open (though it is usually at least two), and opening new connections is expensive, so in an ideal world the same N connections stay open for the life of the application.

One way to approach this would be to create N threads, each of which would "own" a connection and wait on the request queue, blocking if it is empty. Since N will never be large, this is not unreasonable in terms of actual system cost. But conceptually it seems like Cocoa should offer a more elegant solution.
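The thread-per-connection idea is language-agnostic; here is a minimal sketch of it in Python (used only so the sketch is self-contained and runnable — `N_CONNECTIONS`, `connection_worker`, and the request payloads are all hypothetical stand-ins, and the blocking queue stands in for the real socket work):

```python
import queue
import threading

N_CONNECTIONS = 2            # hypothetical server-imposed connection limit

requests = queue.Queue()     # shared queue of pending download requests
results = []
results_lock = threading.Lock()

def connection_worker(conn_id):
    # Each worker "owns" one connection for the life of the app
    # and blocks on the queue whenever it is empty.
    while True:
        item = requests.get()   # blocks if the queue is empty
        if item is None:        # sentinel: shut this worker down
            break
        # ... perform the download over this worker's connection ...
        with results_lock:
            results.append((conn_id, item))
        requests.task_done()

workers = [threading.Thread(target=connection_worker, args=(i,))
           for i in range(N_CONNECTIONS)]
for w in workers:
    w.start()

for item in ["a", "b", "c", "d"]:
    requests.put(item)
requests.join()              # wait until every request has been serviced
for _ in workers:
    requests.put(None)       # one sentinel per worker to stop them
for w in workers:
    w.join()
```

The appeal is that the blocking queue does all the synchronization; the cost is N mostly-idle threads, which is what the question is trying to avoid.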

It seems like I could use NSOperationQueue and call setMaxConcurrentOperationCount: with the number of connections, then just toss the download requests into that queue. But I'm not sure, in that case, how to manage the connections themselves. (Just put them on a stack and rely on the queue to make sure I don't over- or under-run it? Throw in a semaphore along with the stack?)
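The semaphore-plus-stack idea could look roughly like this (a sketch in Python rather than Cocoa API, with `open_connection`, `checkout`, `checkin`, and the request values all hypothetical): each operation acquires a semaphore capped at the connection limit, pops a connection off a shared stack (opening one if the stack is empty), uses it, and pushes it back.

```python
import threading

MAX_CONNECTIONS = 2                      # hypothetical server limit
available = threading.Semaphore(MAX_CONNECTIONS)
pool = []                                # stack of idle open connections
pool_lock = threading.Lock()

def open_connection():
    # Stand-in for the (expensive) real connection setup.
    return object()

def checkout():
    available.acquire()                  # blocks once MAX_CONNECTIONS are out
    with pool_lock:
        return pool.pop() if pool else open_connection()

def checkin(conn):
    with pool_lock:
        pool.append(conn)
    available.release()

# An NSOperation-style task body would bracket its work like this:
def download(request):
    conn = checkout()
    try:
        return (conn, request)           # ... do the transfer on conn ...
    finally:
        checkin(conn)

results = [download(r) for r in ["a", "b"]]
```

The semaphore bounds concurrency; the stack makes connection reuse automatic, since a finished operation's connection is the first one the next operation pops.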

Now that we are in the brave new world of Grand Central Dispatch, is there some other way to tackle this? At first glance it doesn't seem so, since GCD's flagship ability to dynamically scale concurrency (also mentioned in Apple's recommendations on Changing Producer-Consumer Implementations) doesn't really help me. But I've only scratched the surface of reading about it.

EDIT:

In case it matters: yes, I plan to use the asynchronous/non-blocking socket APIs to do the actual communication with the server, so the I/O itself does not need its own threads. I'm just interested in the mechanics of queueing up the work and (safely) handing it out to the connections as they become available.

2 answers

For posterity, after some discussion elsewhere, the solution I think I would adopt for this is basically:

  • There is a queue of pending download operations, initially empty.
  • There is a set containing all open connections, initially empty.
  • Have a modified array (queue, really) of unoccupied open connections, initially empty.
  • When a user adds a download request:
    • If the array of unoccupied connections is not empty, delete it and assign a load for it.
    • If there are no free connections, but the number of shared connections has not reached its limit, open a new connection, add it to the set and assign it to download.
    • Otherwise, start the download later.
  • When the download is completed: if there are requests in the queue, delete one of them and transfer it to the connection; otherwise, add the connection to the waiting list.
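That bookkeeping can be sketched as a small single-threaded state machine (shown in Python for a self-contained illustration; `Connection`, `MAX_CONNECTIONS`, `assign`, and the request values are all hypothetical):

```python
MAX_CONNECTIONS = 2          # hypothetical server limit

pending = []                 # queue of pending download requests
connections = set()          # all open connections
idle = []                    # idle open connections (a queue, really)
active = {}                  # connection -> request currently assigned to it

class Connection:
    pass                     # stand-in for a real socket connection

def assign(conn, request):
    active[conn] = request   # kick off the transfer on this connection

def add_request(request):
    if idle:                                  # reuse an idle connection
        assign(idle.pop(0), request)
    elif len(connections) < MAX_CONNECTIONS:  # room to open a new one
        conn = Connection()
        connections.add(conn)
        assign(conn, request)
    else:                                     # all busy: queue for later
        pending.append(request)

def download_finished(conn):
    del active[conn]
    if pending:                               # keep the connection busy
        assign(conn, pending.pop(0))
    else:
        idle.append(conn)                     # park it for the next request

for r in ["a", "b", "c"]:
    add_request(r)
```

Since every transition runs on one thread, none of these collections needs a lock.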

All of this work would take place on the main thread. The work of decoding each download's results would be shuffled off to GCD, so it can handle throttling that concurrency, and it doesn't clog the main thread.
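Handing the decode work off while keeping the connection bookkeeping on one thread might look like this (GCD's dispatch_async to a global queue, approximated here with a Python thread pool; `decode`, `handle_download`, and the payload are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

decoder_pool = ThreadPoolExecutor()      # stands in for a GCD global queue

def decode(raw):
    # CPU-bound work, safe to run off the main thread.
    return raw.upper()

def handle_download(raw, deliver):
    # Ship decoding off the "main thread"; deliver the result via a
    # callback (GCD would dispatch_async back to the main queue here).
    future = decoder_pool.submit(decode, raw)
    future.add_done_callback(lambda f: deliver(f.result()))

decoded = []
handle_download("payload", decoded.append)
decoder_pool.shutdown(wait=True)         # drain outstanding decode work
```

The point of the split is that the connection state machine stays single-threaded while the only genuinely parallelizable work (decoding) is the only thing that fans out.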

Opening a new connection can take a while, so the process of creating one may be a bit more involved in actual practice (e.g., enqueue the download, initiate the connection process, and then dequeue the request once the connection is established). But I still think my perception of the potential for race conditions was overblown.


If you are using non-blocking CFSocket calls for I/O, I agree that everything should happen on the main thread, letting the OS handle the concurrency issues, since you are just copying data and not really doing any computation.

Beyond that, it sounds like the only other work your application needs to do is maintain a queue of items to download. When any one transfer completes, the CFSocket callback can initiate the transfer of the next item in the queue. (If the queue is empty, decrement your connection count; if something is added to an empty queue, start a new transfer.) I don't see why you would need multiple threads for that.

Maybe you've left out something important, but based on your description the application is I/O-bound, not CPU-bound, so all the concurrency machinery is just going to make your code more complicated with minimal impact on performance.

Do it all in the main thread.

