What is the difference between asynchronous programming and multi-threaded processing?

I thought it was basically the same thing - writing programs that share tasks between processors (on machines with 2+ processors). Then I read this where it says:

Async methods are intended to be non-blocking operations. An await expression in an async method doesn't block the current thread while the awaited task is running. Instead, the expression signs up the rest of the method as a continuation and returns control to the caller of the async method.

The async and await keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.

and I wonder if anyone can translate that into English for me. It seems to draw a distinction between asynchrony (is that a word?) and multithreading, and to imply that you can have a program with asynchronous tasks but no multithreading.

Now I understand the idea of asynchronous tasks, like the example on p. 467 of Jon Skeet's C# in Depth, 3rd edition:

async void DisplayWebsiteLength(object sender, EventArgs e)
{
    label.Text = "Fetching ...";
    using (HttpClient client = new HttpClient())
    {
        Task<string> task = client.GetStringAsync("http://csharpindepth.com");
        string text = await task;
        label.Text = text.Length.ToString();
    }
}

I take the async to mean, roughly: "This function, whenever it is called, will not be called in a context in which its completion is required for everything after the call to execute."

In other words, if I write this in the middle of some arbitrary function

int x = 5;
DisplayWebsiteLength();
double y = Math.Pow((double)x, 2000.0);

then since DisplayWebsiteLength() has nothing to do with x or y, DisplayWebsiteLength() will run in the background, like

processor 1                             | processor 2
----------------------------------------|---------------------------
int x = 5;                              | DisplayWebsiteLength()
double y = Math.Pow((double)x, 2000.0); |

Obviously this is a stupid example, but am I right or am I completely baffled or what?

(Also, it bothers me that sender and e are never used in the body of the function above.)

+163
multithreading c# asynchronous parallel-processing async-await
Jan 08 '16 at 15:53
5 answers

Your misunderstanding is extremely common. Many people teach that multithreading and asynchrony are the same thing, but they are not.

An analogy usually helps. You are cooking in a restaurant. An order comes in for eggs and toast.

  • Synchronous: you cook the eggs, then you cook the toast.
  • Asynchronous, single-threaded: you start cooking the eggs and set a timer. You start making the toast and set a timer. While they are both cooking, you clean the kitchen. When the timers go off, you take the eggs off the heat and the toast out of the toaster and serve them.
  • Asynchronous, multi-threaded: you hire two more cooks, one to cook the eggs and one to make the toast. Now you have the problem of coordinating the cooks so that they do not conflict with each other in the kitchen when sharing resources. And you have to pay them.

Now does it make sense that multithreading is just one kind of asynchrony? Threading is about workers; asynchrony is about tasks. In multithreaded workflows, you assign tasks to workers. In asynchronous single-threaded workflows, you have a graph of tasks where some tasks depend on the results of others; as each task completes, it invokes the code that schedules the next task that can run, given the results of the just-completed task. But you need (one hopes) only one worker to perform all the tasks, not one worker per task.
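A minimal console sketch of the single-worker kitchen, using made-up method names (CookEggsAsync, MakeToastAsync are illustrations, not anything from the question) and Task.Delay standing in for "the pan is doing the work":

```csharp
using System;
using System.Threading.Tasks;

class Kitchen
{
    static async Task<string> CookEggsAsync()
    {
        await Task.Delay(300);  // eggs cook on their own; no thread is blocked
        return "eggs";
    }

    static async Task<string> MakeToastAsync()
    {
        await Task.Delay(200);  // the toaster works by itself, too
        return "toast";
    }

    static async Task Main()
    {
        Task<string> eggs = CookEggsAsync();    // start cooking, don't wait yet
        Task<string> toast = MakeToastAsync();  // start the toast as well

        Console.WriteLine("cleaning the kitchen"); // free to do other work

        // Serve when both are done; one "cook" handled everything.
        Console.WriteLine($"serving {await eggs} and {await toast}");
    }
}
```

No worker is dedicated to either task while it cooks; the single thread only does work when a task needs it.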

It will help to realize that many tasks are not processor-bound. For processor-bound tasks, it makes sense to hire as many workers (threads) as there are processors, assign one task to each worker, assign one processor to each worker, and have each of them do nothing but compute the result as quickly as possible. But for tasks that are not waiting on a processor, you don't need to assign a worker at all. You just wait for the message that the result is available, and do something else while you're waiting. When that message arrives, you can schedule the continuation of the completed task as the next thing on your to-do list to tick off.
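The two kinds of task look like this in code; a sketch, not a recommendation (it needs network access for the download, and the loop bound is arbitrary):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Demo
{
    static async Task Main()
    {
        // I/O-bound: no worker thread is needed while the bytes cross the network.
        using HttpClient client = new HttpClient();
        Task<string> download = client.GetStringAsync("http://csharpindepth.com");

        // CPU-bound: hire a worker (a thread-pool thread) via Task.Run.
        Task<double> crunch = Task.Run(() =>
        {
            double sum = 0;
            for (int i = 1; i <= 10_000_000; i++) sum += Math.Sqrt(i);
            return sum;
        });

        string page = await download;  // resume here when the response arrives
        double result = await crunch;  // resume here when the computation ends
        Console.WriteLine($"{page.Length} chars, sum = {result:F0}");
    }
}
```

Only the CPU-bound task occupies a worker; the download occupies nobody until its result is ready.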

So, let's look at Jon's example in more detail. What happens?

  • Someone calls DisplayWebSiteLength. Who? We don't care.
  • It sets the label text, creates the client, and asks the client to fetch something. The client returns an object representing the task of fetching something; that task is now in flight.
  • Is it running on another thread? Probably not. Read Stephen Cleary's article "There Is No Thread" for why there is no thread.
  • Now we await the task. What happens? We check whether the task completed between the time we created it and the time we awaited it. If so, we fetch the result and keep running. Let's suppose it hasn't finished. We sign up the remainder of this method as the continuation of that task, and return.
  • Now control has returned to the caller. What does it do? Whatever it wants.
  • Now suppose the task completes. How did that happen? Maybe it was running on another thread, or maybe the caller we just returned to ran it to completion on the current thread. Regardless, we now have a completed task.
  • The completed task asks the correct thread (again, most likely the only thread) to run the continuation of the task.
  • Control passes immediately back into the method we just left, at the point of the await. The result is now available, so we can assign text and run the rest of the method.

It's the same as in my analogy. Someone asks you for a document. You mail off a request for the document and keep doing other work. When it arrives in the mail, you are signaled, and when you feel like it, you do the rest of the workflow: open the envelope, pay the delivery fee, whatever. You don't need to hire another worker to do all that for you.
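The fourth bullet above (check whether the task already completed before the await) can be demonstrated in a tiny runnable form. Awaiting an already-completed task never leaves the current thread; no continuation is scheduled and no thread is created:

```csharp
using System;
using System.Threading.Tasks;

class AwaitDemo
{
    static async Task Main()
    {
        int before = Environment.CurrentManagedThreadId;

        // The task is already done, so `await` just fetches the result
        // and keeps running on the same thread.
        int answer = await Task.FromResult(42);

        int after = Environment.CurrentManagedThreadId;
        Console.WriteLine($"same thread: {before == after}, answer: {answer}");
        // prints "same thread: True, answer: 42"
    }
}
```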

+432
Jan 08 '16 at 15:58

In a browser, JavaScript is a great example of an asynchronous program that has no threads.

You don't need to worry about several pieces of code touching the same objects at the same time: each function will run to completion before any other JavaScript on the page is allowed to run.

However, while you are doing something like an AJAX request, no JavaScript code is running at all, so other JavaScript can respond to events such as clicks until that request returns and invokes its associated callback. If one of those other event handlers is still running when the AJAX request returns, its callback won't be invoked until the running handler finishes. There is only one JavaScript "thread" running, even though you can effectively pause what you are doing until you have the information you need.

The same thing happens in C# applications whenever you are dealing with UI elements: you are only allowed to interact with UI elements while on the UI thread. If the user clicks a button and you want to respond by reading a large file from disk, an inexperienced programmer might make the mistake of reading the file within the click event handler itself, which causes the application to "freeze" until the file has finished loading, because it is not allowed to respond to any more clicking, hovering, or any other UI-related events until that thread is freed.

A slightly more savvy programmer might work around this problem by creating a new thread to load the file, and then telling that thread's code that when the file has loaded, it needs to run the remaining code on the UI thread again so it can update UI elements based on what it found in the file. Until recently this approach was very popular because it was what the C# libraries and language made easy, but it is fundamentally more complicated than it has to be.

If you think about what the CPU is doing when it reads a file, at the hardware/operating-system level, it basically issues an instruction to read pieces of data from disk into memory, and hits the operating system with an "interrupt" when the read is complete. In other words, reading from disk (or any I/O, really) is an inherently asynchronous operation. The concept of a thread waiting for that I/O to complete is an abstraction that library developers created to make programming easier; it is not necessary.

Now, most I/O operations in .NET have a corresponding ...Async() method you can invoke, which returns a Task almost immediately. You can add callbacks to this Task to specify code that you want to run when the asynchronous operation completes. You can also specify which thread you want that code to run on, and you can provide a token that the asynchronous operation can check from time to time to see whether you decided to cancel it, giving it the opportunity to stop its work quickly and gracefully.
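A sketch of that cancellation token in action, using Task.Delay as a stand-in for a real ...Async() I/O call that accepts a CancellationToken (the 100 ms / 5000 ms numbers are arbitrary):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancelDemo
{
    static async Task Main()
    {
        using var cts = new CancellationTokenSource();
        cts.CancelAfter(100); // decide to cancel after 100 ms

        try
        {
            // The operation observes the token and stops early when
            // cancellation is requested.
            await Task.Delay(5000, cts.Token);
            Console.WriteLine("finished");
        }
        catch (TaskCanceledException)
        {
            Console.WriteLine("cancelled quickly and gracefully");
        }
    }
}
```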

Before the async/await keywords were added, C# was much more obvious about how the callback code got invoked, because those callbacks were in the form of delegates associated with the task. To still give you the benefit of the ...Async() operations while avoiding complexity in your code, async/await abstracts away the creation of those delegates. But they still exist in the compiled code.
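The two styles side by side; a sketch, not the exact code the compiler emits (Task.FromResult stands in for a real I/O call so the example is self-contained):

```csharp
using System;
using System.Threading.Tasks;

class Callbacks
{
    // Pre-async/await style: attach the callback delegate by hand.
    static Task<int> MeasureWithContinuation()
    {
        Task<string> download = Task.FromResult("hello"); // stand-in for GetStringAsync
        return download.ContinueWith(t => t.Result.Length);
    }

    // async/await style: the compiler builds the delegate for us.
    static async Task<int> MeasureWithAwait()
    {
        string text = await Task.FromResult("hello");
        return text.Length;
    }

    static async Task Main()
    {
        Console.WriteLine(await MeasureWithContinuation()); // prints 5
        Console.WriteLine(await MeasureWithAwait());        // prints 5
    }
}
```

Both methods do the same thing; await simply hides the ContinueWith plumbing.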

That way, you can have your UI event handler await an I/O operation, freeing up the UI thread to do other things, and more or less automatically returning to the UI thread once the file has been read, all without ever having to create a new thread.

+20
Jan 08 '16 at 16:01

By the definition of asynchrony (leaving the current thread free to do something else), every multithreaded execution is asynchronous, period.

However, whether each thread pays a context-switching cost depends on whether it can complete its execution within its time slice, and on the availability of other processors on which several threads can run in parallel.

0
Jan 22 '19 at 11:13

Parallel processing

  • A parallel task runs alongside other tasks, on the processor allocated to it.
  • A parallel task running on its own dedicated processor gives the best performance.
  • A parallel task can also execute synchronously, in which case there is no context switching.

Process

  • One process is usually associated with one processor.
  • The OS sits between the processors and the processes, and distributes the different processes across the different processors.
  • A process starts with a main thread.

Multithreading

  • A thread is created from a process and can run concurrently with the other threads created in the same process (including the main thread).
  • A thread participating in multithreading incurs context-switching overhead if it does not complete in one run.
  • Multithreading gives a more responsive user interface when multiple threads operate at human speed.
  • Multithreading makes the most of the limited processing capacity of a single processor.
  • A thread spawned by a process can execute on a different processor core than its parent process's (on modern computers).

.NET Task Parallel Library

  • Application frameworks such as the TPL provide an abstraction over multithreading and parallel processing, and allocate processing resources optimally using their own algorithms.
  • When the TPL spawns a task, it may or may not create a new thread on the same processor, or a new process on a separate processor.
  • System threads are managed by the OS; managed threads are managed by the CLR/TPL.

C # async / await Keyword

  • Asynchronous methods do not require multithreading, because an asynchronous method does not run on its own thread.
  • async/await lets you write non-blocking operations in a synchronous-looking style.
  • Async improves scalability in web applications by freeing up the active thread.
0
Feb 05 '19 at 18:23

Very interesting discussion. In fact, hardware (including toasters) is truly concurrent, though usually slower than the processor. The traditional approach to dealing with blocking hardware was to start a thread and wait there; await removes the need for that. All of which suggests that await shines when the task is being executed by a hardware device that is (by its very nature) working in parallel with the processor, perhaps raising an interrupt to signal completion. If we now add a processor-bound task, such as a method doing seconds of heavy computation, then presumably we are back to using threads? In that (admittedly rarer) case, there is no concurrent hardware device doing the waiting for us.
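For that processor-bound case, the usual pattern is to hand the computation to a thread-pool thread with Task.Run and await the result; a sketch with an arbitrary sum-of-squares job standing in for the "heavy computation in a matter of seconds":

```csharp
using System;
using System.Threading.Tasks;

class HeavyCompute
{
    // A compute-bound job has no hardware device working in parallel,
    // so keeping the caller responsive really does take another thread.
    static Task<long> SumSquaresAsync(int n) =>
        Task.Run(() =>
        {
            long sum = 0;
            for (long i = 1; i <= n; i++) sum += i * i;
            return sum;
        });

    static async Task Main()
    {
        long result = await SumSquaresAsync(1000);
        Console.WriteLine(result); // prints 333833500
    }
}
```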

0
Jul 16 '19 at 7:08


