Setting the time interval for HTML5 server-sent events

I want to send regular updates from the server to the client. For this, I used server-sent events. My code is below:

Client side

Retrieving Server Updates

    <script>
    if (typeof(EventSource) != "undefined") {
        var source = new EventSource("demo_see.php");
        source.onmessage = function(event) {
            document.getElementById("result").innerHTML = event.data + "<br>";
        };
    } else {
        document.getElementById("result").innerHTML = "Sorry, your browser does not support server-sent events...";
    }
    </script>
    </body>
    </html>

Server side

    <?php
    header('Content-Type: text/event-stream');
    header('Cache-Control: no-cache');

    $x = rand(0, 1000);
    echo "data: {$x}\n\n";
    flush();
    ?>

The code works fine, but it sends updates every 3 seconds. I want to send updates at millisecond intervals. I tried sleep(1) after flush(), but that only increases the interval by 1 second. Does anyone have an idea how I can do this?

Also, can I send images using server-sent events?

+6
4 answers

As discussed in the comments above, running a PHP script in an endless loop with sleep or usleep is the wrong approach, for two reasons:

  • The browser will not see any event data (presumably it expects the connection to be closed first) while the script is still running. I recall that early browser implementations of SSE allowed this, but this is no longer the case.
  • Even if it worked in the browser, you would still run into a problem when the PHP script runs for too long (until the php.ini timeout settings kick in). If this happens once or twice, that is fine. If a few thousand browsers are simultaneously requesting the same SSE resource from your server, it will bring the server down.

The right way is to have your PHP script respond with the event stream data and then end gracefully as usual. Specify a retry value, in milliseconds, if you want to control when the browser tries again. Here is some sample code:

    function yourEventData(&$retry) {
        // Do your own stuff here and return your event data.
        // You might also want to set a $retry value (in milliseconds)
        // so the browser knows when to try again (instead of the default 3000 ms).
    }

    header('Content-Type: text/event-stream');
    header('Cache-Control: no-cache');
    header('Access-Control-Allow-Origin: *'); // optional

    $data = yourEventData($retry);
    echo "retry: {$retry}\n";
    echo "data: {$data}\n\n";

As an answer to the original question this comes a bit late, but nevertheless, in the interest of completeness:

What you get when polling the server this way is just data. What you do with it afterwards is entirely up to you. If you want to treat the data as an image and update the image displayed on your web page, you simply do:

 document.getElementById("imageID").src = "data:image/png;base64," + Your event stream data; 

So much for the principles. I have sometimes forgotten that retry should be in milliseconds and ended up returning, for example, retry:5\n\n, which, to my surprise, still worked. However, I would not use SSE to update an image on the browser side at a 100 ms interval. A more typical use would be along the following lines:

  • The user requests a job on the server. The job is either queued behind other jobs, or it will simply take quite a while to complete (for example, generating a PDF or an Excel spreadsheet and sending it back).
  • Instead of making the user wait without feedback (and risking a timeout), you can start an SSE connection that tells the browser the ETA for the job to complete, with retry set so that the browser knows when to check back for the result.
  • The ETA is used to give the user some feedback.
  • When the ETA has elapsed, the browser checks back (browsers do this automatically, so you don't have to do anything).
  • If for some reason the job has not been completed by the server, it should indicate that in the event stream it returns, e.g. data: {"code":-1}\n\n, so the browser code can handle the situation gracefully. A sketch of such an endpoint follows this list.
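Such a job-status endpoint is not spelled out in this answer, but a minimal sketch might look like the following. The jobStatus() helper is hypothetical (standing in for whatever queue or database lookup you use), and the JSON codes are arbitrary:

    <?php
    header('Content-Type: text/event-stream');
    header('Cache-Control: no-cache');

    // Hypothetical helper: returns null for an unknown job, otherwise an array
    // like ['done' => bool, 'eta' => seconds_remaining, 'url' => download_url_or_null].
    $status = jobStatus(isset($_GET['jobId']) ? $_GET['jobId'] : '');

    if ($status === null) {
        // Unknown or failed job: report an error code the client can handle gracefully.
        echo "data: " . json_encode(array("code" => -1)) . "\n\n";
    } elseif (!$status['done']) {
        // Not ready yet: tell the browser when to check back, and send the ETA as feedback.
        echo "retry: " . ($status['eta'] * 1000) . "\n";
        echo "data: " . json_encode(array("code" => 0, "eta" => $status['eta'])) . "\n\n";
    } else {
        // Finished: hand the client the location of the result.
        echo "data: " . json_encode(array("code" => 1, "url" => $status['url'])) . "\n\n";
    }
    flush();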

There are other usage scenarios: updating stock quotes, news headlines, and so on. But updating images at a 100 ms interval strikes me (and this is a purely personal opinion) as a misuse of the technology.

+10

The reason for this behavior (a message every 3 seconds) is explained here:

The browser attempts to reconnect to the source approximately 3 seconds after each connection is closed.

Thus, one way to receive a message every 100 milliseconds is to change the reconnection time (in PHP):

 echo "retry: 100\n\n"; 

This is not very elegant, though; a better approach is an endless PHP loop that sleeps for 100 milliseconds on each iteration. There is a good example of this here; just change sleep() to usleep() to support milliseconds:

    while (1) {
        $x = rand(0, 1000);
        echo "data: {$x}\n\n";
        flush();
        usleep(100000); // 100000 microseconds = 0.1 second (1000000 = 1 second)
    }
+6

I believe that the accepted answer may be misleading. Although it answers the question correctly (how to adjust the interval), it is not true that an infinite loop is a bad approach in general.

SSE is meant for receiving updates from the server when there actually are updates, as opposed to Ajax polling, which keeps checking for updates at regular intervals even when there are none. This can be done with an endless loop that keeps the script alive on the server, constantly checks for updates, and sends them only when something has changed.

It is not true that:

The browser will not see any event data while the script is still running.

You can keep the script running on the server and still push updates to the browser, without ending the script, as follows:

    while (true) {
        echo "data: test\n\n";
        flush();
        ob_flush();
        sleep(1);
    }

If you instead send the retry parameter without an infinite loop, the script ends, then runs again, ends, runs again, and so on. That is just like Ajax polling for updates even when there are none, which is not what SSE is designed for. Of course, there are situations where this approach is suitable, as indicated in the accepted answer (for example, waiting for the server to create a PDF file and notifying the client when it is done).

The infinite-loop method keeps the script running on the server the whole time, so you have to be careful with many users, because you will have one script instance for each of them, and that can overload the server. On the other hand, the same problem can occur even in a simple scenario where you suddenly get a burst of users on a website (without SSE), or if you use WebSockets instead of SSE. Everything has its limitations.

Another thing to take care of is what you put inside the loop. For example, I would not recommend putting a database query in a loop that runs every second, because then you also put the database at risk of overload. I would suggest using some kind of cache (Redis, or even a plain text file) in that case.
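As a rough illustration of that suggestion (a sketch only, not part of the original answer): updates.txt is a made-up cache file that some other process writes to, and the loop watches its modification time instead of querying the database:

    <?php
    header('Content-Type: text/event-stream');
    header('Cache-Control: no-cache');

    $cache     = __DIR__ . '/updates.txt'; // hypothetical cache file maintained by another process
    $lastMTime = 0;

    while (true) {
        clearstatcache(true, $cache);
        $mtime = @filemtime($cache);

        // Send an event only when the cache file has actually changed.
        if ($mtime !== false && $mtime > $lastMTime) {
            $lastMTime = $mtime;
            echo "data: " . json_encode(array("updated" => file_get_contents($cache))) . "\n\n";
            @ob_flush();
            flush();
        }

        sleep(1); // check once per second without touching the database
    }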

+3

SSE is an interesting technology, but it has the nasty side effect of choking the server when implemented on an Apache/PHP backend.

When I first found out about SSE, I was so excited that I replaced all of my Ajax polling code with an SSE implementation. After only a few minutes, I noticed that my CPU load had climbed to 99/100, and the fear that my server would soon grind to a halt forced me to roll the changes back to good old Ajax polling. I like PHP, and although I knew that SSE would work better on Node.js, I just wasn't ready to go down that route!

After a period of critical thinking, I came up with an Apache/PHP SSE implementation that can run without literally choking my server to death.

I'm going to share my server-side SSE code with you; I hope it helps someone overcome the difficulties of implementing SSE with PHP.

    <?php
    /* This script fetches the latest posts in the news feed */
    header("Content-Type: text/event-stream");
    header("Cache-Control: no-cache");

    // prevent direct access
    if ( ! defined("ABSPATH") ) die("");

    /* Push the current user's session data into the global space
       so we can release the session lock. */
    $GLOBALS["exported_user_id"]  = user_id();
    $GLOBALS["exported_user_tid"] = user_tid();

    /* Now release the session lock, having exported the session data into the
       global space. If we don't do this, no other script will run, which makes
       the website lag even when opening it in a new tab. */
    session_commit();

    /* How long this connection should be maintained: while we want to wait on the
       server long enough for an update, holding the connection forever burns CPU
       resources. Depending on the server resources you have available, you can
       tweak this higher or lower. Typically, the higher it is, the closer your
       implementation stays to SSE; otherwise it becomes equivalent to Ajax polling.
       However, a higher time burns CPU resources, especially when there are more
       users on your website. */
    $time_to_stay = strtotime("1 minute 30 seconds");

    /* If no id is sent, abort the connection. You can use this to bail out when a
       value the script needs is not passed along. Typically SSE reconnects after
       3 seconds. */
    if ( ! isset( $_GET["id"] ) ){
        exit;
    }

    /* If "HTTP_LAST_EVENT_ID" is set, this is a continuation of a temporarily
       terminated script run. This is important if your SSE maintains state:
       you can use the header to get the last event ID that was sent. */
    $last_postid = ( isset( $_SERVER["HTTP_LAST_EVENT_ID"] ) )
        ? intval( $_SERVER["HTTP_LAST_EVENT_ID"] )
        : intval( $_GET["id"] );

    /* Keep the connection active until there is data to send to the client. */
    while (true) {
        /* You can assume this function performs some database operations
           to get the latest posts. */
        $data = fetch_newsfeed( $last_postid );

        /* If the data is not empty, there must be new posts to push to the client. */
        if ( ! empty( trim( $data ) ) ){
            /* With SSE it is my common practice to JSON-encode all data, because I
               noticed that not doing so sometimes causes SSE to lose part of the
               data packet and deliver only a fraction of it on the client. This is
               bad when returning structured HTML data: losing part of it will break
               the HTML page when the data is inserted into it. */
            $data = json_encode(array("result" => $data));

            echo "id: $last_postid \n";  // this is the lastEventID
            echo "data: $data\n\n";      // our data

            /* Flush so we don't wait for the script to terminate; keep the calls
               in this order. */
            @ob_flush();
            flush();
        }

        // how much of the allowed time is left for this script
        $time_stayed = intval(floor($time_to_stay) - time());

        /* If we have stayed longer than the allowed time, abort this connection
           to free up CPU resources. */
        if ( $time_stayed <= 0 ) {
            exit;
        }

        /* We simply wait 5 seconds and continue from the top. We don't want to keep
           pounding the DB in a tight loop, so we sleep a few seconds and start over. */
        sleep(5);
    }
0