Get output from shell_exec while the command is running

I am writing a web page with a PHP script that receives the name of a JFFS2 image file previously uploaded to the server. The script then needs to re-flash a partition on the device with that image and output the results. I used this:

    $tmp = shell_exec("update_flash -v " . $filename . " 4 2>&1");
    echo '<h3>' . $tmp . '</h3>';
    echo verifyResults($tmp);

(The verifyResults function returns HTML that tells the user whether the update command completed successfully; e.g., if the update succeeded, it displays a button to restart the device, and so on.)

The problem is that the update command takes a few minutes to run, and the PHP script blocks until the shell command completes before returning any output. In practice this means the update command keeps running while the user either sees an HTTP 504 error (in the worst case) or waits several minutes for the page to load.

I was thinking of doing something like this:

 shell_exec("rm /tmp/output.txt"); shell_exec("update_flash -v " . $filename . " 4 2>&1 >> /tmp/output.txt &"); echo '<div id="output"></div>'; echo '<div id="results"></div>'; 

This would theoretically put the command in the background and add all the output to /tmp/output.txt.

Then a JavaScript function would periodically request getOutput.php, which would simply print the contents of /tmp/output.txt, and insert the response into the "output" div. Once the command finishes, another JavaScript function would process the output and display the result in the "results" div.
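For reference, a minimal sketch of what getOutput.php might look like under this scheme (the path /tmp/output.txt matches the snippet above; everything else is illustrative):

    <?php
    // getOutput.php - minimal sketch; assumes the background command
    // appends its combined stdout/stderr to /tmp/output.txt as above.
    header('Content-Type: text/plain');
    $log = '/tmp/output.txt';
    if (is_file($log)) {
        readfile($log);   // send whatever output exists so far
    }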

But the problem I see here is that getOutput.php will eventually become unavailable while the device's flash memory is being updated, because it lives on the partition being flashed. That could leave me in the same position as before, just without the 504 error or the seemingly never-ending page load.

I could move getOutput.php to another partition on the device, but then I think I would still have to do some funky things with the web server configuration to make it accessible there (if I just symlink to it from the web root like any other file, the symlink will eventually be overwritten during the re-flash).

Is there any other way to display the output of a command while it is running, or am I stuck with something like my approach above?

Edit 1: I am currently testing some solutions. I will update my question later.

Edit 2: It seems that the file system is not overwritten as I originally thought. Instead, the system appears to mount the existing file system read-only, so I can still access getOutput.php even after the re-flash starts.

The second solution I described in my question seems to work, combined with using popen (as suggested in the answer below) instead of shell_exec. The page loads, and via Ajax I can display the contents of output.txt.

However, it seems that output.txt does not reflect the output of the re-flash command in real time; nothing shows up until the update command has finished executing. I will need to do more testing to find out what is going on here.

Edit 3: Disregard that; the file does get updated while I access it. I was simply hitting a delay while the kernel performed some JFFS2-related work triggered by my use of the partition that holds the original JFFS2 image. I don't know why, but this seems to cause all PHP scripts to block until it finishes.

To get around this, I am going to wrap the call to the update command in a separate script and request it via Ajax; that way the user at least gets some canned feedback while technically still waiting on the system.
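For what it's worth, here is a minimal sketch of that separate launcher script (the name runUpdate.php is made up for illustration); it backgrounds the command with popen/pclose and returns immediately:

    <?php
    // runUpdate.php - hypothetical launcher requested via Ajax.
    // Backgrounds update_flash and returns right away so the request
    // does not block for the duration of the re-flash.
    $filename = basename($_GET['filename']);   // the previously uploaded image

    // Send stdout and stderr to the log, background the job, and close
    // the pipe immediately so PHP does not wait for the command.
    $cmd = "update_flash -v " . escapeshellarg($filename) . " 4 >> /tmp/output.txt 2>&1 &";
    pclose(popen($cmd, 'r'));

    echo 'Update started';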

+6
javascript bash ajax php
4 answers
+3

An interesting scenario.

My first thought was to do something with proc_* and $_SESSION, but I'm not sure whether that will work or not. Try it, but if not...

If you are worried about the file getting flashed over during the process, you could always have the secondary process write to a MySQL database instead. The database can live on another partition, you can address it via the local IP, and the system will take care of the routing.
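A rough sketch of that idea, assuming a local MySQL server and a made-up flash_log table; the secondary process appends rows, and the polling script reads them back:

    <?php
    // Sketch only - database name, table, and credentials are assumptions.
    $db = new PDO('mysql:host=127.0.0.1;dbname=updater', 'user', 'pass');

    // In the secondary process: append each chunk of command output
    // ($chunk is a placeholder for whatever you read from the command).
    $db->prepare('INSERT INTO flash_log (line) VALUES (?)')->execute(array($chunk));

    // In the polling script: return everything logged so far.
    foreach ($db->query('SELECT line FROM flash_log ORDER BY id') as $row) {
        echo $row['line'], "\n";
    }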

Edit

When I mentioned proc_* with sessions, I meant something similar to this, where $descriptorspec would become:

    $_SESSION = array(
        1 => array("pipe", "w"),
    );

However, I doubt it will work. The process would end up writing to a $_SESSION held in memory that no longer exists once the first script is killed.

Edit 2

In that case, you could install memcache, have your secondary process write directly to memory, and have your web front end read from it.
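Something along these lines, assuming the PECL memcache extension and a memcached server running on the device (the key name is arbitrary):

    <?php
    // Sketch only - requires a running memcached and the memcache extension.
    $mc = new Memcache();
    $mc->connect('127.0.0.1', 11211);

    // Secondary process: keep overwriting the key with the output so far
    // ($collected is a placeholder for the accumulated command output).
    $mc->set('flash_output', $collected);

    // Web-facing script: return whatever has been written so far.
    echo $mc->get('flash_output');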

+1

If you are wiping out the DocRoot, then no script or resource there will be able to respond to user requests during that time. Therefore you must send updates to the user in the same request that performs the update. That requires you to start the shell process and return to PHP immediately, which can be done with pcntl_fork() and pcntl_exec(). Your PHP script can then continuously send the shell script's output to the client. If the shell script appends to a file in /tmp, you can fpassthru() that file and clear it, repeating until the shell script completes.
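A rough sketch of that approach, assuming the pcntl extension is available in your PHP build (it often is not under mod_php, so treat this purely as an illustration):

    <?php
    // Sketch only - fork, exec the flash command in the child, and stream
    // its log back to the client from the parent.
    $log = '/tmp/output.txt';
    @unlink($log);

    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child: replace this process with a shell running the command,
        // appending all of its output to the log file.
        pcntl_exec('/bin/sh', array('-c', "update_flash -v $filename 4 >> $log 2>&1"));
        exit(1);   // only reached if pcntl_exec() fails
    }

    // Parent: push any new log output to the browser until the child exits.
    $sent = 0;
    do {
        clearstatcache();
        if (is_file($log) && filesize($log) > $sent) {
            $fh = fopen($log, 'r');
            fseek($fh, $sent);
            $sent += fpassthru($fh);   // bytes sent to the client so far
            fclose($fh);
            flush();
        }
        $exited = pcntl_waitpid($pid, $status, WNOHANG);
        sleep(1);
    } while ($exited === 0);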

+1

Regarding your "however":

I assume you are trying to use this file as a stream. I have not done any production tests, but I believe the file will only be written to disk on fclose().

If script #2 keeps the file open the whole time it is writing, those writes may effectively stay in memory until the file is closed.

Again, I can't verify this, but if you want to test it, try opening and closing the file for each write. That will confirm or refute my theory, and you can adjust your approach accordingly.
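If it helps, here is the tiny change in script #2 that would test this (the log path is an assumption); calling fflush() after each write on a long-lived handle should behave similarly:

    <?php
    // Open, write, and close for every entry so each line hits the file
    // immediately instead of sitting in the stream buffer until fclose().
    function appendLine($line) {
        $fh = fopen('/tmp/output.txt', 'a');
        fwrite($fh, $line . "\n");
        fclose($fh);
    }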

+1
