Laravel environment variables leaking between applications when they call each other through GuzzleHttp

I have two Laravel 5.2 applications (let's call them A and B) on my local computer, each configured as a separate virtual host on my local Apache 2.4 development server.

Both applications sometimes call each other through GuzzleHttp.

At some point I wanted to use encryption, and I began to get "MAC is invalid" exceptions from Laravel's Encrypter.

While investigating the problem, I found that when application A calls application B, application B suddenly sees the encryption key (app.key) of application A! This causes decryption to fail, because the values in application B were encrypted with application B's own key but are now being decrypted with application A's key.

During debugging, I discovered that the Dotenv library has logic to preserve environment variables that are already set. I found that neither $_ENV nor $_SERVER contains the leaked variables, but getenv() does!
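
As a quick way to see this for yourself, here is a minimal check (APP_KEY is just an example variable name) that could be dropped into application B's public/index.php before the framework boots:

    <?php
    // Compare the three places environment values can come from. If the leak is
    // happening, getenv() returns application A's value while the superglobals
    // are empty, which matches what phpdotenv's "already set" check sees.
    var_dump(getenv('APP_KEY'));
    var_dump(isset($_ENV['APP_KEY']) ? $_ENV['APP_KEY'] : null);
    var_dump(isset($_SERVER['APP_KEY']) ? $_SERVER['APP_KEY'] : null);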

I'm a bit confused, because the PHP documentation for putenv() says:

An environment variable will exist only during the current request.

Yet it seems that when, during the current request, I make another request via GuzzleHttp, the variables set by Dotenv using putenv() in application A suddenly become visible in application B, which serves that GuzzleHttp request!

I understand that this will not be a problem on production servers, where the configuration cache is used instead of Dotenv and the two applications will most likely run on different servers anyway, but this behavior breaks my development workflow.

How can I configure Laravel, GuzzleHttp, Apache, or PHP so that values set with putenv() in application A do not leak into application B?

php apache laravel guzzle
1 answer

The problem is that you are using a shared PHP instance, so when one application sets an environment variable, it is shared with the other application. I believe phpdotenv treats existing variables as immutable, so once one application has set them, the other application cannot override them.
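
Roughly speaking, phpdotenv's "preserve existing variables" behaviour can be pictured like this; this is a paraphrased sketch, not the library's actual code:

    <?php
    // Paraphrase of phpdotenv's immutable behaviour: a variable that already
    // appears to exist is left untouched. Because getenv('APP_KEY') already
    // returns application A's value inside the shared PHP worker, application
    // B's .env value never gets applied.
    function setEnvironmentVariableIfUnset($name, $value)
    {
        if (getenv($name) !== false || isset($_ENV[$name]) || isset($_SERVER[$name])) {
            return; // considered "already set", so it is not overwritten
        }

        putenv("$name=$value");
        $_ENV[$name] = $value;
        $_SERVER[$name] = $value;
    }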

mod_php (which I assume you are using, since you mentioned Apache) essentially embeds a PHP interpreter inside every Apache process. Those Apache processes are shared between all your vhosts, which is why you have this problem. You would get the same problem running nginx and php-fpm, although it would be easier to solve on that stack.

Unfortunately, one port can only be bound by one process. So the only way to stick with mod_php and Apache is to put your vhosts on separate port numbers, which means you have to include the port number of at least one of them in the URL when accessing it. I don't use Apache any more, so I can't give you specifics; it may be as simple as setting different ports in your vhost configuration and Apache will just handle it, but I'll have to leave you to Google that.
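
If it really is that simple, the vhost side would look roughly like this; hostnames, ports, and paths are placeholders, and you may find you need a second Apache instance rather than just a second Listen directive:

    # Listen on two ports (ports.conf / httpd.conf)
    Listen 80
    Listen 8080

    # Application A stays on port 80
    <VirtualHost *:80>
        ServerName app-a.local
        DocumentRoot /var/www/app-a/public
    </VirtualHost>

    # Application B moves to port 8080 and is reached as http://app-b.local:8080
    <VirtualHost *:8080>
        ServerName app-b.local
        DocumentRoot /var/www/app-b/public
    </VirtualHost>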

If you were running nginx/php-fpm, this would probably be a case of creating a second php-fpm pool configuration running on a different port or socket, pointing the second vhost at that PHP instance, and away you go.
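
A minimal sketch of such a second pool (file path, pool name, and socket are placeholders):

    ; e.g. /etc/php/fpm/pool.d/app-b.conf
    ; A second pool gives application B its own worker processes, so putenv()
    ; calls made while serving application A are never visible here.
    [app-b]
    user = www-data
    group = www-data
    listen = /run/php/php-fpm-app-b.sock
    pm = dynamic
    pm.max_children = 5
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 3

The nginx server block for application B would then point its fastcgi_pass at unix:/run/php/php-fpm-app-b.sock, while application A keeps using the default pool.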

So you have a few solutions:

  • Stay with Apache and mod_php, and spend the rest of the week Googling how to do what I described above.
  • Look at running PHP as CGI/FastCGI under Apache, which will give you the flexibility you need (this is akin to the nginx/php-fpm approach, but without changing your web server software).
  • Stop using phpdotenv and find an alternative approach (for example, load your config in .htaccess or inside the vhost so it is available via $_ENV or $_SERVER keys); see the sketch after this list.
  • Switch to a dev stack that includes nginx/php-fpm, where you can solve this easily by creating two php-fpm pools.
  • Use virtual machines (perhaps look at Vagrant or Docker).

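For the third option above, a rough sketch of per-vhost SetEnv directives (hostnames, paths, and key values are placeholders); because each vhost injects its own values into $_SERVER on every request it serves, application B no longer depends on process-wide putenv() state:

    # Application A's vhost
    <VirtualHost *:80>
        ServerName app-a.local
        DocumentRoot /var/www/app-a/public
        SetEnv APP_ENV local
        SetEnv APP_KEY base64:PLACEHOLDER_KEY_FOR_APP_A
    </VirtualHost>

    # Application B's vhost -- same variable names, its own values
    <VirtualHost *:80>
        ServerName app-b.local
        DocumentRoot /var/www/app-b/public
        SetEnv APP_ENV local
        SetEnv APP_KEY base64:PLACEHOLDER_KEY_FOR_APP_B
    </VirtualHost>
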
Sorry I don't have better news, but unfortunately your WAMP stack is just too restrictive out of the box.
