How to work with a large number of remote files using PhpStorm

I have a small Debian VPS box on which I host and develop several small private PHP sites.
I develop on a Windows desktop using PhpStorm.

Most of my projects contain only a few dozen source files, but also contain several thousand lib files.

I don’t want to run the web server on my local machine, because it creates a whole set of problems I don’t want to be bothered with for such small projects (e.g. setting up another web server, file synchronization between my desktop and the VPS box, maintaining separate configurations for Windows and Debian (different hosts, paths, ...), keeping schemas and data in sync).

I am looking for a good way to work with PhpStorm on a large number of remote files.

My approaches so far:

  • Mounting the remote file system on Windows (via pptp / smb, ftp, webdav) and working with it in PhpStorm as if the files were local (a sketch of such a mount follows this list).
    => Indexing, synchronization, and PhpStorm's VCS support became unusably slow, probably due to the high latency of file access.
  • Letting PhpStorm automatically copy the remote files to my local machine and synchronize changes back on save.
    => After the initial copy this is fast. Unfortunately, with this setup PhpStorm cannot provide the VCS support that I use heavily.
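For reference, a rough sketch of the mount-based first approach on Windows; the host and share names are hypothetical, and the WebDAV mapping assumes the Windows WebClient service is running:

    :: Map an SMB share on the VPS to a drive letter (hypothetical host/share)
    net use P: \\my-vps.example.com\projects /user:me

    :: Or map a WebDAV URL instead
    net use W: https://my-vps.example.com/webdav

    :: Then open P:\mysite as a PhpStorm project; indexing runs over the WAN,
    :: which is exactly where the latency problem shows up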


Any ideas on this are welcome :)

+7
version-control phpstorm remote-access
1 answer

I use PhpStorm in a very similar setup to your second approach (local copies, automatic change synchronization) and, importantly, with VCS support.

Ideal; the simplest. In my experience, the easiest solution is to check out / clone the VCS branch on your local machine and treat your remote file system as a staging area that stays unaware of the VCS; just a plain file system.
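A minimal sketch of that flow, assuming a Git repository and hypothetical host and path names; the VPS only ever sees plain files, pushed over with rsync:

    # Clone and work locally; the VCS lives only on the desktop
    git clone git@example.com:me/mysite.git
    cd mysite

    # ... edit, commit, branch as usual ...

    # Deploy plain files to the VPS staging area, excluding VCS metadata
    rsync -az --delete --exclude='.git' ./ me@my-vps.example.com:/var/www/mysite/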

Real world; remote VCS required. If, however (as in my case), you must have VCS on each system, perhaps because your remote environment is the standard at your shop, or because your shop's check-out / build tools are platform-specific, then a slightly different remote configuration is required. Nonetheless, treating your remote system as a staging area is still the best approach.

Example: Perforce - centralized VCS (client workspace). In my experience, workspace-based VCS systems (like Perforce) are best handled by sharing the same client workspace between the local and remote systems, which has the advantage that VCS file-status changes only need to be applied once. The disadvantage is that file system changes on the remote system usually need to be handled manually. In my case, I manually chmod (or the OS equivalent) the remote files and wash my hands of it (problem solved). The alternative (dual workspace) approach requires more moving parts, which I do not recommend.
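As a concrete illustration, assuming the standard p4 command-line client and a hypothetical file name; p4 reconcile is the usual way to pick up changes made behind Perforce's back:

    # Open a file for edit the normal way (Perforce keeps workspace files read-only)
    p4 edit lib/helper.php

    # Files changed on the remote side without Perforce knowing must first be
    # made writable by hand (the manual chmod mentioned above) ...
    chmod +w lib/helper.php

    # ... and then reconciled so Perforce records the adds/edits/deletes
    p4 reconcile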

Example: Git - distributed VCS. An easier approach is, of course, Git, which has great magic for detecting file changes without VCS-specific file access. This makes life easier, as you can simply start from a shared working branch and create, for example, two separate branches: "my-feature" and "my-feature-remote-proxy". Once you decide to merge your changes upstream, you do so (ideally) from your local environment. The remote proxy branch can then be discarded, or whatever you prefer. NOTE: with Git I always keep the two branches, because it is easy, and when your hard drive melts in fiery lightning, you have extra redundancy. :|
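A sketch of that two-branch flow; the branch names come from the text above, while the remote name "origin" and the repository layout are assumptions:

    # On the desktop: branch off the shared working branch
    git checkout master
    git checkout -b my-feature               # local development branch
    git branch my-feature-remote-proxy       # mirrors what runs on the VPS

    # On the VPS (its own clone): work and commit on the proxy branch
    git checkout my-feature-remote-proxy
    # ... edit, commit, push ...

    # Back on the desktop: fold the remote work into the feature branch
    git fetch origin
    git merge origin/my-feature-remote-proxy

    # Merge upstream from the local environment, then drop the proxy branch
    git checkout master
    git merge my-feature
    git push origin :my-feature-remote-proxy   # deletes the remote proxy branch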

Hope this helps.

+1
