I work in a very confusing environment where different machines have access to different distributed file systems.
- Machine A has access to file system X and is used to install software onto file system Y
- Machine B has access to file system Y, but not to X
I work on machine B and find that I use Python a lot. Sometimes I need packages that have not been pre-installed, so I run pip install PKGXYZ --user to install them locally. This usually works well, but there is a catch.
The Python distutils package and its monkey-patched derivative setuptools, which pip uses, rely on distutils.sysconfig to determine the compiler, its paths, and related settings. To do this, they read the internal Makefile that was used to build Python. This is usually a sound strategy, but it fails with my particular setup, because the paths in Python's internal Makefile point to file system X, which I cannot access from machine B. So I have ended up using pip's --no-clean option and hacking the setup.py of each package I want to install with the following snippet:
import re
import os

cc = os.getenv("CC")
if not cc:
    print("please set CC environment variable!")
    exit(0)

from distutils.sysconfig import get_config_vars

for k, v in get_config_vars().items():
    try:
        if "fsX" in v:
            # replace any /fsX/.../gcc or /fsX/.../g++ path with the compiler from $CC
            newv = re.sub(r'/fsX/[^ ]*/g[c+][c+]', cc, v)
            get_config_vars()[k] = newv
    except TypeError:
        # skip config values that are not strings
        pass
so that I can use the CC environment variable to override the default compiler paths from Python's Makefile with something that works on my machine.
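For context, this is how I see which configuration variables are affected in the first place. A quick check like the following (assuming, as on my system, that the inaccessible paths all contain the string fsX) lists every distutils config var that still points into file system X:

from distutils.sysconfig import get_config_vars

# print every config variable whose value mentions the inaccessible fsX file system
for key, value in get_config_vars().items():
    if isinstance(value, str) and "fsX" in value:
        print(key, "=", value)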
However, this is an ugly hack. Surely there must be a more convenient way to make pip use a different compiler, via an environment variable, a configuration file, or a command-line option. Or is there?
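For what it's worth, here is a rough check of the kind of override I am hoping for (this is only my assumption about how distutils merges environment variables on Unix-like systems, and the /usr/bin/gcc path below is just a placeholder): ask distutils which compiler command it would actually invoke after its customization step, and see whether setting CC changes it.

import os
from distutils.ccompiler import new_compiler
from distutils.sysconfig import customize_compiler

os.environ["CC"] = "/usr/bin/gcc"  # hypothetical compiler path that exists on machine B

compiler = new_compiler()     # a UnixCCompiler on Linux
customize_compiler(compiler)  # applies sysconfig vars plus environment overrides
print(compiler.compiler_so)   # the command line distutils would use to compile extensions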
python installation pip setuptools distutils
carsten