Disclaimer: below, functional testing is used synonymously with system testing. The lack of a formalized specification for most Fabric projects makes the distinction contentious. I may also alternate between the terms functional testing and integration testing, since the boundary between them is blurred with any configuration management software.
Local functional testing for Fabric is hard (or impossible)
I am fairly sure it is impossible to perform functional testing without either creating a virtual machine (which you list as one of your constraints) or doing extremely extensive mocking (which will make your test suite inherently fragile).
Consider the following simple function:
```python
def agnostic_install_lsb():
    def install_helper(installer_command):
        ret = run('which %s' % installer_command)
        if ret.return_code == 0:
            sudo('%s install -y lsb-release' % installer_command)
            return True
        return False

    install_commands = ['apt-get', 'yum', 'zypper']
    for cmd in install_commands:
        if install_helper(cmd):
            return True
    return False
```
If you have a task that calls agnostic_install_lsb, how can you perform functional testing locally?
You can do unit testing by mocking the run, local, and sudo calls, but that does not get you much in the way of higher-level integration tests. If you are content with plain unit tests, you don't really need much test tooling beyond mock and nose, since all of your unit tests run under tight control.
How you would do the mocking
You can mock the sudo, local, and run functions to write their commands to a set of StringIO objects or files, but unless there is something clever I am missing, you would also have to mock their return values very carefully. To continue stating things you probably already know: your mocks would either have to understand Fabric's context managers (hard), or you would have to mock out every context manager you use (less hard, but still painful).
If you want to go this route, I think it is safer and easier to create a test class whose setUp builds mocks for all of the context managers, run, sudo, and whatever other parts of Fabric you use, rather than trying to craft a more minimal set of mocks for every test. At that point you will have built some generic testing scaffolding for Fabric, and you should probably share it on PyPI as ... "mabric"?
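A minimal sketch of that setUp-based approach, reusing the agnostic_install_lsb function from above. To keep the example self-contained, the Fabric calls are routed through a fab namespace and FakeResult is a made-up stand-in for Fabric's result object; a real test would patch fabric.api.run / fabric.api.sudo (or your fabfile's imports) instead.

```python
import types
import unittest
from unittest import mock

def _not_in_tests(cmd):
    raise NotImplementedError('should be mocked in tests')

# Stand-in for the Fabric API so this sketch runs anywhere;
# real code would patch fabric.api.run / fabric.api.sudo directly.
fab = types.SimpleNamespace(run=_not_in_tests, sudo=_not_in_tests)

def agnostic_install_lsb():
    """Same logic as above, calling through the fab namespace."""
    def install_helper(installer_command):
        ret = fab.run('which %s' % installer_command)
        if ret.return_code == 0:
            fab.sudo('%s install -y lsb-release' % installer_command)
            return True
        return False
    for cmd in ['apt-get', 'yum', 'zypper']:
        if install_helper(cmd):
            return True
    return False

class FakeResult(str):
    """Mimics Fabric's run() result: string output plus a return_code."""
    return_code = 0

class FabricTestCase(unittest.TestCase):
    """Base class whose setUp mocks every Fabric call point once."""
    def setUp(self):
        for name in ('run', 'sudo'):
            patcher = mock.patch.object(fab, name)
            setattr(self, 'mock_' + name, patcher.start())
            self.addCleanup(patcher.stop)

class TestAgnosticInstallLsb(FabricTestCase):
    def test_uses_first_available_installer(self):
        self.mock_run.return_value = FakeResult('/usr/bin/apt-get')
        self.assertTrue(agnostic_install_lsb())
        self.mock_sudo.assert_called_once_with('apt-get install -y lsb-release')
```

Subclassing one base class keeps the per-test mocking boilerplate down, which is exactly the appeal of this approach.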
I argue that this will not be very useful for most cases, because such tests end up caring about how the run is performed, not just what state the host ends up in. Switching a command from run('echo "cthulhu" | sudo tee /etc/hostname') to sudo('echo "cthulhu" > /etc/hostname') should not break the tests, and it is hard to see how to achieve that with straightforward mocks. This is because we have started to blur the line between functional and unit testing, and this kind of basic mocking is an attempt to apply unit testing methodologies to functional tests.
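To make that brittleness concrete, here is a toy sketch. The function names are hypothetical and the Fabric calls are passed in as plain mocks; the point is that a test pinned to the exact command string passes for one implementation and fails for the behaviourally identical rewrite.

```python
from unittest import mock

# The same hostname change written two equivalent ways;
# only the mechanism differs, not the resulting host state.
def set_hostname_via_run(run):
    run('echo "cthulhu" | sudo tee /etc/hostname')

def set_hostname_via_sudo(sudo):
    sudo('echo "cthulhu" > /etc/hostname')

expected = 'echo "cthulhu" | sudo tee /etc/hostname'

m = mock.Mock()
set_hostname_via_run(m)
assert m.call_args[0][0] == expected   # string-level check passes

m2 = mock.Mock()
set_hostname_via_sudo(m2)
assert m2.call_args[0][0] != expected  # same end state, but the test breaks
```

A test that instead inspected the contents of /etc/hostname afterwards would survive the refactor, but that requires a real (or virtual) host, which is the argument of the next section.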
Testing configuration management software on virtual machines is an established practice
I urge you to reconsider how strongly you want to avoid deploying virtual machines for your functional tests. This is the common practice for testing Chef, which faces many of the same issues.
If you are concerned about automation, Vagrant does a very good job of simplifying the creation of virtual machines from a template. I have even heard that the Vagrant/Docker integration is good, if you are a Docker fan. The only downside is that if you are a VMware fan, Vagrant requires VMware Workstation ($$$). Alternatively, just use Vagrant with VirtualBox for free.
If you work in a cloud environment such as AWS, you even get the option of spinning up new virtual machines from the same base images as your production servers, for the sole purpose of running your tests. Of course, the notable drawback is that this costs money. However, it is not a significant fraction of your costs if you already run your full software stack in a public cloud, because test servers only run for a few hours in a given month.
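As a back-of-envelope illustration of why this stays cheap (every number below is made up for the example, not a real cloud price):

```python
# Hypothetical figures: a small on-demand instance and a modest test cadence.
hourly_rate = 0.10     # USD per instance-hour (made-up rate)
runs_per_month = 40    # full functional-test runs per month (made-up)
hours_per_run = 0.5    # VM lifetime per run, spun up then destroyed

monthly_cost = hourly_rate * runs_per_month * hours_per_run
print('$%.2f' % monthly_cost)  # $2.00
```

Even with generous assumptions, short-lived test instances are a rounding error next to a production fleet that runs around the clock.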
In short, there are many ways to solve the problem of full functional testing on virtual machines, and this is a tried and tested method for other configuration management software.
If you are not using Vagrant (or similar), keep a suite of locally runnable unit tests
One of the obvious problems with making your tests depend on launching a virtual machine is that it makes testing harder for developers. This is especially true for iterative re-testing against a local version of the code, which some projects (such as a web UI) may require.
If you use Vagrant + VirtualBox, Docker (or raw LXC), or a similar solution for your test virtualization, local testing is not terribly expensive. These solutions make it possible to spin up a fresh virtual machine on modest laptop hardware in under ten minutes. For particularly fast iteration, you can test repeatedly against the same virtual machine (and then replace it with a fresh one for a final check run).
However, if you do your virtualization in a public cloud or a similar environment where frequent runs against your test virtual machines get expensive, you should split your tests into an extensive suite of unit tests that can be run locally, plus the integration or system tests that require a virtual machine. The separate unit suite lets you develop without the full testing cycle, running against the unit tests as development proceeds. Then, before merging / shipping / signing off on the changes, they should be run against the functional tests on a virtual machine.
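One way to sketch that split with the stdlib alone. The environment-variable name and both test classes are illustrative assumptions, not anything from the question: unit tests always run, while integration tests are skipped unless a provisioned test VM has been signalled.

```python
import os
import unittest

# Made-up convention: set RUN_INTEGRATION_TESTS=1 when a test VM exists.
HAVE_TEST_VM = os.environ.get('RUN_INTEGRATION_TESTS') == '1'

class TestCommandBuilding(unittest.TestCase):
    """Example locally runnable unit test (no VM needed)."""
    def test_install_command(self):
        self.assertEqual('%s install -y lsb-release' % 'apt-get',
                         'apt-get install -y lsb-release')

@unittest.skipUnless(HAVE_TEST_VM, 'needs a test VM; set RUN_INTEGRATION_TESTS=1')
class TestProvisioningOnVM(unittest.TestCase):
    """Example integration test that would drive Fabric against the VM."""
    def test_lsb_release_present(self):
        # A real suite would run tasks over SSH here and inspect host state.
        self.assertTrue(True)
```

Developers then run the whole file locally and only the cheap tests execute; CI (or a pre-merge checklist) exports the variable and runs everything against a real VM.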
Ultimately, nothing should land in your code base that has not passed the functional tests, but you should aim for as close to full code coverage as you can with the unit suite. The more you can do to increase the confidence your unit tests give you, the better, since this reduces the number of failed (and potentially costly) runs of your system tests.