Let me first address the R solution. As far as I understand, the e1071 package is simply a wrapper around the libsvm library, so assuming you use the same settings and follow the same steps in both cases, you should get the same results.
I am not a regular R user, but I can tell that you are not normalizing the data in the R code (scaling the features to the range [-1,1]). Since we know that SVMs are not scale-invariant, this omission should explain the difference from the other results.
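To make the scaling step concrete, here is a minimal Python sketch of min-max scaling one feature column to [-1,1], the same transform libsvm's `svm-scale` tool applies with its default lower/upper bounds (this is an illustration of the transform, not the tool itself):

```python
def scale_to_unit_range(column):
    # Min-max scale one feature column to [-1, 1].
    # In practice the min/max must come from the TRAINING set only,
    # and the same transform is then applied to the test set.
    lo, hi = min(column), max(column)
    return [2 * (x - lo) / (hi - lo) - 1 for x in column]

scale_to_unit_range([0.0, 5.0, 10.0])  # -> [-1.0, 0.0, 1.0]
```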
MATLAB has its own implementations in svmtrain and fitcsvm . They only support binary classification, so you will have to handle multiclass problems manually (see here ).
The documentation explains that it uses the standard SMO algorithm (in fact, one of three possible algorithms offered for solving the quadratic programming problem). The docs list several papers and books as references. In principle, you should get predictions similar to libsvm (assuming you replicate the parameters used and apply the same data preprocessing).
Now, as for libsvm versus liblinear , you should know that the implementations differ slightly in the formulation of the objective function:
libsvm solves the following dual problem:
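As given in the libsvm paper for the C-SVC formulation (reconstructed here from the library's documentation):

```latex
\min_{\alpha}\ \tfrac{1}{2}\,\alpha^{T} Q \alpha - e^{T}\alpha
\quad \text{s.t.} \quad y^{T}\alpha = 0, \qquad 0 \le \alpha_i \le C,\ i = 1,\dots,l
```

where `Q_{ij} = y_i y_j K(x_i, x_j)` and `e` is the vector of all ones.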
On the other hand, the dual form solved by liblinear's L2-regularized L1-loss SVC solver is:
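Per the liblinear paper (again reconstructed from the library's documentation):

```latex
\min_{\alpha}\ \tfrac{1}{2}\,\alpha^{T} Q \alpha - e^{T}\alpha
\quad \text{s.t.} \quad 0 \le \alpha_i \le C,\ i = 1,\dots,l
```

where `Q_{ij} = y_i y_j x_i^{T} x_j`. Note that the equality constraint `y^T α = 0` is absent, because liblinear does not include a bias term by default (hence the `-B` option discussed further down).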
... not to mention that the implementations are tuned for different purposes: libsvm is written to allow switching between different kernel functions, while liblinear is optimized for the linear case and has no concept of kernels at all. That's why libsvm is not easily applicable to large-scale problems (even with a linear kernel), and why liblinear is often recommended when you have a large number of instances.
In addition, regarding multiclass problems with k classes: libsvm by default implements the one-vs-one approach, building k*(k-1)/2 binary classifiers, while liblinear implements one-vs-the-rest, building k binary classifiers (it also offers an alternative Crammer and Singer method for handling multiclass problems). I have previously shown how to perform one-vs-one and one-vs-all classification with libsvm (see here and here ).
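The difference in how many binary subproblems each strategy creates can be sketched in a few lines of Python (a toy enumeration, not code from either library):

```python
from itertools import combinations

def one_vs_one_tasks(classes):
    # libsvm-style: one binary classifier per unordered pair of classes
    return list(combinations(classes, 2))

def one_vs_rest_tasks(classes):
    # liblinear-style: one binary classifier per class (that class vs. all others)
    return [(c, "rest") for c in classes]

classes = ["a", "b", "c", "d"]           # k = 4
ovo = one_vs_one_tasks(classes)          # k*(k-1)/2 = 6 classifiers
ovr = one_vs_rest_tasks(classes)         # k = 4 classifiers
```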
You also need to make sure the parameters passed to each match (as closely as possible):
- libsvm must be trained as a C-SVM classifier with a linear kernel by calling `svm-train.exe -s 0 -t 0`
- the liblinear solver type must be set to `L2R_L1LOSS_DUAL` by calling `train.exe -s 3` (the dual form of the L2-regularized L1-loss support vector classifier)
- the cost parameter should obviously match: `-c 1` for both training commands
- the tolerance for the termination criterion must match (the default `-e` value differs between the two libraries: e=0.001 for libsvm and e=0.1 for liblinear)
- liblinear must be explicitly instructed to add a bias term, since it is disabled by default (by adding `train.exe -B 1`)
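Putting that checklist together, a quick sanity check (a hypothetical helper, not part of either library) can confirm the two command lines agree on the parameters they share:

```python
def parse_flags(cmd):
    # Turn "train.exe -s 3 -c 1 ..." into {"-s": "3", "-c": "1", ...};
    # assumes every flag takes exactly one value, as in these commands
    tokens = cmd.split()[1:]  # drop the executable name
    return dict(zip(tokens[::2], tokens[1::2]))

# Matched settings per the checklist above; -e is given explicitly because
# the two libraries use different defaults (0.001 vs 0.1)
libsvm_cmd    = "svm-train.exe -s 0 -t 0 -c 1 -e 0.001"
liblinear_cmd = "train.exe -s 3 -B 1 -c 1 -e 0.001"

a, b = parse_flags(libsvm_cmd), parse_flags(liblinear_cmd)
shared = ["-c", "-e"]  # flags that mean the same thing in both tools
assert all(a[f] == b[f] for f in shared)
```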
Even then, I'm not sure you will get exactly the same results from both, but the predictions should be close enough...
Other considerations include how the libraries handle categorical features. For example, I know that libsvm converts a categorical feature with m possible values into m numeric 0/1 features, encoded as binary indicator attributes (i.e. exactly one of them is one, the rest are zeros). I'm not sure what liblinear does with discrete features.
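That indicator encoding (often called one-hot encoding) is easy to illustrate; here is a minimal sketch of the transform as I understand libsvm applies it, not code from the library:

```python
def one_hot(value, categories):
    # Encode one categorical value as m binary indicator features,
    # exactly one of which is 1 and the rest 0
    return [1 if value == c else 0 for c in categories]

colors = ["red", "green", "blue"]  # m = 3 possible values
one_hot("green", colors)           # -> [0, 1, 0]
```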
Another question is whether a particular implementation is deterministic and always returns the same results when run repeatedly on the same data with the same settings. I have read somewhere that liblinear internally generates random numbers during its operation, but don't take my word for it without checking the source code :)