How to load datasets for sklearn? - python

NLTK has a function, nltk.download(), for loading the datasets that come with the NLP package.

The scikit-learn documentation talks about loading datasets (http://scikit-learn.org/stable/datasets/) and fetching data from http://mldata.org/, but for the other datasets the instructions say to download them from the source.

Where should I save the data that I download from the source? Are there any other steps after saving the data in the correct directory before I can call it from my Python code?

Is there a bootstrap example, say for the 20newsgroups dataset?

I pip-installed sklearn and tried this, but got an IOError, most likely because I did not download the dataset from the source.

>>> from sklearn.datasets import fetch_20newsgroups
>>> fetch_20newsgroups(subset='train')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/sklearn/datasets/twenty_newsgroups.py", line 207, in fetch_20newsgroups
    cache_path=cache_path)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/datasets/twenty_newsgroups.py", line 89, in download_20newsgroups
    tarfile.open(archive_path, "r:gz").extractall(path=target_dir)
  File "/usr/lib/python2.7/tarfile.py", line 1678, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1727, in gzopen
    **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1705, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1574, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python2.7/tarfile.py", line 2334, in next
    raise ReadError("empty file")
tarfile.ReadError: empty file
1 answer

A network connection problem probably corrupted the source archive on your disk. Delete the 20newsgroups files and folders from the scikit_learn_data folder in your home directory and try again.

$ cd ~/scikit_learn_data
$ rm -rf 20news_home
$ rm 20news-bydate.pkz
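
After clearing the cache, calling the fetcher again should trigger a fresh download. A minimal sketch, assuming the default cache location (get_data_home() and the optional data_home argument let you check or change where scikit-learn stores downloaded datasets):

from sklearn.datasets import get_data_home, fetch_20newsgroups

# Where scikit-learn caches downloaded datasets
# (defaults to ~/scikit_learn_data, or $SCIKIT_LEARN_DATA if set).
print(get_data_home())

# Calling the fetcher again re-downloads and extracts the archive.
newsgroups_train = fetch_20newsgroups(subset='train')
print(len(newsgroups_train.data), "training documents loaded")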