These are just handy functions that fit naturally into the way their respective libraries tend to do things. The first "condenses" the information, replacing each value with an integer; the second "expands" it into more columns, allowing (possibly) more convenient access.
sklearn.preprocessing.LabelEncoder simply maps data from an arbitrary domain onto the integers 0, ..., k - 1, where k is the number of classes.
So for example
["paris", "paris", "tokyo", "amsterdam"]
can be
[0, 0, 1, 2]
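A minimal sketch of this (note that LabelEncoder assigns codes in sorted order of the classes, so the actual integers differ slightly from the illustration above):

```python
from sklearn.preprocessing import LabelEncoder

cities = ["paris", "paris", "tokyo", "amsterdam"]

le = LabelEncoder()
codes = le.fit_transform(cities)    # le.classes_ is sorted: ['amsterdam' 'paris' 'tokyo']

print(codes)                        # [1 1 2 0]
print(le.inverse_transform(codes))  # ['paris' 'paris' 'tokyo' 'amsterdam']
```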
pandas.get_dummies also takes a series with elements from some domain, but it expands it into a DataFrame with one column per distinct value; each entry is 0 or 1 depending on whether the record originally held that value. So, for example, the same
["paris", "paris", "tokyo", "amsterdam"]
will become a DataFrame with columns (get_dummies sorts the distinct values)
["amsterdam", "paris", "tokyo"]
and whose "paris" column would be the series
[1, 1, 0, 0]
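A corresponding sketch (dtype=int forces 0/1 integers; depending on your pandas version the default dtype is uint8 or bool):

```python
import pandas as pd

cities = pd.Series(["paris", "paris", "tokyo", "amsterdam"])

dummies = pd.get_dummies(cities, dtype=int)  # one 0/1 column per distinct value

print(dummies)
#    amsterdam  paris  tokyo
# 0          0      1      0
# 1          0      1      0
# 2          0      0      1
# 3          1      0      0

print(dummies["paris"].tolist())  # [1, 1, 0, 0]
```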
The main advantage of the first method is that it saves space: one integer per record instead of one column per class. The drawback is that encoding things as integers can give the impression (to you, or to a machine learning algorithm) that the ordering means something. Is "amsterdam" closer to "tokyo" than to "paris" just because 2 happens to be closer to 1 than to 0? Probably not. The one-hot view makes it explicit that no such ordering is intended.
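To make the space argument concrete, a quick back-of-the-envelope comparison (exact byte counts depend on dtypes and platform, so treat the numbers as illustrative):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

cities = ["paris", "paris", "tokyo", "amsterdam"] * 1000  # 4000 records, 3 classes

codes = LabelEncoder().fit_transform(cities)            # one integer per record
dummies = pd.get_dummies(pd.Series(cities), dtype=int)  # one column per class

print(codes.nbytes)               # 32000 on a typical 64-bit build (4000 x 8 bytes)
print(dummies.to_numpy().nbytes)  # 96000 (4000 x 3 columns x 8 bytes); grows with k
```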