It depends on the problem you are working on. If the number of categories in a variable is very large, label encoding is usually the better choice, since one-hot encoding would create too many columns. But the label encoding should be meaningful, i.e. categories that are close to each other should get similar labels. Say you are building a model with a Month feature, and your target variable is periodic: every x months, say 3 months, the trend repeats. In that case it makes little sense to use the labels 1, 2, ..., 12 for the months; it is better to use labels like 0, 1, 2, 0, 1, 2, .... So Jan is 0, Feb is 1, Mar is 2, then Apr is 0 again, and so on.
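A minimal sketch of that periodic encoding, assuming the months are stored as integers 1-12 in a pandas DataFrame (the column names here are just for illustration):

```python
import pandas as pd

df = pd.DataFrame({"month": range(1, 13)})

# With a 3-month period, month m maps to (m - 1) % 3:
# Jan(1) -> 0, Feb(2) -> 1, Mar(3) -> 2, Apr(4) -> 0, ...
df["month_label"] = (df["month"] - 1) % 3
print(df["month_label"].tolist())  # [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
```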
You can use LabelEncoder from sklearn.preprocessing for this problem, but as mentioned above it does not take care of the semantics: it just assigns arbitrary integer labels. For that, you can do the label encoding manually.
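Here is a short sketch contrasting the two, assuming the months are stored as their three-letter names and keeping the 3-month period from the example above:

```python
from sklearn.preprocessing import LabelEncoder

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]

# LabelEncoder sorts the categories alphabetically before assigning
# labels, so the result has nothing to do with the seasonal pattern:
le = LabelEncoder()
print(le.fit_transform(months))  # [2 1 4 0 5 3]

# Manual encoding that respects the 3-month period:
order = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
         "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
period_label = {m: i % 3 for i, m in enumerate(order)}
print([period_label[m] for m in months])  # [0, 1, 2, 0, 1, 2]
```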