I use the Twitter API to analyze moods, and I am trying to create a word cloud based on tweets.
Here is my code for generating the word cloud:
library(wordcloud)
wordcloud(clean.tweets, random.order = FALSE, max.words = 80, colors = rainbow(50), scale = c(3.5, 1))
Result for this: [word cloud screenshot]

I also tried this:
library(RColorBrewer)
pal <- brewer.pal(8, "Dark2")
wordcloud(clean.tweets, min.freq = 125, max.words = Inf, random.order = TRUE, colors = pal)
Result for this: [word cloud screenshot]

Did I miss something?
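One thing I was considering is computing the term frequencies myself and passing the words and counts to wordcloud() explicitly, instead of handing it the raw corpus. Something like this (untested sketch, using the clean.tweets corpus I build below):

library(tm)
library(wordcloud)
library(RColorBrewer)
tdm <- TermDocumentMatrix(clean.tweets)                     # terms as rows, tweets as columns
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)    # frequency of each term
wordcloud(words = names(freq), freq = freq, max.words = 80,
          random.order = FALSE, colors = brewer.pal(8, "Dark2"), scale = c(3.5, 0.5))

Looking at head(freq, 20) should also tell me whether min.freq = 125 in my second attempt is simply too high for this data.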
Here's how I get and clean tweets:
library(twitteR)
tweets <- searchTwitter("#hashtag", n = 5000, lang = "en", resultType = "recent")
no_retweets <- strip_retweets(tweets, strip_manual = TRUE)            # drop retweets
df <- do.call("rbind", lapply(no_retweets, as.data.frame))            # status objects -> data frame
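As a quick check, I also look at how many tweets actually survive the retweet filter:

length(tweets)          # tweets returned by the search
length(no_retweets)     # tweets left after stripping retweets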
df$text <- sapply(df$text, function(row) iconv(row, "latin1", "ASCII", sub = ""))   # strip non-ASCII characters
df$text <- gsub("(f|ht)tp(s?)://(.*)[.][a-z]+", "", df$text)                        # remove URLs
sample <- df$text
sum_txt1 <- gsub("(RT|via)((?:\\b\\w*@\\w+)+)", "", sample)   # remove "RT"/"via" plus the handle
sum_txt2 <- gsub("http[^[:blank:]]+", "", sum_txt1)           # remove any remaining URLs
sum_txt3 <- gsub("@\\w+", "", sum_txt2)                       # remove @mentions
sum_txt4 <- gsub("[[:punct:]]", " ", sum_txt3)                # replace punctuation with spaces
sum_txt5 <- gsub("[^[:alnum:]]", " ", sum_txt4)               # keep letters and digits only
sum_txt6 <- gsub("RT ", "", sum_txt5)                         # drop leftover "RT"
library(tm)
corpus <- Corpus(VectorSource(sum_txt6))
clean.tweets <- tm_map(corpus, content_transformer(tolower))
clean.tweets <- tm_map(clean.tweets, removeWords, stopwords("english"))
clean.tweets <- tm_map(clean.tweets, removeNumbers)
clean.tweets <- tm_map(clean.tweets, stripWhitespace)
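Before plotting I was also going to peek at the cleaned corpus to confirm the cleaning steps worked, roughly like this:

inspect(clean.tweets[1:3])   # print a few cleaned tweets
length(clean.tweets)         # number of documents in the corpus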
Thanks in advance!