There is a story that in the early days of AI research, when real progress was being made on "hard" logic problems with the help of mechanical theorem provers, a professor assigned one of his graduate students the "easy" problem of working out how vision works, given how much of the brain is devoted to it. Naturally, it turned out to be far more complicated than the professor expected. So no, not vision in the general sense.
If you are just starting out with AI, there are several directions to consider. The classic AI problems - logic puzzles - are solved using a mechanical theorem prover (usually written in Lisp - see here for a classic text on solving logic puzzles). If you don't want to write your own, you can get a copy of Prolog (which is essentially the same thing).
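To give a feel for what a theorem prover automates, here is a minimal sketch of forward-chaining inference in Python. The rule format and the syllogism example are my own illustration, not taken from any particular text; a real prover also handles variables and unification, which this toy omits.

```python
# Toy forward-chaining inference: the simplest flavor of the reasoning a
# mechanical theorem prover (or Prolog) performs. Facts are plain strings
# and each rule is (set_of_premises, conclusion) - illustrative only.

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived; return the closure."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Classic syllogism: Socrates is a man; all men are mortal.
rules = [({"man(socrates)"}, "mortal(socrates)")]
derived = forward_chain({"man(socrates)"}, rules)
print("mortal(socrates)" in derived)  # True
```

In Prolog the same step is one line (`mortal(X) :- man(X).`), which is why grabbing a Prolog implementation saves you from building this machinery yourself.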
You could also go with pattern recognition problems, though you'll want the initial problems to be fairly simple so you don't get bogged down in details. My dissertation involved using stochastic processes to recognize characters floating freely in space, so I'm partial to that approach (don't start with stochastic processes, though, unless you really like math). Right next door is the subfield of neural networks. It is popular because it is hard to learn about NNs without building interesting projects along the way. Across the whole area (pattern processing), the great thing is that you can solve real problems rather than toy puzzles.
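As a taste of how simple a first pattern-recognition project can be, here is a perceptron sketch - the single-neuron ancestor of neural networks - trained to tell vertical bars from horizontal ones on 3x3 "images". The data and labels are invented for illustration; this is a sketch of the classic mistake-driven update rule, not any particular library's API.

```python
# Minimal perceptron: learn weights w and bias b so that
# sign(w.x + b) matches the labels. Training data is made up.

def train_perceptron(samples, epochs=20, lr=1.0):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:          # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:             # mistake-driven update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# +1 = vertical bar, -1 = horizontal bar (flattened 3x3 grids)
samples = [
    ([0,1,0, 0,1,0, 0,1,0],  1),   # centre vertical bar
    ([1,0,0, 1,0,0, 1,0,0],  1),   # left vertical bar
    ([1,1,1, 0,0,0, 0,0,0], -1),   # top horizontal bar
    ([0,0,0, 1,1,1, 0,0,0], -1),   # middle horizontal bar
]
w, b = train_perceptron(samples)
print(all(predict(w, b, x) == y for x, y in samples))  # True
```

A perceptron can only draw a straight line through the data, which is exactly why multi-layer neural networks (and, for sequence data, stochastic models) exist - that limitation is a good thing to discover first-hand in a starter project.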
Many people love Natural Language Processing, since it is easy to get started with yet almost infinitely hard. One classic assignment is to build an NLP program that handles language in a restricted domain (for example, discussing a chess game). That makes progress easy to see, while still being hard enough to fill a semester.
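To show why the restricted-domain trick works, here is a toy chess-talk parser: a single pattern covers a surprising number of sentences once the domain is narrow. The phrasings and the `parse_move` name are my own invention - a semester project would grow this into a proper grammar handling captures, castling, pronouns, and so on.

```python
# Toy restricted-domain NLP: map chess-talk sentences to structured
# (piece, square) commands with one regular expression. Illustrative only.
import re

PIECES = "pawn|knight|bishop|rook|queen|king"
PATTERN = re.compile(
    rf"(?:please\s+)?move\s+(?:the\s+)?(?P<piece>{PIECES})\s+to\s+(?P<square>[a-h][1-8])",
    re.IGNORECASE,
)

def parse_move(sentence):
    """Return (piece, square) for a move command, or None if unparsable."""
    m = PATTERN.search(sentence)
    if not m:
        return None
    return (m.group("piece").lower(), m.group("square").lower())

print(parse_move("Please move the knight to f3"))  # ('knight', 'f3')
print(parse_move("Castle on the kingside"))        # None
```

Progress is easy to measure (how many sentences parse?), and every sentence that falls through - like the castling example - points at the next piece of work, which is what makes this a satisfying semester-sized problem.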
Hope this gives you some ideas!
Mark Brittingham