No single metric is more important than the rest; to evaluate a model properly you should consider several of them together. That said, accuracy (the number of correct predictions divided by the total number of instances evaluated) is the most common metric for measuring the performance of classification models, but it is not always the best one. For instance, with a two-class model whose classes are imbalanced (say the "true" class has 850 instances and the "false" class has 150), you can get high accuracy simply by assigning every instance to the larger class, i.e. predicting "true" all the time. Since such a classifier is clearly not useful, you need to take other metrics into account as well, such as precision, recall, the F-measure (or F-score), and the phi coefficient.
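
As a quick sketch of the problem, here is a plain-Python illustration using the same 850/150 split from the example: a degenerate classifier that always predicts the majority class scores 85% accuracy, yet its recall on the minority "false" class is zero (the class names and counts are taken from the text; the metric formulas are the standard ones):

```python
# Imbalanced two-class data as in the example: 850 "true", 150 "false".
y_true = ["true"] * 850 + ["false"] * 150
# A degenerate classifier that always predicts the majority class.
y_pred = ["true"] * 1000

# Accuracy: correct predictions over total instances.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Precision/recall/F1 for the minority "false" class,
# which this classifier never predicts.
tp = sum(t == "false" and p == "false" for t, p in zip(y_true, y_pred))
fp = sum(t != "false" and p == "false" for t, p in zip(y_true, y_pred))
fn = sum(t == "false" and p != "false" for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

print(accuracy)  # 0.85 -- looks good on its own
print(recall)    # 0.0  -- the "false" class is never detected
```

The 85% accuracy looks respectable in isolation; only the per-class metrics reveal that the classifier is useless for the minority class.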