Evaluating your model is key to making good, accurate predictions. Decision tree models are built on a subset of the total data, called the training data, and are used to predict on new data that was not part of this training subset. On the one hand, if a model is totally adapted to its training data, it will fail to predict accurately on any new data that does not look exactly like the training set (a phenomenon known as overfitting). On the other hand, if your model is too general, it will predict poorly on particular cases (underfitting). A good model should balance both extremes. By holding out part of your data from the training set and evaluating your model with this subset of test data, you will be able to measure the real performance of your model when a new case appears. Test data, of course, should never be part of the training data for the evaluation to be meaningful.
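To make the holdout idea concrete, here is a minimal sketch in plain Python (the function name and parameters are ours, purely for illustration) that sets aside a random 20% of the rows as test data:

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Hold out a random fraction of the rows as test data."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    shuffled = rows[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]   # (training rows, test rows)

train_rows, test_rows = train_test_split(list(range(100)))
print(len(train_rows), len(test_rows))      # 80 20
```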
With BigML you can measure your model's performance using different evaluation metrics, which vary for classification models and for regression models.
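As an illustration of what such metrics compute (a minimal sketch, not BigML's implementation), the snippet below evaluates one common metric of each kind: accuracy for classification and mean absolute error for regression.

```python
def accuracy(y_true, y_pred):
    """Classification: fraction of predictions matching the true label."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def mean_absolute_error(y_true, y_pred):
    """Regression: average absolute difference between prediction and truth."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Classification: 3 of 4 labels match -> 0.75
print(accuracy(["yes", "no", "yes", "no"], ["yes", "no", "no", "no"]))
# Regression: mean of |errors| = (1 + 0 + 2) / 3 -> 1.0
print(mean_absolute_error([3.0, 5.0, 7.0], [4.0, 5.0, 5.0]))
```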
BigML offers a train/test split menu option that generates an 80% / 20% split of your dataset: the former can be used to train your model and the latter to test it. Please read this other question to learn more about the evaluation process.
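The same split and evaluation can also be reproduced programmatically. The sketch below uses BigML's Python bindings and assumes your credentials are in the BIGML_USERNAME and BIGML_API_KEY environment variables; the file name "iris.csv" and the seed string are placeholders. The complementary 20% test split is obtained by reusing the same sampling seed with out_of_bag set to true.

```python
from bigml.api import BigML

# Credentials read from BIGML_USERNAME / BIGML_API_KEY (assumption).
api = BigML()

# Upload the raw data and turn it into a dataset.
source = api.create_source("iris.csv")
api.ok(source)
dataset = api.create_dataset(source)
api.ok(dataset)

# 80% training split; the fixed seed makes the sampling deterministic.
train = api.create_dataset(
    dataset, {"sample_rate": 0.8, "seed": "my seed"})
# The complementary 20%: same seed, taking the rows left out of the bag.
test = api.create_dataset(
    dataset, {"sample_rate": 0.8, "seed": "my seed", "out_of_bag": True})
api.ok(train)
api.ok(test)

# Train on the 80% split, then evaluate on the held-out 20%.
model = api.create_model(train)
api.ok(model)
evaluation = api.create_evaluation(model, test)
api.ok(evaluation)
print(evaluation["object"]["result"])
```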