BigML lets you generate an 80% / 20% split of your dataset with just one click, so that you can train your model with the 80% subset and test it with the remaining 20%. You can also split your dataset differently if you prefer.
Follow the evaluation process through the BigML Dashboard using the 80% / 20% automatic split. If you prefer the BigML API, click here to learn how to split your dataset, and visit this section to dive into the evaluations documentation.
Once your source is uploaded to BigML and a dataset has been created, you can split it into two parts, the training data and the test data, as shown below:
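If you are scripting the same split through the API instead, the sketch below uses the BigML Python bindings. The file name data.csv, the dataset names, and the seed value are placeholders; the idea is that sampling 80% with a fixed seed, and then requesting the out-of-bag rows with the same rate and seed, yields complementary training and test datasets.

```python
from bigml.api import BigML

# Authenticates with the BIGML_USERNAME / BIGML_API_KEY environment variables
api = BigML()

# Upload the source and build the full dataset ("data.csv" is a placeholder)
source = api.create_source("data.csv")
api.ok(source)
dataset = api.create_dataset(source)
api.ok(dataset)

# Deterministic 80% sample for training: fixed seed, in-bag rows
train_dataset = api.create_dataset(dataset, {
    "name": "Training (80%)",
    "sample_rate": 0.8,
    "seed": "my split seed"})

# Complementary 20% for testing: same rate and seed, out-of-bag rows
test_dataset = api.create_dataset(dataset, {
    "name": "Test (20%)",
    "sample_rate": 0.8,
    "seed": "my split seed",
    "out_of_bag": True})
api.ok(train_dataset)
api.ok(test_dataset)
```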
Next, build your model using the training dataset. Once it is ready, you can evaluate it by clicking EVALUATE in the 1-click action menu:
The next screen asks you to select the dataset to test against. By default, BigML picks the 20% subset that was set aside at the beginning of the process. If you prefer to test with different data (as long as the number and order of its fields match those used to train your model), start typing the dataset's name and BigML will find it for you, provided that dataset has already been uploaded to your Dashboard. Then click the Evaluate button:
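If you prefer to run these two steps through the API, here is a minimal sketch with the BigML Python bindings: one call builds the model on the training data and another creates the evaluation against the test data. The dataset IDs are placeholders for the 80% and 20% datasets produced by the split above.

```python
from bigml.api import BigML

api = BigML()

# Placeholders: substitute the IDs of the 80% and 20% datasets from the split
train_dataset = "dataset/<training-dataset-id>"
test_dataset = "dataset/<test-dataset-id>"

# Build the model on the training data
model = api.create_model(train_dataset)
api.ok(model)

# Evaluate it against the held-out test data
evaluation = api.create_evaluation(model, test_dataset)
api.ok(evaluation)
```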
Next, BigML compares each predicted outcome to the actual value in your test data. The result is your evaluation, shown below:
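The same metrics can be read programmatically from the finished evaluation resource. A sketch with the Python bindings follows; the evaluation ID is a placeholder, and the key paths under result (for example result -> model -> accuracy) are assumptions to verify against the evaluations documentation.

```python
from bigml.api import BigML

api = BigML()

# Placeholder: substitute the ID of the evaluation created above
evaluation = api.get_evaluation("evaluation/<evaluation-id>")
api.ok(evaluation)

# The key paths below follow the evaluation resource layout as assumed here;
# check the evaluations documentation for the full list of available metrics.
result = evaluation["object"]["result"]
print("Accuracy:          ", result["model"]["accuracy"])
print("Average precision: ", result["model"]["average_precision"])
print("Average recall:    ", result["model"]["average_recall"])
print("Average F-measure: ", result["model"]["average_f_measure"])
```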
Please click here to see the evaluation metrics for classification models, or read the evaluations chapter of the BigML Dashboard documentation.