The importance is also known as relative error reduction. For each split in a model, BigML keeps an estimate of the prediction error that the split helps reduce (BigML uses these same estimates when pruning). To compute the importance, BigML sums the error reductions for every split, grouped by field, yielding a total error reduction per field. BigML then normalizes those totals so they sum to one, and the normalized values are the importances. This is conceptually similar to Breiman's Gini importance (popular with random forests), except that BigML does not use Gini as the error metric.
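The per-field aggregation and normalization described above can be sketched as follows. This is an illustrative example only, not BigML's actual implementation; the function name, the `(field, error_reduction)` pair format, and the sample field names are all assumptions for demonstration.

```python
# Illustrative sketch (not BigML's implementation): sum each split's
# estimated error reduction per field, then normalize so the importance
# values total one.
from collections import defaultdict

def field_importance(splits):
    """splits: iterable of (field_name, error_reduction) pairs,
    one pair per split in the tree (hypothetical structure)."""
    totals = defaultdict(float)
    for field, reduction in splits:
        totals[field] += reduction          # group error reduction by field
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {field: 0.0 for field in totals}
    # normalize so the importances sum to one
    return {field: value / grand_total for field, value in totals.items()}

# Hypothetical splits: "age" appears in two splits, so its reductions add up.
splits = [("age", 0.30), ("income", 0.15), ("age", 0.10), ("zip", 0.05)]
print(field_importance(splits))
```

Here "age" accumulates 0.40 out of a total 0.60 of error reduction, so its importance is about 0.667, with "income" at 0.25 and "zip" at roughly 0.083.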
If you are using the BigML API, please read this section to learn more about the importance property. From the BigML Dashboard, you can see which fields are most important by clicking the Model Summary Report menu option. A small window like the one below will then display the field importances.
Please read this blog post to learn more about field importance in decision tree models.