- Predictions requested via the API are asynchronous, so you can check the status of a prediction the same way you would for any other BigML.io resource. For batch predictions, you can download a CSV file with all the predictions. For single predictions, we advise you to visit the BigML Dashboard and navigate to the prediction view, or retrieve the prediction's JSON.
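The asynchronous flow above can be sketched as a simple polling loop. In this sketch, `fetch_resource` is a hypothetical stand-in for an HTTP GET on the prediction's URL, and the status codes follow BigML's documented convention (5 = FINISHED, -1 = FAULTY); the simulated responses are illustrative only.

```python
# Sketch of polling an asynchronous BigML resource until it finishes.
# FINISHED (5) and FAULTY (-1) follow BigML's documented status codes;
# fetch_resource is a hypothetical callable standing in for an HTTP GET
# on the prediction's URL (e.g. via the requests library).

FINISHED = 5
FAULTY = -1

def wait_until_done(fetch_resource, max_polls=10):
    """Poll the resource until its status code is FINISHED or FAULTY."""
    for _ in range(max_polls):
        resource = fetch_resource()
        if resource["status"]["code"] in (FINISHED, FAULTY):
            return resource
    raise TimeoutError("resource did not finish in time")

# Simulated responses: queued, in progress, then finished.
responses = iter([
    {"status": {"code": 1}},   # QUEUED
    {"status": {"code": 3}},   # IN_PROGRESS
    {"status": {"code": 5}, "output": "Iris-setosa"},  # FINISHED
])
done = wait_until_done(lambda: next(responses))
print(done["output"])  # Iris-setosa
```

In a real client you would sleep between polls and build `fetch_resource` from the prediction's URL plus your credentials.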
- The BigML PredictServer is a dedicated cloud image that you can deploy to make blazingly fast predictions easily. It can process thousands of predictions per second against your trained BigML model or ensemble. The PredictServer is a great solution for customers who need predictions in real time or in large batches; it is especially fast because it keeps your models in its cache.
Did you know you can also make predictions in your local environment? See this question for more information.
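To give a feel for local prediction, here is a toy sketch: a heavily simplified decision "tree" in the spirit of a BigML model's JSON, evaluated locally with no API calls. The `toy_tree` structure and field names are invented for illustration; the real bigml Python bindings handle the actual model format for you.

```python
# Illustrative only: a toy decision tree, loosely modeled on a BigML
# model's JSON, predicted locally without any API call. The structure
# and split values are invented for this example.

toy_tree = {
    "children": [
        {"predicate": {"field": "petal length", "operator": "<=", "value": 2.45},
         "output": "Iris-setosa", "children": []},
        {"predicate": {"field": "petal length", "operator": ">", "value": 2.45},
         "output": "Iris-versicolor", "children": []},
    ],
    "output": "Iris-setosa",  # fallback prediction at the root
}

def matches(predicate, row):
    """Check whether a row satisfies a node's split predicate."""
    value = row[predicate["field"]]
    if predicate["operator"] == "<=":
        return value <= predicate["value"]
    return value > predicate["value"]

def predict(node, row):
    """Walk down the tree, following the first matching child."""
    for child in node["children"]:
        if matches(child["predicate"], row):
            return predict(child, row)
    return node["output"]

print(predict(toy_tree, {"petal length": 1.4}))  # Iris-setosa
```

Because the whole model lives in memory, each prediction is just a tree traversal, which is why local predictions avoid API latency entirely.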