Bagging (or Bootstrap Aggregating) trains each model in the ensemble on a different random sample of the original dataset. Specifically, BigML uses by default a sampling rate of 100% with replacement for each model; this means that some of the original instances will be repeated in a given sample and others left out. Random Decision Forests extend this technique by also considering only a random subset of the input fields at each tree split. Generally, Random Decision Forests are the most powerful type of ensemble, but for datasets with many noisy fields you may need to adjust the forest's "random candidates" parameter (the number of fields considered at each split) to get good results. Bagging has no such parameter and may occasionally give better out-of-the-box performance.
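To make the sampling concrete, here is a minimal Python sketch of the two strategies; it uses NumPy and scikit-learn purely for illustration and is not BigML's implementation. Bagging draws a 100% bootstrap sample per tree, while a random-forest-style ensemble additionally restricts each split to a random subset of fields (exposed in scikit-learn's trees as `max_features`, playing the role of "random candidates"):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

def bagged_trees(X, y, n_models=10, random_candidates=None):
    """Train an ensemble of trees, each on a 100% bootstrap sample.

    random_candidates=None -> plain bagging (all fields at every split)
    random_candidates=k    -> random-forest style: k random fields per split
    """
    n = len(X)
    models = []
    for _ in range(n_models):
        # Sample n rows *with replacement*: some rows repeat, others are left out.
        idx = rng.integers(0, n, size=n)
        tree = DecisionTreeClassifier(max_features=random_candidates)
        tree.fit(X[idx], y[idx])
        models.append(tree)
    return models

def predict(models, X):
    # Majority vote across the per-tree predictions
    # (class labels assumed to be integer-coded for bincount).
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), axis=0, arr=votes)
```

With this sketch, `bagged_trees(X, y, random_candidates=3)` roughly corresponds to a random decision forest that considers 3 randomly chosen fields per split, while omitting the parameter corresponds to plain bagging.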
The Boosting Ensemble technique is significantly different. With Boosted Trees, tree outputs are added together rather than averaged (or decided by majority vote). Individual trees in a Boosted Tree also differ from trees in bagged or random forest ensembles in that they do not try to predict the objective field directly. Instead, each tree fits a "gradient" that corrects the mistakes made in previous iterations. This technique, where each new tree improves on the imperfect predictions of the trees grown before it, lets you predict both categorical and numeric fields.
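The additive, residual-fitting loop is easy to sketch. Below is a minimal gradient-boosting sketch for a numeric objective under squared-error loss, again in illustrative Python with scikit-learn rather than BigML's actual implementation; the learning rate and tree depth are assumed illustrative parameters. With squared error, the negative gradient each tree must fit is simply the current residual:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosted_trees(X, y, n_rounds=50, learning_rate=0.1, max_depth=3):
    """Gradient boosting for regression with squared-error loss.

    Each round fits a small tree to the negative gradient of the loss,
    which for squared error is just the residual y - current_prediction.
    """
    # Start from a constant prediction: the mean of the objective field.
    base = y.mean()
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        residuals = y - pred                      # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)                    # tree predicts the correction...
        pred += learning_rate * tree.predict(X)   # ...which is added, not averaged
        trees.append(tree)
    return base, trees

def boosted_predict(base, trees, X, learning_rate=0.1):
    # Sum the scaled tree outputs on top of the constant base prediction.
    return base + learning_rate * sum(t.predict(X) for t in trees)
```

For a categorical objective, the same loop applies a gradient of a classification loss (such as log loss) per class, which is how boosting can handle categorical fields even though the individual trees fit numeric gradients.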
Feel free to dive deeper into the three types of ensembles BigML offers, and their pros and cons, in the Dashboard documentation or the documentation for developers.