BigML grows a random decision tree until each instance sits in its own leaf. That is, the model keeps splitting until every data point has been isolated in a separate node. An instance that ends up at the bottom of the tree required many splits to isolate; these are the points that are hard to tell apart from the rest. Conversely, instances near the top of the tree required very few splits, meaning they are easy to separate. BigML therefore uses the depth of each instance in the tree as a metric and repeats the process hundreds of times to find out whether each instance has been consistently easy to isolate and is therefore likely to be anomalous. BigML represents this by transforming the average depth into a number between 0 (similar) and 1 (anomalous), considering anomaly scores greater than 0.60 as highly likely anomalous data points.
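The mapping from average depth to a 0-to-1 score follows the same idea as the standard isolation forest normalization (Liu et al.). The sketch below illustrates that transformation in Python; the function names, constants, and example depths are illustrative assumptions, not BigML's internal code.

```python
import math

def c(n):
    # Average path length of an unsuccessful binary-search-tree lookup over n
    # points; the standard isolation forest constant used to normalize depths.
    if n <= 1:
        return 0.0
    harmonic = math.log(n - 1) + 0.5772156649  # Euler-Mascheroni approximation
    return 2.0 * harmonic - 2.0 * (n - 1) / n

def anomaly_score(average_depth, sample_size):
    # Map the average isolation depth of an instance across many random trees
    # to a score in (0, 1): shallow (easy to isolate) -> close to 1,
    # deep (hard to isolate) -> close to 0.
    return 2.0 ** (-average_depth / c(sample_size))

# An instance isolated after very few splits scores high ...
print(anomaly_score(average_depth=4.0, sample_size=256))   # ~0.76 -> likely anomalous
# ... while one that needed many splits scores low.
print(anomaly_score(average_depth=16.0, sample_size=256))  # ~0.34 -> similar to the rest
```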
Anomaly Detectors (also called anomalies) do not need labeled data, as some less versatile anomaly detection methods do. They are scalable, competitive, and almost parameter-free. Anomalies can handle missing data and categorical fields, and they explain which fields contributed most to an anomaly. No data rescaling or distance metric is required.
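As a quick orientation, here is a minimal sketch using the BigML Python bindings to create an anomaly detector and score a new instance. The CSV path and field name are placeholders, and the exact resource fields may differ; consult the developer documentation for the authoritative workflow.

```python
from bigml.api import BigML

# Reads BIGML_USERNAME and BIGML_API_KEY from the environment.
api = BigML()

# Build an anomaly detector from raw data (no labels needed).
source = api.create_source("data/my_data.csv")   # placeholder path
api.ok(source)
dataset = api.create_dataset(source)
api.ok(dataset)
anomaly = api.create_anomaly(dataset)
api.ok(anomaly)

# Score a new instance; missing and categorical values are handled.
score = api.create_anomaly_score(anomaly, {"field1": "some value"})  # placeholder field
api.ok(score)
print(score["object"]["score"])  # between 0 and 1; > 0.60 suggests an anomaly
```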
Please read our Dashboard documentation or our developer documentation for more details.