Files are uploaded to BigML in their entirety, but only their first lines are analyzed to infer the data structure so that the file can be parsed correctly. The parsing information is stored in the source created in BigML and applies to the entire file. However, you can work with a smaller part of your data by sampling it when you create the corresponding dataset. For instance, suppose you are on the Standard subscription, where each task cannot be larger than 64 MB; in that case, you need to use only a portion of your source file.
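As a quick back-of-the-envelope sketch of the situation above: given the 64 MB per-task limit mentioned for the Standard subscription, you can estimate what fraction of a source file fits under the limit. The source file size below is a made-up example value, not one from the text.

```python
# Sketch: estimating what fraction of a source fits under a per-task size limit.
# The 64 MB limit comes from the Standard subscription described in the text;
# the 200 MB source size is a hypothetical example.
TASK_LIMIT_BYTES = 64 * 1024 * 1024     # 64 MB per-task limit (Standard plan)
source_size_bytes = 200 * 1024 * 1024   # hypothetical 200 MB source file

# Fraction of the source you could keep so the resulting task stays under the limit
fraction = min(1.0, TASK_LIMIT_BYTES / source_size_bytes)
print(f"usable fraction: {fraction:.2%}")
```

With these example numbers, only about a third of the file can be used per task, which is the kind of reduction the slider or API argument described below achieves.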
BigML lets you decide the percentage of data used to perform any task, through either the BigML Dashboard or the API. If you are using the Dashboard to create a dataset, you can reduce your source by moving the highlighted slider in the image below, which you can reach from the source view by clicking the Configure dataset icon:
If you are using the BigML API instead, check the argument you need to set.
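As a hedged sketch of what that API call might look like (the argument name `size`, the call shape, and the source id are assumptions here; check the BigML API documentation for the exact names), the idea is to pass a byte limit in the dataset-creation arguments so only part of the uploaded source is used:

```python
# Hedged sketch: building the arguments for creating a dataset from only
# part of a source. "size" (number of bytes of the source to use) is an
# assumed argument name; verify it against the BigML API documentation.
task_limit_bytes = 64 * 1024 * 1024  # Standard-plan per-task limit from the text

dataset_args = {"size": task_limit_bytes}

# With the BigML Python bindings, this would then be passed along the lines of:
#   api.create_dataset("source/<id>", dataset_args)   # hypothetical source id
print(dataset_args)
```

The point is simply that the same reduction the Dashboard slider performs is expressed as one extra key in the creation arguments.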
Note that once you have created a dataset, you can also use just a sample of it to create other BigML resources. For more information, see the Sampling Datasets chapter of the BigML Dashboard documentation, or the Sampling subsection of the documentation for developers.