Data Collection, Preparation & Preprocessing in ML

Data preparation, quality reviews, and formatting are precursors to successful machine learning (ML) efforts. In my experience, clients consistently underestimate the effort and time required to get datasets ready for use on an ML project. Data-related issues range from security and access to quality, quantity, low predictive power, and validating the meaning of the data. In this post I will talk about these issues, along with some others, and offer ways to mitigate and solve them. During a project it is sometimes useful to think of feature engineering as a sub-phase within the data collection and prep phase, but I have also seen feature engineering treated as a separate and distinct step in the ML project life cycle, primarily because of the specialized skills and algorithms required. For the purposes of this blog post I am not going to cover feature engineering.

I have an earlier post on the business aspects of ML here. In this post I will discuss the more technical aspects of machine learning. As an engineer I am more interested in the technical aspects of a project, but it is always the non-technical issues that are the most time consuming to overcome and that take a significant amount of planning and preparation. I think about the process of delivering ML results in four very broad categories:

  • Data Planning - raw data analysis, determining what data is needed and its quality, gaining access, and carrying out data preparation and any associated preprocessing

  • Feature Analysis - numeric representation of the raw data, feature engineering

  • Model Development and Training - a mathematical or statistical analysis of the features and training of the model

  • Implementation and Maintenance - developing models and underlying processes that are executable on a regular basis, getting your models and the associated processes into a production ready state and maintaining them over time. Building scalable and repeatable processes.

I will talk about the first broad category in this post, that is, raw data analysis and data planning. This very first step in your ML journey is often the most overlooked, underappreciated, and underestimated (in terms of time required) of all the categories mentioned. Implementing a robust and scalable machine learning practice in a large enterprise takes careful data planning up front. Most teams want to dive into the more interesting work of feature analysis, model development, and training, but getting the right data, with the proper level of quality, at the right time is key to the success of the entire effort. This phase will always take longer and be more difficult than you first estimate.

What Did My Machine Learn?

The reason it is important to focus on what data to use, and to ensure you get data from broad and wide-ranging sample sets, is that the machine may not be learning what we humans think it is learning, even if it gives you the correct answer. In her book Artificial Intelligence: A Guide for Thinking Humans (which I highly recommend), Melanie Mitchell discusses how one of her graduate students trained a neural network to classify images into one of two categories - those with animals and those without. There was a depth-of-field difference between the two groups: animal images tend to have a shallow depth of field with a fuzzy background, whereas landscapes or pictures without animals tend to have a much greater depth of field, with foreground to deep background in sharp focus. The neural net did very well with this task, except that the humans thought it was learning to discriminate between animals and non-animals; further testing indicated that what the machine had in fact learned was to categorize images into two groups, one with blurry backgrounds and one with in-focus backgrounds. There are numerous examples of this kind of thing happening in machine learning. The important point to remember is that understanding and reviewing your data, looking for diverse datasets, and being attentive to overfitting are key steps in the use of machine learning.

What Data is Required

Generally, clients know what data they want to use to solve a given machine learning problem. I say generally because what is usually well known are the obvious choices - if we are working on a problem to increase sales, we will want sales data, customer data, and marketing data. Determining the less obvious choices that may help us answer the questions we are interested in is the hard work. What about external data - will we benefit from it, and if so, which data sources are valid and reliable? For the most complete model and the best outcomes, you should look for diverse data sources - accessed from multiple places (internal and external), across business domains, and at various points in time - as this will aid in developing robust and accurate ML models. Once deployed to production, the machine learning algorithms will need to continuously read large, diverse data sets to keep the model results fresh and accurate over time, so you will have to be mindful of securing a steady flow of data.

There is no algorithm you can run to determine which data sources will give you the best outcomes. This information comes from the ML analyst’s experience and from leveraging domain experts inside and outside your company. A few sessions with experts can generate ideas for new data sources that can be ranked and prioritized for deeper technical analysis and feasibility. At this stage it is a good idea not to limit thinking but to be open to possibilities, even if there are issues. You can easily discard candidates that are not feasible at this time, or put those data sources on your product roadmap for future model development. During one project’s data analysis session we uncovered non-obvious ideas for external data sources that were not immediately germane to the problem space, such as weather data, government housing and income data, and census information. Over the years we have developed an internal set of job aids that help us document, score, rate, and prioritize data sources for analysis. Getting a tool for this is a wise investment - it will help you organize your thoughts related to data sources and assist in tracking issues associated with those sources. You will see how this tool is leveraged further in the sections below.

Data Availability & Definition

As you develop your list of candidate data sources you must determine whether the data exists, whether you can get access, whether there is a cost involved, and whether the data is available in the timeframe you need it. There may be security issues related to the data you are interested in; hopefully your organization has done a privacy impact assessment (PIA) to identify the required privacy protections for the organization’s data. This should provide you with the security information you require. You must determine if you are allowed to use the data for the intended purpose. For example, some pseudonymization or anonymization would be required if there was PII (personally identifiable information) in your data sets. Data masking or removal methods will then need to be employed. I typically prefer data masking because removal can often reduce the overall usefulness of your data. The two most common techniques used to mask data are:

  • Substitution cipher - each occurrence of restricted data is replaced by its hashed and encrypted value. You will need to use a salted secure hash algorithm to prevent repeatable values

  • Tokenization - a non-sensitive equivalent, or token, is substituted for the restricted data. The token is a pointer that relates back to the sensitive data through the tokenization system. It is generally considered more secure than a substitution cipher.

Depending on the technology/tool sets you are using, these algorithms are fairly straightforward to implement (a minimal sketch of salted-hash substitution follows below). There are also readily available tools on the market for this purpose.
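To make the substitution-cipher idea concrete, here is a minimal sketch in Python using the standard hashlib and secrets libraries, assuming a hypothetical pandas DataFrame with an "email" column holding PII; in practice the salt would be managed as a protected secret, or you would use one of the commercial masking/tokenization tools mentioned above.

    import hashlib
    import secrets

    import pandas as pd

    # Hypothetical example data containing PII in the "email" column.
    df = pd.DataFrame({
        "email": ["alice@example.com", "bob@example.com"],
        "purchases": [3, 7],
    })

    # A random salt; in practice this would be stored and managed as a secret.
    SALT = secrets.token_hex(16)

    def mask_value(value: str, salt: str = SALT) -> str:
        """Replace a restricted value with its salted SHA-256 hash."""
        return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

    df["email"] = df["email"].apply(mask_value)
    print(df)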

Licensing restrictions and access to third-party data always carry a long lead time. Whenever I work on projects that require access to third-party data, the time it takes is directly related to the size of the company seeking the license. The larger the company, the longer it takes. More lawyers, more restrictions, and more issues about IP all take time to sort out - make sure this is in your planning.

What do you know about the data - are the data and metadata defined? You need to understand the formats (structured and unstructured), frequency of refresh, locations, owners, quality indexes, and so on. Will this data be available long term, and can you get access to the data you need for the timeframe of the project?

If you work in an organization that has well-defined data ownership tools and processes you are in luck, as this will save you hours of work. If you don’t have that luxury then a poor man’s approach may be needed, but either way a data analysis plan and output that takes into consideration all the items discussed here needs to be maintained; this will prove invaluable as the project progresses. The data information (including metadata) that is documented during your machine learning project should be formally recorded and saved in the appropriate repositories for later use. During this phase, I also like to document project-related information - data definitions, metadata, why some data was included in or excluded from the analysis, data imputation details, etc.

Data Quality

When people think of poor quality data they are typically thinking of noise in the data. That is, the data is corrupted in some way: images are poor (blurry), there are formatting issues such as lost spaces in text or misplaced decimal points in numbers, data can be missing, or audio may be incomprehensible and unable to be transcribed. Depending on the extent of the missing data and the size of the data set, it can be acceptable to ignore missing data and let the algorithm sort out the issue. However, if the data set is relatively small (in the thousands of samples) you run the risk of overfitting, and the data set will need to be fixed or you can end up modeling the noise within the data. In a big data environment, if the noise is random then the law of averages will take over and missing values will be averaged out. Alternatively, if the proportion of missing observations is small with respect to the overall size of the data set, the affected observations can be removed entirely. I try to avoid deletion as it tends to be overused and biased estimates can easily slip into the process.

Data imputation techniques are used by the data analyst to infer the missing attributes. The most common imputation methods are listed below:

  • Mean or Median Value - calculating the mean (or median) of the existing values for a feature and using the calculated value as a replacement for the missing value is by far the fastest and easiest method to infer missing attributes. The primary drawbacks of this method are that it cannot be used on categorical data, there is no way to account for correlations between features, and it is not always the most accurate method.

  • Mode Value - imputing the data based on the mode is also a fast and easy method to infer missing values, and it can be used on categorical data. Mode has the downside of potentially adding bias to the data set. A related imputation method is the creation of a new category such as “missing”, which lets you keep track of the data you have changed and limits bias.

  • Random Sample - if your data is normalized and the values are missing at random, a quick way to impute is to randomly select values from the existing attribute values to fill in the missing ones.

  • kNN - k-nearest neighbors is usually thought of as a classification algorithm, but the process can also be used to impute missing values: the samples in the training set nearest to the record with the missing value are found, and these nearby points are averaged to fill in the gap. In other words, the kNN imputation algorithm is a donor-based method where the imputed value is the average of measured values from the k nearest records.

    This is an enormously important area and a lot could be written, but I want to spend some time discussing two of the hyperparameters that get tuned when using kNN for imputation. Selecting the optimal value for k and an appropriate distance metric is crucial for the data scientist, as they balance under- and overfitting. In a subsequent post I will write more about the mathematics behind the kNN classifier, but to give you a feel for the importance of the value of k, see Figure 1 below.

[Figure 1: kNN classification of an unknown point “?” with k = 3 vs. k = 7]

In this somewhat extreme example you can see how setting k to 3 versus 7 changes the classification of an unknown value, represented here with a “?”.

In a recent post I talked about churn modeling and how we tuned customer marketing treatments based on new external data sources; we used kNN in that analysis.


Implementation Advice: Python implementations of kNN for imputation are best done using the machine learning library scikit-learn. The KNNImputer class provides a way to complete missing data values based on the kNN algorithm, allowing you to specify a value for k (n_neighbors) and a distance metric (default nan_euclidean). In another post I will discuss the different distance measures for kNN algorithms.
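As a minimal sketch, assuming a small hypothetical DataFrame with missing values, this is how KNNImputer can be used, with SimpleImputer included for comparison with the mean imputation method described above.

    import numpy as np
    import pandas as pd
    from sklearn.impute import KNNImputer, SimpleImputer

    # Hypothetical data with missing values (np.nan).
    df = pd.DataFrame({
        "age":    [25, 32, np.nan, 41, 38],
        "income": [48000, np.nan, 61000, 72000, 58000],
    })

    # Mean imputation - the fast and simple method described above.
    mean_imputed = SimpleImputer(strategy="mean").fit_transform(df)

    # kNN imputation - each missing value is the average of the k nearest rows,
    # measured with the nan_euclidean distance (the default metric).
    knn_imputed = KNNImputer(n_neighbors=2, metric="nan_euclidean").fit_transform(df)

    print(mean_imputed)
    print(knn_imputed)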

Data Sampling

Earlier in my career it was common to deal with the issue of too little data as opposed to too much data; now the situation is reversed. We are experiencing a data explosion - a rapid increase in information availability (Wikipedia). By one estimate (Statista), the volume of data/information created, captured, copied, and consumed worldwide has increased from 2 zettabytes in 2010 to 41 zettabytes in 2019 and is expected to reach 181 zettabytes in 2025. Recall, a zettabyte is 1E+9 terabytes. This huge influx of data is driving the need for data sampling within ML projects. It is not necessary, nor is it very efficient, to work with all the data in the data sets that may be of interest, so you will need to become familiar with data sampling strategies. Formally, sampling is a statistical process you employ to select a subset of objects from the larger population; this subset defines your sample or observation set. Like many topics in AI and ML, data sampling can become quite nuanced, but for most practical purposes you will use one of the three sampling methods below. These are probabilistic sampling methods, where each data value has a chance of being selected, which gives you a good representation of your population.

  • Simple Random Sample - a probability-based method where every entry in your data set has an equal chance of being selected. While simple to implement using any random number generator, it can be problematic if a characteristic of interest is rare - you may under-represent or entirely miss a minority characteristic that proves to be important

  • Interval Sample - with interval sampling, the first value is selected at random; this is your starting value. Beginning at the starting value, select every kth element until you reach the desired sample size. Set the value of k so that it gives you the proper sample size while traversing the full population set

  • Stratified Sample - the statistical technique of stratified sampling is done by creating groups, or strata, based on some characteristic of your data and then randomly selecting a sample from each stratum. For example, in one ML project we were interested in analyzing churn and how our population in different income groups responded to various treatments. We created strata based on income brackets and selected a sample from each bracket commensurate with its representation in the population as a whole. To perform stratified sampling you of course need a good understanding of the underlying data.

The sampling techniques above are easily implemented with scikit-learn, Python, and pandas using the classes provided; a short sketch follows below.
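Here is a minimal sketch of the three methods on a hypothetical pandas DataFrame of customers; the column names, sample sizes, and income_bracket strata are made up for illustration.

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split

    # Hypothetical population of 10,000 customers with an income bracket attribute.
    rng = np.random.default_rng(42)
    population = pd.DataFrame({
        "customer_id": range(10_000),
        "income_bracket": rng.choice(["low", "mid", "high"], size=10_000, p=[0.5, 0.3, 0.2]),
    })

    # Simple random sample of 1,000 rows.
    simple = population.sample(n=1_000, random_state=42)

    # Interval (systematic) sample: random start, then every kth row.
    k = len(population) // 1_000
    start = int(rng.integers(0, k))
    interval = population.iloc[start::k]

    # Stratified sample: keep the income_bracket proportions of the population.
    stratified, _ = train_test_split(
        population, train_size=0.1, stratify=population["income_bracket"], random_state=42)

    print(len(simple), len(interval), len(stratified))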

Encoding Categorical Data

Input data to machine learning algorithms will most likely include categorical data as well as numerical data. While it is true that some ML algorithms, decision trees for example, can use categorical data without encoding, many other algorithms cannot operate on labeled data directly, so these features will require some type of encoding. Examples of categorical variables are “color” with values red, blue, green; “fabric” with values cotton, silk, wool, polyester; and “size” with values S, M, L, XL. The variables color and fabric are examples of nominal categorical features, where no order is implied between the colors or fabrics, whereas size is an ordinal categorical feature with an implied order: XL > L > M > S.

Ordinal encoding can be used on ordinal values, transforming the text to numerical values to accommodate ML algorithms. The data for size could be encoded as 0=S, 1=M, 2=L, 3=XL. A reverse-mapping dictionary and process can easily be implemented to retrieve the more meaningful values for reporting. It is not recommended to use ordinal encoding on nominal categorical data, as the ML algorithms would treat features such as fabric and color as ordered, which, as we have seen, would not make sense. For nominal features, one-hot encoding is commonly used.
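A minimal sketch of this mapping and its inverse with pandas, using the size example above:

    import pandas as pd

    df = pd.DataFrame({"size": ["S", "XL", "M", "L", "M"]})

    # Ordinal mapping that preserves the implied order S < M < L < XL.
    size_map = {"S": 0, "M": 1, "L": 2, "XL": 3}
    df["size_encoded"] = df["size"].map(size_map)

    # Reverse mapping to recover the readable labels for reporting.
    inverse_size_map = {v: k for k, v in size_map.items()}
    df["size_decoded"] = df["size_encoded"].map(inverse_size_map)
    print(df)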

The idea with one-hot encoding is to convert each unique value in the nominal set into a dummy feature (that is why this method of encoding is sometimes called dummy encoding) that takes on a binary value. For the fabric example above, four new features representing each fabric would be added to the data set: cotton, silk, wool, and polyester. A binary value would be assigned based on the sample’s fabric type. If the row value was for a cotton fabric, the binary values would be cotton=1, silk=0, wool=0, polyester=0. You can see why it is called one-hot encoding: one value is on, or hot, while all the others are off. One-hot encoding can perform very well, but it is easy to see that you will quickly have feature expansion depending on the size of k, where k is the number of unique values of the nominal feature. As with all my posts, the easiest implementations are done with scikit-learn, pandas, and Python, and one-hot encoding is no exception. OneHotEncoder is a scikit-learn class in the sklearn.preprocessing module that encodes categorical features as a one-hot array. You should also be aware that there is a handy class in sklearn.compose called ColumnTransformer which allows you to selectively transform individual columns in an array or a pandas DataFrame.
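Here is a minimal sketch of OneHotEncoder combined with ColumnTransformer, using a hypothetical DataFrame that follows the fabric example (note that the sparse_output argument applies to scikit-learn 1.2 and later; older versions use sparse=False).

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder

    df = pd.DataFrame({
        "fabric": ["cotton", "silk", "wool", "cotton", "polyester"],
        "price":  [9.99, 24.50, 18.00, 11.25, 7.75],
    })

    # One-hot encode the nominal "fabric" column; pass the numeric "price" through unchanged.
    ct = ColumnTransformer(
        transformers=[("fabric_ohe", OneHotEncoder(sparse_output=False), ["fabric"])],
        remainder="passthrough",
    )
    encoded = ct.fit_transform(df)
    print(ct.get_feature_names_out())
    print(encoded)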

Implementation advice: for illustrative purposes the encoding examples above are fairly trivial, but in real-world implementations these can get lengthy and extensive. In most industrial-strength development efforts I am part of, we build specific dictionary-mapping sub-modules for encoding and decoding data. We develop them to be robust and extensible, as they are used frequently.

Partitioning Data

Up to this point we have been writing about datasets in their entirety. In practice, you will need a process to partition the data into three subsets: training, validation, and hold-out (sometimes called a test partition). As a rough estimate, you can think of the percentage of data split across the three partitions as 60, 20, 20 respectively. If you only need training and test datasets, the split could be 70, 30.

  • Training dataset - this data partition is used to train your model(s). With supervised learning the training data is labeled and is used by your ML algorithms to “learn” relationships between features and the dependent variable.

  • Validation dataset - this data partition is used to tune the model’s hyperparameters and determine/maximize model performance. If you are testing multiple models this is the data you will use to determine which model fits the validation data the best, that model will move on to the next phase, which is testing against the hold out data.

  • Hold out or test partition - this partition is used to determine final model performance on data that it has never seen before.

In summary, you fit/train your model on the training data, then tune and make model predictions using the validation data, with final testing done using the model that fits the validation data best.
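As a quick illustration of the 60/20/20 split, here is a minimal sketch using two chained calls to scikit-learn's train_test_split on placeholder arrays.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Placeholder feature matrix (500 samples, 2 features) and labels.
    X, y = np.arange(1000).reshape(500, 2), np.arange(500)

    # First carve off the 20% hold-out (test) partition.
    X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

    # Then split the remaining 80% into training and validation:
    # 0.25 of 80% is 20% of the original, leaving 60% for training.
    X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=42)

    print(len(X_train), len(X_val), len(X_test))  # 300 100 100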


The code snippet below will give you an idea of how to use sklearn’s LabelEncoder and train_test_split on your data. In this example I used the Wisconsin Breast Cancer dataset so you could see an example with real data using pandas and sklearn classes. I cut the dataset down to a few rows and columns for readability and put in print statements so you can see what is happening at each stage. In the data you will see that ‘M’ and ‘B’, representing malignant and benign, get transformed to 1 and 0 respectively. I also call train_test_split to demonstrate a 70:30 split of the data.

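A minimal sketch along these lines - the tiny hand-made DataFrame below stands in for a few rows of the Wisconsin data, with the diagnosis column holding the ‘M’/‘B’ labels:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelEncoder

    # A few made-up rows standing in for the Wisconsin Breast Cancer data:
    # a diagnosis label ('M' = malignant, 'B' = benign) plus two feature columns.
    df = pd.DataFrame({
        "diagnosis":    ["M", "B", "B", "M", "B", "M", "B", "B", "M", "B"],
        "mean_radius":  [17.99, 12.45, 11.42, 20.57, 13.54, 19.81, 12.05, 13.08, 18.25, 12.86],
        "mean_texture": [10.38, 15.70, 20.38, 17.77, 14.36, 22.15, 14.63, 15.71, 19.98, 13.32],
    })
    print(df.head())

    # Encode the class labels: LabelEncoder sorts alphabetically, so 'B' -> 0 and 'M' -> 1.
    le = LabelEncoder()
    y = le.fit_transform(df["diagnosis"])
    X = df.drop(columns=["diagnosis"]).values
    print(le.classes_, y)

    # 70:30 split of the data.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1)
    print(X_train.shape, X_test.shape)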

Typically, you will randomize (shuffle) the raw data before doing any partitioning.

The data collection and prep processes will need to be run repeatedly, so it is best to build scalable and robust modules for this purpose. When you are just starting your ML journey it is tempting to build quick-and-dirty scripts or to try to take shortcuts related to data preparation, but time spent here will be time saved elsewhere. Good data and a solid methodology for processing and preparing it are foundational to a successful ML practice.
