Feature scaling


Feature scaling is a method used to normalize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.

Motivation

Since the ranges of raw data values vary widely, in some machine learning algorithms objective functions will not work properly without normalization. For example, many classifiers calculate the distance between two points using the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by that particular feature. Therefore, the ranges of all features should be normalized so that each feature contributes approximately proportionately to the final distance.
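As an illustrative sketch (the feature names, values, and ranges below are invented for the example), a feature measured in tens of thousands swamps a feature measured in tens when computing the Euclidean distance, and min-max scaling removes this imbalance:

    import numpy as np

    # Two people described by (age in years, annual income in dollars).
    a = np.array([30.0, 50_000.0])
    b = np.array([40.0, 80_000.0])

    # Unscaled Euclidean distance: dominated almost entirely by income.
    print(np.linalg.norm(a - b))  # ≈ 30000.0 -- the age difference barely matters

    # After min-max scaling each feature to [0, 1] (using assumed observed
    # ranges: age 20-60, income 20k-100k), both features contribute comparably.
    lo = np.array([20.0, 20_000.0])
    hi = np.array([60.0, 100_000.0])
    a_s, b_s = (a - lo) / (hi - lo), (b - lo) / (hi - lo)
    print(np.linalg.norm(a_s - b_s))  # ≈ 0.45 -- age now contributes visibly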

Another reason why feature scaling is applied is that gradient descent converges much faster with feature scaling than without it.[1]

It is also important to apply feature scaling when regularization is used as part of the loss function, so that coefficients are penalized appropriately.

Empirically, feature scaling can improve the convergence speed of stochastic gradient descent. In support vector machines,[2] it can reduce the time to find support vectors. Feature scaling is also often used in applications involving distances and similarities between data points, such as clustering and similarity search. As an example, the K-means clustering algorithm is sensitive to feature scales.

Methods

Rescaling (min-max normalization)

Also known as min-max scaling or min-max normalization, rescaling is the simplest method: it consists of rescaling the range of each feature to [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data. The general formula for rescaling to [0, 1] is given as:[3]

x' = \frac{x - \min(x)}{\max(x) - \min(x)}

where x is an original value and x′ is the normalized value. For example, suppose that we have the students' weight data, and the students' weights span [160 pounds, 200 pounds]. To rescale this data, we first subtract 160 from each student's weight and divide the result by 40 (the difference between the maximum and minimum weights).

To rescale a range between an arbitrary set of values [a, b], the formula becomes:

x' = a + \frac{(x - \min(x))(b - a)}{\max(x) - \min(x)}

where a and b are the minimum and maximum values of the target range.
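A minimal NumPy sketch of both formulas, reusing the students' weights example above (the individual weight values are invented for illustration):

    import numpy as np

    def rescale(x, a=0.0, b=1.0):
        """Min-max rescaling of a feature vector x to the range [a, b]."""
        x = np.asarray(x, dtype=float)
        return a + (x - x.min()) * (b - a) / (x.max() - x.min())

    weights = np.array([160.0, 170.0, 185.0, 200.0])  # spans [160, 200] pounds
    print(rescale(weights))             # [0.    0.25  0.625 1.   ]
    print(rescale(weights, -1.0, 1.0))  # [-1.   -0.5   0.25  1.  ]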

Mean normalization

x' = \frac{x - \bar{x}}{\max(x) - \min(x)}

where x is an original value, x′ is the normalized value, and x̄ = average(x) is the mean of that feature vector. There is another form of mean normalization which divides by the standard deviation instead of the range; it is also called standardization.
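A minimal NumPy sketch of mean normalization, using the same invented weights as above:

    import numpy as np

    def mean_normalize(x):
        """Mean normalization: subtract the mean, then divide by the range."""
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / (x.max() - x.min())

    weights = np.array([160.0, 170.0, 185.0, 200.0])
    print(mean_normalize(weights))  # values centred on 0, spanning less than [-1, 1]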

Standardization (Z-score normalization)

The effect of z-score normalization on k-means clustering: four Gaussian clusters of points are generated, then squashed along the y-axis, and a k = 4 clustering is computed. Without normalization, the clusters are arranged along the x-axis, since it is the axis with most of the variation; after normalization, the clusters are recovered as expected.

In machine learning, we handle various types of data, e.g. audio signals and pixel values for image data, and these data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero mean (by subtracting the mean in the numerator) and unit variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and artificial neural networks).[4][5] The general method of calculation is to determine the distribution mean and standard deviation for each feature, subtract the mean from each feature value, and then divide the mean-subtracted values of each feature by its standard deviation.

x' = \frac{x - \bar{x}}{\sigma}

where x is the original feature vector, x̄ = average(x) is the mean of that feature vector, and σ is its standard deviation.
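A minimal NumPy sketch of standardization (the same invented weights as above; scikit-learn's StandardScaler applies the same per-feature transformation):

    import numpy as np

    def standardize(x):
        """Z-score standardization: zero mean and unit variance for the feature."""
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()

    weights = np.array([160.0, 170.0, 185.0, 200.0])
    z = standardize(weights)
    print(z.mean(), z.std())  # ≈ 0.0 and 1.0 (up to floating-point error)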

Robust scaling

Robust scaling, also known as standardization using the median and interquartile range (IQR), is designed to be robust to outliers. It scales features using the median and IQR as reference points instead of the mean and standard deviation:

x' = \frac{x - Q_2(x)}{Q_3(x) - Q_1(x)}

where Q1(x), Q2(x), Q3(x) are the three quartiles (25th, 50th, and 75th percentiles) of the feature.
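A minimal NumPy sketch, with an invented outlier added to show that the median and IQR are barely affected by it:

    import numpy as np

    def robust_scale(x):
        """Robust scaling: subtract the median, divide by the interquartile range."""
        x = np.asarray(x, dtype=float)
        q1, q2, q3 = np.percentile(x, [25, 50, 75])
        return (x - q2) / (q3 - q1)

    # The outlier (500) hardly moves the median and IQR, so the other
    # values keep a sensible spread after scaling.
    weights = np.array([160.0, 170.0, 185.0, 200.0, 500.0])
    print(robust_scale(weights))  # [-0.83 -0.5   0.    0.5  10.5 ]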

Unit vector normalization

Unit vector normalization regards each individual data point as a vector and divides each by its vector norm, to obtain x′ = x / ‖x‖. Any vector norm can be used, but the most common ones are the L1 norm and the L2 norm.

For example, if x = (v_1, v_2, v_3), then its Lp-normalized version is:

\left( \frac{v_1}{(|v_1|^p + |v_2|^p + |v_3|^p)^{1/p}},\ \frac{v_2}{(|v_1|^p + |v_2|^p + |v_3|^p)^{1/p}},\ \frac{v_3}{(|v_1|^p + |v_2|^p + |v_3|^p)^{1/p}} \right)
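A minimal NumPy sketch of unit vector normalization with the L2 and L1 norms (the example vector is invented):

    import numpy as np

    def normalize(x, p=2):
        """Divide a vector by its Lp norm (p=2 gives the Euclidean/L2 norm)."""
        x = np.asarray(x, dtype=float)
        return x / np.linalg.norm(x, ord=p)

    v = np.array([3.0, 4.0, 0.0])
    print(normalize(v))       # [0.6 0.8 0. ]        -- L2-normalized, unit Euclidean length
    print(normalize(v, p=1))  # [0.429 0.571 0. ]    -- L1-normalized, entries sum to 1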

See also

  • Normalization (machine learning)
  • Normalization (statistics)
  • Standard score
  • fMLLR, Feature space Maximum Likelihood Linear Regression

References

  1. ^ Ioffe, Sergey; Christian Szegedy (2015). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". arXiv:1502.03167 [cs.LG].
  2. ^ Juszczak, P.; D. M. J. Tax; R. P. W. Duin (2002). "Feature scaling in support vector data descriptions". Proc. 8th Annu. Conf. Adv. School Comput. Imaging: 25–30. CiteSeerX 10.1.1.100.2524.
  3. ^ "Min Max normalization". ml-concepts.com. Archived from the original on 2023-04-05. Retrieved 2022-12-14.
  4. ^ Grus, Joel (2015). Data Science from Scratch. Sebastopol, CA: O'Reilly. pp. 99, 100. ISBN 978-1-491-90142-7.
  5. ^ Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer. ISBN 978-0-387-84884-6.

Further reading

  • Han, Jiawei; Kamber, Micheline; Pei, Jian (2011). "Data Transformation and Data Discretization". Data Mining: Concepts and Techniques. Elsevier. pp. 111–118. ISBN 9780123814807.
  • Lecture by Andrew Ng on feature scaling