
Recursive Feature Selection – In Depth



    Feature selection is a crucial process in machine learning: it aims to pick the most relevant features, those that meaningfully improve an algorithm’s performance. One widely used and consistently effective feature selection technique is Recursive Feature Elimination (RFE).

    Understanding Recursive Feature Elimination

    Before delving into the heart of Recursive Feature Elimination, let’s briefly look at feature selection itself and why it matters. Feature selection is the process of reducing the dimensionality of your data by removing irrelevant features.

    This technique helps to avoid overfitting, improves model performance, and reduces computational expense.

    Now, on to RFE. Recursive Feature Elimination, as the name suggests, is a technique that recursively removes features to reduce the dimensionality of the problem, taking it from a complex high-dimensional space to a simplified low-dimensional space.

    This approach aims to identify the features that contribute least to the prediction target of interest.

    RFE harnesses models that assign weights to features (e.g., linear models, SVMs). The idea is to repeatedly remove the features with the smallest absolute weights, iterating until the desired number of features remains.
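
    To make the idea concrete, here is a minimal sketch of that loop, assuming a binary classification problem and a linear model whose coef_ attribute exposes one weight per feature. This illustrates the principle only; it is not Scikit-learn’s actual implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rfe_sketch(X, y, n_features_to_select):
        # start with every feature as a candidate
        remaining = list(range(X.shape[1]))
        while len(remaining) > n_features_to_select:
            # refit the model on the surviving features
            model = LogisticRegression(max_iter=1000).fit(X[:, remaining], y)
            # for binary classification, coef_ has shape (1, n_features)
            weights = np.abs(model.coef_).ravel()
            # drop the candidate with the smallest absolute weight
            remaining.pop(int(np.argmin(weights)))
        return remaining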

    Implementing Recursive Feature Elimination with Code

    Let’s study a practical code implementation of this technique by using Python’s “Scikit-learn” library.

    The first step is to import the necessary libraries.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.tree import DecisionTreeClassifier

    Next, we’ll generate a simple dataset for classification.

    # generate a classification dataset
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, n_redundant=5, random_state=1)

    We’ll then define the RFE model.

    # define RFE model
    rfe = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=5)

    After that, we’ll fit the model to the dataset.

    # fit the model
    rfe.fit(X, y)

    Then we obtain the feature selection results.

    for i in range(X.shape[1]):
        print('Column: %d, Selected: %s, Rank: %d' % (i, rfe.support_[i], rfe.ranking_[i]))

    This process will recursively eliminate features and eventually provide the final ranking of all the features.
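
    If you also want the reduced dataset itself, not just the rankings, the fitted RFE object doubles as a transformer:

    # keep only the 5 selected columns
    X_selected = rfe.transform(X)
    print(X_selected.shape)  # expected: (1000, 5)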

    Advantages of Recursive Feature Elimination

    Some of the benefits of RFE include:

    1. Precision: RFE eliminates features iteratively and is therefore more precise in choosing the most relevant features.
    2. Efficiency: RFE is a cost-effective means of reducing an algorithm’s computational burden.
    3. Prevention of Overfitting: As it eliminates non-pertinent features, RFE helps to prevent a model from overfitting, which can cause the model to perform poorly on unseen data.

    Limitations of Recursive Feature Elimination

    Despite its several advantages, RFE also comes with some drawbacks, including:

    1. High Computation Cost: For large datasets with high-dimensional features, RFE can be computationally demanding, as it trains the model many times (one common mitigation is sketched after this list).
    2. Dependence on the Estimator: RFE’s effectiveness depends heavily on the performance of the estimator used for feature ranking at each iteration.
    3. Handling of Multicollinearity: It may struggle when the dataset contains highly correlated features, since the weights assigned to correlated features can be unstable.
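
    For the computation-cost issue in particular, a common mitigation is RFE’s step parameter, which removes several features per iteration instead of one, trading a little precision for speed. A minimal sketch, reusing the dataset and imports from the example above:

    # remove 5 features at each iteration instead of the default 1
    rfe_fast = RFE(estimator=DecisionTreeClassifier(), n_features_to_select=5, step=5)
    rfe_fast.fit(X, y)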

    Recursive Feature Elimination Example

    Let’s work through an example with a practical dataset, namely the Breast Cancer Wisconsin (Diagnostic) Data Set.

    To keep the code easy to follow, we will again use Python. Assume that we want to predict whether a breast cancer tumor is malignant or benign based on several observed features, and that we want to use a Support Vector Classifier.

    Step 1: Import Required Python Libraries

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    Step 2: Load Your Data

    data = load_breast_cancer()
    
    # We'll use all features in the dataset. You can check feature names by calling `data.feature_names`.
    X = data.data
    y = data.target

    Step 3: Initialize the Recursive Feature Elimination (RFE) Model

    model = SVC(kernel="linear", C=1)
    rfe = RFE(model, n_features_to_select=3)

    Here, we create a Support Vector Classifier with a linear kernel and then pass this model to RFE. We also set n_features_to_select=3, which means that we want RFE to keep the top 3 features in our dataset.

    Step 4: Fit Your Data to the RFE Model

    rfe = rfe.fit(X, y)

    This fits the Recursive Feature Elimination model we defined above to the features and target.

    Step 5: Get the Ranking of Features

    Now that we have our fitted model, we can access the ranking of features. Features are ranked from most important (1) to least important (n).

    print('Feature ranking: ', rfe.ranking_)

    For our case, this will return a list of 30 integers (since the breast cancer dataset has 30 features). The result could look something like this:

    [1, 1, 16, 13, 23, 8, 3, 1, 15, 25, 9, 2, 19, 5, 28, 10, 12, 4, 29, 7, 21, 14, 6, 22, 27, 11, 18, 20, 24, 26]

    This result shows us the ranking of each feature in the dataset. A rank of 1 means that the feature is among the most important, so here the first, second, and eighth features are deemed the most informative according to the SVM-RFE method in the context of the cancer dataset.

    Step 6: Get the Names of the Most Important Features

    print('Most important features (RFE): ', data.feature_names[rfe.support_])

    This line will return the names of the top 3 features, which might be something like:

    ['mean radius', 'mean texture', 'mean concave points']

    Now, you can use these top features for building your machine learning model or further analysis!
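
    For instance, a quick sketch of training a classifier on just those three columns might look like this (the split settings are illustrative):

    from sklearn.model_selection import train_test_split

    # reduce the data to the three selected features
    X_reduced = rfe.transform(X)

    X_train, X_test, y_train, y_test = train_test_split(X_reduced, y, random_state=42)
    clf = SVC(kernel="linear", C=1).fit(X_train, y_train)
    print('Accuracy on selected features:', clf.score(X_test, y_test))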

    This was a simple example of using Recursive Feature Elimination in Python with scikit-learn. Here, the SVM-RFE method is used with a linear SVC as the base estimator, and the number of selected features is 3.

    Note that RFE is a greedy optimization algorithm that might not provide the optimal feature subset. It works by recursively removing features and building a model on those features that remain.
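
    A related practical question is how many features to keep in the first place. Scikit-learn offers RFECV, a variant of RFE that chooses the number of features by cross-validation; a minimal sketch:

    from sklearn.feature_selection import RFECV

    # let cross-validation pick how many features to retain
    rfecv = RFECV(estimator=SVC(kernel="linear"), cv=5)
    rfecv.fit(X, y)
    print('Optimal number of features:', rfecv.n_features_)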

    Also, the choice of base estimator matters, so it is wise to try your feature selection process with different model-building algorithms as base estimators, as in the sketch below.
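
    As a rough sketch (the choice of logistic regression as a second candidate is just an example), you could cross-validate a pipeline that wraps RFE around each base estimator:

    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    for estimator in [SVC(kernel="linear"), LogisticRegression(max_iter=5000)]:
        pipe = Pipeline([
            ('rfe', RFE(estimator, n_features_to_select=3)),
            ('clf', estimator),  # RFE clones the estimator internally, so reuse is safe
        ])
        scores = cross_val_score(pipe, X, y, cv=5)
        print(type(estimator).__name__, 'mean CV accuracy:', round(scores.mean(), 3))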

    Conclusion

    Recursive Feature Elimination (RFE) is a powerful feature selection technique. It is all about recursively eliminating less important features so that only the most useful ones remain for your model.

    Its elegance lies in its precision, efficiency, and ability to curb overfitting. However, one should also consider its limitations when dealing with substantial high-dimensional datasets.

    With the power of Python and Scikit-learn, you can harness RFE to extract a robust subset of features from your dataset and boost your machine learning algorithm’s performance.
