Variance Filtering: Removing features with low variance

How variance-based feature selection works and how to use it

VarianceThreshold is a simple baseline approach to feature selection. The idea is that low-variance features tend to have little predictive power, so VarianceThreshold removes every feature whose variance does not meet a given threshold. By default it removes all zero-variance features, i.e. features that take the same value in every sample.

For example, suppose we have a dataset of boolean features, and we want to remove every feature whose value is 0 (or 1) in more than 80% of the samples. Boolean features are Bernoulli random variables, whose variance is


Var[X] = p(1-p)

so we can select with a threshold of .8 * (1 - .8) = 0.16:

from sklearn.feature_selection import VarianceThreshold

X = [[100, 1, 2, 3],
     [100, 4, 5, 6],
     [100, 7, 8, 9],
     [100, 11, 12, 13],
     [100, 11, 12, 13],
     [101, 11, 12, 13]]

threshold = .8*(1-.8)

def test_VarianceThreshold(X, threshold):
    selector = VarianceThreshold(threshold)
    selector.fit(X)
    print("Variances is %s" % selector.variances_)
    print("After transform is %s" % selector.transform(X))
    print("The support is %s" % selector.get_support(True))
    print("After reverse transform is %s" % selector.inverse_transform(selector.transform(X)))
    return selector.transform(X)

test_VarianceThreshold(X=X,threshold=threshold)
Variances is [  0.13888889  15.25        15.25        15.25      ]
After transform is [[ 1  2  3]
 [ 4  5  6]
 [ 7  8  9]
 [11 12 13]
 [11 12 13]
 [11 12 13]]
The support is [1 2 3]
After reverse transform is [[ 0  1  2  3]
 [ 0  4  5  6]
 [ 0  7  8  9]
 [ 0 11 12 13]
 [ 0 11 12 13]
 [ 0 11 12 13]]

array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 7,  8,  9],
       [11, 12, 13],
       [11, 12, 13],
       [11, 12, 13]])
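As a sanity check (a small sketch that is not part of the original post), the `variances_` reported above are simply the per-column population variances, which we can reproduce with NumPy:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = [[100, 1, 2, 3],
     [100, 4, 5, 6],
     [100, 7, 8, 9],
     [100, 11, 12, 13],
     [100, 11, 12, 13],
     [101, 11, 12, 13]]

selector = VarianceThreshold(threshold=.8 * (1 - .8))
selector.fit(X)

# VarianceThreshold uses the population variance (ddof=0), which is
# also the default of np.var:
print(np.var(X, axis=0))    # [ 0.13888889 15.25 15.25 15.25 ]
print(selector.variances_)  # identical values
```

The first column survives or falls purely on this number: its variance 0.139 is below the 0.16 threshold, so it is dropped.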

For real-world datasets, however, low variance often does not mean a feature is uninformative. In many cases, removing low-variance features degrades model performance rather than improving it. The experiment below illustrates this phenomenon.

Drawbacks of variance-based feature selection

First, load the data:

from sklearn import datasets,model_selection
def load_data():
    iris=datasets.load_iris() # the iris dataset bundled with scikit-learn
    X_train=iris.data
    y_train=iris.target

    return model_selection.train_test_split(X_train, y_train,test_size=0.25,random_state=0,stratify=y_train)

Next, define two helper functions for comparing the performance gap; we will keep using them in the tests below:

def show_tree(X_train,X_test,y_train,y_test):
    from sklearn.tree import DecisionTreeClassifier
    criterions=['gini','entropy']
    for criterion in criterions:
        clf = DecisionTreeClassifier(criterion=criterion)
        clf.fit(X_train, y_train)
        print("    ",criterion,"Training score:%f"%(clf.score(X_train,y_train)))
        print("    ",criterion,"Testing score:%f"%(clf.score(X_test,y_test)))

def comparison_tree(selector):
    X_train,X_test,y_train,y_test=load_data()

    print("\nBefore feature selection :\n")
    show_tree(X_train,X_test,y_train,y_test)

    print("\nAfter feature selection :\n")
    selector.fit(X_train)
    new_X_train = selector.transform(X_train)
    new_X_test = selector.transform(X_test)
    show_tree(new_X_train,new_X_test,y_train,y_test)

comparison_tree(selector=VarianceThreshold(.8*(1-.8)))
Before feature selection :

     gini Training score:1.000000
     gini Testing score:0.947368
     entropy Training score:1.000000
     entropy Testing score:0.947368

After feature selection :

     gini Training score:1.000000
     gini Testing score:0.947368
     entropy Training score:1.000000
     entropy Testing score:0.921053

The experiment above shows that removing low-variance features does not necessarily improve model performance, and may even degrade it.
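One reason for this caveat (an illustrative sketch, not part of the original experiment) is that variance depends on the unit of measurement, so the same information can pass or fail a fixed threshold purely because of scaling:

```python
import numpy as np

# The same measurements expressed in kilometres vs. metres:
x_km = np.array([0.001, 0.002, 0.003])
x_m = x_km * 1000

print(np.var(x_km))  # ~6.7e-07 -> removed by almost any threshold
print(np.var(x_m))   # ~0.667   -> kept
```

Rescaling a column by a factor c multiplies its variance by c squared, which is why a fixed variance threshold says nothing about predictive power on its own.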

Other Methods

For reference, the main methods of VarianceThreshold:

  • fit_transform : fit to the data, then transform it
  • get_params : get the estimator's parameters
  • get_support : get the integer indices of the selected features
  • inverse_transform : reverse the transform operation
  • set_params : set the estimator's parameters

References

Original article: https://www.cnblogs.com/fonttian/p/8480692.html