Task 4: ECG

Working through Task 4 (many thanks to the teachers who prepared the course materials), this section covers methods for tuning model parameters:

  • Model tuning:
    • greedy tuning;
    • grid-search tuning;
    • Bayesian tuning;
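Greedy tuning is listed above but never shown in code later in these notes. A minimal, generic sketch of the idea, tuning one parameter at a time and freezing each best value before moving on (the toy scoring function here is a hypothetical stand-in for the cross-validated F1 score used later):

```python
def greedy_search(param_grid, score_fn, defaults):
    """Tune parameters one at a time, in the order given by param_grid."""
    best = dict(defaults)
    for name, candidates in param_grid.items():
        best_score, best_value = float('-inf'), best[name]
        for value in candidates:
            trial = dict(best, **{name: value})  # only this parameter varies
            s = score_fn(trial)
            if s > best_score:
                best_score, best_value = s, value
        best[name] = best_value  # freeze before tuning the next parameter
    return best

# Toy scoring function (a real one would run LightGBM cross-validation):
def toy_score(params):
    return -abs(params['num_leaves'] - 63) - abs(params['max_depth'] - 7)

best_params = greedy_search(
    {'num_leaves': range(10, 80, 5), 'max_depth': range(3, 10, 2)},
    toy_score,
    defaults={'num_leaves': 31, 'max_depth': -1},
)
print(best_params)  # picks the candidates nearest the toy optimum
```

Greedy tuning is cheap (each parameter is scanned once) but can miss interactions between parameters, which is exactly what the grid and Bayesian searches below try to capture.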

"""Build and evaluate a LightGBM model with 5-fold cross-validation"""
# assuming the usual imports and the fold splitter were set up earlier, e.g.:
#   import numpy as np
#   import lightgbm as lgb
#   from sklearn.metrics import f1_score
#   from sklearn.model_selection import KFold
#   kf = KFold(n_splits=5, shuffle=True, random_state=2021)
cv_scores = []
for i, (train_index, valid_index) in enumerate(kf.split(X_train, y_train)):
    print('************************************ {} ************************************'.format(str(i + 1)))
    X_train_split, y_train_split, X_val, y_val = X_train.iloc[train_index], y_train[train_index], X_train.iloc[valid_index], y_train[valid_index]

    train_matrix = lgb.Dataset(X_train_split, label=y_train_split)
    valid_matrix = lgb.Dataset(X_val, label=y_val)

    params = {
        "learning_rate": 0.1,
        "boosting": 'gbdt',
        "lambda_l2": 0.1,
        "max_depth": -1,
        "num_leaves": 128,
        "bagging_fraction": 0.8,
        "feature_fraction": 0.8,
        "metric": None,
        "objective": "multiclass",
        "num_class": 4,
        "nthread": 10,
        "verbose": -1,
    }

    model = lgb.train(params,
                      train_set=train_matrix,
                      valid_sets=valid_matrix,
                      num_boost_round=2000,
                      verbose_eval=100,
                      early_stopping_rounds=200,
                      feval=f1_score_vali)

    val_pred = model.predict(X_val, num_iteration=model.best_iteration)
    val_pred = np.argmax(val_pred, axis=1)
    cv_scores.append(f1_score(y_true=y_val, y_pred=val_pred, average='macro'))
    print(cv_scores)

print("lgb_scotrainre_list:{}".format(cv_scores))
print("lgb_score_mean:{}".format(np.mean(cv_scores)))
print("lgb_score_std:{}".format(np.std(cv_scores)))
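The custom metric `f1_score_vali` passed as `feval` above is not defined in these notes. A plausible implementation for the 4-class problem is sketched below; note that older LightGBM versions hand `feval` a flattened, class-major prediction array while newer ones pass a 2-D array, so both shapes are handled:

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_score_vali(preds, data_vali):
    """Custom LightGBM eval function: macro F1 for the 4-class task."""
    labels = data_vali.get_label()
    preds = np.asarray(preds)
    if preds.ndim == 1:
        # older LightGBM flattens multiclass predictions class-major,
        # i.e. shape (num_class * num_data,); restore (num_data, num_class)
        preds = preds.reshape(4, -1).T
    pred_labels = np.argmax(preds, axis=1)
    score = f1_score(y_true=labels, y_pred=pred_labels, average='macro')
    # LightGBM expects (eval_name, value, is_higher_better)
    return 'f1_score', score, True
```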

"""Find the optimal parameters via grid search"""
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.metrics import make_scorer, f1_score

def get_best_cv_params(learning_rate=0.1, n_estimators=581, num_leaves=31, max_depth=-1, bagging_fraction=1.0,
                       feature_fraction=1.0, bagging_freq=0, min_data_in_leaf=20, min_child_weight=0.001,
                       min_split_gain=0, reg_lambda=0, reg_alpha=0, param_grid=None):
    # set up 5-fold cross-validation
    cv_fold = KFold(n_splits=5, shuffle=True, random_state=2021)

    model_lgb = lgb.LGBMClassifier(learning_rate=learning_rate,
                                   n_estimators=n_estimators,
                                   num_leaves=num_leaves,
                                   max_depth=max_depth,
                                   bagging_fraction=bagging_fraction,
                                   feature_fraction=feature_fraction,
                                   bagging_freq=bagging_freq,
                                   min_data_in_leaf=min_data_in_leaf,
                                   min_child_weight=min_child_weight,
                                   min_split_gain=min_split_gain,
                                   reg_lambda=reg_lambda,
                                   reg_alpha=reg_alpha,
                                   n_jobs=8)

    f1 = make_scorer(f1_score, average='micro')
    grid_search = GridSearchCV(estimator=model_lgb,
                               cv=cv_fold,
                               param_grid=param_grid,
                               scoring=f1)
    grid_search.fit(X_train, y_train)

    print('Current best parameters: {}'.format(grid_search.best_params_))
    print('Current best score: {}'.format(grid_search.best_score_))

"""The code below was not run here because it takes a long time, so run it with care; also note that the best parameters found in each step must be manually carried over into the next step"""

"""
Note: the native lightgbm API was only used above when obtaining num_boost_round (because it provides the built-in cv function);
when working with GridSearchCV below, the sklearn-interface lightgbm (LGBMClassifier) must be used instead.
"""
Call the function as follows, one step at a time, to obtain the optimal parameters.
"""With n_estimators fixed at 581, tune num_leaves and max_depth; do a coarse search first, then a fine one"""
lgb_params = {'num_leaves': range(10, 80, 5), 'max_depth': range(3, 10, 2)}
get_best_cv_params(learning_rate=0.1, n_estimators=581, num_leaves=None, max_depth=None, min_data_in_leaf=20,
                   min_child_weight=0.001, bagging_fraction=1.0, feature_fraction=1.0, bagging_freq=0,
                   min_split_gain=0, reg_lambda=0, reg_alpha=0, param_grid=lgb_params)
There are many more steps after this, which I won't write out here.
Next comes Bayesian tuning. This is the first time I have come across this method, so it is worth learning.
from bayes_opt import BayesianOptimization

"""Define the optimization parameters"""
bayes_lgb = BayesianOptimization(
    rf_cv_lgb,
    {
        'num_leaves': (10, 200),
        'max_depth': (3, 20),
        'bagging_fraction': (0.5, 1.0),
        'feature_fraction': (0.5, 1.0),
        'bagging_freq': (0, 100),
        'min_data_in_leaf': (10, 100),
        'min_child_weight': (0, 10),
        'min_split_gain': (0.0, 1.0),
        'reg_alpha': (0.0, 10),
        'reg_lambda': (0.0, 10),
    }
)

"""Start the optimization"""
bayes_lgb.maximize(n_iter=10)
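The objective function `rf_cv_lgb` handed to BayesianOptimization above is also not shown in these notes. A sketch of what it likely looks like follows; it assumes `train_matrix` (an `lgb.Dataset`) and `f1_score_vali` from earlier in the notes, and the key detail is that bayes_opt proposes floats, so integer-valued LightGBM parameters must be cast before use:

```python
INT_KEYS = {'num_leaves', 'max_depth', 'bagging_freq', 'min_data_in_leaf'}

def to_lgb_params(**proposal):
    """Cast BayesianOptimization's float proposals to valid LightGBM types."""
    params = {
        'boosting_type': 'gbdt',
        'objective': 'multiclass',
        'num_class': 4,
        'learning_rate': 0.1,
        'verbose': -1,
    }
    for key, value in proposal.items():
        params[key] = int(round(value)) if key in INT_KEYS else value
    return params

def rf_cv_lgb(num_leaves, max_depth, bagging_fraction, feature_fraction,
              bagging_freq, min_data_in_leaf, min_child_weight,
              min_split_gain, reg_alpha, reg_lambda):
    # assumes `train_matrix` and `f1_score_vali` are defined earlier
    import lightgbm as lgb
    params = to_lgb_params(num_leaves=num_leaves, max_depth=max_depth,
                           bagging_fraction=round(bagging_fraction, 2),
                           feature_fraction=round(feature_fraction, 2),
                           bagging_freq=bagging_freq,
                           min_data_in_leaf=min_data_in_leaf,
                           min_child_weight=min_child_weight,
                           min_split_gain=min_split_gain,
                           reg_alpha=reg_alpha, reg_lambda=reg_lambda)
    cv_result = lgb.cv(params, train_matrix, num_boost_round=1000,
                       nfold=5, seed=2021, feval=f1_score_vali)
    # result key names vary slightly across LightGBM versions
    key = next(k for k in cv_result if k.endswith('f1_score-mean'))
    return max(cv_result[key])  # BayesianOptimization maximizes this value
```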


Display the best parameters:

[screenshot of the optimizer's best parameters omitted]

Once parameter optimization is complete, we build a new model with the optimized parameters, lower the learning rate, and search for the optimal number of boosting iterations.

"""Switch to a smaller learning rate and use the cv function to find the current optimal number of iterations"""
base_params_lgb = {
    'boosting_type': 'gbdt',
    'objective': 'multiclass',
    'num_class': 4,
    'learning_rate': 0.01,
    'num_leaves': 173,
    'max_depth': 18,
    'min_data_in_leaf': 96,
    'min_child_weight': 6.5,
    'bagging_fraction': 0.64,
    'feature_fraction': 1,
    'bagging_freq': 25,
    'reg_lambda': 10,
    'reg_alpha': 0.91,
    'min_split_gain': 0.002,
    'nthread': 10,
    'verbose': -1,
}

cv_result_lgb = lgb.cv(
    train_set=train_matrix,
    early_stopping_rounds=1000,
    num_boost_round=20000,
    nfold=5,
    stratified=True,
    shuffle=True,
    params=base_params_lgb,
    feval=f1_score_vali,
    seed=0
)
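Had the run finished, the optimal iteration count could be read from `cv_result_lgb`: for an F1-style metric (higher is better) it is the argmax of the per-round mean score. A small helper sketching this, using a toy dict in place of the real `lgb.cv` output (key names vary across LightGBM versions, hence the suffix match):

```python
import numpy as np

def best_num_boost_round(cv_result, metric='f1_score'):
    """Return (best round, best mean score) from an lgb.cv result dict."""
    key = next(k for k in cv_result if k.endswith(f'{metric}-mean'))
    scores = cv_result[key]
    best_round = int(np.argmax(scores)) + 1  # boosting rounds are 1-indexed
    return best_round, scores[best_round - 1]

# toy stand-in for the real lgb.cv output:
fake_cv = {'valid f1_score-mean': [0.80, 0.85, 0.91, 0.90]}
print(best_num_boost_round(fake_cv))  # (3, 0.91)
```

The best round found this way would then be passed as `num_boost_round` when training the final model on the full training set.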

As for the results of the run: my computer's specs are too poor, and it blue-screened partway through. T.T