Goal:

First: in TensorFlow 2, use different activation functions at training time and at prediction time, as in the YouTube recommendation network (youtubeNet). Build separate training and prediction branches, then split the model after training so that certain layers differ between the two phases. This can be done with a transfer-learning-style trick.
Second: retrieve the trained model's parameters.


Getting model parameters:


This is straightforward: just call model.get_weights(). You can also target one specific layer:

w_dense4 = model.get_layer('dense4').get_weights()

# get the trained weight parameters
weigts_parm = model.get_weights()

What you get back is the kernel (weight matrix) and bias of every layer in the model.
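As a self-contained sketch (the model and layer names `d1`/`d2` here are made up just for illustration), this shows the list layout that `get_weights()` returns:

```python
import tensorflow as tf

# A tiny throwaway model, just to show the layout of get_weights().
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(3,), name='d1'),
    tf.keras.layers.Dense(2, name='d2'),
])

weights = model.get_weights()   # [d1_kernel, d1_bias, d2_kernel, d2_bias]
print(len(weights))             # 4: one kernel + one bias per Dense layer
print(weights[0].shape)         # (3, 4): d1's kernel

# get_weights() on a single layer returns just that layer's [kernel, bias].
k2, b2 = model.get_layer('d2').get_weights()
print(k2.shape, b2.shape)       # (4, 2) (2,)
```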


Different model outputs:

I couldn't think of anything better than a transfer-learning-style trick; it would be nicer if the model natively supported telling the training phase from the prediction phase.
The idea:

  • When building the model, first build one branch, used for training (train).
  • Pull the layer or activation function to be replaced out on its own.
  • When compiling the model, include the whole train branch in the model.
  • Training then runs through, and updates, the training branch's parameters.
  • At prediction time, create a new model that takes the output of some layer of the previous model.
  • Add whatever layers you want on top; if they need training, compile and fit again, otherwise just call predict.
    Since the new prediction branch itself is untrained, this generally only works when the two branches differ in the final output layer's activation function. For example, the YouTube recommendation model's output layer uses a weighted LR during training but an e^x activation at serving time; this approach lets you train with one activation function and predict with another.
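The steps above can be sketched on a toy binary task (layer sizes here are made up, not the actual YouTube model): both branches share the same logits tensor, so only the final activation differs and the serving branch needs no extra training.

```python
import numpy as np
import tensorflow as tf

# Shared layers end in raw logits; each branch only adds its own activation.
inp = tf.keras.Input(shape=(8,))
h = tf.keras.layers.Dense(16, activation='relu')(inp)
logits = tf.keras.layers.Dense(1, activation=None, name='logits')(h)

# Training branch: sigmoid, i.e. the (weighted) LR head.
train_model = tf.keras.Model(inp, tf.keras.activations.sigmoid(logits))
train_model.compile(optimizer='adam', loss='binary_crossentropy')

# Serving branch: e^x applied to the very same logits tensor, so the Dense
# weights are shared and this branch is never trained on its own.
serve_model = tf.keras.Model(inp, tf.exp(logits))

x = np.random.rand(2, 8).astype('float32')
logit_vals = tf.keras.Model(inp, logits)(x).numpy()
```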
import tensorflow as tf
import pandas as pd

# Load the dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()

# Normalize the images; note the class labels for the softmax output need no normalization
train_images = train_images / 255
test_images = test_images / 255

# Build the model
net_input = tf.keras.Input(shape=(28, 28))
fl = tf.keras.layers.Flatten(name='flatten')(net_input)  # applied to the input
l1 = tf.keras.layers.Dense(128, activation="relu", name='dense1')(fl)
d1 = tf.keras.layers.Dense(64, activation="relu", name='dense2')(l1)
l2 = tf.keras.layers.Dense(32, activation="relu", name='dense3')(d1)
l3 = tf.keras.layers.Dense(10, activation=None, name='dense4')(l2)  # raw logits
output = tf.keras.activations.softmax(l3)
# Wrap it into a Model
model = tf.keras.Model(inputs=net_input, outputs=output)

# Inspect the model structure
model.summary()

# Compile the model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=['acc','mse'])

# Train the model
model.fit(train_images, train_labels, batch_size=50, epochs=2, validation_split=0.1)

# Get the trained weight parameters
weigts_parm = model.get_weights()
print(len(weigts_parm))  # 8: a kernel and a bias for each of the 4 Dense layers
model.evaluate(test_images, test_labels)

# weigts_parm[-2] is the kernel of the last Dense layer ('dense4'), shape (32, 10)
weigt_dense3 = weigts_parm[-2]
weigt_dense3_embedding = pd.DataFrame(weigt_dense3.transpose())  # one 32-dim vector per class
print(weigt_dense3_embedding)

predicts =model.predict(test_images)

# another model: graft new layers onto the output of 'dense3'
out1 = model.get_layer('dense3').output
out = tf.keras.layers.Dense(128,activation='relu',name='mydense1')(out1)
out = tf.keras.layers.Dense(64,activation='relu',name='mydense2')(out)
mylogits = tf.keras.layers.Dense(10,activation='softmax',name='my_logit')(out)

predict_model = tf.keras.Model(inputs=model.input,outputs=mylogits)
predict_model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
                      optimizer='adam',metrics=['acc','mse'])
print(predict_model.summary())
predict_model.fit(train_images, train_labels, batch_size=50, epochs=2, validation_split=0.1)
predicts_2 = predict_model.predict(test_images)

for i in predicts_2:
    print(i.argmax())
# print(predicts_2)
print(test_labels)

Output:

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 28, 28)]          0         
_________________________________________________________________
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense1 (Dense)               (None, 128)               100480    
_________________________________________________________________
dense2 (Dense)               (None, 64)                8256      
_________________________________________________________________
dense3 (Dense)               (None, 32)                2080      
_________________________________________________________________
dense4 (Dense)               (None, 10)                330       
_________________________________________________________________
tf.compat.v1.nn.softmax (TFO (None, 10)                0         
=================================================================
Total params: 111,146
Trainable params: 111,146
Non-trainable params: 0
_________________________________________________________________
2021-12-20 16:51:29.354023: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/2
2021-12-20 16:51:30.006144: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-12-20 16:51:30.563964: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
1080/1080 [==============================] - 7s 5ms/step - loss: 0.5380 - acc: 0.8104 - mse: 27.6815 - val_loss: 0.3976 - val_acc: 0.8528 - val_mse: 27.5902
Epoch 2/2
1080/1080 [==============================] - 5s 5ms/step - loss: 0.3796 - acc: 0.8623 - mse: 27.6900 - val_loss: 0.3787 - val_acc: 0.8628 - val_mse: 27.5929
8
313/313 [==============================] - 2s 5ms/step - loss: 0.3998 - acc: 0.8566 - mse: 27.6816
         0         1         2   ...        29        30        31
0 -0.287471  0.036533 -0.127725  ...  0.234849  0.014998  0.068000
1 -0.109836 -0.223555 -0.400632  ... -0.338368 -0.307823 -0.406235
2 -0.094260 -0.186713 -0.101085  ... -0.025677  0.120322  0.028206
3 -0.386172  0.152541 -0.527324  ... -0.248855 -0.129524 -0.235581
4 -0.201059 -0.341049 -0.474235  ... -0.021036  0.152996  0.161320
5 -0.174123  0.459365 -0.005071  ...  0.244765  0.244738  0.067327
6  0.330713  0.066037 -0.465473  ... -0.378929 -0.101122  0.235503
7  0.242240  0.199673  0.010125  ... -0.217853  0.255141 -0.057792
8 -0.140202 -0.248611 -0.405854  ... -0.030497  0.384872  0.019411
9 -0.133955  0.261924  0.068150  ...  0.404557  0.171754  0.072854

[10 rows x 32 columns]
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 28, 28)]          0         
_________________________________________________________________
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense1 (Dense)               (None, 128)               100480    
_________________________________________________________________
dense2 (Dense)               (None, 64)                8256      
_________________________________________________________________
dense3 (Dense)               (None, 32)                2080      
_________________________________________________________________
mydense1 (Dense)             (None, 128)               4224      
_________________________________________________________________
mydense2 (Dense)             (None, 64)                8256      
_________________________________________________________________
my_logit (Dense)             (None, 10)                650       
=================================================================
Total params: 123,946
Trainable params: 123,946
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/2
1080/1080 [==============================] - 7s 6ms/step - loss: 0.4096 - acc: 0.8584 - mse: 27.6896 - val_loss: 0.4016 - val_acc: 0.8642 - val_mse: 27.5922
Epoch 2/2
1080/1080 [==============================] - 6s 5ms/step - loss: 0.3320 - acc: 0.8794 - mse: 27.6930 - val_loss: 0.3711 - val_acc: 0.8647 - val_mse: 27.5948

Compare the printed outputs of the two models:

for i in predicts:
    print(i.argmax())
    
for i in predicts_2:
    print(i.argmax())
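One caveat about this kind of graft, shown in a self-contained sketch (model and layer names here are made up): the reused layers are shared, not copied, so fitting the second model also updates those layers inside the first model.

```python
import tensorflow as tf

# A base model with a layer we will reuse.
inp = tf.keras.Input(shape=(4,))
h = tf.keras.layers.Dense(8, name='shared')(inp)
out = tf.keras.layers.Dense(2, name='head')(h)
base = tf.keras.Model(inp, out)

# Graft a new head onto 'shared', as in the code above.
new_out = tf.keras.layers.Dense(3, name='new_head')(base.get_layer('shared').output)
second = tf.keras.Model(base.input, new_out)

# 'shared' is the very same layer object in both models: training the
# second model also changes it inside the first.
print(base.get_layer('shared') is second.get_layer('shared'))  # True
```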

Suggestions:



  • 1. Getting the parameters of a particular layer: often these are exactly the embedding vectors you want.
  • 2. Producing different outputs from one model, via the transfer-learning-style trick.

If you have a better approach, please share it.
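For what it's worth, Keras layers do receive a `training` argument in `call()` (`fit` passes `training=True`, while `predict` and `evaluate` pass `training=False`), so a hypothetical custom layer could switch activations on that flag. A minimal sketch, not the method used above:

```python
import numpy as np
import tensorflow as tf

class TrainPredictActivation(tf.keras.layers.Layer):
    """Hypothetical layer: sigmoid while training, e^x at prediction time."""
    def call(self, inputs, training=None):
        if training:
            return tf.sigmoid(inputs)
        return tf.exp(inputs)

layer = TrainPredictActivation()
x = tf.constant([[0.0, 1.0]])
print(layer(x, training=True).numpy())   # sigmoid branch
print(layer(x, training=False).numpy())  # exp branch
```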