In day-to-day projects, building basic networks like CNNs and RNNs comes up all the time. This note records a few common ways to build such networks and how to wrap them up.

Keras official documentation: https://keras.io/

Learn to read how the docs are written; honestly, it goes quickly once you do.

1. Building a CNN with Keras + declaring GPUs

import os

import tensorflow as tf
import keras.backend.tensorflow_backend as ktf
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Conv1D, MaxPool1D, BatchNormalization
from keras.callbacks import LearningRateScheduler, EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras import optimizers

# Declaring GPUs in Keras is simple
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
# let GPU memory grow on demand instead of grabbing it all up front
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth=True
session = tf.Session(config=config)
ktf.set_session(session)

# Data preparation is skipped here; assume the data is already being fed in like this
# use Sequential to define a sequential Keras model
model=Sequential() 
model.add(Conv1D(filters=64,kernel_size=3,strides=1,activation='sigmoid',padding="same",input_shape=(2600,1)))
model.add(BatchNormalization())
model.add(Conv1D(filters=64,kernel_size=3,strides=1,activation='sigmoid',padding="same"))
model.add(MaxPool1D(pool_size=4))
model.add(Dropout(0.25))
model.add(Conv1D(filters=32,kernel_size=3,strides=1,activation='sigmoid',padding="same"))
model.add(BatchNormalization())
model.add(Conv1D(filters=32,kernel_size=3,strides=1,activation='relu',padding="same"))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(128, activation='sigmoid'))  
model.add(Dense(32, activation='sigmoid', name='layer_xgb'))
model.add(Dropout(0.25))

# 输出层
model.add(Dense(3, activation='softmax'))

opt = optimizers.Adam(lr=0.001, epsilon=1e-8, decay=1e-4)  # define the optimizer

from keras import metrics  # evaluation metrics

model.compile(optimizer = opt , loss = "categorical_crossentropy", metrics=[metrics.categorical_accuracy])
model.summary()
# a callback that decays the learning rate when val_loss plateaus
learning_rate_reduction = ReduceLROnPlateau(monitor = 'val_loss', 
                                            patience=3,
                                            verbose=1,
                                            factor=0.5,
                                            min_lr=0.000001
                                            )
# a model checkpoint callback
ckpt = "model_{epoch:02d}-{val_loss:.2f}.hdf5"
checkpoint = ModelCheckpoint(os.path.join(r'/kaggle/working/test-2020-1-demo/model', ckpt), monitor='val_loss', save_weights_only=False, verbose=1, save_best_only=True)
# early stopping that watches the monitored metric and restores the best weights
earlystopping = EarlyStopping(monitor='val_loss', verbose=1, patience=3, restore_best_weights=True)

model.fit_generator(
                    batch_generator(batch_size),steps_per_epoch=batch_num,
                    epochs=epoch,
                    callbacks=[learning_rate_reduction,checkpoint,earlystopping],
                    workers=1,
                    use_multiprocessing=False,validation_data=(val_data,val_label),
                    class_weight=class_weights
                    )

model.save_weights(r'/kaggle/working/test-2020-1-demo/model/finall.hdf5')

# That is roughly the whole pipeline. model.fit_generator is used here because the data is too large to load at once, so a generator feeds it batch by batch; for small datasets just use model.fit, which splits the data into batches internally.
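The batch_generator used above is not shown in this note; a minimal sketch, assuming the training set sits in in-memory numpy arrays named train_data and train_label (hypothetical names), might look like this:

import numpy as np

# Hypothetical generator: yields (inputs, targets) batches forever.
# Replace the array slicing with chunked reads from disk if the data
# really does not fit in memory.
def batch_generator(batch_size):
    n = train_data.shape[0]
    while True:
        idx = np.random.permutation(n)           # reshuffle every pass
        for start in range(0, n - batch_size + 1, batch_size):
            batch_idx = idx[start:start + batch_size]
            yield train_data[batch_idx], train_label[batch_idx]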

2. Building an RNN with Keras

from keras.models import Sequential, Model
from keras.layers import Masking, LSTM, Dense
from keras import regularizers

# This version includes a Masking layer. Masking means covering up: in many applications the inputs are
# variable-length, so they get padded with 0 to a fixed length. If the padded input went straight into the
# network, the weights would also be optimized against those 0s, which do not exist in the real data, and the
# updates would be wrong. The Masking layer hides the padded positions so they do not take part in the weight updates.
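For reference, a sketch of how variable-length inputs could be padded to the fixed (800, 7) shape before the Masking layer; raw_sequences is a hypothetical list of arrays of shape (length_i, 7), not something defined in this note:

from keras.preprocessing.sequence import pad_sequences

# raw_sequences: hypothetical list of float arrays, each (length_i, 7) with length_i <= 800
padded = pad_sequences(raw_sequences, maxlen=800, dtype='float32',
                       padding='post', value=0.0)
# padded has shape (num_samples, 800, 7); the fill value (0.0) must match
# mask_value in the Masking layer below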


nb_lstm_outputs = 400  # number of LSTM units
nb_time_steps = 800  # sequence length (number of time steps)
nb_input_vector = 7  # size of the input vector at each time step

model = Sequential()

model.add(Masking(mask_value=0,input_shape=(800,7))) 
# mask_value must match whatever value was filled in during padding


model.add(LSTM(units=nb_lstm_outputs,input_shape=(nb_time_steps,nb_input_vector), kernel_regularizer=regularizers.l2(1),return_sequences=True))
model.add(LSTM(units=nb_lstm_outputs,input_shape=(nb_time_steps,nb_input_vector)))
# Note: when stacking LSTMs in Keras, return_sequences must be True on the first one. If it is left at the default (False), the time dimension is dropped and adding another LSTM on top will raise an error.
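To make the dimension point concrete, a quick sanity check of the layer output shapes (these follow from the model defined above):

print(model.layers[1].output_shape)  # (None, 800, 400): return_sequences=True keeps the time axis
print(model.layers[2].output_shape)  # (None, 400): the second LSTM returns only its final output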

# After this, just attach a classifier head and you are done; a sketch follows below.
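A minimal sketch of such a classifier head, assuming a 3-class softmax output like the CNN in section 1 (the class count and the compile settings are assumptions, not from this note):

# Hypothetical classifier head on top of the LSTM stack above
model.add(Dense(64, activation='relu'))
model.add(Dense(3, activation='softmax'))  # 3 classes assumed for illustration
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()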

Keras really is point-and-shoot. TensorFlow 2.0 is also moving toward the Keras style with better encapsulation; I am not used to it yet, so everything here is written for TensorFlow 1.x.

3. Building a CNN with TensorFlow

TensorFlow builds the network with placeholders and then runs the epoch training loop explicitly. It is more low-level, but the model structure and processing flow are clearer.

import tensorflow as tf
x=tf.placeholder(dtype=tf.float64,shape=[None,100,6,1]) 
# the placeholder shape is (batch, 100, 6, 1); None is the batch dimension
x1=tf.layers.conv2d(inputs=x,filters=8,kernel_size=[3,3],strides=(1,1),padding='same')
# filters is the number of kernels, kernel_size their size, strides the stride, and padding the padding scheme ('same' keeps the spatial size)
x2=tf.layers.max_pooling2d(inputs=x1,pool_size=[2,2],strides=(1,1),padding='same')
x3=tf.layers.conv2d(inputs=x2,filters=8,kernel_size=[3,3],strides=(1,1),padding='same')
x4=tf.layers.max_pooling2d(inputs=x3,pool_size=[2,2],strides=(1,1),padding='same')

x5=tf.layers.flatten(inputs=x4)

# fully connected head
x6=tf.layers.dense(inputs=x5,units=256,activation=tf.nn.sigmoid)
x7=tf.layers.dense(inputs=x6,units=128,activation=tf.nn.sigmoid)
x8=tf.layers.dense(inputs=x7,units=32,activation=tf.nn.sigmoid)
x9=tf.layers.dense(inputs=x8,units=16,activation=tf.nn.sigmoid)
dropout=tf.layers.dropout(inputs=x9,rate=0.3)

# units is the output size (dimensionality); here 2 for a two-class problem

y_predict = tf.layers.dense(inputs=dropout, units=2, activation=None)
# output layer: raw logits with no activation, because softmax_cross_entropy_with_logits applies the softmax itself


y_label=tf.placeholder(dtype=tf.float64,shape=[None,2])

loss_function = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=y_predict, labels=y_label))
optimizer = tf.train.AdamOptimizer(learning_rate=0.005).minimize(loss_function)

with tf.name_scope('evaluate_model'):
    correct_prediction = tf.equal(tf.argmax(y_predict, 1),
                                  tf.argmax(y_label, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))


# shuffle the data and build batches for training (x_input / y_out are the prepared data and labels)
train_data=x_input
train_label=y_out
input_queue = tf.train.slice_input_producer([train_label, train_data], shuffle=True, num_epochs=None)
y_data, t_data = tf.train.batch(input_queue, batch_size=batch_size, num_threads=1, capacity=20, allow_smaller_final_batch=False)


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess, coord)
    try:
        for i in range(epoch_num):  # one epoch
            print('************')
            epoch_loss = 0.0
            # accuracy over the full training set at the start of the epoch
            train_acc = sess.run(accuracy, feed_dict={x: train_data, y_label: train_label})
            for j in range(batch_total):  # one batch
                # fetch batch_size samples and labels from the queue
                t_data_v, y_data_v = sess.run([t_data, y_data])
                # feed the batch to the CNN and update the parameters
                sess.run(optimizer, feed_dict={x: t_data_v, y_label: y_data_v})
                _loss = sess.run(loss_function, feed_dict={x: t_data_v, y_label: y_data_v})

                epoch_loss += _loss
            epoch_loss /= batch_total
            # predictions on the last batch (computed for inspection, not used below)
            y_p = sess.run(y_predict, feed_dict={x: t_data_v, y_label: y_data_v})
            print("Epoch: %s, batch loss: %s, train accuracy: %s" % (i, epoch_loss, train_acc))

    except tf.errors.OutOfRangeError:
        # paired with the try above: this branch runs if the input queue raises OutOfRangeError
        print("ran out of input data")
    finally:
        coord.request_stop()
    coord.join(threads)

 

4. Building an LSTM with TensorFlow