Table of Contents

  • Basic steps
  • 1. Locate the face in the image
  • 2. Detect the key facial structures within the face ROI
  • What is an ROI
  • Helper function rect_to_bb: convert a rect into coordinate points
  • Helper function shape_to_np
  • Helper function resize
  • Main code
  • Import the necessary packages
  • Initialize the face detector and the facial landmark predictor
  • Load the image, convert it to grayscale, and fix its size
  • Run the loaded detector on the target image
  • Loop over all detected faces
  • Display the resulting image
  • The final code
  • Experimental results
  • Analysis and summary


Basic steps

1. Locate the face in the image
  • Face detection can be implemented in many ways, for example OpenCV's built-in Haar cascades, a pre-trained HOG + linear SVM object detector, or a deep-learning-based face detector. Whichever method is used, the end result is a bounding box that marks the face region.
2. Detect the key facial structures within the face ROI
  • Once the face region is given, we can detect the facial landmark points. There are many different facial landmark detectors, but essentially all of them target the same structures: the mouth, the left and right eyebrows, the left and right eyes, the nose, the jaw, and so on.
  • The facial landmark detector commonly used in the Dlib library is the one from One millisecond face alignment with an ensemble of regression trees. The method uses a manually annotated training set in which specific coordinates are labeled around the facial structures, together with priors based on the distances between pairs of pixels. With this training data, an ensemble of regression trees is trained to detect the facial landmarks. The resulting detector runs in real time with high accuracy.
  • If you want a deeper understanding of this technique, you can follow the links below and read the paper, together with Dlib's official documentation.
  • Link to the paper
  • Dlib official documentation
  • The pre-trained facial landmark detector in the Dlib library predicts 68 landmark points on the face, distributed as shown below; an index-range sketch follows the figure.

[Figure: layout of the 68 facial landmark points]
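  • For quick reference, the 68 points are conventionally grouped by index range as in the sketch below; these ranges follow the standard 68-point annotation (and roughly match imutils' FACIAL_LANDMARKS_IDXS), they are not spelled out in the original article.
# reference sketch (not from the original article): index ranges of the
# standard 68-point landmark layout, grouped by facial region
FACIAL_LANDMARK_RANGES = {
    "jaw": (0, 17),
    "right_eyebrow": (17, 22),
    "left_eyebrow": (22, 27),
    "nose": (27, 36),
    "right_eye": (36, 42),
    "left_eye": (42, 48),
    "mouth": (48, 68),
}

# example: slice the mouth points out of a (68, 2) landmark array `shape`
# start, end = FACIAL_LANDMARK_RANGES["mouth"]
# mouth_points = shape[start:end]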

What is an ROI
  • In image processing, the region of the image that needs to be processed, outlined with a box, circle, ellipse, irregular polygon, etc., is called the region of interest (ROI); a minimal cropping sketch follows.
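  • A minimal sketch of cropping an ROI with NumPy slicing; the file name "1.jpg" and the box (x, y, w, h) are purely illustrative values.
import cv2

image = cv2.imread("1.jpg")
(x, y, w, h) = (100, 80, 200, 200)        # hypothetical face bounding box
face_roi = image[y:y + h, x:x + w]        # rows are indexed by y, columns by x
cv2.imshow("Face ROI", face_roi)
cv2.waitKey(0)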
Helper function rect_to_bb: convert a rect into coordinate points
  • Description: convert the rect returned by the detector into the coordinates of a concrete rectangular box
  • Principle: the detector returns dlib rect objects, which expose the box through left(), top(), right() and bottom(); we convert them to the OpenCV-style (x, y, w, h) format
def rect_to_bb(rect):
	# take a bounding box predicted by dlib and convert it
	# to the format (x, y, w, h) as we would normally do
	# with OpenCV
	x = rect.left()
	y = rect.top()
	w = rect.right() - x
	h = rect.bottom() - y
	# return a tuple of (x, y, w, h)
	return (x, y, w, h)
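  • A small usage sketch with a hand-constructed dlib rectangle (the numbers are made up):
import dlib

rect = dlib.rectangle(50, 60, 250, 260)   # left, top, right, bottom
print(rect_to_bb(rect))                   # -> (50, 60, 200, 200)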
Helper function shape_to_np
  • Description: convert the shape object, which contains the coordinates of the 68 facial landmark points, into a numpy array
def shape_to_np(shape, dtype="int"):
	# initialize the list of (x, y)-coordinates
	coords = np.zeros((68, 2), dtype=dtype)
	# loop over the 68 facial landmarks and convert them
	# to a 2-tuple of (x, y)-coordinates
	for i in range(0, 68):
		coords[i] = (shape.part(i).x, shape.part(i).y)
	# return the list of (x, y)-coordinates
	return coords
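  • The hard-coded 68 works for this model, but a slightly more general variant (a sketch assuming dlib's full_object_detection interface, which exposes num_parts) would also work with other dlib landmark models such as the 5-point predictor:
import numpy as np

def shape_to_np_generic(shape, dtype="int"):
	# use shape.num_parts instead of hard-coding 68, so the helper also
	# works with other dlib landmark models (e.g. the 5-point predictor)
	coords = np.zeros((shape.num_parts, 2), dtype=dtype)
	for i in range(shape.num_parts):
		coords[i] = (shape.part(i).x, shape.part(i).y)
	# return the array of (x, y)-coordinates
	return coords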
Helper function resize
  • Description: resize an image to the requested dimensions
  • Parameters: image is the object returned by cv2.imread; width and height are the new target size
def resize(image, width=None, height=None, inter=cv2.INTER_AREA):
    # initialize the dimensions of the image to be resized and
    # grab the image size
    dim = None
    (h, w) = image.shape[:2]

    # if both the width and height are None, then return the
    # original image
    if width is None and height is None:
        return image

    # check to see if the width is None
    if width is None:
        # calculate the ratio of the height and construct the
        # dimensions
        r = height / float(h)
        dim = (int(w * r), height)

    # otherwise, the height is None
    else:
        # calculate the ratio of the width and construct the
        # dimensions
        r = width / float(w)
        dim = (width, int(h * r))

    # resize the image
    resized = cv2.resize(image, dim, interpolation=inter)

    # return the resized image
    return resized
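  • Usage sketch: force a 500-pixel width while keeping the aspect ratio (this mirrors the imutils.resize call used later in the article):
import cv2

image = cv2.imread("1.jpg")
small = resize(image, width=500)
print(image.shape, "->", small.shape)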
Main code
Import the necessary packages
# import the necessary packages
import numpy as np
import argparse
import dlib
import cv2
import imutils
from imutils import face_utils
Initialize the face detector and the facial landmark predictor
# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
# initialize the pre-trained HOG-based face detector
detector = dlib.get_frontal_face_detector()
# use dlib.shape_predictor to load the downloaded facial landmark model;
# the argument is the path to the predictor file
predictor = dlib.shape_predictor("the path of the detector")
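  • Since argparse was imported above, a common pattern (a sketch, not part of the original walkthrough; the flag names are illustrative) is to pass the predictor path and the image path on the command line instead of hard-coding them:
import argparse
import dlib

# sketch: command-line arguments for the model path and the input image
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to the facial landmark predictor .dat file")
ap.add_argument("-i", "--image", required=True,
                help="path to the input image")
args = vars(ap.parse_args())

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])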
Load the image, convert it to grayscale, and fix its size
# load the image with OpenCV
image = cv2.imread("1.jpg")
# normalize the image size
image = imutils.resize(image,width = 500)
# convert the BGR image to a grayscale image
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
  • The resize function used here is the one defined in the helper-function section above; if you cannot install imutils, substitute that helper.
  • The concrete output of imread: each pixel is stored as a (B, G, R) triple, with the result shown below

[Figure: printed pixel values of the loaded image]


The shape of the original image corresponds to (height, width, channels): three color channels (BGR) in total, with 711*474 pixels; a small sketch reproducing these printouts follows the figures below.

[Figure: shape of the original image]

  • The image size after resizing

[Figure: shape of the resized image]

  • The grayscale image after conversion has only a single channel

[Figure: shape of the grayscale image]
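  • A small sketch that reproduces these printouts, assuming the image and gray variables defined above:
print(image.shape)    # (height, width, 3): the resized BGR image
print(gray.shape)     # (height, width): a single-channel grayscale image
print(image[0, 0])    # one pixel stored as a (B, G, R) triple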

Run the loaded detector on the target image
  • The first argument is the image in which to detect faces
  • The second argument is the number of times to upsample the image (building an image pyramid) before the detector is applied; here 1 is used, and upsampling helps the detector find smaller faces
# detect faces in the grayscale image
# detecting the bounding boxes of the faces in our image
# the second parameter is the number of image pyramid layers
# to upscale the image by prior to applying the detector
rects = detector(gray,1)

[Figure: the rectangles returned by the detector]

  • Each entry in rects is a dlib rectangle for one detected face, giving its corner coordinates, from which the width and height can be derived; a small inspection sketch follows.
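  • A small inspection sketch (dlib rectangles expose left/top/right/bottom as well as width/height accessors):
for rect in rects:
    # each rect is a dlib.rectangle describing one detected face
    print(rect.left(), rect.top(), rect.right(), rect.bottom())
    print("width:", rect.width(), "height:", rect.height())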
Loop over all detected faces
for (i,rect) in enumerate(rects):
    # i is the index of the detection, rect holds the top-left corner and size of each box
    # locate the facial landmarks, returning the positions of the 68 key points
    shape = predictor(gray,rect)
    # shape holds the output coordinates, of shape (68, 2): 68 points, each two-dimensional;
    # convert all of the coordinates into a numpy array
    shape = face_utils.shape_to_np(shape)

    # convert the rect of the detected face into the position of a drawable rectangle
    (x,y,w,h) = face_utils.rect_to_bb(rect)
    # draw the rectangle around the face
    cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),2)

    # put a text label above the rectangle
    cv2.putText(image,"Face #{}".format(i+1),(x-10,y-10),
                cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,0),2)

    # loop over all of the landmark points and mark each one on the original image
    for (x,y) in shape:
        cv2.circle(image,(x,y),1,(0,0,255),-1)
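  • As an optional extension (a sketch based on the conventional 68-point index ranges, not something in the original article), individual regions can be highlighted in a different color inside the same loop, e.g. the mouth points 48-67:
    # inside the loop over rects, after face_utils.shape_to_np(shape):
    # draw the mouth landmarks (indices 48-67) in blue instead of red
    for (mx, my) in shape[48:68]:
        cv2.circle(image, (mx, my), 1, (255, 0, 0), -1)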
  • face_utils.shape_to_np, which converts the shape output into a numpy array, is the same as the shape_to_np helper listed above.
Display the resulting image
# show the output image with the face detections + facial landmarks
cv2.imshow("Output",image)
cv2.waitKey(0)
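  • If you also want to save the annotated result to disk, one extra line is enough (the file name output.jpg is just an example):
# optionally write the annotated image to disk
cv2.imwrite("output.jpg", image)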
The final code
# import the necessary packages

# import argparse
import cv2
import dlib

import imutils
# the face_utils module below comes from the author's imutils package
from imutils import face_utils


# initialize dlib's face detector and then create the facial landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# load the image, resize it and convert it to grayscale
# resize the image to a width of 500 pixels
# image = cv2.imread(args["image"])
image = cv2.imread("1.jpg")
image = imutils.resize(image,width = 500)
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)

# detect faces in the grayscale image
# detecting the bounding boxes of the faces in our image
# the second parameter is the number of image pyramid layers
# to upscale the image by prior to applying the detector
rects = detector(gray,1)


# Given the coordinates of the face in the image
# loop over the face detections
for (i,rect) in enumerate(rects):
    # determine the facial landmarks for the face region
    # convert the coordinates of the facial landmarks to a numpy array
    # the predictor detects the facial landmarks
    shape = predictor(gray,rect)
    # convert the dlib objects to a numpy array
    shape = face_utils.shape_to_np(shape)

    # convert dlib's rectangle to an OpenCV-style bounding box (x,y,w,h)
    # then draw the face bounding box
    (x,y,w,h) = face_utils.rect_to_bb(rect)
    cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),2)

    # show the face number
    cv2.putText(image,"Face #{}".format(i+1),(x-10,y-10),
                cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,0),2)

    # loop over the (x,y)-coordinates of the facial landmarks
    # and draw them on the image
    for (x,y) in shape:
        cv2.circle(image,(x,y),1,(0,0,255),-1)

# show the output image with the face detections + facial landmarks
cv2.imshow("Output",image)
cv2.waitKey(0)
Experimental results

[Figure: detection result with the face bounding box and the 68 landmark points]

Analysis and summary

There is a download link for the imutils package, but it does not matter if you cannot get it: I have attached the corresponding original functions next to the places where they are called, so you can adapt the code yourself.