1. Reference notes

Code version: M79

2. PacketBuffer: inserting RTP packets and returning complete frames

On insertion, the PacketBuffer first records the packet's timestamp and remembers the sequence number of the first packet received.


It then checks the target slot in the existing buffer: a packet with the same sequence number is a duplicate and is dropped; if the slot is occupied by a different packet, the buffer is expanded; once the buffer has been expanded to its upper limit and the slot is still occupied, the only option is to clear the buffer and wait for a new keyframe.

sequence_buffer_ is the index array (per-slot sequencing and continuity state).

data_buffer_ is the corresponding data array (the buffered packets themselves).

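The two buffers are parallel arrays indexed by seq_num % size_. A minimal sketch of their layout, abbreviated from the M79 modules/video_coding/packet_buffer.h (some fields and the surrounding class are omitted):

// Per-slot sequencing state; data_buffer_ holds the packet stored in the
// same slot.
struct ContinuityInfo {
  uint16_t seq_num = 0;        // Sequence number of the packet in this slot.
  bool frame_begin = false;    // First packet of a frame.
  bool frame_end = false;      // Last packet of a frame.
  bool used = false;           // Slot currently holds a packet.
  bool continuous = false;     // Every packet up to this one has arrived.
  bool frame_created = false;  // Already handed out as part of a frame.
};

std::vector<ContinuityInfo> sequence_buffer_;  // The index array.
std::vector<VCMPacket> data_buffer_;           // The corresponding packet data.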

InsertPacket stores the RTP packet and its index, runs packet-loss detection, records the timestamp, scans the sorting buffer, and checks whether a complete frame can be assembled and returned.

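A condensed sketch of that insertion path, based on the M79 packet_buffer.cc (locking, payload-ownership handling, and timestamp bookkeeping are omitted, so treat it as illustrative rather than verbatim):

// Condensed PacketBuffer::InsertPacket(): slot handling, loss detection, and
// the frame search, in the order described above.
bool PacketBuffer::InsertPacket(VCMPacket* packet) {
  uint16_t seq_num = packet->seqNum;
  size_t index = seq_num % size_;

  if (sequence_buffer_[index].used) {
    // Same sequence number already stored: duplicate packet, drop it.
    if (data_buffer_[index].seqNum == seq_num)
      return true;
    // A different packet occupies the slot: try to grow the buffer.
    while (ExpandBufferSize() && sequence_buffer_[seq_num % size_].used) {
    }
    index = seq_num % size_;
    // Still full at the maximum size: clear everything and ask for a keyframe.
    if (sequence_buffer_[index].used) {
      Clear();
      return false;  // The caller requests a new keyframe.
    }
  }

  // Store the packet and its per-slot state.
  sequence_buffer_[index].frame_begin = packet->is_first_packet_in_frame();
  sequence_buffer_[index].frame_end = packet->is_last_packet_in_frame();
  sequence_buffer_[index].seq_num = seq_num;
  sequence_buffer_[index].continuous = false;
  sequence_buffer_[index].frame_created = false;
  sequence_buffer_[index].used = true;
  data_buffer_[index] = *packet;

  // Loss detection, then try to assemble complete frames and hand them out.
  UpdateMissingPackets(seq_num);
  for (auto& frame : FindFrames(seq_num))
    assembled_frame_callback_->OnAssembledFrame(std::move(frame));
  return true;
}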

Handling padding packets

When the encoder's output bitrate falls short, the sender may pad with empty packets to keep the sending bitrate up. Padding packets never enter the sequence buffer or the data buffer, but they still trigger packet-loss detection and the complete-frame check.

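In M79 padding is handled by a separate entry point (PaddingReceived, as far as I recall) that never touches the two buffers; a simplified sketch:

// Simplified: a padding packet only feeds loss detection and re-runs the
// frame search; nothing is written into sequence_buffer_ or data_buffer_.
void PacketBuffer::PaddingReceived(uint16_t seq_num) {
  // Mark this sequence number as received so it is not reported as missing.
  UpdateMissingPackets(seq_num);
  // The padding may have closed a gap, so look for complete frames starting
  // right after it.
  for (auto& frame : FindFrames(static_cast<uint16_t>(seq_num + 1)))
    assembled_frame_callback_->OnAssembledFrame(std::move(frame));
}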

Packet-loss detection

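The loss detection lives in PacketBuffer::UpdateMissingPackets(). The sketch below is reconstructed from the M79 source (kMaxPaddingAge is 1000 there, if memory serves); the walkthrough that follows traces this logic:

// Reconstructed from M79 packet_buffer.cc, with comments added.
void PacketBuffer::UpdateMissingPackets(uint16_t seq_num) {
  if (!newest_inserted_seq_num_)
    newest_inserted_seq_num_ = seq_num;

  const int kMaxPaddingAge = 1000;
  if (AheadOf(seq_num, *newest_inserted_seq_num_)) {
    // Forget missing packets that are now too old to be worth waiting for.
    uint16_t old_seq_num = seq_num - kMaxPaddingAge;
    auto erase_to = missing_packets_.lower_bound(old_seq_num);
    missing_packets_.erase(missing_packets_.begin(), erase_to);

    // Guard against inserting a huge number of missing packets if the
    // sequence number jumps far ahead.
    if (AheadOf(old_seq_num, *newest_inserted_seq_num_))
      *newest_inserted_seq_num_ = old_seq_num;

    // Every sequence number between the previous newest and the new packet is
    // recorded as missing.
    ++*newest_inserted_seq_num_;
    while (AheadOf(seq_num, *newest_inserted_seq_num_)) {
      missing_packets_.insert(*newest_inserted_seq_num_);
      ++*newest_inserted_seq_num_;
    }
  } else {
    // An older packet arrived; if it was marked missing, the hole is filled.
    missing_packets_.erase(seq_num);
  }
}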

Walkthrough against the code above:

(1) First insertion, seq = 5: newest_inserted_seq_num_ = 5, missing_packets_ = [].

(2) Second insertion, seq = 4: the first AheadOf() check is false, so only the else branch runs (erasing 4 from the empty missing_packets_). newest_inserted_seq_num_ stays 5, missing_packets_ = [].

(3) Third insertion, seq = 6: old_seq_num = 6 - kMaxPaddingAge wraps around to 64542, so AheadOf() does not consider old_seq_num newer than newest_inserted_seq_num_ and the guard that would jump newest_inserted_seq_num_ forward is skipped.
       After the increment, newest_inserted_seq_num_ = 6;
       since seq == newest_inserted_seq_num_, the body of the while loop does not run.

(4) Fourth insertion, seq = 8: old_seq_num wraps to 64544, the guard is skipped again, and after the increment newest_inserted_seq_num_ = 7.
       The while condition is now true, so 7 is inserted into missing_packets_ (and newest_inserted_seq_num_ ends up at 8).

(5) When packet 7 is inserted later, it is erased from missing_packets_.

Continuity check

PacketBuffer::PotentialNewFrame

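A sketch of the continuity check, again reconstructed from the M79 source (treat the exact order of checks as approximate):

// A packet is "continuous" if it starts a frame, or if it directly follows a
// continuous packet that belongs to the same frame.
bool PacketBuffer::PotentialNewFrame(uint16_t seq_num) const {
  size_t index = seq_num % size_;
  int prev_index = index > 0 ? index - 1 : size_ - 1;

  if (!sequence_buffer_[index].used)
    return false;
  if (sequence_buffer_[index].seq_num != seq_num)
    return false;
  if (sequence_buffer_[index].frame_created)
    return false;
  // The first packet of a frame is continuous by definition.
  if (sequence_buffer_[index].frame_begin)
    return true;
  // Otherwise the previous packet must exist, not have been consumed yet,
  // carry the previous sequence number, belong to the same frame (same
  // timestamp), and itself be continuous.
  if (!sequence_buffer_[prev_index].used)
    return false;
  if (sequence_buffer_[prev_index].frame_created)
    return false;
  if (sequence_buffer_[prev_index].seq_num !=
      static_cast<uint16_t>(seq_num - 1))
    return false;
  if (data_buffer_[prev_index].timestamp != data_buffer_[index].timestamp)
    return false;
  if (sequence_buffer_[prev_index].continuous)
    return true;

  return false;
}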

Finding complete frames: PacketBuffer::FindFrames

PacketBuffer::FindFrames walks the consecutive packets in the sorting buffer and looks for frame boundaries, but VPX and H.264 are treated differently:

For VPX, the function trusts the packet's frame_begin flag, so a complete VPX frame depends entirely on finding both the frame_begin and the frame_end packet.

For H.264, frame_begin is considered unreliable and is not used to locate the start of a frame, while frame_end is still trusted. Concretely, the start of an H.264 frame is found by walking backwards from the packet that carries frame_end until a packet with a different timestamp is hit; everything after that timestamp break is taken to be one complete H.264 frame.

H.264 P-frames also get special handling: even if a P-frame is already complete, it is not forwarded immediately when there are still packet-loss holes in front of it; FindFrames waits until all the holes are filled, because a P-frame needs its reference frames to decode correctly.

std::vector<std::unique_ptr<RtpFrameObject>> PacketBuffer::FindFrames(
    uint16_t seq_num) {
  std::vector<std::unique_ptr<RtpFrameObject>> found_frames;
  // Basic algorithm: walk the consecutive packets, first find the last packet
  // of a frame (the one carrying the frame_end flag), then walk backwards to
  // find the first packet of the frame (frame_begin for VPX, a timestamp
  // discontinuity for H.264), and assemble the range into one complete frame.

  // PotentialNewFrame(seq_num) checks whether all packets before seq_num are
  // continuous.
  for (size_t i = 0; i < size_ && PotentialNewFrame(seq_num); ++i) {
    // Cache index of the current packet.
    size_t index = seq_num % size_;
    // If all packets before seq_num are continuous, seq_num itself is also
    // continuous.
    sequence_buffer_[index].continuous = true;
 
    // If all packets of the frame is continuous, find the first packet of the
    // frame and create an RtpFrameObject.
    // Found the last packet of a frame.
    if (sequence_buffer_[index].frame_end) {
      size_t frame_size = 0;
      int max_nack_count = -1;
      // Sequence number of the frame's first packet; start from the tail.
      uint16_t start_seq_num = seq_num;
      // Minimum receive time of the frame, normally the receive time of its
      // first packet.
      int64_t min_recv_time = data_buffer_[index].packet_info.receive_time_ms();
      // Maximum receive time of the frame, normally the receive time of its
      // last packet.
      int64_t max_recv_time = data_buffer_[index].packet_info.receive_time_ms();
      RtpPacketInfos::vector_type packet_infos;
 
      // Find the start index by searching backward until the packet with
      // the |frame_begin| flag is set.
      // Walk backwards to find the first packet of the frame.
      // Index of the frame start; begin at the frame's tail.
      int start_index = index;
      size_t tested_packets = 0;
      // Timestamp of the current packet.
      int64_t frame_timestamp = data_buffer_[start_index].timestamp;
 
      // Identify H.264 keyframes by means of SPS, PPS, and IDR.
      bool is_h264 = data_buffer_[start_index].codec() == kVideoCodecH264;
      bool has_h264_sps = false;
      bool has_h264_pps = false;
      bool has_h264_idr = false;
      bool is_h264_keyframe = false;
 
      // Walk backwards starting from the last packet of the frame.
      while (true) {
        // Number of packets examined so far.
        ++tested_packets;
        // Accumulate the frame size.
        frame_size += data_buffer_[start_index].sizeBytes;
        // Track the maximum NACK count.
        max_nack_count =
            std::max(max_nack_count, data_buffer_[start_index].timesNacked);
        // Mark the current packet as already used to create a frame.
        sequence_buffer_[start_index].frame_created = true;

        // Track the minimum receive time.
        min_recv_time =
            std::min(min_recv_time,
                     data_buffer_[start_index].packet_info.receive_time_ms());
        // Track the maximum receive time.
        max_recv_time =
            std::max(max_recv_time,
                     data_buffer_[start_index].packet_info.receive_time_ms());
 
        // Should use |push_front()| since the loop traverses backwards. But
        // it's too inefficient to do so on a vector so we'll instead fix the
        // order afterwards.
        packet_infos.push_back(data_buffer_[start_index].packet_info);
 
        // For VPX: if the packet with the frame_begin flag is found, the frame
        // is complete and the backward walk stops here.
        if (!is_h264 && sequence_buffer_[start_index].frame_begin)
          break;
 
        if (is_h264) {
          // First check whether this is a keyframe: fetch the H.264 header
          // from the data buffer.
          const auto* h264_header = absl::get_if<RTPVideoHeaderH264>(
              &data_buffer_[start_index].video_header.video_type_header);
          // A packet carries at most kMaxNalusPerPacket NAL units.
          if (!h264_header || h264_header->nalus_length >= kMaxNalusPerPacket)
            return found_frames;

          // Walk all NAL units; note that WebRTC sends SPS and PPS in front of
          // every IDR frame.
          for (size_t j = 0; j < h264_header->nalus_length; ++j) {
            if (h264_header->nalus[j].type == H264::NaluType::kSps) {
              has_h264_sps = true;
            } else if (h264_header->nalus[j].type == H264::NaluType::kPps) {
              has_h264_pps = true;
            } else if (h264_header->nalus[j].type == H264::NaluType::kIdr) {
              has_h264_idr = true;
            }
          }
          // sps_pps_idr_is_h264_keyframe_ defaults to false, i.e. an IDR NAL
          // unit alone is enough to treat the frame as a keyframe; SPS and PPS
          // are not required to be present as well.
          if ((sps_pps_idr_is_h264_keyframe_ && has_h264_idr && has_h264_sps &&
               has_h264_pps) ||
              (!sps_pps_idr_is_h264_keyframe_ && has_h264_idr)) {
            is_h264_keyframe = true;
          }
        }
 
        if (tested_packets == size_)
          break;
 
        // Move the search pointer back by one packet.
        start_index = start_index > 0 ? start_index - 1 : size_ - 1;
 
        // In the case of H264 we don't have a frame_begin bit (yes,
        // |frame_begin| might be set to true but that is a lie). So instead
        // we traverese backwards as long as we have a previous packet and
        // the timestamp of that packet is the same as this one. This may cause
        // the PacketBuffer to hand out incomplete frames.
        // See: https://bugs.chromium.org/p/webrtc/issues/detail?id=7106
        // The upstream comment above explains why H.264 does not rely on
        // frame_begin; arguably frame_begin could be used here as well.
        if (is_h264 &&
            (!sequence_buffer_[start_index].used ||
             data_buffer_[start_index].timestamp != frame_timestamp)) {
          break;
        }
 
        // Still inside the same frame, so decrement the start sequence number.
        --start_seq_num;
      }
 
      // Fix the order since the packet-finding loop traverses backwards.
      std::reverse(packet_infos.begin(), packet_infos.end());

      // At this point the start and end of the frame have been located and the
      // frame could be assembled. H.264 P-frames, however, get extra handling:
      // even if a P-frame is already complete, it is not forwarded while there
      // are still packet-loss holes in front of it; it is held back until the
      // holes are filled, because a P-frame needs its reference frames to be
      // decodable.
 
      if (is_h264) {
        // Warn if this is an unsafe frame.
        if (has_h264_idr && (!has_h264_sps || !has_h264_pps)) {
          RTC_LOG(LS_WARNING)
              << "Received H.264-IDR frame "
              << "(SPS: " << has_h264_sps << ", PPS: " << has_h264_pps
              << "). Treating as "
              << (sps_pps_idr_is_h264_keyframe_ ? "delta" : "key")
              << " frame since WebRTC-SpsPpsIdrIsH264Keyframe is "
              << (sps_pps_idr_is_h264_keyframe_ ? "enabled." : "disabled");
        }
 
        // Now that we have decided whether to treat this frame as a key frame
        // or delta frame in the frame buffer, we update the field that
        // determines if the RtpFrameObject is a key frame or delta frame.
        // Set the keyframe flag on the frame's first packet in the data buffer.
        const size_t first_packet_index = start_seq_num % size_;
        RTC_CHECK_LT(first_packet_index, size_);
        if (is_h264_keyframe) {
          data_buffer_[first_packet_index].video_header.frame_type =
              VideoFrameType::kVideoFrameKey;
        } else {
          data_buffer_[first_packet_index].video_header.frame_type =
              VideoFrameType::kVideoFrameDelta;
        }
 
        // missing_packets_.upper_bound(start_seq_num) != missing_packets_.begin()
        // searches the missing-packet set for the first entry greater than
        // start_seq_num (the frame's first sequence number). If that position
        // is not the beginning of the set, some missing sequence numbers are
        // smaller than start_seq_num, i.e. there are loss holes in front of
        // this P-frame.
        // Example 1:
        // missing_packets_ = {3, 4, 6}, start_seq_num = 5,
        // upper_bound(start_seq_num) points at 6. Packets 3 and 4 in front of
        // the frame start have not arrived yet, so even though the P-frame is
        // complete, forwarding it may be pointless. The frame_created flags
        // are cleared again below and the packets stay buffered until the
        // holes are filled.
        // Example 2:
        // missing_packets_ = {10, 16, 17}, start_seq_num = 3,
        // upper_bound(start_seq_num) points at 10, which is the beginning of
        // the set. Nothing is missing in front of the frame start and the
        // frame is complete, so it can be forwarded.
        // With IPPP, if this is not a keyframe, make sure there are no gaps
        // in the packet sequence numbers up until this point.
        const uint8_t h264tid =
            data_buffer_[start_index].video_header.frame_marking.temporal_id;
        if (h264tid == kNoTemporalIdx && !is_h264_keyframe &&
            missing_packets_.upper_bound(start_seq_num) !=
                missing_packets_.begin()) {
          uint16_t stop_index = (index + 1) % size_;
          while (start_index != stop_index) {
            sequence_buffer_[start_index].frame_created = false;
            start_index = (start_index + 1) % size_;
          }
 
          return found_frames;
        }
      }
 
      // About to assemble a frame, so drop the entries in missing_packets_
      // that precede it. An H.264 P-frame with holes in front never reaches
      // this point, as explained above. For a keyframe, the earlier loss
      // information can be discarded, presumably because decoding restarts
      // from the keyframe.
      missing_packets_.erase(missing_packets_.begin(),
                             missing_packets_.upper_bound(seq_num));
 
      // Assemble one frame.
      const VCMPacket* first_packet = GetPacket(start_seq_num);
      const VCMPacket* last_packet = GetPacket(seq_num);
      auto frame = std::make_unique<RtpFrameObject>(
          start_seq_num, seq_num, last_packet->markerBit, max_nack_count,
          min_recv_time, max_recv_time, first_packet->timestamp,
          first_packet->ntp_time_ms_, last_packet->video_header.video_timing,
          first_packet->payloadType, first_packet->codec(),
          last_packet->video_header.rotation,
          last_packet->video_header.content_type, first_packet->video_header,
          last_packet->video_header.color_space,
          first_packet->generic_descriptor,
          RtpPacketInfos(std::move(packet_infos)),
          GetEncodedImageBuffer(frame_size, start_seq_num, seq_num));
 
      found_frames.emplace_back(std::move(frame));
 
      // Clear the buffered data and per-slot state for this frame.
      ClearInterval(start_seq_num, seq_num);
    }
    // Widen the search forwards: with loss or reordering, the current seq_num
    // may have just filled an earlier hole, and that packet alone cannot yield
    // a complete frame. The pointer therefore keeps moving forward to the next
    // frame_end before walking backwards again, which also re-examines
    // P-frames that were held back earlier because of holes in front of them.
    ++seq_num;
  }
  // Return all complete frames found.
  return found_frames;
}

Delivering complete frames through the callback

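Frames returned by FindFrames() are handed to the callback registered when the PacketBuffer was created (the OnAssembledFrameCallback interface in M79; in the receive pipeline it is implemented by RtpVideoStreamReceiver, which forwards the frame towards the frame buffer). A condensed view of the hand-off at the end of InsertPacket(), based on the M79 source:

// Frames are collected while holding the lock and delivered outside it.
std::vector<std::unique_ptr<RtpFrameObject>> found_frames;
{
  rtc::CritScope lock(&crit_);
  // ... store the packet, update missing_packets_ ...
  found_frames = FindFrames(seq_num);
}
for (std::unique_ptr<RtpFrameObject>& frame : found_frames)
  assembled_frame_callback_->OnAssembledFrame(std::move(frame));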