6

There is some interesting literature about RPNs (Region Proposal Networks). The most concise and helpful explanation I have found so far is the following: https://www.quora.com/How-does-the-region-proposal-network-RPN-in-Faster-R-CNN-work?share=1.

But there is something that I still don't understand from my reading. RPNs are designed to propose several candidate regions, from which a selection is made to decide which candidates fit our needs.

But RPNs, and neural networks in general, are deterministic. Thus, once trained, they will always produce the same output for a given input; there is no way to query new candidates for the same input image. As far as I understood, RPNs are trained to produce a fixed number of proposals for each new image. But how does the training work then? If the RPN has to produce 300 candidates, what should the labeled training data look like, knowing that a training image probably won't have more than 5 ground-truth bounding boxes?

And then, given that the bounding box sizes are not consistent among candidates, how does the downstream CNN operate on inputs of different sizes?

Emile D.
  • 161
  • 1
  • 5

2 Answers

5

The first answer at the link you mention addresses how region proposals are selected: with the Intersection over Union metric (more formally, the Jaccard index), i.e. how much your anchor overlaps the label. There is usually a lower limit set on this metric to filter out all the useless proposals, and the remaining matches can be sorted to choose the best.
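To make the metric concrete, here is a minimal sketch of IoU for two axis-aligned boxes in (y1, x1, y2, x2) format (the helper name `iou` is my own, not from any particular library):

```python
def iou(box_a, box_b):
    """Intersection over Union (Jaccard index) of two (y1, x1, y2, x2) boxes."""
    # Coordinates of the intersection rectangle.
    y1 = max(box_a[0], box_b[0])
    x1 = max(box_a[1], box_b[1])
    y2 = min(box_a[2], box_b[2])
    x2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

anchor = (0, 0, 10, 10)
label  = (5, 5, 15, 15)
print(iou(anchor, label))  # 25 / (100 + 100 - 25) ≈ 0.143
```

An IoU of 1 means the anchor matches the label exactly; 0 means no overlap, so thresholding this value is what filters the proposals.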


I recommend reading through this excellently explained version of a proposal network, Mask R-CNN (Mask Region-based CNN). If you prefer looking at code, there is the full repo here, implemented in Keras/TensorFlow (there is also a PyTorch implementation linked somewhere).

There is even an explanatory Jupyter notebook, which might help make things click for you.

n1k31t4
  • 15,468
  • 2
  • 33
  • 52
2

To see how the RPN targets are built for training, we can dive into the code written by Matterport: their TensorFlow/Keras Mask R-CNN repo, which has over 10,000 stars.

You can check the build_rpn_targets function in mrcnn/model.py.

First, the generated anchors (which depend on your anchor scales, ratios, image size, ...) are used to compute the IoU between anchors and ground-truth boxes:

    # Compute overlaps [num_anchors, num_gt_boxes]
    overlaps = utils.compute_overlaps(anchors, gt_boxes)

This gives us the overlap between every anchor and every ground-truth box. Then we choose positive and negative anchors based on their IoU with the ground truth. According to the Mask R-CNN paper, anchors with IoU > 0.7 are positive and those with IoU < 0.3 are negative; everything in between is neutral and not used during training:

    # 1. Set negative anchors first. They get overwritten below if a GT box is
    # matched to them. 
    anchor_iou_argmax = np.argmax(overlaps, axis=1)
    anchor_iou_max = overlaps[np.arange(overlaps.shape[0]), anchor_iou_argmax]
    rpn_match[anchor_iou_max < 0.3] = -1
    # 2. Set an anchor for each GT box (regardless of IoU value).
    # If multiple anchors have the same IoU match all of them
    gt_iou_argmax = np.argwhere(overlaps == np.max(overlaps, axis=0))[:,0]
    rpn_match[gt_iou_argmax] = 1
    # 3. Set anchors with high overlap as positive.
    rpn_match[anchor_iou_max >= 0.7] = 1
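Putting the pieces together, here is a self-contained sketch of the same matching logic on three toy anchors and one ground-truth box (my `compute_overlaps` is a naive stand-in for the `utils.compute_overlaps` used in the repo):

```python
import numpy as np

def compute_overlaps(anchors, gt_boxes):
    """IoU matrix of shape [num_anchors, num_gt_boxes] for (y1, x1, y2, x2) boxes."""
    overlaps = np.zeros((anchors.shape[0], gt_boxes.shape[0]))
    for j, gt in enumerate(gt_boxes):
        y1 = np.maximum(anchors[:, 0], gt[0])
        x1 = np.maximum(anchors[:, 1], gt[1])
        y2 = np.minimum(anchors[:, 2], gt[2])
        x2 = np.minimum(anchors[:, 3], gt[3])
        inter = np.maximum(0, y2 - y1) * np.maximum(0, x2 - x1)
        area_a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
        area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
        overlaps[:, j] = inter / (area_a + area_g - inter)
    return overlaps

# One anchor near the GT box, one partially overlapping, one far away.
anchors = np.array([[0, 0, 10, 10], [5, 5, 15, 15], [40, 40, 50, 50]], dtype=float)
gt_boxes = np.array([[1, 1, 11, 11]], dtype=float)

overlaps = compute_overlaps(anchors, gt_boxes)
rpn_match = np.zeros(anchors.shape[0], dtype=np.int8)  # 0 = neutral

anchor_iou_max = overlaps.max(axis=1)
rpn_match[anchor_iou_max < 0.3] = -1                        # negatives
gt_iou_argmax = np.argwhere(overlaps == overlaps.max(axis=0))[:, 0]
rpn_match[gt_iou_argmax] = 1                                # best anchor per GT box
rpn_match[anchor_iou_max >= 0.7] = 1                        # high-overlap positives
print(rpn_match)  # [ 1 -1 -1]
```

Note that the first anchor becomes positive even though its IoU (~0.68) is below 0.7: step 2 guarantees every ground-truth box gets at least one positive anchor.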

To train the RPN effectively, you need to set RPN_TRAIN_ANCHORS_PER_IMAGE carefully to keep training balanced when there are few objects in an image. Please note that multiple anchors can match one ground-truth box, since each anchor regresses its own bbox offset to fit the ground truth.
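The balancing itself is a subsampling step: surplus positives and negatives are reset to neutral so that at most RPN_TRAIN_ANCHORS_PER_IMAGE anchors (at most half of them positive) contribute to the loss. A minimal sketch of that idea, with made-up anchor counts for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
RPN_TRAIN_ANCHORS_PER_IMAGE = 256  # the repo's default setting

# Toy rpn_match: 1 = positive, -1 = negative, 0 = neutral (illustrative counts).
rpn_match = np.concatenate(
    [np.ones(20), -np.ones(5000), np.zeros(1000)]
).astype(np.int8)

# Keep at most half the budget as positives; reset any surplus to neutral.
ids = np.where(rpn_match == 1)[0]
extra = len(ids) - RPN_TRAIN_ANCHORS_PER_IMAGE // 2
if extra > 0:
    rpn_match[rng.choice(ids, extra, replace=False)] = 0

# Fill the rest of the budget with negatives; reset the surplus to neutral.
ids = np.where(rpn_match == -1)[0]
extra = len(ids) - (RPN_TRAIN_ANCHORS_PER_IMAGE - int(np.sum(rpn_match == 1)))
if extra > 0:
    rpn_match[rng.choice(ids, extra, replace=False)] = 0

print(int(np.sum(rpn_match == 1)), int(np.sum(rpn_match == -1)))  # 20 236
```

Without this step the loss would be dominated by the thousands of easy negative anchors, and the RPN would learn to predict "background" everywhere.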

Hope the answer is clear for you!

jimmy15923
  • 21
  • 2