firepick1 (pixel) edited this page Oct 31, 2014 · 14 revisions

FireSight matchGrid works much like standard chessboard OpenCV camera calibration. However, matchGrid is designed for use in pick-and-place camera calibration, which has slightly different requirements (a flat, almost 2D space used for motion-planning measurement) than standard chessboard calibration (a large 3D space used for location/pose detection).

The FireSight matchGrid stage matches recognized features from a preceding FireSight stage to a rectangular grid and computes camera calibration parameters from matched features. This stage requires a preceding stage such as op-matchTemplate to recognize the features to be matched. This separation allows for free experimentation on what constitutes the grid (e.g., holes, diamonds, crosshairs, etc.). Recognized features should be provided to matchGrid via the FireSight model pipeline as a JSON array of rects such as those returned by op-matchTemplate.
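A FireSight pipeline is a JSON document, so the matchTemplate/matchGrid pairing can be sketched by building the stage list programmatically. The stage and parameter spellings below are illustrative assumptions based on this page, not verified against the FireSight pipeline schema:

```python
import json

# Hypothetical two-stage pipeline sketch: a matchTemplate stage finds the
# grid features, and a matchGrid stage fits them to a rectangular grid.
# Only the parameters documented on this page are used; the exact stage
# schema is an assumption.
pipeline = [
    {"op": "matchTemplate", "name": "features", "template": "img/cross32.png"},
    {"op": "matchGrid", "name": "grid1",
     "sep": [5, 5], "calibrate": "best", "tolerance": 0.35},
]
print(json.dumps(pipeline, indent=2))
```

The key point is the separation of concerns: the first stage produces the JSON array of rects, and matchGrid consumes it from the model pipeline.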

OpenCV chessboard calibration requires many images at different chessboard angles, chessboard distances, and camera angles. For matchGrid, the input imaging requirement is different: the calibration grid should be presented to the camera in a single Z plane, since that corresponds directly to the pick-and-place use case. For best results, the presented grid should be aligned with the camera axes (i.e., avoid random orientations). FireSight matchGrid requires only a single image for calibration, but it can use sub-images to mimic chessboard calibration.

The method used to match the grid is FireSight specific.

  • sep JSON [X,Y] array of horizontal (x) and vertical (y) grid separation. Default is [5,5] for a 5mm grid.
  • calibrate Default is best, which chooses the best known matching algorithm. See calibrate options.
  • tolerance Tolerance used to match features in a row or column. Default is 0.35.
  • color JSON BGR color array to mark unused image features. Default is [255,255,255].
  • scale JSON [X,Y] array of scaling coefficients for calibration. Currently only used by ellipse; default is [1,1].
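The actual grid-matching method is FireSight specific, but the role of tolerance can be pictured as a relative bound on how far a feature may drift (as a fraction of the grid spacing) and still be binned into the same row or column. The following is a sketch of that idea under stated assumptions, not FireSight's algorithm:

```python
# Sketch of tolerance-based row binning (an assumption about the idea,
# not FireSight's actual method): features whose y-coordinates differ by
# less than tolerance * row_spacing_px are grouped into the same row.
def group_rows(points, row_spacing_px, tolerance=0.35):
    rows = []
    for x, y in sorted(points, key=lambda p: p[1]):
        if rows and abs(y - rows[-1][-1][1]) < tolerance * row_spacing_px:
            rows[-1].append((x, y))
        else:
            rows.append([(x, y)])
    return rows

# Three features near y=20 and two near y=50, with ~31 px row spacing
# (comparable to the dyMedian in the stage model below).
points = [(10, 20), (40, 21), (70, 19), (10, 51), (40, 50)]
rows = group_rows(points, row_spacing_px=31)  # -> 2 rows: 3 + 2 features
```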

calibrate options

  • best Use best known option (may change between software versions)
  • tile1 Use all matched grid points in a single calibration image
  • tile2 Use all matched grid points in each image quadrant to make up a set of four calibration images
  • tile3 Use all matched grid points in a 3x3 matrix of 9 calibration images
  • tile4 Use all matched grid points in a 4x4 matrix of 16 calibration images
  • tile5 Use all matched grid points in a 5x5 matrix of 25 calibration images
  • ellipse Use all matched grid points within a scaled XY ellipse for a single calibration image.
  • other experimental options may be available
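The tileN options can be pictured as partitioning the matched points of the single input image into an N×N matrix of buckets, each treated as a separate calibration view. A minimal sketch of that partitioning (the bucket boundaries and bookkeeping here are assumptions, not FireSight internals):

```python
# Sketch of tileN partitioning: split matched grid points into an n x n
# matrix of buckets by image position, yielding n*n "calibration images"
# from one photograph (an assumption about how tile2..tile5 work).
def tile_points(points, width, height, n):
    tiles = [[[] for _ in range(n)] for _ in range(n)]
    for x, y in points:
        col = min(int(x * n / width), n - 1)
        row = min(int(y * n / height), n - 1)
        tiles[row][col].append((x, y))
    return tiles

# Corner and center points from a hypothetical 400x400 image, tile3-style.
pts = [(21, 37), (343, 341), (180, 180)]
tiles = tile_points(pts, width=400, height=400, n=3)
```

tile1 then corresponds to n=1 (everything in one view), and ellipse replaces the rectangular buckets with a scaled elliptical mask.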

Stage Model

{
  ...

  "grid1":{
    "dxMedian":-31.0,
    "dxCount1":119,
    "dxCount2":108,
    "dxdxAvg1":-31.512605667114258,
    "dxdyAvg1":0.81512606143951416,
    "dxdxAvg2":-31.527778625488281,
    "dxdyAvg2":0.81944441795349121,
    "gridX":6.307685375213623,
    "dyMedian":-31.0,
    "dyCount1":117,
    "dyCount2":105,
    "dydxAvg1":-0.76068377494812012,
    "dydyAvg1":-31.24786376953125,
    "dydxAvg2":-0.77142858505249023,
    "dydyAvg2":-31.233333587646484,
    "gridY":6.2485718727111816,
    "rects":[
      {
        "x":21.0,
        "y":37.0,
        "objX":-27.961538314819336,
        "objY":-24.269229888916016
      },
      ...
      {
        "x":343.0,
        "y":341.0,
        "objX":22.038461685180664,
        "objY":25.730770111083984
      }
    ],
    "calibrate":{
      "perspective":[
        1.0405136564383939,
        0.026744906574870872,
        -2.943810346145538,
        0.024529984309304427,
        1.097007339310166,
        -14.050588819421538,
        -1.7780681696463529e-5,
        0.00027266213560320576,
        1.0
      ],
      "op":"perspective",
      "cameraMatrix":[
        1960.1759945599078,
        0.0,
        223.20476931433635,
        0.0,
        1946.7429049886605,
        195.34605646405871,
        0.0,
        0.0,
        1.0
      ],
      "distCoeffs":[
        0.79295763201481084,
        -36.591779462843959,
        0.0010729473989439847,
        -0.013230756623078195,
        -921.47568559839351
      ],
      "candidates":130,
      "matched":138,
      "images":1.0,
      "rmserror":0.41535305162567565,
      "gridnessIn":[
        3.8133699893951416,
        4.2762274742126465
      ],
      "gridnessOut":[
        0.34463158249855042,
        0.37010332942008972
      ]
    }
  },
  ...
}
  • gridX The calculated pixels per grid X-separation unit
  • gridY The calculated pixels per grid Y-separation unit
  • rects The RotatedRect values of the matched positions
  • rmserror The root-mean-square error returned by OpenCV calibrateCamera()
  • images The number of sub-images used for calibration
  • candidates The number of matched grid points
  • matched The number of matched grid points actually used as input to OpenCV calibrateCamera()
  • gridnessIn RMS point-by-point error of the input image with respect to the overlaid grid. A perfect grid match is 0.
  • gridnessOut RMS point-by-point error of the output image with respect to the overlaid grid. A perfect grid match is 0.
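Since gridX and gridY give pixels per grid separation unit, they convert pixel offsets into object units directly. Using the values from the stage model above, and assuming the separation unit is millimetres (the sep default is a 5mm grid):

```python
# Convert a pixel offset to object units using gridX/gridY from the
# stage model above. Interpreting the separation unit as millimetres
# is an assumption based on the 5mm default grid.
grid_x = 6.307685375213623   # px per mm along X (gridX above)
grid_y = 6.2485718727111816  # px per mm along Y (gridY above)

def px_to_mm(dx_px, dy_px):
    return dx_px / grid_x, dy_px / grid_y

# The ~31 px median column spacing should come out close to the 5mm sep.
dx_mm, dy_mm = px_to_mm(31.0, 31.0)
```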

Example: Full grid calibration pipeline

firesight -i img/cal-grid.png -p json/matchGrid.json -Dtemplate=img/cross32.png -Dcalibrate=tile1

Using all the grid points for calibration minimizes overall error.

Example: Tic-Tac-Tile3 calibration pipeline

firesight -i img/cal-grid.png -p json/matchGrid.json -Dtemplate=img/cross32.png -Dcalibrate=tile3

Using 9 sub-images provides greater local accuracy at the expense of some loss in "the big picture".

Example: Cutting-corners calibration pipeline

firesight -i img/cal-grid.png -p json/matchGrid.json -Dtemplate=img/cross32.png -Dcalibrate=ellipse -Dscale=[0.85,0.85]

If the corners are never used, then ignore them for calibration.

Example: A different perspective pipeline

firesight -i img/cal-grid.png -p json/matchGrid-perspective.json -Dtemplate=img/cross32.png

If the imaged objects lie on a flat surface at an angle to the camera, use matchGrid to determine the perspective matrix for warpPerspective. For the image shown below, the RMS gridness error is about 1/3 of a pixel. In this case, a perspective transformation provides a very usable Cartesian map of the imaged surface without any camera calibration.

"perspective":[
  1.0405136564383939,      0.026744906574870872,    -2.943810346145538,
  0.024529984309304427,    1.097007339310166,       -14.050588819421538,
  -1.7780681696463529e-5,  0.00027266213560320576,   1.0
],
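The nine numbers are a row-major 3×3 homography H. A pixel (x, y) maps to object coordinates by computing w = h6·x + h7·y + h8 and dividing the first two rows by w. A sketch applying the matrix above to a single point:

```python
# Apply the 3x3 perspective matrix (row-major, values from the model
# above) to a pixel coordinate: the standard homography mapping that
# underlies warpPerspective.
H = [1.0405136564383939,     0.026744906574870872,   -2.943810346145538,
     0.024529984309304427,   1.097007339310166,      -14.050588819421538,
     -1.7780681696463529e-5, 0.00027266213560320576,  1.0]

def warp_point(x, y):
    w = H[6] * x + H[7] * y + H[8]
    return ((H[0] * x + H[1] * y + H[2]) / w,
            (H[3] * x + H[4] * y + H[5]) / w)

# At the origin, w = 1, so the result is just the translation terms.
x2, y2 = warp_point(0.0, 0.0)  # -> (-2.9438..., -14.0505...)
```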