<itemvalue="Each pixel is 32 bits, with the highest 8 bits encoding red, the next 8 bits encoding green, the next 8 bits encoding blue, and the lowest 8 bits encoding alpha. Premultiplied alpha is used."/>
<itemvalue="The bytes represent encoded image bytes and can be encoded in any of the following supported image formats: {@macro dart.ui.imageFormats}"/>
<itemvalue="The image to display. Since a [RawImage] is stateless, it does not ever dispose this image. Creators of a [RawImage] are expected to call [dart:ui.Image.dispose] on this image handle when the [RawImage] will no longer be needed."/>
<itemvalue="When running Flutter on the web, only the CanvasKit renderer supports image resizing capabilities (not the HTML renderer). So if image resizing is critical to your use case, and you're deploying to the web, you should build using the CanvasKit renderer."/>
<itemvalue="The [getTargetSize] parameter, when specified, will be invoked and passed the image's intrinsic size to determine the size to decode the image to. The width and the height of the size it returns must be positive values greater than or equal to 1, or null. It is valid to return a [TargetImageSize] that specifies only one of `width` and `height` with the other remaining null, in which case the omitted dimension will be scaled to maintain the aspect ratio of the original dimensions. When both are null or omitted, the image will be decoded at its native resolution (as will be the case if the [getTargetSize] parameter is omitted)."/>
<itemvalue="{@template auto_size_text.stepGranularity} The step size in which the font size is being adapted to constraints. The Text scales uniformly in a range between [minFontSize] and [maxFontSize]. Each increment occurs as per the step size set in stepGranularity. Most of the time you don't want a stepGranularity below 1.0. Is being ignored if [presetFontSizes] is set. {@endtemplate}"/>
<itemvalue="MinFontSize must be a multiple of stepGranularity"/>
<itemvalue="[parameters] - (optional) an object with one or more properties defining the material's appearance. Any property of the material (including any property inherited from [Material]) can be passed in here. The exception is the property [color], which can be passed in as a hexadecimal int and is 0xffffff (white) by default. [Color] is called internally."/>
<itemvalue="A material for shiny surfaces with specular highlights. The material uses a non-physically based [Blinn-Phong](https:en.wikipedia.orgwikiBlinn-Phong_shading_model) model for calculating reflectance. Unlike the Lambertian model used in the [MeshLambertMaterial] this can simulate shiny surfaces with specular highlights (such as varnished wood). [MeshPhongMaterial] uses per-fragment shading. Performance will generally be greater when using this material over the [MeshStandardMaterial] or [MeshPhysicalMaterial], at the cost of some graphical accuracy."/>
<itemvalue="Mat (int rows, int cols, int type, void data, size_t step=AUTO_STEP)"/>
<itemvalue="This function can throw exception, so make sure to free the allocated memory inside a `try-finally` block!"/>
<itemvalue="Be careful when using this constructor, as you are responsible for managing the native pointer yourself. Improper handling may lead to memory leaks or undefined behavior."/>
<itemvalue="[data] should be raw pixels values with exactly same length of [channels] [rows] [cols]"/>
<itemvalue="Create a Mat from self-allocated buffer"/>
<itemvalue="Releases all resources held by the detector. Call this when you're done using the detector to free up memory. After calling dispose, you must call [initialize] again before running any detections."/>
<itemvalue="Outputs for a single detected face. [boundingBox] is the face bounding box in pixel coordinates. [landmarks] provides convenient access to 6 key facial landmarks (eyes, nose, mouth). [mesh] contains 468 facial landmarks as pixel coordinates. [eyes] contains iris center, iris contour, and eye mesh landmarks for both eyes."/>
<itemvalue="Canny finds edges in an image using the Canny algorithm. The function finds edges in the input image image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges."/>
<itemvalue="Raw unmodified format. Unencoded bytes, in the image's existing format. For example, a grayscale image may use a single 8-bit channel for each pixel."/>