<itemvalue="The bytes represent encoded image bytes and can be encoded in any of the following supported image formats: {@macro dart.ui.imageFormats}"/>
<itemvalue="The image to display. Since a [RawImage] is stateless, it does not ever dispose this image. Creators of a [RawImage] are expected to call [dart:ui.Image.dispose] on this image handle when the [RawImage] will no longer be needed."/>
<itemvalue="When running Flutter on the web, only the CanvasKit renderer supports image resizing; the HTML renderer does not. If image resizing is critical to your use case and you're deploying to the web, build using the CanvasKit renderer."/>
<itemvalue="The [getTargetSize] parameter, when specified, will be invoked and passed the image's intrinsic size to determine the size to decode the image to. The width and the height of the size it returns must be positive values greater than or equal to 1, or null. It is valid to return a [TargetImageSize] that specifies only one of `width` and `height` with the other remaining null, in which case the omitted dimension will be scaled to maintain the aspect ratio of the original dimensions. When both are null or omitted, the image will be decoded at its native resolution (as will be the case if the [getTargetSize] parameter is omitted)."/>
<itemvalue="{@template auto_size_text.stepGranularity} The step size by which the font size is adapted to the constraints. The text scales uniformly within the range between [minFontSize] and [maxFontSize], in increments of stepGranularity. Most of the time you don't want a stepGranularity below 1.0. Ignored if [presetFontSizes] is set. {@endtemplate}"/>
<itemvalue="MinFontSize must be a multiple of stepGranularity"/>
<itemvalue="type RESUME AND SKIP CUR ACTION"/>
<itemvalue="Raw straight RGBA format. Unencoded bytes, in RGBA row-primary form with straight alpha, 8 bits per channel."/>
<itemvalue="Raw RGBA format. Unencoded bytes, in RGBA row-primary form with premultiplied alpha, 8 bits per channel."/>
<itemvalue="This combines [decodeImage] and [registerFrame] into a single operation that avoids transferring RGB data back to the main isolate. The image is decoded and stored in the worker, returning only the frameId and dimensions."/>
<itemvalue="Decodes and registers an image in one operation (optimized fast-path)."/>
<itemvalue="The [imageBytes] parameter should contain encoded image data (JPEG, PNG, etc.)."/>
<itemvalue="Specifies which face detection model variant to use. Different models are optimized for different use cases: - [frontCamera]: Optimized for the selfie/front-facing camera (128x128 input) - [backCamera]: Optimized for the rear camera, with higher resolution (256x256 input) - [shortRange]: Optimized for close-up faces (128x128 input) - [full]: Full-range detection (192x192 input) - [fullSparse]: Full-range detection with sparse anchors (192x192 input)"/>