<application>
  <component name="Translation.Cache">
    <option name="lastTrimTime" value="1768897037859" />
  </component>
  <component name="Translation.OpenAISettings">
    <option name="OPEN_AI">
      <open-ai>
        <option name="API_PATH" value="/api/paas/v4/chat/completions" />
        <option name="CUSTOM_MODEL" value="glm-4-flash" />
        <option name="ENDPOINT" value="https://open.bigmodel.cn" />
        <option name="USE_CUSTOM_MODEL" value="true" />
      </open-ai>
    </option>
  </component>
  <component name="Translation.Settings">
    <option name="primaryLanguage" value="CHINESE_SIMPLIFIED" />
    <option name="translator" value="OPEN_AI" />
  </component>
  <component name="Translation.States">
    <option name="translationDialogHeight" value="260" />
    <option name="translationDialogLocationX" value="2705" />
    <option name="translationDialogLocationY" value="567" />
    <option name="translationDialogWidth" value="1381" />
    <histories>
      <item value="快捷键" />
      <item value="left wrist motor A ball" />
      <item value="Each pixel is 32 bits, with the highest 8 bits encoding red, the next 8 bits encoding green, the next 8 bits encoding blue, and the lowest 8 bits encoding alpha. Premultiplied alpha is used." />
      <item value="The bytes represent encoded image bytes and can be encoded in any of the following supported image formats: {@macro dart.ui.imageFormats}" />
      <item value="The image to display. Since a [RawImage] is stateless, it does not ever dispose this image. Creators of a [RawImage] are expected to call [dart:ui.Image.dispose] on this image handle when the [RawImage] will no longer be needed." />
      <item value="When running Flutter on the web, only the CanvasKit renderer supports image resizing capabilities (not the HTML renderer). So if image resizing is critical to your use case, and you're deploying to the web, you should build using the CanvasKit renderer." />
      <item value="The [getTargetSize] parameter, when specified, will be invoked and passed the image's intrinsic size to determine the size to decode the image to. The width and the height of the size it returns must be positive values greater than or equal to 1, or null. It is valid to return a [TargetImageSize] that specifies only one of `width` and `height` with the other remaining null, in which case the omitted dimension will be scaled to maintain the aspect ratio of the original dimensions. When both are null or omitted, the image will be decoded at its native resolution (as will be the case if the [getTargetSize] parameter is omitted)." />
      <item value="{@template auto_size_text.stepGranularity} The step size in which the font size is being adapted to constraints. The Text scales uniformly in a range between [minFontSize] and [maxFontSize]. Each increment occurs as per the step size set in stepGranularity. Most of the time you don't want a stepGranularity below 1.0. Is being ignored if [presetFontSizes] is set. {@endtemplate}" />
      <item value="MinFontSize must be a multiple of stepGranularity" />
      <item value="type RESUME AND SKIP CUR ACTION" />
      <item value="fog" />
      <item value="wireframe linejoin" />
      <item value="wireframe linecap" />
      <item value="wireframe linewidth" />
      <item value="wireframe" />
      <item value="refraction ratio" />
      <item value="reflectivity" />
      <item value="env map rotation" />
      <item value="normal scale" />
      <item value="normal map type" />
      <item value="emissive" />
      <item value="type" />
      <item value="color" />
      <item value="specular" />
      <item value="shininess" />
      <item value="bump scale" />
      <item value="[parameters] - (optional) an object with one or more properties defining the material's appearance. Any property of the material (including any property inherited from [Material]) can be passed in here. The exception is the property [color], which can be passed in as a hexadecimal int and is 0xffffff (white) by default. [Color] is called internally." />
      <item value="A material for shiny surfaces with specular highlights. The material uses a non-physically based [Blinn-Phong](https://en.wikipedia.org/wiki/Blinn-Phong_shading_model) model for calculating reflectance. Unlike the Lambertian model used in the [MeshLambertMaterial] this can simulate shiny surfaces with specular highlights (such as varnished wood). [MeshPhongMaterial] uses per-fragment shading. Performance will generally be greater when using this material over the [MeshStandardMaterial] or [MeshPhysicalMaterial], at the cost of some graphical accuracy." />
      <item value="flat shading" />
      <item value="nav use percep" />
      <item value="beverage" />
      <item value="https://docs.opencv.org/4.x/d3/d63/classcv_1_1Mat.html#a51615ebf17a64c968df0bf49b4de6a3a" />
      <item value="Mat (int rows, int cols, int type, void *data, size_t step=AUTO_STEP)" />
      <item value="This function can throw exception, so make sure to free the allocated memory inside a `try-finally` block!" />
      <item value="Be careful when using this constructor, as you are responsible for managing the native pointer yourself. Improper handling may lead to memory leaks or undefined behavior." />
      <item value="[data] should be raw pixels values with exactly same length of [channels] * [rows] * [cols]" />
      <item value="Create a Mat from self-allocated buffer" />
      <item value="A typed view of a sequence of bytes. It is a compile-time error for a class to attempt to extend or implement `TypedData`." />
      <item value="The offset of this view into the underlying byte buffer, in bytes." />
      <item value="The length of this view, in bytes." />
      <item value="Releases all resources held by the detector. Call this when you're done using the detector to free up memory. After calling dispose, you must call [initialize] again before running any detections." />
      <item value="aperture size" />
      <item value="l 2 gradient" />
      <item value="Outputs for a single detected face. [boundingBox] is the face bounding box in pixel coordinates. [landmarks] provides convenient access to 6 key facial landmarks (eyes, nose, mouth). [mesh] contains 468 facial landmarks as pixel coordinates. [eyes] contains iris center, iris contour, and eye mesh landmarks for both eyes." />
      <item value="cannied" />
      <item value="Canny finds edges in an image using the Canny algorithm. The function finds edges in the input image image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges." />
      <item value="[data] will be copied 2 times, use [Mat.fromVec] or [Mat.fromBuffer] if better performance" />
      <item value="[data] should be raw pixels values with exactly same length of channels * [rows] * [cols]" />
      <item value="predefined type constants" />
      <item value="Raw unmodified format. Unencoded bytes, in the image's existing format. For example, a grayscale image may use a single 8-bit channel for each pixel." />
    </histories>
    <option name="languageScores">
      <map>
        <entry key="CHINESE_SIMPLIFIED" value="65" />
        <entry key="ENGLISH" value="66" />
      </map>
    </option>
  </component>
</application>