AndroidStudioSettings/ChatGPTConversations_170.xml

<application>
<component name="ee.carlrobert.codegpt.state.conversations.ConversationsState">
<option name="currentConversation" value="{&quot;id&quot;:&quot;899a5165-ee66-4e0c-a2ef-ed71779b5cec&quot;,&quot;title&quot;:null,&quot;messages&quot;:[{&quot;prompt&quot;:&quot;_onFrame previously needed the _nativeBuffer converted to RGBA; I have now changed it so the YUV data pointer is passed directly. Please modify this file for me\n\n```dart:/home/debuggerx/StudioProjects/aim_master_workspace/apps/aim_master_face_a3/3rd_party/flutter_ffmpeg_sw_player/lib/src/controller.dart\nP\n```&quot;,&quot;id&quot;:&quot;699b4e1b-23a1-42f1-8f42-c1cd7914eb0c&quot;,&quot;response&quot;:&quot;It appears that the content of your `controller.dart` file was accidentally cut off (only the letter `P` was provided in your message). \n\nCould you please provide the full content of `controller.dart`? \n\nOnce you share the file content, I will provide the complete modified code to update the `_onFrame` method call so that it passes the YUV data pointer directly instead of the RGBA `_nativeBuffer`.&quot;,&quot;referencedFilePaths&quot;:null,&quot;conversationsHistoryIds&quot;:null,&quot;imageFilePath&quot;:null,&quot;webSearchIncluded&quot;:false,&quot;personaName&quot;:null,&quot;tool_calls&quot;:null,&quot;tool_call_results&quot;:null},{&quot;prompt&quot;:&quot;Provided&quot;,&quot;id&quot;:&quot;e1216639-2ffa-4571-ab48-fdb446fb5f32&quot;,&quot;response&quot;:&quot;Understood. Since `_onFrame` now receives the YUV data pointer directly, we no longer need the software YUV-to-RGBA/BGRA conversion on the Dart side.\n\n### Modification plan:\n1. **Remove dead code**: delete the `_bgraFrameSize` computed property and the `_convertYuv420pToBgra` conversion method.\n2. **Adjust memory allocation**: once the media info has been parsed, allocate `_nativeBuffer` with the YUV size (`_yuvFrameSize`) instead of the RGBA size.\n3. **Zero-copy optimization**: map the native memory straight into `_yuvDartBuffer` via `_nativeBuffer!.asTypedList(_yuvFrameSize)`, so that assembling incoming packets writes data directly into native memory and saves one copy from Dart memory to native memory.\n4. **Update the render logic**: in `_startRender`, pass the YUV-filled `_nativeBuffer` directly to `_onFrame`.\n\nHere is the complete modified code:\n\n```dart:/home/debuggerx/StudioProjects/aim_master_workspace/apps/aim_master_face_a3/3rd_party/flutter_ffmpeg_sw_player/lib/src/controller.dart\nimport 'dart:async';\nimport 'dart:collection';\nimport 'dart:ffi';\nimport 'dart:typed_data';\n\nimport 'package:ffi/ffi.dart';\nimport 'package:flutter/material.dart';\nimport 'package:flutter_ffmpeg_sw_player/src/ffmpeg_util.dart';\nimport 'package:flutter_ffmpeg_sw_player/src/fps_ticker.dart';\nimport 'package:flutter_ffmpeg_sw_player/src/media_info.dart';\n\nconst cacheFrames = 5;\n\nenum PlayerStatus {\n idle,\n loading,\n playing,\n paused,\n error,\n}\n\nconst liveSchemas = ['rtmp', 'rtmps', 'rtsp', 'rtsps', 'srt'];\n\nclass FfmpegPlayerController {\n /// Automatically releases held resources when the owning [FfmpegPlayerView] is disposed\n bool autoDispose;\n\n FfmpegPlayerController({this.autoDispose = true});\n\n final ValueNotifier&lt;PlayerStatus&gt; status = ValueNotifier(PlayerStatus.idle);\n final FpsTicker _fpsTicker = FpsTicker();\n MediaInfo? _mediaInfo;\n void Function(Pointer&lt;Uint8&gt; frameDataPtr, int width, int height)? _onFrame;\n\n Pointer&lt;Uint8&gt;? _nativeBuffer;\n // Temporary Dart buffer holding one frame of YUV data (mapped directly onto _nativeBuffer)\n Uint8List? _yuvDartBuffer;\n\n /// Packet buffer\n final Queue&lt;Uint8List&gt; _chunkQueue = Queue();\n\n /// Total number of bytes currently held in the packet buffer\n int _totalBufferedBytes = 0;\n\n /// How far into the current first chunk we have consumed\n int _chunkOffset = 0;\n\n Function? _currentFfmpegProcessKiller;\n\n // Size of one YUV420P frame arriving through the pipe\n int get _yuvFrameSize =&gt; ((_mediaInfo?.width ?? 0) * (_mediaInfo?.height ?? 0) * 3) ~/ 2;\n\n StreamSubscription&lt;List&lt;int&gt;&gt;? _dataReceiver;\n\n bool _reachEnd = false;\n\n int? _currentPlayKey;\n\n Future&lt;MediaInfo?&gt; play(\n String path, {\n void Function(Duratio" />
<option name="conversations" value="[{&quot;id&quot;:&quot;2ea372bf-f79d-4805-a228-8cd51bf07dc9&quot;,&quot;title&quot;:null,&quot;messages&quot;:[{&quot;prompt&quot;:&quot;I am using the ProxyAI IntelliJ plugin and want to set up code auto-completion. Which model in the Gemini family is best suited, and why? Also, there is a “FIM” template option in the settings; which one should I pick?&quot;,&quot;id&quot;:&quot;ab4b92a6-8055-4844-85d6-57ffd6339399&quot;,&quot;response&quot;:&quot;For ProxyAI **Inline Code Completion** in IntelliJ IDEA, here is which Gemini model to use and what to do about the FIM template.\n\n### Gemini 1.5 Flash / Gemini 2.0 Flash\n\n**Use `gemini-1.5-flash` or `gemini-2.0-flash`** through the API.\n\n**Why Flash rather than Pro:**\n1. **Latency**: completions need to come back within roughly 300-500 ms; `gemini-1.5-pro` is noticeably slower than Flash.\n2. **Context window**: Flash offers a 1M-token context (Pro offers 2M), which is more than enough for what ProxyAI sends.\n3. **Cost**: the Flash API is considerably cheaper than Pro.\n\n*(`gemini-1.5-pro` is stronger at reasoning, but Flash is the better fit for completion.)*\n\n---\n\n### The FIM template option\n\n**FIM (Fill-In-the-Middle)** splits the code around the cursor into a Prefix, a Suffix, and the Middle to be generated.\n\n**The catch with Gemini:**\nModels such as DeepSeek Coder, Qwen Coder, and CodeLlama are trained with dedicated FIM tokens like `&lt;|fim_prefix|&gt;` and `&lt;|fim_suffix|&gt;`, but **Gemini has no native FIM tokens**, so ProxyAI's `DeepSeek` and `Qwen` FIM templates do not suit Gemini.\n\n**Recommendations:**\n\n1. **Prefer a Prompt / Instruction style template**\n If ProxyAI offers a non-FIM `Prompt`/`Instruction` (or `Gemini/OpenAI`) style option, choose it.\n\n2. **Custom FIM template**\n If only FIM templates are available, use XML-style tags, which Gemini follows well:\n ```text\n You are an expert programmer. Please complete the missing code between &lt;prefix&gt; and &lt;suffix&gt;. \n Output ONLY the missing code, without any markdown formatting, explanations, or repeating the prefix/suffix.\n \n &lt;prefix&gt;\n {prefix}\n &lt;/prefix&gt;\n &lt;suffix&gt;\n {suffix}\n &lt;/suffix&gt;\n ```\n *(`{prefix}` and `{suffix}` are the placeholders ProxyAI substitutes.)*\n\n3. **Disable FIM**\n If neither works well, set ProxyAI's **FIM template to None/Default** and fall back to plain Gemini prompting.&quot;,&quot;referencedFilePaths&quot;:null,&quot;conversationsHistoryIds&quot;:null,&quot;imageFilePath&quot;:null,&quot;" />
</component>
</application>