
[RFC] Ugoira to video #89

Open
dragon-fish opened this issue Feb 29, 2024 · 6 comments

@dragon-fish
Member

Option 1: MediaStream API

https://developer.mozilla.org/zh-CN/docs/Web/API/MediaStream_Recording_API

Pros: native browser API
Cons: every frame has to be played back once in real time before the recording finishes

Option 2: ffmpeg

https://github.com/ffmpegwasm/ffmpeg.wasm
https://www.npmjs.com/package/@diffusion-studio/ffmpeg-js

Pros: efficient conversion
Cons: the library is huge (>30 MB)

@dragon-fish
Member Author

@ffmpeg/ffmpeg Demo

By GPT-4

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

// Assume fileMap and metadata are already defined
const fileMap = new Map(); // replace with your own Map of filename -> Blob
const metadata = []; // replace with your own metadata array ({ file, delay })
const width = 1920; // replace with the actual width
const height = 1080; // replace with the actual height

async function convertImagesToVideo() {
  const ffmpeg = createFFmpeg({ log: true });
  await ffmpeg.load();

  // Write the frame images into FFmpeg's virtual file system
  for (const [filename, blob] of fileMap) {
    ffmpeg.FS('writeFile', filename, await fetchFile(blob));
  }

  // One concat-demuxer list per frame so each frame keeps its own duration
  let filterComplexInputs = '';
  metadata.forEach((item, index) => {
    const inputFile = item.file;
    const duration = item.delay / 1000; // convert milliseconds to seconds
    ffmpeg.FS('writeFile', `input_${index}.txt`, `file '${inputFile}'\nduration ${duration}`);
    filterComplexInputs += `[${index}:v] scale=${width}:${height} [v${index}]; `;
  });

  // Build the argument list: one "-f concat" input per frame, then scale and
  // concatenate all of them into a single video stream labelled [v]
  const inputArgs = metadata.flatMap((item, index) => ['-f', 'concat', '-safe', '0', '-i', `input_${index}.txt`]);
  const concatLabels = metadata.map((item, index) => `[v${index}]`).join('');
  const filterComplex = `${filterComplexInputs}${concatLabels}concat=n=${metadata.length}:v=1:a=0 [v]`;

  await ffmpeg.run(
    ...inputArgs,
    '-filter_complex', filterComplex,
    '-map', '[v]',
    '-vsync', 'vfr', '-pix_fmt', 'yuv420p', '-c:v', 'libx264', '-preset', 'medium', '-crf', '23',
    'output.mp4'
  );

  // Read the generated video back out of the virtual file system
  const data = ffmpeg.FS('readFile', 'output.mp4');

  // Wrap the bytes in a Blob
  const videoBlob = new Blob([data.buffer], { type: 'video/mp4' });

  // Create an object URL and attach it to a <video> element for playback
  const videoUrl = URL.createObjectURL(videoBlob);
  const videoElement = document.createElement('video');
  videoElement.src = videoUrl;
  videoElement.controls = true;
  document.body.appendChild(videoElement);
}

convertImagesToVideo().catch(console.error);
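
The demo above assumes fileMap and metadata already exist. A rough sketch of how they could be built from an ugoira zip Blob plus a Pixiv-style frames array (each entry { file, delay } with delay in milliseconds); JSZip is only an assumption here, the project may unpack the frames differently:

import JSZip from 'jszip';

// Hypothetical helper: turn an ugoira zip Blob + frames array into the
// fileMap / metadata structures assumed by convertImagesToVideo above.
async function loadUgoiraFrames(zipBlob, frames) {
  const zip = await JSZip.loadAsync(zipBlob);
  const fileMap = new Map();
  for (const frame of frames) {
    // Each zip entry is one frame image, keyed by its file name.
    const blob = await zip.file(frame.file).async('blob');
    fileMap.set(frame.file, blob);
  }
  // `frames` already has the { file, delay } shape used as `metadata` above.
  return { fileMap, metadata: frames };
}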

@dragon-fish
Member Author

MediaStream API Demo

By GPT-4

// Assume you already have a canvas element
const canvas = document.getElementById('yourCanvasId');
// Capture a MediaStream from the canvas
const stream = canvas.captureStream(30); // the argument is the frame rate, assumed to be 30fps here

// Set up the recorder
let recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
let chunks = [];

recorder.ondataavailable = function(e) {
  chunks.push(e.data);
};

recorder.onstop = function() {
  // When recording stops, merge the collected chunks into a single Blob
  const blob = new Blob(chunks, { 'type' : 'video/webm' });
  chunks = []; // reset chunks so recording can be run again

  // Create a video URL and set it as the href of an <a> element to download it
  const videoURL = URL.createObjectURL(blob);
  const downloadLink = document.createElement('a');
  downloadLink.href = videoURL;
  downloadLink.download = 'animation.webm'; // the file name for the download
  document.body.appendChild(downloadLink);
  downloadLink.click();

  // Clean up
  document.body.removeChild(downloadLink);
  URL.revokeObjectURL(videoURL); // release the object URL
};

// Start recording
recorder.start();

// Assume there is a function that plays and stops the frame animation
// Play your frame animation
playFrameAnimation();

// Stop recording; this assumes you know when the animation has ended,
// so you will probably want to call this when the animation finishes
function stopRecording() {
  recorder.stop();
}
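
The demo leaves playFrameAnimation undefined. A possible sketch, assuming the same fileMap / metadata structures as in the ffmpeg demo above (an assumption, not part of the generated demo): draw each frame onto the canvas, wait out its own delay, then stop the recorder.

// Hypothetical playFrameAnimation: draws each ugoira frame onto the canvas,
// honouring each frame's delay (ms), then stops the recorder when finished.
async function playFrameAnimation() {
  const ctx = canvas.getContext('2d');
  for (const item of metadata) {
    const bitmap = await createImageBitmap(fileMap.get(item.file));
    ctx.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
    // Wait for this frame's own delay before drawing the next one.
    await new Promise((resolve) => setTimeout(resolve, item.delay));
  }
  stopRecording();
}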

@dragon-fish
Member Author

To be evaluated: whether implementing Ugoira to MP4 conversion is actually necessary.

@dragon-fish
Member Author

/cc @AlPha5130

dragon-fish changed the title from [feat] Ugoira to video to [RFC] Ugoira to video on Mar 1, 2024
@AlPha5130
Member

I would like to use ffmpeg as well, but the size really is a bit too large.
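
One possible mitigation (just a sketch, not a decision): with @ffmpeg/ffmpeg the bulk of the >30 MB is the wasm core that load() fetches at runtime, so at least that download can be deferred until the user actually requests an export, and the loaded instance reused afterwards. Assumes the 0.11.x API (createFFmpeg / isLoaded / load).

import { createFFmpeg } from '@ffmpeg/ffmpeg';

let ffmpegInstance = null;

// Lazily create and load a shared FFmpeg instance on first use.
async function getFFmpeg() {
  if (!ffmpegInstance) {
    ffmpegInstance = createFFmpeg({ log: true });
  }
  if (!ffmpegInstance.isLoaded()) {
    // The multi-megabyte ffmpeg core is only fetched here, on the first export.
    await ffmpegInstance.load();
  }
  return ffmpegInstance;
}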

@dragon-fish
Member Author
