Decode images in web worker

In our WebGL application I'm trying to load and decode texture images in a web worker, to avoid jank in the main thread's rendering. Using createImageBitmap in the worker and transferring the ImageBitmap back to the main thread works well, but in Chrome this uses three or more (possibly depending on the number of cores?) separate workers (ThreadPoolForegroundWorker), which together with the main thread and my own worker adds up to five threads.

I suspect this causes interference with the rest of the rendering on my four cores, since I can see some unexplained long frames in the Performance view of Chrome's DevTools.

So, can I somehow limit the number of workers used by createImageBitmap? Even if I transfer the images as blobs or array buffers to the main thread and call createImageBitmap from there, its workers still compete with my own worker and the main thread.

I have tried creating regular Images in the worker instead, to decode them explicitly there, but Image is undefined in the worker context, as is document if I wanted to create them as elements. And regular images aren't transferable either, so creating them on the main thread and passing them to the worker doesn't seem feasible.

Looking forward to any suggestions...


There's no point in using createImageBitmap in a worker (well, see bottom). The browser already decodes the image in a separate thread; doing it in a worker gains you nothing. The bigger issue is that ImageBitmap has no way of knowing how you're going to use the image when you finally pass it to WebGL. If you ask for a format that's different from what ImageBitmap decoded into, WebGL has to convert and/or decode it again, and you can't give ImageBitmap enough information to tell it what format you want it decoded into.

On top of that, WebGL in Chrome has to transfer the image data from the renderer process to the GPU process, which for a large image is a relatively big copy (4 MiB for a 1024×1024 RGBA image).
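That copy cost scales linearly with pixel count; for uncompressed RGBA it's simply 4 bytes per pixel:

```javascript
// Bytes needed to hold an uncompressed RGBA image: 4 bytes per pixel.
const rgbaBytes = (width, height) => width * height * 4;

console.log(rgbaBytes(1024, 1024)); // 4194304 bytes = 4 MiB
console.log(rgbaBytes(4096, 4096)); // 67108864 bytes = 64 MiB
```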

A better API, IMO, would let you tell ImageBitmap what format you want and where you want it (CPU, GPU). That way the browser could prepare the image asynchronously, with no heavy work needed once it's done.

In any case, here's a test. If you uncheck "Update Texture" it's still downloading and decoding the textures, it's just not calling gl.texImage2D to upload them. In that case I see no jank (not proof that's the issue, but I believe it is).

const m4 = twgl.m4;
const gl = document.querySelector('#webgl').getContext('webgl');
const ctx = document.querySelector('#graph').getContext('2d');

let update = true;
document.querySelector('#update').addEventListener('change', function() {
  update = this.checked;
});

const vs = `
attribute vec4 position;
uniform mat4 matrix;
varying vec2 v_texcoord;
void main() {
  gl_Position = matrix * position;
  v_texcoord = position.xy;
}
`

const fs = `
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D tex;
void main() {
  gl_FragColor = texture2D(tex, v_texcoord);
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const posLoc = gl.getAttribLocation(program, 'position');
const matLoc = gl.getUniformLocation(program, 'matrix');

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
  0, 0,
  1, 0,
  0, 1,
  0, 1,
  1, 0,
  1, 1,
]), gl.STATIC_DRAW);

gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(
    gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
    new Uint8Array([0, 0, 255, 255]));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

const m = m4.identity();
let frameCount = 0;
let previousTime = 0;
let imgNdx = 0;
let imgAspect = 1;

const imageUrls = [
  'https://i.imgur.com/KjUybBD.png',
  'https://i.imgur.com/AyOufBk.jpg',
  'https://i.imgur.com/UKBsvV0.jpg',
  'https://i.imgur.com/TSiyiJv.jpg',
];

async function loadNextImage() {
  const url = `${imageUrls[imgNdx]}?cachebust=${performance.now()}`;
  imgNdx = (imgNdx + 1) % imageUrls.length;
  const res = await fetch(url, {mode: 'cors'});
  const blob = await res.blob();
  const bitmap = await createImageBitmap(blob, {
    premultiplyAlpha: 'none',
    colorSpaceConversion: 'none',
  });
  if (update) {
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
    imgAspect = bitmap.width / bitmap.height;
  }
  setTimeout(loadNextImage, 1000);
}
loadNextImage();

function render(currentTime) {
  const deltaTime = currentTime - previousTime;
  previousTime = currentTime;
 
  {
    const {width, height} = ctx.canvas;
    const x = frameCount % width;
    const y = 1000 / deltaTime / 60 * height / 2;
    ctx.fillStyle = frameCount % (width * 2) < width ? 'red' : 'blue';
    ctx.clearRect(x, 0, 1, height);
    ctx.fillRect(x, y, 1, height);
    ctx.clearRect(0, 0, 30, 15);
    ctx.fillText((1000 / deltaTime).toFixed(1), 2, 10);
  }

  gl.useProgram(program);
  const dispAspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  m4.scaling([1 / dispAspect, 1, 1], m);
  m4.rotateZ(m, currentTime * 0.001, m);
  m4.scale(m, [imgAspect, 1, 1], m);
  m4.translate(m, [-0.5, -0.5, 0], m);
  gl.uniformMatrix4fv(matLoc, false, m);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
 
  ++frameCount;
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
canvas { border: 1px solid black; margin: 2px; }
#ui { position: absolute; }

<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<input type="checkbox" id="update" checked><label for="update">Update Texture</label>
<canvas id="webgl"></canvas>
<canvas id="graph"></canvas>

I'm fairly sure the only way to avoid jank is to decode the images yourself in a worker, transfer the result to the main thread as an ArrayBuffer, and then upload it to the texture a few rows per frame with gl.texSubImage2D.
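The per-frame upload loop below walks the image in fixed-size bands of rows; the band arithmetic can be factored into a small helper like this (a sketch; `maxRows` is just a tuning knob trading upload latency against per-frame cost):

```javascript
// Split an image height into bands of at most maxRows rows each,
// so each band can be uploaded with one gl.texSubImage2D call.
function* rowBands(height, maxRows) {
  for (let y = 0; y < height; y += maxRows) {
    yield { y, rows: Math.min(maxRows, height - y) };
  }
}

// e.g. a 50-row image in bands of 20 rows:
console.log([...rowBands(50, 20)]);
// [ { y: 0, rows: 20 }, { y: 20, rows: 20 }, { y: 40, rows: 10 } ]
```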

const m4 = twgl.m4;
const gl = document.querySelector('#webgl').getContext('webgl');
const ctx = document.querySelector('#graph').getContext('2d');

const vs = `
attribute vec4 position;
uniform mat4 matrix;
varying vec2 v_texcoord;
void main() {
  gl_Position = matrix * position;
  v_texcoord = position.xy;
}
`

const fs = `
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D tex;
void main() {
  gl_FragColor = texture2D(tex, v_texcoord);
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const posLoc = gl.getAttribLocation(program, 'position');
const matLoc = gl.getUniformLocation(program, 'matrix');

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
  0, 0,
  1, 0,
  0, 1,
  0, 1,
  1, 0,
  1, 1,
]), gl.STATIC_DRAW);

gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

function createTexture(gl) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(
      gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
      new Uint8Array([0, 0, 255, 255]));
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return tex;
}

let drawingTex = createTexture(gl);
let loadingTex = createTexture(gl);

const m = m4.identity();
let frameCount = 0;
let previousTime = 0;

const workerScript = `
const ctx = new OffscreenCanvas(1, 1).getContext('2d');
let imgNdx = 0;
let imgAspect = 1;

const imageUrls = [
  'https://i.imgur.com/KjUybBD.png',
  'https://i.imgur.com/AyOufBk.jpg',
  'https://i.imgur.com/UKBsvV0.jpg',
  'https://i.imgur.com/TSiyiJv.jpg',
];

async function loadNextImage() {
  const url = \\`\\${imageUrls[imgNdx]}?cachebust=\\${performance.now()}\\`;
  imgNdx = (imgNdx + 1) % imageUrls.length;
  const res = await fetch(url, {mode: 'cors'});
  const blob = await res.blob();
  const bitmap = await createImageBitmap(blob, {
    premultiplyAlpha: 'none',
    colorSpaceConversion: 'none',
  });
  ctx.canvas.width = bitmap.width;
  ctx.canvas.height = bitmap.height;
  ctx.drawImage(bitmap, 0, 0);
  const imgData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
  const data = new Uint8Array(imgData.data);
  postMessage({
    width: imgData.width,
    height: imgData.height,
    data: data.buffer,
  }, [data.buffer]);
}

onmessage = loadNextImage;
`;
const blob = new Blob([workerScript], {type: 'application/javascript'});
const worker = new Worker(URL.createObjectURL(blob));
let imgAspect = 1;
worker.onmessage = async(e) => {
  const {width, height, data} = e.data;
 
  gl.bindTexture(gl.TEXTURE_2D, loadingTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
 
  const maxRows = 20;
  for (let y = 0; y < height; y += maxRows) {
    const rows = Math.min(maxRows, height - y);
    gl.bindTexture(gl.TEXTURE_2D, loadingTex);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, width, rows, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(data, y * width * 4, rows * width * 4));  
    await waitRAF();
  }
  const temp = loadingTex;
  loadingTex = drawingTex;
  drawingTex = temp;
  imgAspect = width / height;
  await waitMS(1000);
  worker.postMessage('');
};
worker.postMessage('');

function waitRAF() {
  return new Promise(resolve => requestAnimationFrame(resolve));
}

function waitMS(ms = 0) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

function render(currentTime) {
  const deltaTime = currentTime - previousTime;
  previousTime = currentTime;
 
  {
    const {width, height} = ctx.canvas;
    const x = frameCount % width;
    const y = 1000 / deltaTime / 60 * height / 2;
    ctx.fillStyle = frameCount % (width * 2) < width ? 'red' : 'blue';
    ctx.clearRect(x, 0, 1, height);
    ctx.fillRect(x, y, 1, height);
    ctx.clearRect(0, 0, 30, 15);
    ctx.fillText((1000 / deltaTime).toFixed(1), 2, 10);
  }

  gl.useProgram(program);
  const dispAspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  m4.scaling([1 / dispAspect, 1, 1], m);
  m4.rotateZ(m, currentTime * 0.001, m);
  m4.scale(m, [imgAspect, 1, 1], m);
  m4.translate(m, [-0.5, -0.5, 0], m);
  gl.bindTexture(gl.TEXTURE_2D, drawingTex);
  gl.uniformMatrix4fv(matLoc, false, m);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
 
  ++frameCount;
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
canvas { border: 1px solid black; margin: 2px; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas id="webgl"></canvas>
<canvas id="graph"></canvas>

Note: I have no idea if this will work either. A few places that are scary and browser-implementation defined:

  • What's the performance of resizing a canvas? The code is resizing the OffscreenCanvas in the worker. That could be a heavy operation, GPU-wise.

  • What's the performance of drawing a bitmap into a canvas? Again, a potentially large GPU cost, since the browser has to transfer the image to the GPU in order to draw it into a GPU-backed 2D canvas.

  • What's the performance of getImageData? Yet again, the browser potentially has to stall the GPU to read GPU memory back out as image data.

  • Resizing the texture each time could also hurt performance.

  • Currently only Chrome supports OffscreenCanvas.

  • 1, 2, 3, and 5 can all be addressed by decoding the jpg/png images yourself, though it really sucks that the browser has code to decode images and you just can't access that decoding code in any useful way.

    As for 4, if it turns out to be an issue, it could be worked around by allocating one texture at the maximum image size and then copying each smaller texture into a rectangular sub-area of it.
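A sketch of that workaround (the helper names `createMaxSizeTexture`, `uploadIntoCorner`, and `uvScale` are hypothetical, and `gl` is assumed to be a WebGL context; only the pure `uvScale` helper, which computes the texture-coordinate scale needed to sample just the occupied corner, runs outside a browser):

```javascript
// Allocate one texture at the largest size we will ever need, once,
// so later uploads never reallocate GPU storage.
function createMaxSizeTexture(gl, maxW, maxH) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, maxW, maxH, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  return tex;
}

// Copy a smaller image into the top-left corner of the big texture;
// no reallocation happens because the storage already exists.
function uploadIntoCorner(gl, tex, width, height, data) {
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, width, height,
                   gl.RGBA, gl.UNSIGNED_BYTE, data);
}

// Scale texture coordinates so sampling covers only the occupied corner.
function uvScale(imgW, imgH, maxW, maxH) {
  return [imgW / maxW, imgH / maxH];
}

console.log(uvScale(512, 256, 2048, 2048)); // [ 0.25, 0.125 ]
```

The `uvScale` result would be passed to the shader (e.g. as a uniform multiplied into `v_texcoord`) so the unused region of the big texture is never sampled.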

    const m4 = twgl.m4;
    const gl = document.querySelector('#webgl').getContext('webgl');
    const ctx = document.querySelector('#graph').getContext('2d');

    const vs = `
    attribute vec4 position;
    uniform mat4 matrix;
    varying vec2 v_texcoord;
    void main() {
      gl_Position = matrix * position;
      v_texcoord = position.xy;
    }
    `

    const fs = `
    precision mediump float;
    varying vec2 v_texcoord;
    uniform sampler2D tex;
    void main() {
      gl_FragColor = texture2D(tex, v_texcoord);
    }
    `;

    const program = twgl.createProgram(gl, [vs, fs]);
    const posLoc = gl.getAttribLocation(program, 'position');
    const matLoc = gl.getUniformLocation(program, 'matrix');

    const buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
      0, 0,
      1, 0,
      0, 1,
      0, 1,
      1, 0,
      1, 1,
    ]), gl.STATIC_DRAW);

    gl.enableVertexAttribArray(posLoc);
    gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

    function createTexture(gl) {
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(
          gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
          new Uint8Array([0, 0, 255, 255]));
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      return tex;
    }

    let drawingTex = createTexture(gl);
    let loadingTex = createTexture(gl);

    const m = m4.identity();
    let frameCount = 0;
    let previousTime = 0;

    const workerScript = `
    importScripts(
        // from https://github.com/eugeneware/jpeg-js
        'https://greggman.github.io/doodles/js/JPG-decoder.js',
        // from https://github.com/photopea/UPNG.js
        'https://greggman.github.io/doodles/js/UPNG.js',
    );

    let imgNdx = 0;
    let imgAspect = 1;

    const imageUrls = [
      'https://i.imgur.com/KjUybBD.png',
      'https://i.imgur.com/AyOufBk.jpg',
      'https://i.imgur.com/UKBsvV0.jpg',
      'https://i.imgur.com/TSiyiJv.jpg',
    ];

    function decodePNG(arraybuffer) {
      return UPNG.decode(arraybuffer)
    }

    function decodeJPG(arrayBuffer) {
      return decode(new Uint8Array(arrayBuffer), true);
    }

    const decoders = {
      'image/png': decodePNG,
      'image/jpeg': decodeJPG,
      'image/jpg': decodeJPG,
    };

    async function loadNextImage() {
      const url = \\`\\${imageUrls[imgNdx]}?cachebust=\\${performance.now()}\\`;
      imgNdx = (imgNdx + 1) % imageUrls.length;
      const res = await fetch(url, {mode: 'cors'});
      const arrayBuffer = await res.arrayBuffer();
      const type = res.headers.get('Content-Type');
      let decoder = decoders[type];
      if (!decoder) {
        console.error('unknown image type:', type);
      }
      const imgData = decoder(arrayBuffer);
      postMessage({
        width: imgData.width,
        height: imgData.height,
        arrayBuffer: imgData.data.buffer,
      }, [imgData.data.buffer]);
    }

    onmessage = loadNextImage;
    `;
    const blob = new Blob([workerScript], {type: 'application/javascript'});
    const worker = new Worker(URL.createObjectURL(blob));
    let imgAspect = 1;
    worker.onmessage = async(e) => {
      const {width, height, arrayBuffer} = e.data;
     
      gl.bindTexture(gl.TEXTURE_2D, loadingTex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
     
      const maxRows = 20;
      for (let y = 0; y < height; y += maxRows) {
        const rows = Math.min(maxRows, height - y);
        gl.bindTexture(gl.TEXTURE_2D, loadingTex);
        gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, width, rows, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(arrayBuffer, y * width * 4, rows * width * 4));  
        await waitRAF();
      }
      const temp = loadingTex;
      loadingTex = drawingTex;
      drawingTex = temp;
      imgAspect = width / height;
      await waitMS(1000);
      worker.postMessage('');
    };
    worker.postMessage('');

    function waitRAF() {
      return new Promise(resolve => requestAnimationFrame(resolve));
    }

    function waitMS(ms = 0) {
      return new Promise(resolve => setTimeout(resolve, ms));
    }

    function render(currentTime) {
      const deltaTime = currentTime - previousTime;
      previousTime = currentTime;
     
      {
        const {width, height} = ctx.canvas;
        const x = frameCount % width;
        const y = 1000 / deltaTime / 60 * height / 2;
        ctx.fillStyle = frameCount % (width * 2) < width ? 'red' : 'blue';
        ctx.clearRect(x, 0, 1, height);
        ctx.fillRect(x, y, 1, height);
        ctx.clearRect(0, 0, 30, 15);
        ctx.fillText((1000 / deltaTime).toFixed(1), 2, 10);
      }

      gl.useProgram(program);
      const dispAspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
      m4.scaling([1 / dispAspect, 1, 1], m);
      m4.rotateZ(m, currentTime * 0.001, m);
      m4.scale(m, [imgAspect, 1, 1], m);
      m4.translate(m, [-0.5, -0.5, 0], m);
      gl.bindTexture(gl.TEXTURE_2D, drawingTex);
      gl.uniformMatrix4fv(matLoc, false, m);
      gl.drawArrays(gl.TRIANGLES, 0, 6);
     
      ++frameCount;
      requestAnimationFrame(render);
    }
    requestAnimationFrame(render);
    canvas { border: 1px solid black; margin: 2px; }
    <script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
    <canvas id="webgl"></canvas>
    <canvas id="graph"></canvas>

    Note that the jpeg decoder is slow. If you find or make a faster one, please post a comment.


    Update

    I just want to say that ImageBitmap should probably be fast enough, and that some of my comments above about it not having enough information may not be entirely correct.

    My current understanding is that the entire point of ImageBitmap was to make uploads fast. It's supposed to work by you giving it a blob, and asynchronously it loads that image into the GPU. When you call texImage2D with it, the browser can "blit" (render with the GPU) that image into your texture. I have no idea why there is jank in the first example, but I see it every 6 or so images.

    On the other hand, while uploading the image to the GPU is the entire point of ImageBitmap, nothing requires the browser to actually upload it to the GPU. ImageBitmap is still supposed to work even if the user doesn't have a GPU. The point being, it's up to the browser how to implement the feature, and whether it's fast or slow or jank-free is entirely up to the browser.