WebGL glfx.js matrix transform (perspective) crops the image if it rotates
I am using the glfx.js library to create a perspective effect on an image with a matrix transform. In my app the system works like Photoshop's smart objects: you work on a flat image and get the perspective-warped result after rendering.
glfx.js uses this function:
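The call shape is the same one used in the fiddle code further down; both quads are passed as flat arrays of four x,y corner pairs:

    // glfx.js perspective filter: maps the "before" quad onto the "after" quad
    // (each quad is a flat array [x1,y1, x2,y2, x3,y3, x4,y4])
    canvas.draw(texture).perspective(before, after).update();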
My problem is that if the resulting image after the transform is larger than the original (which can happen when the image is rotated), the WebGL canvas crops my image.
Please check this fiddle:
https://jsfiddle.net/human_a/o4yrheeq/
    window.onload = function() {
        try {
            var canvas = fx.canvas();
        } catch (e) {
            alert(e);
            return;
        }

        // convert the image to a texture
        var image = document.getElementById('image');
        var texture = canvas.texture(image);

        // apply the perspective filter
        canvas.draw(texture).perspective(
            [0, 0, 774, 0, 0, 1094, 774, 1094],
            [0, 389, 537, 0, 732, 1034, 1269, 557]
        ).update();

        image.src = canvas.toDataURL('image/png');

        // or even if you replace the image with the canvas
        // image.parentNode.insertBefore(canvas, image);
        // image.parentNode.removeChild(image);
    };
    <script src="https://evanw.github.io/glfx.js/glfx.js"></script>
    <img id="image" crossOrigin="anonymous" src="https://images.unsplash.com/photo-1485207801406-48c5ac7286b2?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=600&fit=max&s=9bb1a18da78ab0980d5e7870a236af88">
Any ideas on how to make the WebGL canvas fit the rotated image (rather than scaling the image down), or how to somehow extract the whole image instead of the cropped one?
More pixels
There is no solution that covers every case. This is because when you go from 2D to 3D the size of the projected image can approach infinity (near clipping is what prevents actual infinity), so no matter how large you make the output there is always the possibility that some clipping will be applied.
With that caveat out of the way, a solution that avoids clipping can be found for most situations, and it is very simple: just expand the canvas so it can hold the extra content.
Finding the bounds
To simplify the calculations I changed the after array to a set of normalized points (they express the after coordinates as fractions of the image size). These are converted back to real pixel coordinates using the image size, and from that the minimum texture size that holds both the original image and the projection is worked out.
With that information I create the texture (as a canvas) and draw the image onto it, adjust the before array if needed (in case some of the projected points fall into negative space), and apply the filter.
So, assume we have an image object with a width and a height, plus the projected points for it.
    // assuming image has been loaded and is ready
    var imgW = image.naturalWidth;
    var imgH = image.naturalHeight;
Set up the corner array (before):
    var before = [0, 0, imgW, 0, 0, imgH, imgW, imgH];
Then the projected points. To make them easier to work with, I have normalized them to the image size:
    var projectNorm = [[0, 0.3556], [0.6938, 0], [0.9457, 0.9452], [1.6395, 0.5091]];
If you want to use the absolute coordinates from your fiddle's after array, use the code below instead. The normalization is reversed again in the next step, so in that case you could skip it altogether; I am short on time, so I have just quickly updated the answer.
    var afterArray = [0, 389, 537, 0, 732, 1034, 1269, 557];
    projectNorm = [];
    for (var i = 0; i < afterArray.length; i += 2) {
        // normalize x by the image width and y by the image height
        projectNorm.push([afterArray[i] / imgW, afterArray[i + 1] / imgH]);
    }
Now work out the size of the projection. This is the important part, as it determines how big the canvas has to be.
    var top, left, right, bottom;
    top = 0;
    left = 0;
    bottom = imgH;
    right = imgW;
    var project = projectNorm.map(p => [p[0] * imgW, p[1] * imgH]);
    project.forEach(p => {
        top = Math.min(p[1], top);
        left = Math.min(p[0], left);
        bottom = Math.max(p[1], bottom);
        right = Math.max(p[0], right);
    });
Now that we have gathered all the data we need, we can create a new image (a canvas) that will hold the projection (assuming the projected points are true to the final projection).
    var texture = document.createElement("canvas");
    var ctx = texture.getContext("2d");
    texture.width = Math.ceil(right - left);
    texture.height = Math.ceil(bottom - top);
Draw the image onto it, offset so that nothing ends up in negative space (left and top are zero or negative):
    ctx.setTransform(1, 0, 0, 1, -left, -top); // left/top are <= 0, so this shifts the image right/down into positive space
    ctx.drawImage(image, 0, 0);
    ctx.setTransform(1, 0, 0, 1, 0, 0); // reset transform
Then flatten the projected points into an array:
    var after = [];
    project.forEach(p => after.push(...p));
And move all the points into positive projection space:
    after.forEach((p, i) => {
        if (i % 2) {
            before[i] += -top;
            after[i] += -top;
        } else {
            before[i] += -left;
            after[i] += -left;
        }
    });
The final step is to create the glfx.js objects and apply the filter:
    // create a fx canvas
    var canvas = fx.canvas();
    // create the texture
    var glfxTexture = canvas.texture(texture);
    // apply the filter
    canvas.draw(glfxTexture).perspective(before, after).update();
    // show the result on the page
    document.body.appendChild(canvas);
Demo
A snippet demo using the method described above (with slightly modified image loading):
    // To save time typing I have just kludged a simple load-image wait poll
    waitForLoaded();
    function waitForLoaded() {
        if (image.complete) {
            projectImage(image);
        } else {
            setTimeout(waitForLoaded, 500);
        }
    }

    function projectImage(image) {
        var imgW = image.naturalWidth;
        var imgH = image.naturalHeight;
        var projectNorm = [[0, 0.3556], [0.6938, 0], [0.9457, 0.9452], [1.6395, 0.5091]];
        var before = [0, 0, imgW, 0, 0, imgH, imgW, imgH];
        var top, left, right, bottom;
        top = 0;
        left = 0;
        bottom = imgH;
        right = imgW;
        var project = projectNorm.map(p => [p[0] * imgW, p[1] * imgH]);
        project.forEach(p => {
            top = Math.min(p[1], top);
            left = Math.min(p[0], left);
            bottom = Math.max(p[1], bottom);
            right = Math.max(p[0], right);
        });
        var texture = document.createElement("canvas");
        var ctx = texture.getContext("2d");
        texture.width = Math.ceil(right - left);
        texture.height = Math.ceil(bottom - top);
        ctx.setTransform(1, 0, 0, 1, -left, -top); // left/top are <= 0, so this shifts the image into positive space
        ctx.drawImage(image, 0, 0);
        ctx.setTransform(1, 0, 0, 1, 0, 0); // reset transform
        var after = [];
        project.forEach(p => after.push(...p));
        after.forEach((p, i) => {
            if (i % 2) {
                before[i] += -top;
                after[i] += -top;
            } else {
                before[i] += -left;
                after[i] += -left;
            }
        });
        // create a fx canvas
        var canvas = fx.canvas();
        // create the texture
        var glfxTexture = canvas.texture(texture);
        // apply the filter
        canvas.draw(glfxTexture).perspective(before, after).update();
        // show the result on the page
        document.body.appendChild(canvas);
    }
    #image {
        display: none;
    }
    <script src="https://evanw.github.io/glfx.js/glfx.js"></script>
    <img id="image" crossOrigin="anonymous" src="https://images.unsplash.com/photo-1485207801406-48c5ac7286b2?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&s=9bb1a18da78ab0980d5e7870a236af88">
Notes and warnings
Note that the projection points (the after array) do not always match the final corner points of the projected image. If that is the case, the final image may still be clipped.
Note: this method only works if the before points represent the extreme corners of the original image. If the before points lie inside the image, the method may fail.
Warning: there is no vetting of the resulting image size. Large images can cause the browser to become sluggish and sometimes crash. For production code you should do your best to keep the image size within the limits of the device running your code; clients seldom return to pages that are slow or crash.
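As a minimal sketch of such a check (not part of the original answer; the fallback value and the warn-only behaviour are assumptions), you could compare the bounds computed above (left, top, right, bottom) against the device's WebGL texture limit before building the texture:

    // Coarse size guard: ask WebGL for the device's maximum texture dimension
    // and bail out (here: just warn) if the enlarged canvas would exceed it.
    function fitsDevice(width, height) {
        var gl = document.createElement("canvas").getContext("webgl");
        var max = gl ? gl.getParameter(gl.MAX_TEXTURE_SIZE) : 2048; // conservative fallback if WebGL is unavailable
        return width <= max && height <= max;
    }

    if (!fitsDevice(Math.ceil(right - left), Math.ceil(bottom - top))) {
        // Scale the projection down, tile the work, or warn the user here.
        console.warn("Projected image is too large for this device's WebGL textures.");
    }

MAX_TEXTURE_SIZE is only an upper bound; sizes well below it can still be slow, so a lower, application-specific cap is usually the safer choice.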