Converting an image to pixel format and back to UIImage changes the image on iPhone

I first convert the image to raw pixels and then convert the pixels back to a UIImage again. After the conversion the image changes color and becomes somewhat transparent. I have tried a lot but could not solve the problem. Here is my code:

-(UIImage*)markPixels:(NSMutableArray*)pixels OnImage:(UIImage*)image{
    CGImageRef inImage = image.CGImage;
    // Create an off-screen bitmap context to draw the image into. Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue.
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    int r = 3;
    int p = 2*r+1;
    unsigned char* data = CGBitmapContextGetData(cgctx);
    int i = 0;
    while (data[i] && data[i+1]) {
        // NSLog(@"%d",pixels[i]);
        i++;
    }
    NSLog(@"%d %zd %zd", i, w, h);
    NSLog(@"%zu", sizeof(CGBitmapContextGetData(cgctx)));
    for (int i = 0; i < pixels.count-1; i++) {
        NSValue *touch1 = [pixels objectAtIndex:i];
        NSValue *touch2 = [pixels objectAtIndex:i+1];
        NSArray *linePoints = [self returnLinePointsBetweenPointA:[touch1 CGPointValue] pointB:[touch2 CGPointValue]];
        for (NSValue *touch in linePoints) {
            NSLog(@"point = %@", NSStringFromCGPoint([touch CGPointValue]));
            CGPoint location = [touch CGPointValue];
            for (int i = -r; i < p; i++)
                for (int j = -r; j < p; j++)
                {
                    if (i <= 0 && j <= 0 && i > image.size.height && j > image.size.width)
                        continue;
                    NSInteger index = (location.y+i) * w*4 + (location.x+j) * 4;
                    index = 0;
                    data[index + 3] = 125;
                }
        }
    }
    // When finished, release the context
    CGContextRelease(cgctx);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, data, w*h*4, NULL);
    CGImageRef img = CGImageCreate(w, h, 8, 32, 4*w, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, dp, NULL, NO, kCGRenderingIntentDefault);
    UIImage *ret_image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    CGColorSpaceRelease(colorSpace);
    // Free image data memory for the context
    if (data) { free(data); }
    return ret_image;
}


The first image is the original and the second is the result after applying this code.


You have to ask the CGImageRef whether it uses alpha and what the per-pixel component format is -- look at all the CGImageGet... functions. Most likely the image is not ARGB but BGRA.
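As a rough sketch of what that "asking" looks like (not from the original post), the relevant CGImageGet... queries are along these lines:

    CGImageRef inImage = image.CGImage;
    size_t bitsPerComponent   = CGImageGetBitsPerComponent(inImage); // usually 8
    size_t bitsPerPixel       = CGImageGetBitsPerPixel(inImage);     // usually 32
    size_t bytesPerRow        = CGImageGetBytesPerRow(inImage);      // may be padded beyond width*4
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(inImage);       // e.g. kCGImageAlphaPremultipliedFirst
    CGBitmapInfo bitmapInfo    = CGImageGetBitmapInfo(inImage);      // alpha placement plus byte-order flags
    NSLog(@"bpc=%zu bpp=%zu bpr=%zu alpha=%u bitmapInfo=%u",
          bitsPerComponent, bitsPerPixel, bytesPerRow,
          (unsigned)alphaInfo, (unsigned)bitmapInfo);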

I often create and render a pure green image and then print out the first pixel to make sure it is right (BGRA -> 0 255 0 255), etc. It does get confusing with host byte order and so on, and with alpha first versus alpha last (does "first" or "last" mean before or after the host order is applied?).
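A minimal sketch of that green-pixel check, assuming an 8-bit RGBA context; the bytes you should expect depend entirely on the flags you pass:

    unsigned char pixel[4] = {0};
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, cs,
                           kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); // RGBA in memory
    CGContextSetRGBFillColor(ctx, 0, 1, 0, 1);        // pure green, fully opaque
    CGContextFillRect(ctx, CGRectMake(0, 0, 1, 1));
    NSLog(@"first pixel: %d %d %d %d", pixel[0], pixel[1], pixel[2], pixel[3]); // expect 0 255 0 255 here
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);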

EDIT: You tell CGDataProviderCreateWithData to use 'kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big', but I don't see anywhere that you ask how the original image is configured. My guess is that changing 'kCGBitmapByteOrder32Big' to 'kCGBitmapByteOrder32Little' will fix your problem, but the alpha may be wrong too.
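For example, the common iOS layout is premultiplied-alpha-first with 32-bit little-endian host order (BGRA in memory); if that is what your bitmap context actually produced, the CGImageCreate call would need flags along these lines (a guess, not a verified fix):

    CGImageRef img = CGImageCreate(w, h, 8, 32, 4*w, colorSpace,
                                   kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little, // BGRA in memory
                                   dp, NULL, NO, kCGRenderingIntentDefault);

Whatever you pass here has to match the layout the buffer actually holds, i.e. the flags used when the bitmap context was created in createARGBBitmapContextFromImage; a mismatch is exactly what produces the color shift and bogus transparency you are seeing.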

The alpha and the byte order of an image can each take different values, so you really do need to ask how the original image is configured and then adapt to it (or remap the bytes in memory into whatever format you want).
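If you would rather keep the output flags fixed and remap the buffer instead, an in-place swap from BGRA to RGBA looks roughly like this (a sketch assuming a tightly packed 8-bit, 4-channel buffer):

    static void SwapBGRAtoRGBA(unsigned char *data, size_t width, size_t height) {
        for (size_t px = 0; px < width * height; px++) {
            unsigned char *p = data + px * 4;
            unsigned char b = p[0];   // byte 0 currently holds blue
            p[0] = p[2];              // move red into byte 0
            p[2] = b;                 // move blue into byte 2; green (p[1]) and alpha (p[3]) stay put
        }
    }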