ios - How can I speed up displaying bitmap graphics with Swift and an image buffer?

I've been working on an open-source 2D bitmap renderer for iOS to study different computer graphics and pixel-based algorithms (like Bresenham's line algorithm, Catmull-Rom splines, Conway's Game of Life, etc.). It's inspired by immediate-mode graphics and frameworks like Processing.

It works, but my problem is that it is very slow. For example, drawing a Voronoi diagram with a per-pixel for loop is essentially unusable at full screen unless I scale everything down to under 500 × 500 px.

My technique is naive and I am deliberately not using Apple's primitives for learning purposes.

The idea is to write into an image buffer, which is essentially an array of colors, using different techniques, and to redraw it at 60 frames per second (or whatever frame rate I'd like). Here is the buffer:

var buffer: [Color] = Array(repeating: Color(0), count: width * height)
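
The buffer is laid out row-major, so writing a single pixel is just an index computation. A minimal sketch of that (setPixel is a hypothetical helper, not part of the project; Color is the struct shown below):

    // Hypothetical helper: write one pixel into the row-major buffer.
    func setPixel(_ x: Int, _ y: Int, _ color: Color) {
        // Ignore out-of-bounds writes rather than crashing.
        guard x >= 0, x < width, y >= 0, y < height else { return }
        buffer[y * width + x] = color
    }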

Color is a struct:

public struct Color {
    var r: UInt8 = 255
    var g: UInt8 = 255
    var b: UInt8 = 255
    var a: UInt8 = 255

    init(_ r: UInt8, _ g: UInt8, _ b: UInt8, _ a: UInt8) {
        self.r = r
        self.g = g
        self.b = b
        self.a = a
    }

    init(_ r: UInt8, _ g: UInt8, _ b: UInt8) {
        self.r = r
        self.g = g
        self.b = b
        self.a = 255
    }

    init(_ w: UInt8) {
        self.r = w
        self.g = w
        self.b = w
        self.a = 255
    }
}
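
To make the Voronoi example above concrete, the per-pixel loop I mean is roughly the following. This is only a sketch (the seeds and palette arrays are hypothetical, and width and height are assumed to be Ints, as in the buffer declaration above); every pixel does a brute-force nearest-seed search over all seeds:

    // Hypothetical seed points and one color per seed.
    let seeds = (0..<16).map { _ in (x: Int.random(in: 0..<width), y: Int.random(in: 0..<height)) }
    let palette = seeds.map { _ in Color(UInt8.random(in: 0...255),
                                         UInt8.random(in: 0...255),
                                         UInt8.random(in: 0...255)) }

    for y in 0..<height {
        for x in 0..<width {
            // Find the nearest seed; squared distance is enough for comparison.
            var nearest = 0
            var nearestDist = Int.max
            for (i, s) in seeds.enumerated() {
                let dx = x - s.x, dy = y - s.y
                let d = dx * dx + dy * dy
                if d < nearestDist {
                    nearestDist = d
                    nearest = i
                }
            }
            buffer[y * width + x] = palette[nearest]
        }
    }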

I then create a CADisplayLink so that I can control the frame rate:

    func createDisplayLink(fps: Int) {
        displaylink = CADisplayLink(target: self,
                                    selector: #selector(step))
        
        displaylink.preferredFramesPerSecond = fps
        
        displaylink.add(to: .current,
                        forMode: .default)
        
        if startTimeRecorded == false {
            sketchStartTime = Int(CACurrentMediaTime() * 1000)
            startTimeRecorded = true
        }
    }

Then I step forward at my chosen frame rate and call the draw loop, converting the buffer to a UIImage and displaying it in a UIImageView IBOutlet:

    @objc func step(displaylink: CADisplayLink) {
        // Convert the pixel buffer to a UIImage, display it, then run the next frame's drawing code.
        imageView.image = imageFromARGB32Bitmap(pixels: buffer, width: Int(width), height: Int(height))
        frameCount += 1
        draw()
    }

Here's my image creation code:

    func imageFromARGB32Bitmap(pixels: [Color], width: Int, height: Int) -> UIImage? {
        guard width > 0 && height > 0 else { return nil }
        guard pixels.count == width * height else { return nil }
        
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        // .premultipliedLast => bytes are ordered R, G, B, A, matching the field order of Color.
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let bitsPerComponent = 8
        let bitsPerPixel = 32
        
        var data = pixels // Copy to a mutable array so its bytes can be handed to the data provider
        guard let providerRef = CGDataProvider(data: NSData(bytes: &data,
                                                            length: data.count * MemoryLayout<Color>.size))
        else { return nil }
        
        guard let cgim = CGImage(
            width: width,
            height: height,
            bitsPerComponent: bitsPerComponent,
            bitsPerPixel: bitsPerPixel,
            bytesPerRow: width * MemoryLayout<Color>.size,
            space: rgbColorSpace,
            bitmapInfo: bitmapInfo,
            provider: providerRef,
            decode: nil,
            shouldInterpolate: false,
            intent: .defaultIntent
        )
        else { return nil }
        
        return UIImage(cgImage: cgim)
    }

Here are my questions:

  1. What are my bottlenecks here? I believe this is entirely CPU-bound (it also locks up the interface while drawing). Is there a way to incorporate the GPU, and how would I go about learning how to do this? (A rough timing sketch for locating the bottleneck follows this list.)
  2. Is it improper/slow to be converting the buffer to a UIImage for display? Is there a lower-level structure that I should be using?
  3. Is it OK for my image buffer to just be an array of colors? Does this slow things down? Is there a lower-level image buffer object that I should really be using instead?
  4. Is Metal something I could implement that would speed things up? If so, can you tell me what I should look for and how I might go about learning to implement it? I'm having trouble wrapping my head around the idea of displaying a single pixel, because I know that's not really how 3D works.
  5. Am I making any incorrect assumptions about what I should and could be doing with computer graphics? If so, what are they, and can you recommend any texts/papers to help correct my misunderstandings?
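
For context on question 1, a rough way to see where each frame's time goes would be to time the two halves of step separately: the buffer-to-UIImage conversion versus the drawing code. This is only a sketch (timedStep is hypothetical and not part of the project):

    @objc func timedStep(displaylink: CADisplayLink) {
        let t0 = CACurrentMediaTime()
        imageView.image = imageFromARGB32Bitmap(pixels: buffer, width: Int(width), height: Int(height))
        let t1 = CACurrentMediaTime()
        draw()
        let t2 = CACurrentMediaTime()
        frameCount += 1
        // Milliseconds spent converting/displaying the image vs. running the pixel algorithms.
        print(String(format: "convert+display: %.2f ms, draw: %.2f ms",
                     (t1 - t0) * 1000, (t2 - t1) * 1000))
    }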

1 Answer

Waiting for an expert to reply.
