dmrschmidt/DSWaveformImage

different colors

MysteryRan opened this issue · 20 comments

[Screenshot: 2021-02-20, 3:02 PM]

I want to implement something like this: setting different parts of the waveform to different colors. How can I do that? Thanks.

There are many different ways this could be achieved.
In this particular case, it looks like the "2nd color" is simply the same white, just with 40% or so opacity.
So what you could do is:

  • render yourself one waveform as a 100% white image
  • assign this to one waveform image view (below, that's viewModel.playbackWaveformImageView)
  • assign it to a 2nd waveform image view that you position exactly underneath the 1st one
  • set secondImageView.alpha = 0.4 or similar
  • as the audio plays back, update your topmost waveform's layer mask so that it only covers part of the space

I happen to have done this masking in one of my apps, so for illustration here's that code:

func updateProgressWaveform(_ progress: Double) {
    // progress is expected to be between 0 and 1
    let fullRect = viewModel.playbackWaveformImageView.bounds
    let newWidth = Double(fullRect.size.width) * progress

    // mask the top waveform so only the played portion stays visible
    let maskLayer = CAShapeLayer()
    let maskRect = CGRect(x: 0.0, y: 0.0, width: newWidth, height: Double(fullRect.size.height))

    let path = CGPath(rect: maskRect, transform: nil)
    maskLayer.path = path

    viewModel.playbackWaveformImageView.layer.mask = maskLayer
}
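
For context, progress here is the playback position normalized to 0...1. Assuming, say, an AVAudioPlayer named player (hypothetical, not part of the snippet above), you could drive it periodically like this:

// e.g. from a Timer or CADisplayLink callback (player is hypothetical):
updateProgressWaveform(player.currentTime / player.duration)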

[Screenshot: 2021-02-20, 4:13 PM]

thx.

There’s no intrinsic content size being calculated. So the short answer is, there is none. Instead, you define the size of the view (and thus waveform) by either setting the view’s frame or via auto layout constraints. The audio file is then downsampled to fit into the width and height you define.

If you want a specific resolution instead, you’ll have to do some manual math based on the audio file’s total duration and then set the view’s dimensions accordingly.
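
For illustration, a minimal sketch of that math, assuming you want a fixed horizontal resolution of pointsPerSecond (a made-up parameter, not part of the library):

import AVFoundation

func waveformWidth(for audioURL: URL, pointsPerSecond: CGFloat) -> CGFloat {
    // total duration of the audio file in seconds
    let duration = CMTimeGetSeconds(AVURLAsset(url: audioURL).duration)
    return CGFloat(duration) * pointsPerSecond
}

// e.g. waveformImageView.frame.size.width = waveformWidth(for: url, pointsPerSecond: 50)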

So by "unfilled", you mean "unplayed" then, assuming that image you posted represents playback progress?

In that case, you'd need to do something similar to what I had originally described in #21 (comment)

If you do still need the dimensions and position of the unplayed / unfilled area, you'll just need to calculate the "inverse" of that answer's maskLayer. So something along the lines of let unplayedWidth = fullRect.width - newWidth. The origin is essentially newWidth plus the x-position of the waveform view.

I'm just on my phone right now, so I can't write a full code sample, but I hope this gives you the direction.
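
To sketch that out (untested, written against the updateProgressWaveform example above; unplayedWaveformImageView is a hypothetical second image view holding the static waveform):

func updateUnplayedWaveform(_ progress: Double) {
    let fullRect = viewModel.playbackWaveformImageView.bounds
    let playedWidth = Double(fullRect.size.width) * progress

    // the inverse of the mask above: it starts where the played part ends
    let maskLayer = CAShapeLayer()
    let maskRect = CGRect(x: playedWidth, y: 0.0,
                          width: Double(fullRect.size.width) - playedWidth,
                          height: Double(fullRect.size.height))
    maskLayer.path = CGPath(rect: maskRect, transform: nil)

    // the mask lives in the image view's own coordinate space, so no extra
    // x-offset is needed; add the view's x-position only if you mask a
    // containing view instead
    viewModel.unplayedWaveformImageView.layer.mask = maskLayer
}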

Hey @dmrschmidt,

Any idea if it's possible to do this in SwiftUI? Thanks.

There are tons of different ways to achieve this; one of the simplest might be:

// @State var progress: CGFloat = 0 // must be between 0 and 1

ZStack(alignment: .leading) {
    WaveformView(audioURL: audioURL, configuration: configuration)
    WaveformView(audioURL: audioURL, configuration: configuration.with(style: .filled(.red)))
        .mask(alignment: .leading) {
            GeometryReader { geometry in
                Rectangle().frame(width: geometry.size.width * progress)
            }
        }
}

[Edit] I've added this to the README now so that it's easier to find in the future :)

Can you please help me with how I can make it draggable as well? Currently I have a design requirement to show striped waves.
Thank you @dmrschmidt

Hey @tayyab13-git,

getting a striped waveform is easy via something similar to this:

WaveformView(audioURL: audioURL, configuration: configuration.with(style: .striped(StripeConfig(color: .red, width: 3, spacing: 5, lineCap: .round))))

Making the overlay from the above example draggable is also relatively straightforward. You'll just need to add a DragGesture on the outer ZStack which would need to modify some @State variable using its onChanged(_:) modifier. That would then be used instead of the simplistic progress to manipulate the width of the .mask.

Maybe also have a look at https://developer.apple.com/documentation/swiftui/adding-interactivity-with-gestures in case you haven't used gestures in SwiftUI yet.
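
Untested, but a sketch of that combination could look something like this (assuming the same WaveformView setup as above; module names per the library's README):

import SwiftUI
import DSWaveformImage
import DSWaveformImageViews

struct DraggableWaveform: View {
    let audioURL: URL
    let configuration: Waveform.Configuration

    @State private var progress: CGFloat = 0 // between 0 and 1

    var body: some View {
        GeometryReader { geometry in
            ZStack(alignment: .leading) {
                WaveformView(audioURL: audioURL, configuration: configuration)
                WaveformView(audioURL: audioURL, configuration: configuration.with(style: .filled(.red)))
                    .mask(alignment: .leading) {
                        Rectangle().frame(width: geometry.size.width * progress)
                    }
            }
            .gesture(
                DragGesture(minimumDistance: 0)
                    .onChanged { value in
                        // map the horizontal touch position to 0...1
                        progress = min(max(value.location.x / geometry.size.width, 0), 1)
                    }
            )
        }
    }
}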

I'm using UIKit, and I think it will be the same on UIKit too. Do I need to add the gesture in the updateProgressWaveform function? Am I right?

Ah my bad. Well. So with UIKit then, yeah, you could in principle just re-use updateProgressWaveform as-is. You'd then need to add a UIPanGestureRecognizer to your view hierarchy where it makes sense in your specific case. And then just use its translation(in:) to infer a value within the interval (0...1) to be able to call it without modifications.
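
A rough sketch of that, with waveformContainerView as a hypothetical view holding both waveform image views:

// inside the view controller that owns the waveform views
private var currentProgress: Double = 0

func setupPanGesture() {
    let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
    waveformContainerView.addGestureRecognizer(pan)
}

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    // translation(in:) is relative to where the gesture started, so treat
    // it as a delta on top of the progress at gesture start
    let delta = Double(recognizer.translation(in: waveformContainerView).x / waveformContainerView.bounds.width)
    let newProgress = min(max(currentProgress + delta, 0), 1)
    updateProgressWaveform(newProgress)

    if recognizer.state == .ended {
        currentProgress = newProgress
    }
}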

Thank you for answering. Please tell me: do I need to add another image over the static image as a mask, then add the pan gesture on that mask, and for progress use the value coming from the pan gesture and pass it to updateProgressWaveform? Thanks for the help 🙌🏻

So in the example above you need 2 identical images of your waveform on top of each other, each with a different color. The one referenced in that example as playbackWaveformImageView is the top one, which indicates the playback progress / dragging position.

That is the one getting the mask applied, so you only need 2 images. (The lower one just isn't referenced in that code, because it's static.)

Where you add the pan gesture recognizer depends on your desired UX. One option is to add it on the view containing both images. And yes, you will then need to do some math to map the current dragging position to the desired progress of the waveform. Maybe taking the initial position where the user touched into account, or maybe not. Definitely with some calculation, because the pan recognizer gives you a CGPoint and updateProgressWaveform requires a Double between 0 and 1. It really depends a whole lot on how you want this to behave in the end.

Thank you so much. I am working on a text-to-speech converter app, and I'm building a simple player to show the progress and allow the user to drag the seek bar.

you're welcome. and good luck with that @tayyab13-git!

Can you give me detailed instructions on how to play an audio file that has a waveform like this?
Thank you @dmrschmidt

I still don't understand: what is the viewModel here? And where is playbackWaveformImageView?

Are you using UIKit or SwiftUI?

I use SwiftUI

Then the code mentioned in #21 (comment) does everything you need to show the playback progress.

The comments referencing playbackWaveformImageView are irrelevant for you, as that’s UIKit.

tuo commented
// @State var progress: CGFloat = 0 // must be between 0 and 1

ZStack(alignment: .leading) {
    WaveformView(audioURL: audioURL, configuration: configuration)
    WaveformView(audioURL: audioURL, configuration: configuration.with(style: .filled(.red)))
        .mask(alignment: .leading) {
            GeometryReader { geometry in
                Rectangle().frame(width: geometry.size.width * progress)
            }
        }
}
[Screenshot: 2024-07-06, 5:44 PM]

Thanks, this trick is nice! Just one minor issue: if we put the GeometryReader inside the mask, the height of the mask can, in extreme cases (loud audio), end up smaller than the wave view it is applied to. To fix this misalignment, we could move the GeometryReader out of the mask and wrap it around the ZStack instead. That should work.

[Screenshot: 2024-07-06, 5:40 PM]
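
For reference, one way to apply that suggestion (reusing the names from the example above, with the GeometryReader now wrapping the ZStack):

GeometryReader { geometry in
    ZStack(alignment: .leading) {
        WaveformView(audioURL: audioURL, configuration: configuration)
        WaveformView(audioURL: audioURL, configuration: configuration.with(style: .filled(.red)))
            .mask(alignment: .leading) {
                // the Rectangle is now proposed the full height of the
                // waveform, so the mask no longer ends up shorter than it
                Rectangle().frame(width: geometry.size.width * progress)
            }
    }
}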