g200kg/webaudio-tinysynth

Meaning and implementation of program parameters


Hi, very cool project! I was wondering if you can answer some questions about your implementation of MIDI instruments (i.e. the timbre map).

  1. Regarding the raw parameter values ([{w:"sine",d:0.7,v:0.2,}, ... and so on) - do these values come from a reference somewhere, or did you just work them all out?

  2. Regarding the sound editor parameters, I assume that V is volume, F is frequency, and ADSR is the oscillator envelope. But what are the others?

    • G
    • T
    • H
    • P
    • K

The reason I ask is that I'm trying to recreate a similar kind of functionality for a dynamic audio project. My project isn't related to MIDI, so I don't think I can use your project's API directly, but I'd like to recreate some of the functionality.

Thanks!

Hello,
All the parameters were tuned by trial and error, by ear.

The parameters mean:
G... Output destination. 0: to the final output; 1-n: FM modulation to the specified oscillator.
T... Tune factor. If 1, the oscillator follows the note# pitch. The frequency = T*(note# frequency) + F (fixed frequency).
H... Hold time; how long the attack peak is held. This is generally called an AHDSR envelope.
P... Pitch bend. If not 1, the pitch is bent toward (BaseFrequency)*P during the release.
K... Volume key tracking. Volume increases with pitch if positive, decreases if negative.
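
Putting that together, a two-oscillator entry in the timbre map might look roughly like this (only a sketch; the keys follow the lowercase names in the snippet you quoted, and they may change):

// Sketch of a hypothetical two-oscillator FM timbre entry.
// Oscillator 1 (g:1) frequency-modulates oscillator 0 (g:0), which goes to the final output.
var timbre = [
  {w:"sine", v:0.4, t:1, f:0, a:0.01, h:0.05, d:0.7, s:0.2, r:0.1, g:0, p:1, k:0},
  {w:"sine", v:3,   t:2, f:0, a:0.01, h:0.05, d:0.3, s:0,   r:0.1, g:1, p:1, k:0}
];
// t:1, f:0   -> frequency follows the note pitch (freq = 1*noteFreq + 0)
// t:0, f:440 -> would instead fix the oscillator at 440 Hz regardless of the note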

Thanks, I mostly understand.

Have you considered making this implementation usable as a standalone dependency (i.e. without the MIDI support, UI support, and so on)?

If not, I think I'll do this, if you don't mind!

Hi,
I have not figured it out yet, but what kind of API do you have in mind for a standalone version?
Is it supposed to have an API like noteOn()/noteOff() instead of send([MIDI-message])?
I'd like to support that use case, but it may end up being just a bridge to the MIDI commands. Or were you thinking of something different?

Of course, feel free to fork it and make something of your own.

Hi,

Yes, that's basically what I meant - or maybe with other APIs to do whatever else can be done with MIDI commands? I had supposed that functionality like starting and stopping notes would be conceptually lower-level than handling MIDI commands, but I don't know anything about MIDI; it might not be that simple.

Either way I'd also need to remove some UI-related stuff, and add a CommonJS wrapper, so it might be simplest if I fork and try to track any changes.

(By the way, I can speak Japanese more or less, so if that would be easier for you, please let me know.)

OK, I see.

Setting aside SMF-related timing control,
MIDI is originally a collection of noteOn/noteOff-level commands,
so an API at that level can be mapped directly onto MIDI commands.

Though it may end up being just a bridge to the MIDI commands,
I think it is worth providing a human-readable API.
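
For example, a call like noteOn() could be a thin wrapper that just assembles the raw MIDI bytes and forwards them to the existing send([MIDI-message]) method (a sketch of the idea only, not the actual code):

// Sketch: friendly calls as thin wrappers over send([MIDI-message]).
function noteOn(ch, note, velocity) { synth.send([0x90 | ch, note, velocity]); } // note on
function noteOff(ch, note)          { synth.send([0x80 | ch, note, 0]); }        // note off
function program(ch, prg)           { synth.send([0xc0 | ch, prg]); }            // program change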

The currently implemented functions will be named something like this:

noteOn(ch, note, velocity)
noteOff(ch, note)
program(ch, prg) // timbre select
bend(ch, bend) // pitch bend
control(cc#, val) // control change
bendRange(ch, brange)
allSoundOff(ch)
resetAllController(ch)

  • ch=0-15

  • note=0-127; 60 = middle C.
    MIDI channel 10 (ch=9 in this 0-based numbering) is the drum track;
    each note number is mapped to a different percussion instrument.

  • prg=0-127 (select GM mapped timbre)

  • cc# :
    1: vibrato depth
    7: ch vol
    10: pan
    11: expression
    64: sustain pedal
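
To make the intent concrete, usage would look something like this (a sketch, assuming the names and signatures end up as listed above):

// Sketch of the proposed API in use.
synth.program(0, 0);       // channel 0: GM timbre 0 (acoustic grand piano)
synth.control(7, 100);     // cc#7: channel volume
synth.control(10, 64);     // cc#10: pan, centered
synth.noteOn(0, 60, 100);  // middle C, velocity 100
synth.noteOff(0, 60);
synth.allSoundOff(0);      // silence everything left on channel 0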

Let me know if you think something is missing.

About the GUI,
webaudio-tinysynth already has a JavaScript library version with no GUI. Please check webaudio-tinysynth.js and jstest.html. It currently has a little extra code, but I will remove that.

Anyway, please feel free to fork and modify this project.

As for Japanese, either language is fine with me as long as my English is not too hard to read. Thank you.

Ah, I see what you mean. Yes, it would be easier for non-MIDI people to have APIs like setPan(), if that's possible.

Regarding UI stuff, I saw that file but I was referring to the DOM listeners for mousedown, drag/drop and so on. But if you're planning to move those out of the standalone file that'd be cool!

Now updated. Friendly-named APIs have been added, and the GUI-related code has been removed from the JS version.

Hey, looks very cool! I'm trying to feel out the API. If I want to play a given note at a given time with, say, a given attack and a given duration, is this the right way to do it?

synth.program[42].p[0].a = 0.5;   // set the attack time of oscillator 0 in program 42
synth.setProgram(0, 42);          // assign program 42 to channel 0
var t = synth.getAudioContext().currentTime;
synth.noteOn(0, 72, 64, t + 0.1); // note 72 (an octave above middle C), velocity 64, starting 0.1 s from now
synth.noteOff(0, 72, t + 0.6);    // release it 0.5 s later

And if I want to play a chord, I just call noteOn on multiple notes - I don't need to use different channels, right?

Great! You have already dug deep into the undocumented internals :)
I had not thought about dynamic modification of the timbre parameters.

Yes, it is possible.
However, please note the following points:

  • The timbre parameters may still change, or new ones may be added.

  • Since FM synthesis is used, in some cases it may be difficult to create the intended timbre by changing the parameters directly.

For example, program[42].p[0].a changes the final attack time, but in timbres where the oscillators are connected in parallel (e.g. program[16]), both p[0].a and p[1].a must be controlled to change the final attack time.
FM synthesis can produce a wide variety of sounds with few resources, but creating a specific sound as intended is not intuitive. If you want to change tone parameters in real time, a subtractive synthesis approach with the classic VCO - VCF - VCA configuration may be more suitable.

About playing a chord:
Yes, just call noteOn() multiple times. Each channel is a polyphonic synth with its own timbre.
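
For example, using the same calls as in your snippet:

// C major chord on channel 0, all notes starting at the same time.
var t = synth.getAudioContext().currentTime;
synth.noteOn(0, 60, 64, t);   // C (middle C)
synth.noteOn(0, 64, 64, t);   // E
synth.noteOn(0, 67, 64, t);   // G
synth.noteOff(0, 60, t + 1);
synth.noteOff(0, 64, t + 1);
synth.noteOff(0, 67, t + 1);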

Ah, thanks for the further info. I understand what you mean about parallel waveforms and possibly needing to change the attack for both. I guess that to do this robustly I'd need to change the value for each oscillator whose g parameter is 0, then? (That said, dynamically changing the timbre of a note would be over my head - I was just testing how I can control the AHDSR envelope for each note being played, so those are probably the only parameters I'd be changing.)

Thanks for the great changes!

Yes, it is needed for each of the g=0 OSCs.
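Roughly something like this (just a sketch, assuming program[n].p is a plain array as in your example; the parameter layout may still change):

// Set the attack on every oscillator that feeds the final output (g = 0).
var prog = synth.program[42];
for (var i = 0; i < prog.p.length; i++) {
  if (prog.p[i].g === 0)
    prog.p[i].a = 0.5;   // same attack value as in your example
}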
Thank you, have fun :)