emer/axon

deep: eliminate ctxt prjn and just build-in 1-to-1

Closed this issue · 1 comment

This definitely works best, and asserting it directly makes the code simpler and more obvious; it also eliminates the special learning rule for context inputs, which is now moot.

This turns out not to be possible, because lateral connections within the context layer itself are critical.
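
To make the connectivity point concrete, here is a minimal, hypothetical Go sketch (not the emer/axon API; all type, function, and layer names such as `Prjn`, `connectCT`, `V1`, `V1CT` are made up for illustration). It shows the two kinds of context wiring under discussion: a 1-to-1 context projection from the superficial layer, which could plausibly be built in, plus a full lateral context projection within the CT layer itself, which is the part a hard-wired 1-to-1 cannot cover.

```go
// Hypothetical sketch, not the emer/axon API.
package main

import "fmt"

// Prjn is a minimal stand-in for a projection between layers.
type Prjn struct {
	Send, Recv string // sending and receiving layer names
	Pattern    string // "OneToOne" or "Full"
	Ctxt       bool   // uses the context (previous-trial) input rule
}

// connectCT wires a super layer to its CT (context) layer.
func connectCT(super, ct string) []Prjn {
	return []Prjn{
		// 1-to-1 context input from the superficial layer: the part the
		// issue proposes building in directly.
		{Send: super, Recv: ct, Pattern: "OneToOne", Ctxt: true},
		// Full lateral context connectivity within the CT layer: the part
		// that makes a built-in 1-to-1 alone insufficient.
		{Send: ct, Recv: ct, Pattern: "Full", Ctxt: true},
	}
}

func main() {
	for _, p := range connectCT("V1", "V1CT") {
		fmt.Printf("%s -> %s (%s, ctxt=%v)\n", p.Send, p.Recv, p.Pattern, p.Ctxt)
	}
}
```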

Also, this is one place where Act-level "display only" variables are used in learning -- I just changed ActPrv to be set from AvgM instead of ActP, and that significantly improved deep_fsa learning. AvgM contains much more information about the integrated state from the last trial.
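
A minimal sketch of that change, assuming a neuron struct with fields along the lines of ActP, AvgM, and ActPrv (the field and function names here are illustrative, not the actual emer/axon code):

```go
// Hypothetical sketch of the ActPrv change, not the actual emer/axon code.
package main

import "fmt"

// Neuron holds the activation variables relevant to this change.
type Neuron struct {
	ActP   float32 // plus-phase activation ("display only" at the Act level)
	AvgM   float32 // time-integrated average of minus-phase activation
	ActPrv float32 // previous-trial activation used as context input
}

// recordPrv captures the previous-trial value at the end of a trial.
// Old behavior: nrn.ActPrv = nrn.ActP
// New behavior: nrn.ActPrv = nrn.AvgM, which carries more integrated
// information about the state from the last trial.
func recordPrv(nrn *Neuron) {
	nrn.ActPrv = nrn.AvgM
}

func main() {
	nrn := &Neuron{ActP: 0.8, AvgM: 0.55}
	recordPrv(nrn)
	fmt.Printf("ActPrv = %v (from AvgM, not ActP = %v)\n", nrn.ActPrv, nrn.ActP)
}
```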