berkeleybop/artificial-intelligence-ontology

relate layers to networks


Relation to use: `part of` (after deduplication of the layers list)

Not all of these claimed layers are defined:

  • Noisy Input
  • Hidden
  • Matched Output-Input
  • Input
  • Kernel
  • Convolutional/Pool
  • Probabilistic Hidden
  • Output
  • Backfed Input
  • Memory Cell
  • Weight
  • BN
  • ReLU
  • Addition
  • Spiking Hidden
  • Policy
  • Recurrent

I will add these classes

OK, some of these already existed as "x Layer", but here are all of the classes, whether they are new or not:

  • Noisy Input: AIO:NoisyInput
  • Hidden: AIO:HiddenLayer
  • Matched Output-Input: AIO:MatchedInputOutputLayer
  • Input: AIO:InputLayer (there is also AIO:InputLayerLayer - not sure that needs to exist)
  • Kernel: AIO:KernelLayer
  • Convolutional/Pool: AIO:ConvolutionalLayer (also AIO:PoolingLayer)
  • Probabilistic Hidden: AIO:ProbabilisticHiddenLayer
  • Output: AIO:OutputLayer
  • Backfed Input: AIO:BackfedInputLayer
  • Memory Cell: AIO:MemoryCellLayer
  • Weight: AIO:WeightedLayer
  • BN: AIO:BatchNormalizationLayer
  • ReLU: AIO:ReLULayer
  • Addition: AIO:AdditionLayer (the class AIO:AddLayer already exists and may be identical, though)
  • Spiking Hidden: AIO:SpikingHiddenLayer
  • Policy: AIO:PolicyLayer
  • Recurrent: AIO:RecurrentLayer
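For the classes that are genuinely new, the declarations could look something like this (a sketch in Turtle; the exact namespace IRI, label, and the parent class `aio:Layer` are assumptions, not confirmed against the ontology):

```turtle
@prefix aio:  <https://w3id.org/aio/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical declaration for one of the new classes;
# the parent class aio:Layer is an assumption.
aio:MemoryCellLayer a owl:Class ;
    rdfs:subClassOf aio:Layer ;
    rdfs:label "Memory Cell Layer" .
```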

In this pass, the axioms will just be "Network N has part some Layer L". These assertions won't be ordered in any way, and each layer type will only be listed once. We can get more sophisticated if necessary.
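Concretely, one such axiom might look like the following in Turtle (the specific network and layer classes are just illustrative, and I'm assuming `obo:BFO_0000051`, the BFO 'has part' relation, is the `part of` inverse being used here):

```turtle
@prefix aio:  <https://w3id.org/aio/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix obo:  <http://purl.obolibrary.org/obo/> .

# "Network N has part some Layer L": an LSTM network has
# some memory cell layer (BFO_0000051 = 'has part').
aio:LSTM rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty obo:BFO_0000051 ;
    owl:someValuesFrom aio:MemoryCellLayer
] .
```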

The original "Layers" comment is retained, with its ordering and duplication, but with the prefix "Layers: " to indicate the nature of the comment.

We could also mint our own `layers_list` annotation property, as opposed to using rdfs:comment.
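For comparison, the two annotation options might look like this (the `aio:layers_list` property IRI and the layer sequence are hypothetical):

```turtle
@prefix aio:  <https://w3id.org/aio/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Option 1: prefixed rdfs:comment, preserving order and duplicates
aio:LSTM rdfs:comment "Layers: Input, Memory Cell, Memory Cell, Output" .

# Option 2: a dedicated annotation property (hypothetical IRI)
aio:layers_list a owl:AnnotationProperty .
aio:LSTM aio:layers_list "Input, Memory Cell, Memory Cell, Output" .
```

Option 2 keeps rdfs:comment free for ordinary documentation, at the cost of tools not knowing the new property.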

Ideally we would be able to retain the order in some way, but the individual assertions plus the comment will work for now.
In some cases the order is self-explanatory (e.g., input layers come before output layers). In other cases the general order matters but isn't really strict: the architecture just needs to include a certain type of layer somewhere, like how an LSTM needs to include memory layers somewhere, but there may be other layers in there too if it isn't a traditional LSTM.
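If strict ordering ever becomes necessary, one option (purely a sketch; the property IRI and layer sequence are illustrative) is an RDF collection, which preserves both order and duplicates:

```turtle
@prefix aio: <https://w3id.org/aio/> .

# Hypothetical ordered representation: an rdf:List of layer
# classes, attached via an invented annotation property.
aio:LSTM aio:hasLayerSequence
    ( aio:InputLayer aio:MemoryCellLayer aio:MemoryCellLayer aio:OutputLayer ) .
```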