How to cite
Adam-Vandervorst opened this issue · 1 comment
What's the best way to cite the Atomspace as a (hyper)graph processing system used in AI?
Hi Adam. Great question, one that has not been asked before.
Two or three answers.
- If the goal is to say "the AtomSpace exists", then the citation would be the URL of the git repo directly. I have read papers that cite in this way, but I don't recall the format used. Perhaps this:
  Surname, Given, et al. "Tempus Foo Git" (2024) https://github.com/foo
  which fits the author-title-date citation style. Sometimes I've seen just the bare URL. The authors would be Ben Goertzel & myself, with et al. encompassing all other contributors. (A rough BibTeX sketch of all three options follows this list.)
- Here's a paper that explains how the AtomSpace actually works:
  Linas Vepstas, "Graphs, Metagraphs, RAM, CPU" (2023). OpenCog Technical Report. https://github.com/opencog/atomspace/raw/master/opencog/sheaf/docs/ram-cpu.pdf
  It's unpublished; I've made several attempts, but I can't find an appropriate venue (i.e., a journal that would be interested in this topic).
- A third possibility is to not cite the AtomSpace directly, but instead go for the sheaf-theoretic approach that I keep blabbering about. That is an ongoing, evolving concept. At least it has been published; the most recent incarnation is in AGI 2022, cited as
  Linas Vepstas, "Purely Symbolic Induction of Structure" (2023). International Conference on Artificial General Intelligence, AGI 2022, pp. 134–144. doi: 10.1007/978-3-031-19907-3_13
  The URL of the preprint is https://github.com/opencog/learn/raw/master/learn-lang-diary/agi-2022/grammar-induction.pdf
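
If you want these in BibTeX form, here is a minimal sketch covering all three options. The entry keys, the choice of fields, and the repo year are my own guesses rather than any official citation format, so adjust as needed:

```bibtex
% Hedged sketch only: entry keys, field choices, and the repo year are
% assumptions, not an official citation format.

% Option 1: cite the git repo directly (author-title-date style).
@misc{atomspace_repo,
  author       = {Goertzel, Ben and Vepstas, Linas and others},
  title        = {The OpenCog AtomSpace},
  year         = {2024},
  howpublished = {\url{https://github.com/opencog/atomspace}}
}

% Option 2: the (unpublished) technical report on how the AtomSpace works.
@misc{vepstas_ram_cpu,
  author       = {Vepstas, Linas},
  title        = {Graphs, Metagraphs, RAM, CPU},
  year         = {2023},
  note         = {OpenCog Technical Report},
  howpublished = {\url{https://github.com/opencog/atomspace/raw/master/opencog/sheaf/docs/ram-cpu.pdf}}
}

% Option 3: the published AGI 2022 paper on the sheaf-theoretic approach.
@inproceedings{vepstas_agi2022,
  author    = {Vepstas, Linas},
  title     = {Purely Symbolic Induction of Structure},
  booktitle = {Artificial General Intelligence (AGI 2022)},
  year      = {2023},
  pages     = {134--144},
  doi       = {10.1007/978-3-031-19907-3_13}
}
```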
Off-topic, but you might be interested in this: I've started applying the link-grammar-sheaf ideas to create sensory-motor action-perception APIs, and agents that can perceive and act through them. It's... well, it's an interesting idea. Who knows if it will pan out. Basically, it's work on (self-)assembling organisms, where the assembly process uses Link Grammar to couple to the external environment, and also to define the structure of the agent itself. FWIW, there's probably some way of doing this using DL/NN as well, but I've not been thinking in that direction. There's no training corpus for DL/NN: no corpus that says "here's an eyeball and here's a robot arm and here's how to connect them." Unless you download a bunch of electronics EDA netlists and use those to train the DL/NN. But then you'd get an electronics LLM, instead of an action-perception sensorimotor agent LLM.