ARC Core - Adaptive Recursive Consciousness Engine

ARC Core is a framework for building continual learning AI systems with biological learning mechanisms, enabling true adaptive behavior and knowledge retention.

Features

  • Multi-Architecture Support: Works with various model types (GPT, LLaMA, DeepSeek, etc.) with automatic configuration
  • Biological Learning: Implements contextual gating, cognitive inhibition, and sleep-like consolidation
  • Continual Learning: Real-time learning with LoRA adapters without catastrophic forgetting
  • Automatic Model Configuration: Smart detection of model architecture and optimal LoRA settings
  • Flexible Training: Customize learning parameters and target modules for different architectures
  • Reasoning Engine: Graph-based reasoning and pattern recognition
  • Teaching Pack System: Modular training with specialized learning modules
  • CLI Interface: Simple command-line tools for model management

Installation

Basic installation:

pip install metisos-arc-core

For GPU support (NVIDIA):

pip install metisos-arc-core[gpu]

For Apple Silicon:

pip install metisos-arc-core[apple]

Quick Start

Using the CLI

# Initialize a new model (default: TinyDolphin-2.8-1.1b)
arc init

# Start an interactive chat
arc chat

# Check system status
arc status

Python API Example

Basic Usage

from arc_core import LearningARCConsciousness

# Initialize with default model (GPT-2)
model = LearningARCConsciousness()

# Process interaction
response = model.process_user_interaction("Hello, how can you help me?")
print(f"ARC: {response['thought']}")

Using Different Models

ARC Core supports various model architectures with automatic configuration:

# LLaMA 2 (7B parameters)
llama_model = LearningARCConsciousness(
    model_name="meta-llama/Llama-2-7b-hf",
    device_map="auto"  # Automatically handles device placement
)

# DeepSeek model
deepseek_model = LearningARCConsciousness(
    model_name="deepseek-ai/deepseek-llm-7b",
    device_map="auto"
)

# Custom model with specific LoRA configuration
custom_model = LearningARCConsciousness(
    model_name="cognitivecomputations/TinyDolphin-2.8-1.1b",
    lora_config={
        'r': 8,
        'lora_alpha': 16,
        'target_modules': ['q_proj', 'v_proj']  # Optional: specify modules
    }
)

Saving and Loading

# Save the complete learning state
model.save_learning_state('my_model_state')

# Load the state later
model.load_learning_state('my_model_state')

CLI Commands

arc init       Initialize a new ARC model configuration
arc chat       Start an interactive chat session
arc pack       Manage teaching packs
arc teach      Train the model using a teaching pack
arc test       Test the model using a teaching pack
arc save       Save the current model state
arc load       Load a saved model state
arc status     Show current model status and configuration
arc stats      Show learning statistics
arc check      Check system and package health

Teaching Packs

ARC Core supports teaching packs for specialized training:

# List available teaching packs
arc pack list

# Install a teaching pack
arc pack install sentiment-basic

# Train using a teaching pack
arc teach sentiment-basic

Available Packs

  • sentiment-basic: Basic sentiment analysis training
  • dialogue-basic: Basic conversation patterns
  • science-facts: General science knowledge

What is ARC Core?

ARC Core is a sophisticated AI learning system that implements biological learning mechanisms in language models, enabling true continual learning and adaptive consciousness.

Key Features:

  • Biological Learning Mechanisms: Contextual gating, cognitive inhibition, and sleep-like consolidation
  • Hierarchical Memory Systems: Working, episodic, and semantic memory with temporal context
  • Continual Learning: Real weight updates without catastrophic forgetting
  • Safety-First Design: Multi-layered cognitive inhibition and metacognitive monitoring
  • Teaching Pack System: Modular training with specialized learning modules
  • CLI Interface: Simple command-line tools for model management
  • Hugging Face Integration: Seamless model loading and saving


CLI Workflow

1. Initialize ARC with a base model

arc init --base-model cognitivecomputations/TinyDolphin-2.8-1.1b

2. Teach the model using a training pack

arc teach sentiment-basic

3. Test the model's performance

arc test sentiment-basic

4. Interactive chat with your enhanced model

arc chat

5. Save your trained model

arc save --out ./my-arc-model

Advanced Python API Usage

from arc_core import LearningARCConsciousness

# Initialize with custom configuration
config = {
    "model_name": "cognitivecomputations/TinyDolphin-2.8-1.1b",
    "device": "cuda",  # or "cpu", "mps" for Apple Silicon
    "learning_rate": 5e-5,
    "max_memory_items": 1000
}

# Create model instance
model = LearningARCConsciousness(config)

# Process interactions and learn in real-time
response = model.process_user_interaction("I'm feeling great today!")
print(f"ARC: {response['thought']}")

# View learning statistics
stats = model.get_learning_statistics()
print(f"Total interactions: {stats['total_interactions']}")

# Save the complete learning state
model.save_learning_state("my_saved_state")

# Later, load the state to continue learning
model.load_learning_state("my_saved_state")

Architecture

ARC Core implements several biologically-inspired learning mechanisms:

Memory Systems

  • Working Memory: Short-term context and active processing
  • Episodic Memory: Specific interaction memories with temporal context
  • Semantic Memory: Extracted concepts and knowledge patterns
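A rough sketch of this tiering (hypothetical structures for illustration; ARC Core's actual classes differ) uses a bounded buffer for working memory, a timestamped log for episodic memory, and a concept store for semantic memory:

```python
from collections import deque
from dataclasses import dataclass, field
import time

@dataclass
class ToyMemorySystems:
    """Illustrative three-tier memory: a bounded working buffer,
    a timestamped episodic log, and a concept->count semantic store."""
    working: deque = field(default_factory=lambda: deque(maxlen=10))
    episodic: list = field(default_factory=list)
    semantic: dict = field(default_factory=dict)

    def observe(self, text):
        self.working.append(text)                  # short-term context
        self.episodic.append((time.time(), text))  # temporal record
        for word in text.lower().split():
            self.semantic[word] = self.semantic.get(word, 0) + 1

mem = ToyMemorySystems()
mem.observe("hello world")
mem.observe("hello again")
```

Sleep-like consolidation would then periodically migrate episodic entries into the semantic store.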

Safety Mechanisms

  • Cognitive Inhibition: Filters harmful or inappropriate responses
  • Contextual Gating: Controls memory encoding and retrieval
  • Metacognitive Monitoring: Self-assessment of response quality
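Conceptually, cognitive inhibition sits between response generation and output. A minimal sketch (an assumption for illustration, not the shipped mechanism) scores each candidate against blocked patterns and suppresses it below a threshold:

```python
def inhibit(candidate, blocked_terms, threshold=0.5):
    """Return (allowed, score). Each blocked term found in the
    candidate lowers the score; responses below the threshold
    are suppressed rather than emitted."""
    score = 1.0
    for term in blocked_terms:
        if term in candidate.lower():
            score -= 0.6
    return score >= threshold, score

allowed, score = inhibit("Here is some gardening advice", {"wire transfer"})
blocked, low_score = inhibit("please wire transfer the funds", {"wire transfer"})
```

A real system would combine such pattern gates with learned classifiers and the metacognitive monitor above.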

Learning Systems

  • LoRA Adapters: Efficient parameter updates without full retraining
  • Elastic Weight Consolidation: Prevents catastrophic forgetting
  • Continual Learning: Accumulates knowledge across training sessions
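The EWC term can be illustrated numerically: each parameter's drift from its post-task value is penalized in proportion to its estimated importance (Fisher information). This toy calculation is independent of ARC Core's internal API:

```python
def ewc_penalty(theta, theta_old, fisher, lam=0.4):
    """Elastic Weight Consolidation penalty:
    lam/2 * sum_i F_i * (theta_i - theta_old_i)^2.
    Parameters with high Fisher weight resist change."""
    return 0.5 * lam * sum(
        f * (t - t0) ** 2 for f, t, t0 in zip(fisher, theta, theta_old)
    )

theta_old = [1.0, -0.5, 2.0]   # weights after the previous task
theta     = [1.2, -0.5, 1.0]   # weights during the new task
fisher    = [10.0, 0.1, 0.01]  # estimated per-parameter importance

penalty = ewc_penalty(theta, theta_old, fisher, lam=0.4)
```

The high-importance first parameter dominates the penalty, so training on new data leaves it nearly frozen while unimportant parameters stay free to move.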

Teaching Packs

Teaching packs are modular training datasets that enable targeted learning:


Creating Custom Packs

Create a directory with the following structure:

my-pack/
├── pack.yml          # Metadata and configuration
├── training.jsonl    # Training data
└── test_suite.jsonl  # Evaluation data

Example pack.yml:

name: my-pack
version: 1.0.0
description: Custom training pack
author: Your Name

learning_objectives:
  - Objective 1
  - Objective 2

datasets:
  training: training.jsonl
  test_suite: test_suite.jsonl

Example training data (training.jsonl):

{"input": "User message", "output": "Model response"}
{"input": "Another message", "output": "Another response"}
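Since each line must be a self-contained JSON object with `input` and `output` keys, a quick validation pass (a generic sketch, not part of the `arc` CLI) catches malformed pack data before training:

```python
import json

def validate_training_lines(lines):
    """Return a list of (line_number, error) for lines that are not
    valid {"input": ..., "output": ...} JSON objects."""
    errors = []
    for n, line in enumerate(lines, start=1):
        if not line.strip():
            continue  # tolerate blank lines
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            errors.append((n, f"invalid JSON: {exc}"))
            continue
        missing = {"input", "output"} - record.keys()
        if missing:
            errors.append((n, f"missing keys: {sorted(missing)}"))
    return errors

sample = [
    '{"input": "User message", "output": "Model response"}',
    '{"input": "Another message"}',  # missing "output"
]
problems = validate_training_lines(sample)
```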


CLI Examples

# Initialize with specific settings
arc init --base-model cognitivecomputations/TinyDolphin-2.8-1.1b --lora-rank 32 --device cuda

# Train with custom data
arc teach my-pack --data-path ./custom-data.jsonl --max-steps 200

# Chat with learning enabled
arc chat --max-turns 20 --learning

# Save in specific format
arc save --out ./models/my-model --format safetensors

Configuration

ARC Core uses a flexible configuration system:

from arc_core import ARCConfig

config = ARCConfig()

# Model settings
config.base_model = "cognitivecomputations/TinyDolphin-2.8-1.1b"
config.context_length = 1024
config.device = "auto"

# LoRA settings
config.lora.r = 16
config.lora.alpha = 32
config.lora.dropout = 0.1

# Training settings
config.training.learning_rate = 5e-4
config.training.max_steps = 100
config.training.ewc_lambda = 0.4

# Memory settings
config.memory.working_memory_size = 10
config.memory.episodic_memory_size = 1000

# Safety settings
config.safety.enable_cognitive_inhibition = True
config.safety.enable_contextual_gating = True
config.safety.enable_metacognitive_monitoring = True

# Save configuration
config.save("my-config.json")

# Load configuration
config = ARCConfig.load("my-config.json")

Examples

Example 1: Customer Service Bot

from arc_core import ARCTrainer, ARCConfig

# Setup for customer service
config = ARCConfig()
config.safety.politeness_threshold = 0.8
config.memory.episodic_memory_size = 2000  # Remember more interactions

trainer = ARCTrainer(config)
trainer.initialize_model("cognitivecomputations/TinyDolphin-2.8-1.1b")

# Train on customer service pack (custom)
trainer.train_on_pack("customer-service-basic")

# Use in production
response = trainer.generate_response("I'm having trouble with my order")

Example 2: Educational Assistant

# Setup for education
config = ARCConfig()
config.safety.enable_metacognitive_monitoring = True  # Self-correction
config.memory.semantic_memory_size = 5000  # Large knowledge base

trainer = ARCTrainer(config)
trainer.initialize_model("cognitivecomputations/TinyDolphin-2.8-1.1b")

# Sequential learning
trainer.train_on_pack("math-basics")
trainer.train_on_pack("science-basics")
trainer.train_on_pack("history-basics")

# The model retains knowledge from all domains
math_response = trainer.generate_response("What is calculus?")
science_response = trainer.generate_response("Explain photosynthesis")

Research and Development

ARC Core is designed for researchers and developers working on:

  • Continual Learning: Avoiding catastrophic forgetting in neural networks
  • Cognitive Architectures: Biologically-inspired AI systems
  • Memory Systems: Hierarchical and associative memory models
  • AI Safety: Cognitive safety mechanisms and alignment
  • Human-AI Interaction: Natural and safe conversational AI

Extending ARC Core

from arc_core.memory import MemorySystem
from arc_core.safety import SafetySystem

# Custom memory implementation
class CustomMemorySystem(MemorySystem):
    def consolidate_memories(self):
        # Custom consolidation logic
        pass

# Custom safety mechanism
class CustomSafetySystem(SafetySystem):
    def evaluate_response(self, response):
        # Custom safety evaluation: compute and return a score
        safety_score = 1.0  # placeholder scoring logic
        return safety_score

Performance

ARC Core is designed to be efficient:

  • Memory Usage: ~2-4GB RAM for medium models (with optimizations)
  • Training Speed: ~1-5 minutes per teaching pack (100 samples)
  • Inference Speed: ~100-500ms per response (GPU)
  • Model Size: Base model + ~10-50MB LoRA adapters

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

git clone https://github.com/metisos/arc_coreV1.git
cd arc_coreV1
pip install -e .[dev]
pre-commit install

Running Tests

pytest tests/

License

Apache License 2.0 - see LICENSE file for details.

Acknowledgments

  • Inspired by cognitive science research on human learning and memory
  • Built on the excellent work of Hugging Face Transformers and PEFT
  • Special thanks to the continual learning research community


ARC Core - Enabling truly adaptive and conscious-like learning in AI systems