openai/automated-interpretability

Requires Python >=3.9 instead of the 3.7 specified in setup.py

tuomaso opened this issue · 3 comments

Running demos/generate_and_score_explanation.ipynb with Python 3.8 raises a TypeError, because the package uses PEP 585 built-in generic type hints such as dict[str, slice], which are only valid at runtime on Python 3.9+:

TypeError Traceback (most recent call last)
Input In [2], in <cell line: 5>()
1 import os
3 os.environ["OPENAI_API_KEY"] = "put-key-here"
----> 5 from neuron_explainer.activations.activation_records import calculate_max_activation
6 from neuron_explainer.activations.activations import ActivationRecordSliceParams, load_neuron
7 from neuron_explainer.explanations.calibrated_simulator import UncalibratedNeuronSimulator

File ~/interpretability/automated-interpretability/neuron-explainer/neuron_explainer/activations/activation_records.py:6, in <module>
3 import math
4 from typing import Optional, Sequence
----> 6 from neuron_explainer.activations.activations import ActivationRecord
8 UNKNOWN_ACTIVATION_STRING = "unknown"
11 def relu(x: float) -> float:

File ~/interpretability/automated-interpretability/neuron-explainer/neuron_explainer/activations/activations.py:36, in <module>
31 neuron_index: int
32 """The neuron's index within in its layer. Indices start from 0 in each layer."""
35 def _check_slices(
---> 36 slices_by_split: dict[str, slice],
37 expected_num_values: int,
38 ) -> None:
39 """Assert that the slices are disjoint and fully cover the intended range."""
40 indices = set()

TypeError: 'type' object is not subscriptable
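
The underlying cause: built-in generics such as dict[str, slice] (PEP 585) are only subscriptable at runtime on Python 3.9+, so evaluating that annotation on 3.8 raises the TypeError above. Below is a minimal sketch of ways the code could be reconciled with older interpreters, reusing the _check_slices signature from the traceback; this is illustrative only, and the fix actually merged may simply bump the declared Python requirement instead.

# Option 1: postpone annotation evaluation (PEP 563), so dict[str, slice]
# is never evaluated at runtime and the module imports on Python 3.7+.
from __future__ import annotations


def _check_slices(
    slices_by_split: dict[str, slice],
    expected_num_values: int,
) -> None:
    """Assert that the slices are disjoint and fully cover the intended range."""
    ...


# Option 2: use typing-module generics, which are subscriptable on 3.7+.
from typing import Dict


def _check_slices_typed(  # hypothetical rename, for illustration only
    slices_by_split: Dict[str, slice],
    expected_num_values: int,
) -> None:
    """Same signature, spelled with typing.Dict for pre-3.9 compatibility."""
    ...


# Option 3: keep the code as-is and raise the declared requirement, so pip
# refuses to install on unsupported interpreters (hypothetical setup.py excerpt):
# setup(
#     ...,
#     python_requires=">=3.9",
# )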

pull request welcome :)

I made a pull request.

thanks, merged!