# ORION

## Pre-requisites

### Matlab

Download the Tensor Toolbox and the Poblano Toolbox and add them to your Matlab path.
### Python

- Scipy
- Sklearn
- Numpy
- Matplotlib
- Tensorly

If you are using Anaconda for your Python packages, you can use the following commands:

```shell
# Create a new environment so that you don't mess up your existing environment
conda create --name <your new env name>
conda activate <your new env name>
conda install scikit-learn
conda install -c tensorly tensorly
conda install -c conda-forge matplotlib
```
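After installing, a quick sanity check (this snippet is ours, not part of the repo) confirms that every required package is importable before you run anything:

```python
import importlib.util

# Packages the repo's Python code depends on ("sklearn" is scikit-learn's import name)
required = ["scipy", "sklearn", "numpy", "matplotlib", "tensorly"]
missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages are installed.")
```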
## Datasets
## How to Run

1. Git clone or download this code.
2. Run `init.m` in Matlab to create the required folders: `dataset`, `results`, and `tensorDataset`.
3. Download the dataset(s) from the above provided links into the `dataset` folder.
4. In `runDataset.m`, set the variables below according to the dataset you are using. For example, if you are running it for the IndianPines dataset:

   ```matlab
   datasetFname = 'dataset/Indian_pines_corrected.mat'
   datasetGt = 'dataset/Indian_pines_gt.mat'
   outFile = 'IndianPines'
   % IMPORTANT: Change X and Y according to the variables stored in the .mat (dataset) files
   X = data.indian_pines_corrected;
   Y = gt.indian_pines_gt;
   testSize = 0.2
   % Number of datasets to be created
   numData = 10
   % Tensor decomposition rank
   ranks = [1000, 2000]
   ```
   After running `runDataset.m` in Matlab, it will create .mat files in `tensorDataset/8020/IndianPines`, based on the `outFile` and `testSize` variables (in the above example `testSize` was `0.2`, hence the 80-20 split).
5. Now, to run the ORION method, navigate to the python folder and set the following variable in the `orion.py` file:

   ```python
   dataPath = '../tensorDataset/8020/IndianPines/'
   ```

   After running `orion.py`, it will generate results (figures and .mat files) in `results/orion/8020/IndianPines/`. The path of the results depends on the dataset being used and the train-test split (in the above example `testSize` was `0.2`, so an 80-20 split).
6. We have also provided the code for the baselines used in our paper. To run the Linear, Polynomial, and RBF SVMs, set the following variables in `baselines.py`; to run the Multi-Layer Perceptron, set the same variables in the `mlpBaseline.py` file:

   ```python
   dataX = loadmat('../dataset/Indian_pines_corrected.mat')
   dataY = loadmat('../dataset/Indian_pines_gt.mat')
   Xog = dataX['indian_pines_corrected']  # 3D object
   Y2d = dataY['indian_pines_gt']  # 2D object
   folderName = 'IndianPines'  # Make sure this is correct. Result folders will be created based on this.
   # number of runs
   runs = 10
   testSize = 0.2
   ```
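For orientation, here is a minimal sketch of turning a 3D cube like `Xog` and a 2D label map like `Y2d` into a pixel-level 80-20 split. This is not the repo's code: the random stand-in arrays, the reshaping, and the 0-as-unlabeled convention are our assumptions about how such variables are typically consumed.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the hyperspectral cube and its ground-truth map
rng = np.random.default_rng(0)
Xog = rng.random((145, 145, 200))            # 3D cube: height x width x spectral bands
Y2d = rng.integers(0, 17, size=(145, 145))   # 2D label map (0 assumed to mean unlabeled)

X = Xog.reshape(-1, Xog.shape[-1])           # flatten to (pixels, bands)
y = Y2d.reshape(-1)

mask = y > 0                                 # keep only labeled pixels
X, y = X[mask], y[mask]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print(X_train.shape, X_test.shape)
```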
7. (Optional) Our code uses `GridSearchCV` from sklearn for hyperparameter tuning. To make it run faster (in parallel), you can set the `njobs` variable in `trainModelSVM` and `trainNN` in `models.py` according to your system configuration. For details, refer to this link.
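As a generic illustration of how `GridSearchCV` parallelizes via sklearn's `n_jobs` parameter (the parameter grid and dataset below are ours, not the ones in `models.py`):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

# n_jobs=-1 runs the cross-validation folds and grid points on all cores;
# set it to a smaller number on shared machines
search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```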
The above instructions for setting the variables are for the IndianPines dataset; to use any other dataset, follow instructions 3-6 and set the variables accordingly.