EpistasisLab/tpot2
A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
Jupyter Notebook · LGPL-3.0
Issues
Remove jupyter as a dependency (+maybe others)
#146 opened by chimaerase - 1
Allow individual modules to be specified by string in config dicts, for example ["LogisticRegression", "KNN"].
#60 opened by perib - 0
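For illustration, a minimal sketch of how string entries in a config list could be resolved to estimator classes; the lookup table and the resolve_config_strings helper are hypothetical, not TPOT2's actual mechanism.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical lookup table mapping short strings to estimator classes.
STRING_TO_CLASS = {
    "LogisticRegression": LogisticRegression,
    "KNN": KNeighborsClassifier,
}

def resolve_config_strings(config):
    """Replace string entries with the corresponding estimator classes (sketch)."""
    return [STRING_TO_CLASS[entry] if isinstance(entry, str) else entry
            for entry in config]

# e.g. resolve_config_strings(["LogisticRegression", "KNN"])
```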
#TODO, mutate all steps or just one?
#154 opened by jgh9094 - 2
Forgot to instantiate the rng.
#151 opened by jgh9094 - 2
'np.random.normal(0, 1)' should be 'rng.normal(0, 1)'
#152 opened by jgh9094 - 0
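Both rng issues above (#151, #152) amount to using an instantiated numpy Generator instead of the global np.random state; a minimal sketch of the intended pattern:

```python
import numpy as np

# Create one Generator instance (optionally seeded for reproducibility) and reuse it.
rng = np.random.default_rng(seed=42)

value = rng.normal(0, 1)  # instead of the module-level np.random.normal(0, 1)
```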
Add a destroyer function to properly close dask clients when the evolver or estimator instance is deleted.
#70 opened by perib - 1
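A hedged sketch of the kind of cleanup hook the issue asks for, assuming the evolver or estimator holds a dask.distributed.Client in a _client attribute; the class and attribute names here are hypothetical.

```python
from dask.distributed import Client

class EvolverWithCleanup:
    def __init__(self):
        # Hypothetical: the evolver creates and owns a local dask client.
        self._client = Client(processes=False)

    def close(self):
        """Explicitly shut down the dask client."""
        if self._client is not None:
            self._client.close()
            self._client = None

    def __del__(self):
        # Best-effort cleanup when the instance is deleted or garbage collected.
        self.close()
```

An explicit close() (or a context manager) is generally more reliable than __del__ alone, since finalizers are not guaranteed to run at interpreter shutdown.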
Learning Machine
#117 opened by RADj375 - 0
Fix miscellaneous invalid hyperparameter options in the configuration dictionary, and add n_jobs=1 where possible.
#59 opened by perib - 0
API: improve selector module organization
#87 opened by perib - 0
TPOT1 Feature Parity: cuML configuration dictionaries
#101 opened by perib - 0
Rename survival_percentage to something clearer.
#89 opened by perib - 0
Automatically generate names for lambda/partial functions used as custom objective functions
#88 opened by perib - 0
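One possible approach, sketched below with a hypothetical objective_name helper: unwrap functools.partial objects and fall back to an indexed name for lambdas.

```python
import functools

def objective_name(func, index=0):
    """Best-effort readable name for a custom objective callable (hypothetical helper)."""
    if isinstance(func, functools.partial):
        return f"partial_{objective_name(func.func, index)}"
    name = getattr(func, "__name__", None)
    if name is None or name == "<lambda>":
        return f"objective_{index}"
    return name

# objective_name(lambda est: 0.0, index=2)  -> "objective_2"
# objective_name(functools.partial(len))    -> "partial_len"
```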
Implement Meta Learner
#91 opened by perib - 0
Idea for a two-step preprocessing pipeline
#100 opened by perib - 0
Add support for ordinal categorical columns
#98 opened by perib - 0
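For context, ordinal columns are typically handled with sklearn's OrdinalEncoder inside a ColumnTransformer; a minimal sketch with made-up column names and category order:

```python
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder

# Hypothetical column split: "size" is ordinal, "color" is nominal.
preprocessor = ColumnTransformer(
    transformers=[
        ("ordinal", OrdinalEncoder(categories=[["small", "medium", "large"]]), ["size"]),
        ("nominal", OneHotEncoder(handle_unknown="ignore"), ["color"]),
    ],
    remainder="passthrough",
)
```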
Better integrate Optuna Optimization
#90 opened by perib - 0
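For reference, a minimal example of the plain Optuna API such an integration would build on (tuning a single hyperparameter with cross-validation); this is not TPOT2's integration, just the upstream API:

```python
import optuna
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def objective(trial):
    # Sample a hyperparameter and score the resulting model with CV.
    C = trial.suggest_float("C", 1e-3, 1e3, log=True)
    model = LogisticRegression(C=C, max_iter=1000)
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
```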
implement a callback function/class
#99 opened by perib - 0
Genetic Feature Selection Improvements
#96 opened by perib - 0
optimization of preprocessing pipeline
#95 opened by perib - 0
This TPOT1 issue brought up a good idea about outlier detection: https://github.com/EpistasisLab/tpot/issues/1166
#93 opened by perib - 0
Add option for successive halving based on time or convergence rather than number of generations.
#92 opened by perib - 0
add in unit tests
#80 opened by perib - 0
TPOT1 Feature Parity: Write an export function that takes a graphpipeline and generates a .py file similar to TPOT1
#58 opened by perib - 0
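A hedged sketch of what such an export helper could look like, relying on the estimator repr to reconstruct the pipeline; the function name is hypothetical and a real implementation would also need to emit the imports for the operators used.

```python
def export_pipeline(fitted_pipeline, path="tpot_exported_pipeline.py"):
    """Write a small script that rebuilds the given pipeline (sketch)."""
    source = (
        "# Auto-generated pipeline export (sketch).\n"
        "# NOTE: add the imports for every estimator referenced below.\n"
        f"pipeline = {fitted_pipeline!r}\n"
        "# pipeline.fit(X_train, y_train)\n"
        "# predictions = pipeline.predict(X_test)\n"
    )
    with open(path, "w") as f:
        f.write(source)
```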
TPOT1 Feature Parity: add "sample_weight" and "groups" parameters to the fit function in TPOTEstimator and the graphpipeline
#57 opened by perib - 0
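A sketch of how sample_weight and groups could flow through evaluation and the final fit using plain sklearn utilities; the helper is hypothetical and TPOT2's internal CV handling will differ.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_validate

def fit_with_weights_and_groups(estimator, X, y, sample_weight=None, groups=None):
    """Hypothetical helper: honor groups during CV and sample_weight during fitting."""
    # groups controls how CV folds are formed while evaluating candidate pipelines.
    cv = GroupKFold(n_splits=5) if groups is not None else 5
    cv_results = cross_validate(estimator, X, y, cv=cv, groups=groups)

    # sample_weight is forwarded to the final fit on the full training data
    # (only valid for estimators whose fit accepts it).
    if sample_weight is not None:
        estimator.fit(X, y, sample_weight=sample_weight)
    else:
        estimator.fit(X, y)
    return estimator, float(np.mean(cv_results["test_score"]))
```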
Add successive halving algorithm to steady state
#66 opened by perib - 0
add a debugging tutorial
#64 opened by perib - 0
Avoid duplicate work between the steady state and base evolvers: have a single evolver that can run a generational EA but also generate extra individuals on the fly to make use of idle cores.
#65 opened by perib - 0
Rename baseevolver.optimize to .evolve to avoid confusion with individual.optimize?
#79 opened by perib - 0
Licensing text in source files
#77 opened by perib - 1
Test that TPOTEstimator, TPOTClassifier, and TPOTRegressor (a) share the same params and (b) meet the sklearn BaseEstimator conventions
#81 opened by perib - 0
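sklearn already ships a compliance checker for (b); a sketch of what such tests could look like, where the tpot2 import path and argument-free constructors are assumptions:

```python
from sklearn.utils.estimator_checks import check_estimator

# Assumption: these are importable with no required constructor arguments.
from tpot2 import TPOTEstimator, TPOTClassifier, TPOTRegressor

def test_shared_params():
    # (a) the classifier/regressor wrappers should expose the base estimator's params.
    base_params = set(TPOTEstimator().get_params())
    assert base_params <= set(TPOTClassifier().get_params())
    assert base_params <= set(TPOTRegressor().get_params())

def test_sklearn_conventions():
    # (b) run sklearn's standard get_params/clone/fit compliance checks.
    check_estimator(TPOTClassifier())
```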
shuffle cv folds every generation?
#82 opened by perib - 0
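A minimal sketch of per-generation fold shuffling: reseed the splitter once per generation so every individual in a generation sees the same folds, but the folds change between generations (the helper name is hypothetical).

```python
from sklearn.model_selection import KFold

def cv_for_generation(generation, n_splits=5, base_seed=0):
    """Return a KFold whose shuffle seed changes each generation (sketch)."""
    return KFold(n_splits=n_splits, shuffle=True, random_state=base_seed + generation)

# All individuals in generation g are scored on the same folds,
# but those folds differ from generation g-1.
cv = cv_for_generation(generation=3)
```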
Idea for Successive Halving to avoid repeated re-evaluations of the same pipeline.
#85 opened by perib - 0
edit/consolidate default configuration dictionary
#76 opened by perib - 0
add covariate adjustment
#68 opened by perib - 0
function that generates a learning curve
#63 opened by perib - 0
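sklearn's learning_curve already computes the underlying points; a minimal usage sketch (plotting omitted):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_iris(return_X_y=True)
train_sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)
# Mean scores per training-set size, ready to plot.
print(train_sizes, train_scores.mean(axis=1), val_scores.mean(axis=1))
```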
better plotting for graph pipeline
#62 opened by perib - 0
Memory caching with GraphPipeline may miss some nodes when the ordering of inputs happens to differ between two otherwise equivalent nodes. A potential solution is to make the graphs canonical; there is a C package that can be imported into TPOT that may do this.
#74 opened by perib - 0
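As an illustration of the canonicalization idea, networkx's Weisfeiler-Lehman graph hash yields an input-order-insensitive cache key; note it is a heuristic (distinct graphs can collide) and a stand-in for the C package the issue refers to.

```python
import networkx as nx

def pipeline_cache_key(graph):
    """Order-insensitive hash of a pipeline graph, keyed by node labels (sketch)."""
    labeled = graph.copy()
    # Store each node's name as an attribute so the hash reflects which operators are used.
    nx.set_node_attributes(labeled, {n: str(n) for n in labeled.nodes}, name="label")
    return nx.weisfeiler_lehman_graph_hash(labeled, node_attr="label")

# The same pipeline graph built with its inputs listed in a different order
# produces the same key.
g1 = nx.DiGraph([("scaler", "knn"), ("pca", "knn")])
g2 = nx.DiGraph([("pca", "knn"), ("scaler", "knn")])
assert pipeline_cache_key(g1) == pipeline_cache_key(g2)
```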
FeatureSetSelector module needs to implement a mask to be consistent with sklearn
#71 opened by perib
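sklearn's SelectorMixin provides get_support() and transform() once _get_support_mask is implemented; a minimal sketch of a mask-based selector (the class and attribute names are illustrative, not the actual FeatureSetSelector):

```python
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.feature_selection import SelectorMixin

class MaskedFeatureSetSelector(SelectorMixin, BaseEstimator):
    """Selects a fixed set of column indices, exposing an sklearn-style mask (sketch)."""

    def __init__(self, columns=()):
        self.columns = columns

    def fit(self, X, y=None):
        self.n_features_in_ = X.shape[1]
        return self

    def _get_support_mask(self):
        mask = np.zeros(self.n_features_in_, dtype=bool)
        mask[list(self.columns)] = True
        return mask

# selector = MaskedFeatureSetSelector(columns=[0, 2]).fit(X)
# selector.get_support()  -> boolean mask; selector.transform(X) -> selected columns
```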