mllibs

NLP Assisting Machine Learning Library


Some key points about the library:

  • mllibs is a Machine Learning (ML) library which utilises natural language processing (NLP)
  • Development of such helper modules is motivated by the fact that everyone's understanding of coding & subject matter (ML in this case) may be different
  • Often we see people create functions and classes to simplify the process of code automation (which is good practice)
  • Likewise, NLP-based interpreters follow this trend, except that in this case the only input for activating certain code is natural language
  • Using Python, we can interpret natural language in the form of string type data with natural language interpreters
  • mllibs aims to provide an automated way to do machine learning using natural language

CODE AUTOMATION

TYPES OF AUTOMATION APPROACHES

There are different ways we can automate code execution:

  • The first two (functions and classes) should be familiar; such approaches presume we have coding knowledge
  • If we utilise language instead, we can use natural language processing (NLP) to interpret the request and call the relevant activation functions

Function:

def fib_list(n):
    result = []
    a,b = 0,1
    while a<n:
        result.append(a)
        a,b = b, a + b
    return result

fib_list(5) 

Class:

class fib_list:
    
    def __init__(self,n):
        self.n = n

    def get_list(self):
        result = []
        a,b = 0,1
        while a<self.n:
            result.append(a)
            a,b = b, a + b
        return result

fib = fib_list(5)
fib.get_list()

Natural language:

input = 'calculate the fibonacci sequence for the value of 5'

nlp_interpreter(input) 

All three approaches produce the same output: [0, 1, 1, 2, 3]
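
The nlp_interpreter above is just a placeholder for the idea. As a minimal sketch only (simple keyword and number matching, not the actual mllibs implementation, which trains models on a text corpus), such an interpreter could look like this:

# toy sketch of a natural language interpreter (not the mllibs implementation)
import re

def fibonacci(n):
    result, a, b = [], 0, 1
    while a < n:
        result.append(a)
        a, b = b, a + b
    return result

def nlp_interpreter(request:str):
    # pull any number out of the request to use as the function argument
    numbers = re.findall(r'\d+', request)
    n = int(numbers[0]) if numbers else 0

    # naive keyword based intent matching
    if 'fibonacci' in request.lower():
        return fibonacci(n)
    raise ValueError('request not understood')

nlp_interpreter('calculate the fibonacci sequence for the value of 5')
# [0, 1, 1, 2, 3]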

LIBRARY COMPONENTS

mllibs consists of two parts:

(1) modules associated with the interpreter

  • nlpm - groups together everything required for the interpreter module nlpi
  • nlpi - main interpreter component module (requires nlpm instance)
  • snlpi - single request interpreter module (uses nlpi)
  • mnlpi - multiple request interpreter module (uses nlpi)
  • interface - interactive module (chat type)

(2) custom added modules; for mllibs these modules are associated with machine learning

You can check all the activation functions using session.fl(), as shown in the sample notebooks in the examples folder
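
For example, once a session has been created (see CREATING A COLLECTION below), the available activation functions can be checked with a single call:

# after a session has been created (either approach below works)
session.fl()     # check all available activation functions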


MODULE COMPONENT STRUCTURE

Currently, new modules can be added using a custom class sample and a configuration dictionary configure_sample:

# sample module class structure
class sample(nlpi):
    
    # called in nlpm
    def __init__(self,nlp_config):
        self.name = 'sample'             # unique module name identifier (used in nlpm/nlpi)
        self.nlp_config = nlp_config  # text based info related to module (used in nlpm/nlpi)
        
    # called in nlpi
    def sel(self,args:dict):
        
        self.select = args['pred_task']
        self.args = args
        
        if(self.select == 'function'):
            self.function(self.args)
        
    # activation functions can be written as standard methods ...
        
    def function(self,args:dict):
        pass
        
    # ... or as static methods (define only one method per activation function name)
    @staticmethod
    def function(args:dict):
        pass
    

from collections import OrderedDict

# corpus: example requests for each activation function (used when training the interpreter)
corpus_sample = OrderedDict({"function":['task']})
info_sample = {'function': {'module':'sample',
                            'action':'action',
                            'topic':'topic',
                            'subtopic':'sub topic',
                            'input_format':'input format for data',
                            'output_format':'output format for data',
                            'description':'write description'}}
                         
# configuration dictionary (passed in nlpm)
configure_sample = {'corpus':corpus_sample,'info':info_sample}
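
For illustration only, a minimal concrete module following this template might look as follows; the module name stats_summary, the task name describe_data and the assumption that the active data arrives in args['data'] are hypothetical, not part of mllibs:

# hypothetical module built from the sample template above
from collections import OrderedDict

class stats_summary(nlpi):
    
    # called in nlpm
    def __init__(self,nlp_config):
        self.name = 'stats_summary'     # unique module name identifier
        self.nlp_config = nlp_config    # text based info related to module
        
    # called in nlpi
    def sel(self,args:dict):
        
        self.select = args['pred_task']
        self.args = args
        
        if(self.select == 'describe_data'):
            self.describe_data(self.args)
        
    @staticmethod
    def describe_data(args:dict):
        # assumes the interpreter passes the active dataframe in args['data']
        print(args['data'].describe())

corpus_stats = OrderedDict({'describe_data':['describe the data',
                                             'show summary statistics']})

info_stats = {'describe_data': {'module':'stats_summary',
                                'action':'show statistics',
                                'topic':'exploratory data analysis',
                                'subtopic':'summary statistics',
                                'input_format':'pd.DataFrame',
                                'output_format':'printed summary',
                                'description':'print summary statistics for a dataframe'}}

# configuration dictionary (passed in nlpm)
configure_stats = {'corpus':corpus_stats,'info':info_stats}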

CREATING A COLLECTION

There are two ways to start an interpreter session: manually importing and grouping modules, or using the interface class

FIRST APPROACH

First, we need to combine all of our module components; this links all of the passed modules together

collection = nlpm()
collection.load([loader(configure_loader),
                 simple_eda(configure_eda),
                 encoder(configure_nlpencoder),
                 embedding(configure_nlpembed),
                 cleantext(configure_nlptxtclean),
                 sklinear(configure_sklinear),
                 hf_pipeline(configure_hfpipe),
                 eda_plot(configure_edaplt)])
                 

Then we need to train the interpreter models

collection.train()

Lastly, pass the collection of modules (nlpm instance) to the interpreter nlpi

session = nlpi(collection)

The nlpi class can then be used with its exec method to interpret user input

session.exec('create a scatterplot using data with x dimension1 y dimension2')

SECOND APPROACH

The faster way includes all available modules and groups them together for us:

from mllibs.interface import interface
session = interface()
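
Assuming the returned session behaves like the nlpi instance from the first approach (an assumption made here for illustration), it can then be used in the same way:

# check which activation functions were loaded
session.fl()

# pass a request, as in the first approach
session.exec('create a scatterplot using data with x dimension1 y dimension2')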

SAMPLE NOTEBOOKS

Sample notebooks that will help you familiarise yourself with the library can be found in the examples folder and in the accompanying Kaggle dataset


HOW TO CONTRIBUTE

Want to add your own project to our collection? We welcome all contributions, big or small. Here's how you can get started:

  1. Fork the repository
  2. Create a new branch for your changes
  3. Make your changes and commit them
  4. Submit a pull request

CONTACT

If you have any questions or feedback, feel free to reach out on my Telegram channel.