
MyCamNetwork

1. MyCamNetwork

If you press the r key, it detects me! Add your own TensorFlow model so it can detect you!

If you press the f key, it puts glasses on your face!

Look at the code for more info! 🤣

Run the file mycam.py! Made with Python, OpenCV and TensorFlow.

*(screenshot: the glasses filter)*
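
If you are curious how the key presses are handled, here is a minimal sketch of the kind of OpenCV loop that mycam.py is built around (the real file does much more, and the quit key below is only an assumption):

    import cv2

    # Minimal webcam loop: grab frames and react to key presses.
    cap = cv2.VideoCapture(0)               # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("MyCamNetwork", frame)
        key = cv2.waitKey(1) & 0xFF         # code of the last key pressed
        if key == ord('r'):
            print("r pressed: run the face-recognition model here")
        elif key == ord('f'):
            print("f pressed: draw the glasses filter here")
        elif key == ord('q'):               # assumed quit key, not from the README
            break
    cap.release()
    cv2.destroyAllWindows()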

2. Create your own model

To create your own model (neural network), use the MyNetwork class:

    from mynetwork import MyNetwork   # the class is defined in mynetwork.py

    mynet = MyNetwork("regis", 180, 180, "image/copy")   # likely: name, image size (180x180), dataset folder
    mynet.init_model(25, 0.2)
    mynet.create_model()
    mynet.train_model(10)
    mynet.get_history()                                   # show the training history
    mynet.save_model("model/yournamemodel")               # save the trained model
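
Under the hood this is a standard TensorFlow/Keras image-classification pipeline. Here is a minimal sketch of what init_model / create_model / train_model could correspond to; the real layers and arguments live in mynetwork.py and may differ:

    import tensorflow as tf

    IMG_H, IMG_W = 180, 180                       # matches MyNetwork("regis", 180, 180, ...)

    # A small CNN classifier (the layers here are only an example).
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=(IMG_H, IMG_W, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(2),                 # 2 classes: "you" / "not you"
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)   # roughly what train_model(10) does
    # model.save("model/yournamemodel")                        # roughly what save_model(...) does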

3. Create your photos

Run mycam.py, and modify the code a little if it doesn't run (the paths, for example ...).

Take your photo by pressing the p key (key == 112 in the code), and train the model with it!

Take about 250 photos of yourself from different angles, plus another category of photos that are not your face (😵‍💫).

Then create your directory at image/yourname and put the other category at image/photodifferentofyou. This will be your dataset for the model.
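
For reference, a folder laid out like this is exactly what TensorFlow can turn into a labeled dataset. A minimal sketch, where the root folder, split and seed values are just examples:

    import tensorflow as tf

    # Each subdirectory of image/ becomes one class
    # (e.g. image/yourname and image/photodifferentofyou).
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "image",
        validation_split=0.2,        # keep 20% of the photos for validation
        subset="training",
        seed=123,
        image_size=(180, 180),       # the size given to MyNetwork
        batch_size=32,
    )
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "image",
        validation_split=0.2,
        subset="validation",
        seed=123,
        image_size=(180, 180),
        batch_size=32,
    )
    print(train_ds.class_names)      # e.g. ['photodifferentofyou', 'yourname']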

4. Load the model

Uncomment this code in mynetwork.py if needed, and load the model like this:

    from mynetwork import MyNetwork

    mynet = MyNetwork("regis", 180, 180, "image/copy")
    mynet.load_model("model/regismodel")    # load the saved model
    mynet.predict("image/test.png")         # run a prediction on a test image
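
A minimal sketch of what the predict step usually amounts to with Keras; the exact preprocessing is in mynetwork.py, this is only illustrative:

    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("model/regismodel")   # the path used above

    img = tf.keras.utils.load_img("image/test.png", target_size=(180, 180))
    arr = tf.keras.utils.img_to_array(img)
    arr = tf.expand_dims(arr, 0)                              # a batch of one image

    logits = model.predict(arr)
    probs = tf.nn.softmax(logits[0]).numpy()
    print("predicted class:", np.argmax(probs), "confidence:", probs.max())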

5. Filters

Look at the key codes and try some fun filters like Sobel, colored, Canny, and Hough.
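
For example, here is roughly what those OpenCV filters look like on a single frame; the parameter values are reasonable defaults, not necessarily the ones used in mycam.py:

    import cv2
    import numpy as np

    frame = cv2.imread("image/test.png")                  # any image or webcam frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    sobel = cv2.Sobel(gray, cv2.CV_64F, 1, 1, ksize=3)    # Sobel gradients
    canny = cv2.Canny(gray, 100, 200)                     # Canny edges
    colored = cv2.applyColorMap(gray, cv2.COLORMAP_JET)   # a "colored" false-color map
    lines = cv2.HoughLinesP(canny, 1, np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=10)   # Hough line detection
    print(0 if lines is None else len(lines), "lines found")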

6. Coming soon: a game

Press the **t key** to track your hand with a circle that follows it. Then press the **g key** and try to catch the ball, or fight it by hitting the ball with your tracked hand!

This is the game:

*(screenshot: the game)*
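
The game logic boils down to a circle-vs-circle hit test between the tracked hand and the ball. A tiny sketch with made-up positions and radii (the real tracking and drawing are in mycam.py):

    import math

    def hits(hand_xy, hand_r, ball_xy, ball_r):
        """True when the hand circle overlaps the ball circle."""
        dx = hand_xy[0] - ball_xy[0]
        dy = hand_xy[1] - ball_xy[1]
        return math.hypot(dx, dy) <= hand_r + ball_r

    print(hits((120, 80), 30, (140, 95), 20))   # True: the circles overlap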

7. Audio detection

With the MySound class in mysound.py, record your voice saying "who is it" and other sentences, in order to build a neural network that recognizes your voice. To create the audio neural network, use the MyAudio class in myaudio.py like this 👍:

This is the code to record your voice:

    from mysound import MySound

    mysound = MySound("regis.wav")      # output file
    mysound.record(3, 16000, 1)         # likely: 3 seconds, 16 kHz, 1 channel
    mysound.wait()                      # wait until the recording is done
    mysound.save()
    mysound.play()
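
Those calls most likely wrap sounddevice and scipy, which are both in the dependency list. A minimal sketch of the same recording done directly, in case you want to adapt it (the values mirror the example above):

    import sounddevice as sd
    from scipy.io import wavfile

    SECONDS, RATE, CHANNELS = 3, 16000, 1       # same values as mysound.record(3, 16000, 1)

    data = sd.rec(int(SECONDS * RATE), samplerate=RATE, channels=CHANNELS)
    sd.wait()                                   # block until the recording is finished
    wavfile.write("regis.wav", RATE, data)      # save it, like mysound.save()
    sd.play(data, RATE)                         # play it back, like mysound.play()
    sd.wait()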

And this is the code to create the model that will recognize your voice:

    from myaudio import MyAudio

    myaudio = MyAudio("sound")                  # root directory of the audio dataset
    myaudio.init_model(64)
    myaudio.create_model()
    myaudio.test_model()
    myaudio.save_model("model/audiomodel")      # save the audio model

You have to create a sound directory with one subdirectory per class, named after the class, just like for the images. For example: no, yes, and who (for the "who is it" sentence). Each subdirectory stores your voice recordings, as mentioned above.
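
If you want to double-check your layout, here is a quick sketch that lists the classes the model will see; the folder names in the comments are only the examples from this README:

    from pathlib import Path

    # Expected layout (one subdirectory per class, .wav files inside):
    #   sound/yes/*.wav
    #   sound/no/*.wav
    #   sound/who/*.wav
    classes = sorted(p.name for p in Path("sound").iterdir() if p.is_dir())
    print("audio classes:", classes)            # e.g. ['no', 'who', 'yes']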

In the application, press the d key to trigger the voice recognition action.

8. Coming soon: a vocal AI

Press the b key and ask: "Mets le filtre s noir" (put on the black filter), "Mets le jeu" (start the game), or another command found in the code.

The assistant answers and performs what you asked.
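
The spoken answer itself can be produced with gtts and playsound, which are in the dependency list. A minimal sketch; the sentence and file name are only examples, the real assistant logic is in the code:

    from gtts import gTTS
    from playsound import playsound

    answer = "C'est fait !"                     # example answer; the real one comes from the assistant
    gTTS(answer, lang="fr").save("answer.mp3")  # synthesize the answer in French
    playsound("answer.mp3")                     # say it out loud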

Final

Run mycam.py to see all of that!

PS: Don't forget to install opencv-contrib-python, spacy, vosk, gtts, playsound, sounddevice, scipy, tensorflow, matplotlib...
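
Assuming you use pip, something like this installs the packages listed above (add whatever else the code imports):

    pip install opencv-contrib-python spacy vosk gtts playsound sounddevice scipy tensorflow matplotlib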

DON'T WORRY, look in the code or search the internet for help!