Pyautogui-module-using-audio

📌 This repo shows how we combined the pyttsx3, speech_recognition, and colored modules with the pyautogui module.

Primary language: Python · License: GNU General Public License v3.0 (GPL-3.0)

Hello programmer! Welcome to this repo.


PyAutoGUI

PyAutoGUI is a cross-platform GUI automation Python module for human beings. It is used to programmatically control the mouse and keyboard.

Full documentation available at https://pyautogui.readthedocs.org


Pyttsx3

pyttsx3 is a text-to-speech conversion library in Python. Unlike alternative libraries, it works offline, and is compatible with both Python 2 and 3.

Full documentation available at https://pypi.org/project/pyttsx3/

speech_recognition

Speech recognition means that when humans speak, a machine understands them. The first thing a speech recognition system needs to do is convert the audio signal into a form a computer can understand.

Full documentation available at https://pypi.org/project/SpeechRecognition/
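As a sketch of how the module is typically used (the helper name listen_for_command is ours, not part of the library; capturing audio needs a microphone and the PyAudio package, and recognize_google needs an internet connection):

```python
def listen_for_command():
    """Capture one utterance from the default microphone and return it as lowercase text.

    Requires the SpeechRecognition and PyAudio packages; returns None when
    the audio could not be understood or the recognition API was unreachable.
    """
    import speech_recognition as sr  # imported lazily so the sketch stays self-contained

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
        audio = recognizer.listen(source)            # record until the speaker pauses
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:   # speech was unintelligible
        return None
    except sr.RequestError:        # recognition service unreachable
        return None
```

The recognized text can then be matched against command keywords to drive pyautogui actions.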

colored

colored is a very simple Python library for color and formatting in the terminal. It is a collection of color codes and names for 256-color terminal setups.

Full documentation available at https://pypi.org/project/colored/
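Under the hood, libraries like colored work by wrapping text in ANSI escape sequences. A minimal pure-Python sketch of the idea (the colorize helper below is illustrative, not part of the colored API):

```python
# ANSI escape sequences are what terminal-color libraries emit under the hood.
RESET = "\033[0m"
FOREGROUND = {"red": 31, "green": 32, "yellow": 33, "blue": 34}

def colorize(text, color):
    """Wrap text in an ANSI foreground-color escape sequence."""
    return f"\033[{FOREGROUND[color]}m{text}{RESET}"

print(colorize("Hello terminal!", "green"))  # prints in green on ANSI terminals
```

The colored library provides the same effect with named colors and many more formatting attributes.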

Pre-requisites:

Python3
pyautogui
pyttsx3
speech_recognition
colored

Installation:

 $ pip install pyautogui
 $ pip install pyttsx3
 $ pip install SpeechRecognition
 $ pip install colored

Importing module:

import pyautogui
import pyttsx3
import speech_recognition
import colored

Mouse and keyboard automation using Python:

This section shows how to automate mouse and keyboard movements using the pyautogui module in Python.

  import pyautogui
  screenWidth, screenHeight = pyautogui.size() # Returns two integers, the width and height of the screen. (The primary monitor, in multi-monitor setups.)
  currentMouseX, currentMouseY = pyautogui.position() # Returns two integers, the x and y of the mouse cursor's current position.
  pyautogui.moveTo(100, 150) # Move the mouse to the x, y coordinates 100, 150.
  pyautogui.click() # Click the mouse at its current location.
  pyautogui.click(200, 220) # Click the mouse at the x, y coordinates 200, 220.
  pyautogui.move(None, 10)  # Move mouse 10 pixels down, that is, move the mouse relative to its current position.
  pyautogui.doubleClick() # Double click the mouse at its current location.
  pyautogui.moveTo(500, 500, duration=2, tween=pyautogui.easeInOutQuad) # Use tweening/easing function to move mouse over 2 seconds.
  pyautogui.write('Hello world!', interval=0.25)  # Type with quarter-second pause in between each key.
  pyautogui.press('esc') # Simulate pressing the Escape key.
  pyautogui.keyDown('shift')
  pyautogui.write(['left', 'left', 'left', 'left', 'left', 'left'])
  pyautogui.keyUp('shift')
  pyautogui.hotkey('ctrl', 'c')

Display Message Boxes using pyautogui and pyttsx3 :

import pyautogui, pyttsx3

# Initialise the text-to-speech engine.
# 'sapi5' is the Windows speech API; on Linux/macOS call pyttsx3.init() with no argument.
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)  # voices[1] may not exist on every system

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

speak('Hey! This is an alert box:')
pyautogui.alert('This is an alert box :D.')
speak('Shall I proceed?')
pyautogui.confirm('Shall I proceed?')
speak('Please enter your option')
pyautogui.confirm('Please enter your option.', buttons=['K', 'L', 'P'])
speak('What is your name?')
speak('Please enter your name')
pyautogui.prompt('What is your name?')
speak("Enter password \n and don't worry, the text will be hidden")
pyautogui.password('Enter password (text will be hidden)')
speak('Thank you, we saved your details')

Demo

That's how the message boxes look after executing the code.

Download full demo with audio from here

Screenshot Functions using pyautogui :

import pyautogui
im1 = pyautogui.screenshot()
im1.save('my_screenshot.png')
im2 = pyautogui.screenshot('my_screenshot2.png')

You can also locate where an image is on the screen:

import pyautogui
location = pyautogui.locateOnScreen('button.png') # returns a Box (left, top, width, height), or None if the image was not found
print(location)
if location is not None:
    buttonx, buttony = pyautogui.center(location)
    print(buttonx, buttony)
    pyautogui.click(buttonx, buttony)  # clicks the center of where the button was found

Final program:

The final program uses the functions below together with all the modules (pyautogui, pyttsx3, speech_recognition, colored):

  • 1 : dragTo (drag the mouse cursor)
  • 2 : maximize (maximize your window)
  • 3 : minimize (minimize your window)
  • 4 : current title (to get the current window title)
  • 5 : getalltitle (to get all currently open applications)
  • 6 : getinfo (to get all information about your window)
  • 7 : size (to get the current screen size)
  • 8 : position (to get the exact screen position)
  • 9 : livemouseposition (to get the live mouse position)
  • 10: move to (to move to a specific point on your screen)
  • 11: click (to perform the click task)

The source code is available here.

Watch full video https://youtu.be/ZLM7glLn7ls

LICENSE:

Copyright (c) 2020 Kushal Das

This project is licensed under the GNU General Public License v3.0



Let's connect! Find me on the web.



If you have any Queries or Suggestions, feel free to reach out to me.

Show some  ❤️  by starring some of the repositories!