Run `python file-sharing.py` to see the QR code, or run `python file-sharing.py --dir <directory> --port <port>` to choose the shared directory and the port.
The script:

- Starts an HTTP server that shares the files in the specified directory
- Generates a QR code pointing to the server address
- Automatically opens a browser to display the QR code and the link
- Cleans up the temporary files (index.html and myqr.png) when the program exits
- Supports command-line arguments for the port and the shared directory

Program flow (a minimal sketch follows the list):
- Parse the command-line arguments (--dir, --port)
- Get the user's desktop path (the default directory)
- Change the current working directory to the target directory
- Get the machine's LAN IP address
- Check whether the specified port is already in use
- Generate the QR code file (myqr.png)
- Generate the index.html page (containing the QR code and the link)
- Start an HTTP server listening on the specified IP and port
- Clean up the temporary files when the program exits
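A minimal sketch of this flow, assuming the third-party `qrcode` package for QR generation and the standard library's `http.server`; helper names such as `get_lan_ip` and `is_port_in_use` are illustrative, not necessarily the actual functions in file-sharing.py.

```python
import argparse, atexit, os, socket, webbrowser
from http.server import HTTPServer, SimpleHTTPRequestHandler

import qrcode  # third-party: pip install qrcode[pil]

def get_lan_ip():
    """Find the LAN IP by opening a UDP socket toward a public address (no data is sent)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]

def is_port_in_use(ip, port):
    """Return True if something is already listening on ip:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((ip, port)) == 0

def cleanup():
    """Remove the temporary files created for the landing page."""
    for name in ("index.html", "myqr.png"):
        if os.path.exists(name):
            os.remove(name)

def main():
    parser = argparse.ArgumentParser(description="Share a directory over HTTP with a QR code")
    parser.add_argument("--dir", default=os.path.join(os.path.expanduser("~"), "Desktop"))
    parser.add_argument("--port", type=int, default=8000)
    args = parser.parse_args()

    os.chdir(args.dir)                      # serve files from the target directory
    ip = get_lan_ip()
    if is_port_in_use(ip, args.port):
        raise SystemExit(f"Port {args.port} is already in use")

    url = f"http://{ip}:{args.port}"
    qrcode.make(url).save("myqr.png")       # QR code pointing at the server address
    with open("index.html", "w", encoding="utf-8") as f:
        f.write(f'<a href="{url}">{url}</a><br><img src="myqr.png">')

    atexit.register(cleanup)                # remove index.html and myqr.png on exit
    webbrowser.open(f"{url}/index.html")
    HTTPServer((ip, args.port), SimpleHTTPRequestHandler).serve_forever()

if __name__ == "__main__":
    main()
```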
This project is a web scraper written in Python using Selenium and BeautifulSoup. It is designed to scrape job listings from naukrigulf.com.
- Scrapes job titles from naukrigulf.com
- Configurable scraping settings
- Supports headless browsing (see the browser-setup sketch after this list)
- Handles SSL certificate errors
- Automatically closes the browser after scraping
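A minimal sketch of the browser setup these features imply, using standard Selenium 4 Chrome options; exactly how grap.py wires this up is an assumption.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")               # run Chrome without a visible window
options.add_argument("--ignore-certificate-errors")  # tolerate SSL certificate errors
options.accept_insecure_certs = True                 # WebDriver-level equivalent

# If ChromeDriver is not on PATH, use
# webdriver.Chrome(service=Service("path/to/chromedriver"), options=options) instead.
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.naukrigulf.com")
    html = driver.page_source                        # rendered HTML, handed to BeautifulSoup
finally:
    driver.quit()                                    # always close the browser when done
```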
- Python 3.x
- Selenium
- BeautifulSoup4
- ChromeDriver (ensure it's in your PATH or specify the path in the code)
Install the required packages:
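For example, with pip (installing the Selenium and BeautifulSoup4 packages listed above):

```
pip install selenium beautifulsoup4
```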
Download the appropriate ChromeDriver for your Chrome version and update the path in the code.
Run the script:
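Assuming the script file is grap.py, as referenced in the configuration notes below:

```
python grap.py
```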
The script will print the top job titles from naukrigulf.com.
The script is configured to scrape job titles from naukrigulf.com. You can modify the TARGET_CONFIG dictionary in the grap.py file to scrape different websites or data.
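The actual keys of TARGET_CONFIG are not documented here, so the following is a hypothetical example of the kind of dictionary it could be (a target URL plus the CSS selector BeautifulSoup uses to pick out job titles), together with a small parsing helper; the URL path, selector, and helper name are illustrative assumptions.

```python
from bs4 import BeautifulSoup

# Hypothetical shape of TARGET_CONFIG; the real keys in grap.py may differ.
TARGET_CONFIG = {
    "url": "https://www.naukrigulf.com/jobs-in-uae",  # page to scrape (illustrative URL)
    "title_selector": "p.designation-title",          # CSS selector for job titles (assumed)
    "max_results": 10,                                # number of titles to print
}

def extract_titles(html, config):
    """Parse rendered HTML and return up to max_results job titles."""
    soup = BeautifulSoup(html, "html.parser")
    titles = [el.get_text(strip=True) for el in soup.select(config["title_selector"])]
    return titles[: config["max_results"]]
```

Changing the `url` and `title_selector` entries would point the same parsing helper at a different site or a different piece of data.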
- Add support for more websites
- Implement data storage (e.g., CSV, JSON)
- Add error handling for network issues
- Implement multi-threading for faster scraping
Contributions are welcome! Please fork the repository and submit a pull request with your changes.
This project is open source and available under the MIT License.