BrutalCoding/shady.ai

[BUG] 🐛 - macOS Shady Flutter app is crashing

Closed this issue · 3 comments

I'm using an RWKV .bin model file, but my app crashes when the model file is loaded, without any specific error. Can you please help me get this app working, first on macOS and then on Android and iOS devices?

I am attaching my error report below for your convenience.
ErrorReport.pdf

Hey @abusaadp,

Sorry for this experience. This is an issue on my side. The whole point of ShadyAI is that it should be easy enough for any layman to use, but it is clearly in an unfinished state.

Please be aware that ShadyAI, right now, can only load an RWKV model into memory and nothing else. No chatbot, no image generator, no automations; there is nothing useful yet.

The RWKV model that I used was BlinkDL's RWKV4-7B model with "98% English / 2% others". I believe it loaded successfully whether quantized or not, so that shouldn't matter.

Also, I started with macOS first, and thus far any other platform simply doesn't work.

Regarding your bug report: there's a good chance that the wrong model got loaded and thus crashed. For example, a model that's too big, or, more likely, a (newer) model that is incompatible with ShadyAI.
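As a quick sanity check on the "model that's too big" case, you can compare the model file's size against the machine's physical RAM before pointing the app at it. This is just an illustrative sketch, not part of ShadyAI; the function name and the `headroom` factor are my own assumptions:

```python
import os

def model_fits_in_ram(model_path: str, headroom: float = 1.5) -> bool:
    """Rough pre-flight check: loading a model typically needs at least its
    file size in RAM (more if weights get dequantized), so require some
    headroom over the raw file size."""
    model_bytes = os.path.getsize(model_path)
    # Total physical memory; these sysconf names work on macOS and Linux.
    total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return model_bytes * headroom <= total_ram
```

If this returns False for your .bin file, a smaller or more heavily quantized model is worth trying first; if it returns True, the crash is more likely a format/version incompatibility.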

Again, sorry for this experience @abusaadp. I appreciate your bug report.

In order for you to interact with an RWKV model, I would like to redirect you to better repositories:

  1. RWKV: https://github.com/BlinkDL/RWKV-LM
  • Author: @BlinkDL
  • TLDR: This is the repo from the original author of RWKV. Have a look at the README; it also references other projects that could make it much easier for you to get started. One such reference is the repo 'rwkv.cpp'. See below.
  2. rwkv.cpp: https://github.com/saharNooby/rwkv.cpp
  • Author: @saharNooby
  • TLDR: An implementation that makes RWKV run faster. For example, it lets you talk to a chatbot in your terminal on a regular laptop. No GPU needed, because it runs on your CPU.
  3. llama.cpp: https://github.com/ggerganov/llama.cpp#using-gpt4all
  • Author: @ggerganov
  • TLDR: If I'm not mistaken, he was the first person to convert the original LLaMA model (from Meta/Facebook) into a format that runs on your CPU instead of a powerful GPU. That suddenly enabled dozens of projects with goals similar to ShadyAI's: running 'AI' (LLMs) on lower-end devices (regular PCs, laptops, and even mobile phones).

Since I have lost too much context regarding ShadyAI, I must rethink what I personally want ShadyAI to do first, and which AI model is truly open source (e.g. Falcon 40B).

Cheers,
Daniel

@abusaadp I've been trying my best lately to rewrite some parts and get it closer to a working prototype.

Can you try out the Mac version of the app? No other platform yet. Preferably not an Intel Mac either, but an M-series one, because that's what I've been working with recently.

Closing this issue due to lack of response and the many changes since. Please re-open it, @abusaadp, if you hit a new issue. Thanks for the report!