An example using a minimal ASP.NET Core server to stream responses from OpenAI to a console app over SignalR.
Implemented using the approach suggested in the Microsoft Copilot implementation blog post, with no third-party libraries (not even for OpenAI) beyond plain SignalR.
StreamAI.mp4
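A minimal sketch of what the server side of this setup might look like: a SignalR streaming hub method that calls the OpenAI chat completions endpoint with plain `HttpClient`, parses the server-sent-event lines, and yields each token back to the caller. The hub name `ChatHub`, the method name `Stream`, the model name, and the key lookup are illustrative assumptions, not necessarily this repo's actual code:

```csharp
// Illustrative sketch only: a SignalR hub streaming OpenAI tokens to the caller
// as an IAsyncEnumerable, using plain HttpClient (no OpenAI SDK).
using System.Net.Http.Headers;
using System.Runtime.CompilerServices;
using System.Text;
using System.Text.Json;
using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    private static readonly HttpClient Http = new();

    public async IAsyncEnumerable<string> Stream(
        string prompt,
        [EnumeratorCancellation] CancellationToken ct)
    {
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://api.openai.com/v1/chat/completions")
        {
            Content = new StringContent(JsonSerializer.Serialize(new
            {
                model = "gpt-4o-mini", // assumed model name
                stream = true,
                messages = new[] { new { role = "user", content = prompt } }
            }), Encoding.UTF8, "application/json")
        };
        // Assumes the key is available via configuration/user-secrets as OpenAI:Key.
        request.Headers.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("OPENAI_KEY"));

        using var response = await Http.SendAsync(
            request, HttpCompletionOption.ResponseHeadersRead, ct);
        using var reader = new StreamReader(
            await response.Content.ReadAsStreamAsync(ct));

        // OpenAI streams server-sent events: "data: {json}" lines, then "data: [DONE]".
        while (!reader.EndOfStream && !ct.IsCancellationRequested)
        {
            var line = await reader.ReadLineAsync(ct);
            if (line is null || !line.StartsWith("data: ")) continue;

            var payload = line["data: ".Length..];
            if (payload == "[DONE]") yield break;

            using var doc = JsonDocument.Parse(payload);
            var delta = doc.RootElement
                .GetProperty("choices")[0]
                .GetProperty("delta");
            if (delta.TryGetProperty("content", out var token))
                yield return token.GetString() ?? "";
        }
    }
}
```

Returning `IAsyncEnumerable<string>` from a hub method is SignalR's built-in server-to-client streaming mechanism, so no extra protocol work is needed on top of it.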
Before running the server, set your OpenAI API key from the server project directory by running:
dotnet user-secrets set OpenAI:Key <your key>
Then open the solution in Visual Studio and set both projects as startup projects.
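On the client side, the console app only needs the SignalR client package to consume the stream. The endpoint URL, hub path, and method name below are illustrative assumptions, not the repo's actual values:

```csharp
// Illustrative console client: connects to the SignalR hub and prints tokens
// as they arrive. URL, hub path, and method name are assumptions.
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("http://localhost:5000/chat") // assumed hub endpoint
    .WithAutomaticReconnect()
    .Build();

await connection.StartAsync();

Console.Write("> ");
var prompt = Console.ReadLine() ?? "";

// StreamAsync maps the hub's IAsyncEnumerable<string> onto the client side,
// so each token can be written to the console the moment it arrives.
await foreach (var token in connection.StreamAsync<string>("Stream", prompt))
{
    Console.Write(token);
}
Console.WriteLine();

await connection.StopAsync();
```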
If you think you need to stream responses because the LLM is too slow, do yourself a favor and try Groq.
I can hardly think of a scenario where I'd still need streaming once that kind of hardware can run GPT-4-level models.