Pinned Repositories
anarchist-hitler
Play Secret Hitler without a server
dask-on-ray-blog
kindlebox
Send books and personal documents to Kindle through a Dropbox folder
ownership-nsdi2021-artifact
ray
An experimental distributed execution engine
stephanie-wang's Repositories
stephanie-wang/kindlebox
Send books and personal documents to Kindle through a Dropbox folder
stephanie-wang/anarchist-hitler
Play Secret Hitler without a server
stephanie-wang/ownership-nsdi2021-artifact
stephanie-wang/dask-on-ray-blog
stephanie-wang/ray
An experimental distributed execution engine
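As a rough illustration of the task-parallel API Ray exposes (a minimal sketch using Ray's public primitives; the square function is a made-up example):

```python
import ray

ray.init()

@ray.remote
def square(x):
    # Runs asynchronously as a Ray task, possibly on another node.
    return x * x

# .remote() returns futures immediately; ray.get() blocks for results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```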
stephanie-wang/flink-wordcount
Benchmark for the lineage stash paper (SOSP '19)
stephanie-wang/gifs
Josh can dance if he wants to
stephanie-wang/hodor
Serializing orthogonal range (k-d) trees for fast on-disk queries
stephanie-wang/ray-core-tutorial
Introduction to Ray Core Design Patterns and APIs.
stephanie-wang/rise-camp-ray-tutorial-2020
stephanie-wang/abseil-cpp
Abseil Common Libraries (C++)
stephanie-wang/academy
Ray tutorials from Anyscale
stephanie-wang/arrow
Apache Arrow is a columnar in-memory analytics layer designed to accelerate big data processing. It provides canonical in-memory representations of flat and hierarchical data, language bindings for manipulating those structures, and IPC and common algorithm implementations.
stephanie-wang/assignment1
Part 1 of the introductory assignment on building our own compiler
stephanie-wang/comm.prod
a btb "ratings" comm.prod
stephanie-wang/dask
Parallel computing with task scheduling
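A minimal sketch of Dask's lazy task-scheduling model (the array sizes and chunking below are arbitrary example values):

```python
import dask.array as da

# Operations on chunked arrays build a task graph; nothing executes yet.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
y = (x + x.T).mean()

# .compute() hands the graph to Dask's scheduler and returns the result.
print(y.compute())
```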
stephanie-wang/dotfiles
stephanie-wang/ds2
DS2 is an auto-scaling controller for distributed streaming dataflows
stephanie-wang/hanabi
stephanie-wang/hiredis
Minimalistic C client for Redis >= 1.2
stephanie-wang/lineage-stash-artifact
Scripts for plotting and example data
stephanie-wang/ray-scheduler-prototype
Experimental code for Ray scheduler evaluation
stephanie-wang/requests
Python HTTP Requests for Humans™.
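A minimal usage sketch of the requests API (the URL is a placeholder):

```python
import requests

# GET a page, fail loudly on HTTP errors, and inspect the response.
resp = requests.get("https://example.com", timeout=10)
resp.raise_for_status()
print(resp.status_code, resp.headers.get("Content-Type"))
```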
stephanie-wang/Salix
stephanie-wang/streamcorpus
stephanie-wang/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
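A minimal sketch of vLLM's offline generation API (the model id is an arbitrary example; any compatible Hugging Face model works):

```python
from vllm import LLM, SamplingParams

# Load a model and set sampling behavior (values are illustrative).
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=32)

# generate() batches prompts and returns completions with metadata.
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```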