A simple, safe HTTP client.
Ureq's first priority is being easy for you to use. It's great for anyone who wants a low-overhead HTTP client that just gets the job done, and it works very well with HTTP APIs. Its features include cookies, JSON, HTTP proxies, HTTPS, and charset decoding.
Ureq is in pure Rust for safety and ease of understanding. It avoids using unsafe directly. It uses blocking I/O instead of async I/O, because that keeps the API simple and keeps dependencies to a minimum. For TLS, ureq uses rustls or native-tls.
Version 2.0.0 was released recently and changed some APIs. See the changelog for details.
In its simplest form, ureq looks like this:
fn main() -> Result<(), ureq::Error> {
    let body: String = ureq::get("http://example.com")
        .set("Example-Header", "header value")
        .call()?
        .into_string()?;
    Ok(())
}
For more involved tasks, you'll want to create an Agent. An Agent holds a connection pool for reuse, and a cookie store if you use the "cookies" feature. Because it keeps its state behind an internal Arc, an Agent can be cheaply cloned, and all clones share that state with each other. Creating an Agent also allows setting options like the TLS configuration.
use ureq::{Agent, AgentBuilder};
use std::time::Duration;

let agent: Agent = ureq::AgentBuilder::new()
    .timeout_read(Duration::from_secs(5))
    .timeout_write(Duration::from_secs(5))
    .build();
let body: String = agent.get("http://example.com/page")
    .call()?
    .into_string()?;

// Reuses the connection from previous request.
let response: String = agent.put("http://example.com/upload")
    .set("Authorization", "example-token")
    .call()?
    .into_string()?;
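Because all clones of an Agent share the same connection pool (and cookie store, if enabled), it is fine to hand a clone to another thread and keep using the original alongside it. Below is a minimal sketch of that; the URLs are placeholders and the error handling is deliberately simple.

use std::thread;
use ureq::Agent;

fn clone_example(agent: &Agent) -> Result<(), ureq::Error> {
    // Cloning is cheap: both handles point at the same pool and cookie store.
    let agent2 = agent.clone();
    let handle = thread::spawn(move || {
        // Handle errors locally so nothing needs to cross the thread boundary.
        if let Err(err) = agent2.get("http://example.com/from-thread").call() {
            eprintln!("request in worker failed: {}", err);
        }
    });

    // The original handle keeps working, reusing the same pool.
    agent.get("http://example.com/from-main").call()?;
    handle.join().expect("worker thread panicked");
    Ok(())
}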
Ureq supports sending and receiving json if you enable the "json" feature:
// Requires the `json` feature enabled.
let resp: String = ureq::post("http://myapi.example.com/ingest")
    .set("X-My-Header", "Secret")
    .send_json(ureq::json!({
        "name": "martin",
        "rust": true
    }))?
    .into_string()?;
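Reading JSON back works through Response::into_json(). Here is a small sketch that deserializes into a serde_json::Value; it assumes serde_json is also a direct dependency of your crate, and the URL is a placeholder.

fn read_json() -> Result<(), ureq::Error> {
    // Requires the `json` feature enabled.
    let json: serde_json::Value = ureq::get("http://myapi.example.com/status")
        .call()?
        .into_json()?;
    println!("name is {}", json["name"]);
    Ok(())
}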
ureq returns errors via Result<T, ureq::Error>. That includes I/O errors, protocol errors, and status code errors (when the server responded 4xx or 5xx).
use ureq::Error;

match ureq::get("http://mypage.example.com/").call() {
    Ok(response) => { /* it worked */ },
    Err(Error::Status(code, response)) => {
        /* the server returned an unexpected status
           code (such as 400, 500 etc) */
    }
    Err(_) => { /* some kind of io/transport error */ }
}
More details on the Error type.
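Because the Status variant carries the Response, you can branch on the status code and decide, for instance, to retry. The following is a rough sketch; the URL and the fixed three-attempt policy are illustrative, not something ureq provides.

use std::{thread, time::Duration};
use ureq::Error;

fn get_with_retry(url: &str) -> Result<ureq::Response, ureq::Error> {
    for _ in 0..3 {
        match ureq::get(url).call() {
            // 503 usually means "try again later"; wait a bit and retry.
            Err(Error::Status(503, _response)) => thread::sleep(Duration::from_secs(1)),
            // Anything else (success, other statuses, transport errors) is returned as-is.
            otherwise => return otherwise,
        }
    }
    // One last attempt once the retries are used up.
    ureq::get(url).call()
}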
To enable a minimal dependency tree, some features are off by default. You can control them when including ureq as a dependency.
ureq = { version = "*", features = ["json", "charset"] }
* tls enables https. This is enabled by default.
* native-certs makes the default TLS implementation use the OS' trust store (see TLS doc below).
* cookies enables cookies.
* json enables Response::into_json() and Request::send_json() via serde_json.
* charset enables interpreting the charset part of the Content-Type header (e.g. Content-Type: text/plain; charset=iso-8859-1). Without this, the library defaults to Rust's built-in utf-8.
* socks-proxy enables proxy config using the socks4://, socks4a://, socks5:// and socks:// (equal to socks5://) prefixes.
* native-tls enables an adapter so you can pass a native_tls::TlsConnector instance to AgentBuilder::tls_connector. Due to the risk of diamond dependencies accidentally switching on an unwanted TLS implementation, native-tls is never picked up as a default or used by the crate level convenience calls (ureq::get etc); it must be configured on the agent. The native-certs feature does nothing for native-tls.
* gzip enables requests of gzip-compressed responses and decompresses them. This is enabled by default.
* brotli enables requests of brotli-compressed responses and decompresses them.
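If you only need plain-http requests, the default features can also be switched off entirely and re-added selectively. This is a sketch; check the feature list above against the ureq version you are actually using.

ureq = { version = "*", default-features = false, features = ["json"] }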
Most standard methods (GET, POST, PUT, etc.) are supported as functions from the top of the library (get(), post(), put(), etc.).
These top-level HTTP method functions create a Request instance which follows a build pattern. The builders are finished using:
* .call() without a request body.
* .send() with a request body as Read (chunked encoding support for non-known sized readers).
* .send_string() body as string.
* .send_bytes() body as bytes.
* .send_form() key-value pairs as application/x-www-form-urlencoded.
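As an illustration, .send_form() takes the key-value pairs directly as a slice and handles the urlencoding for you. A small sketch with a placeholder URL and made-up credentials:

fn form_example() -> Result<(), ureq::Error> {
    // Sent as application/x-www-form-urlencoded with a Content-Length header.
    let resp = ureq::post("http://example.com/login")
        .send_form(&[
            ("user", "martin"),
            ("password", "hunter2"),
        ])?;
    println!("status: {}", resp.status());
    Ok(())
}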
By enabling the ureq = { version = "*", features = ["json"] } feature, the library supports serde json.

* request.send_json() sends the body as serde json.
* response.into_json() transforms the response to json.
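Since Response::into_json() can target any type that implements serde's Deserialize, you can also deserialize straight into your own structs. A sketch, assuming serde with its derive feature; the endpoint and the Ingest shape are made up.

use serde::Deserialize;

// A hypothetical shape for the response body.
#[derive(Deserialize)]
struct Ingest {
    id: u64,
    accepted: bool,
}

fn typed_json() -> Result<(), ureq::Error> {
    let ingest: Ingest = ureq::get("http://myapi.example.com/ingest/1")
        .call()?
        .into_json()?;
    println!("{} accepted: {}", ingest.id, ingest.accepted);
    Ok(())
}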
The library will send a Content-Length header on requests with bodies of known size, in other words, those sent with .send_string(), .send_bytes(), .send_form(), or .send_json(). If you send a request body with .send(), which takes a Read of unknown size, ureq will send Transfer-Encoding: chunked, and encode the body accordingly. Bodyless requests (GETs and HEADs) are sent with .call() and ureq adds neither a Content-Length nor a Transfer-Encoding header.
If you set your own Content-Length or Transfer-Encoding header before sending the body, ureq will respect that header by not overriding it, and by encoding the body or not, as indicated by the headers you set.
let resp = ureq::post("http://my-server.com/ingest")
    .set("Transfer-Encoding", "chunked")
    .send_string("Hello world");
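By contrast, a body passed to .send() as a plain Read has no known size, so ureq falls back to chunked transfer encoding on its own. A sketch that streams a local file; the path is made up.

use std::fs::File;

fn stream_example() -> Result<(), ureq::Error> {
    // The file's length is never inspected, so this goes out as
    // Transfer-Encoding: chunked rather than with a Content-Length header.
    let file = File::open("data/payload.bin")?;
    let resp = ureq::post("http://my-server.com/ingest")
        .send(file)?;
    println!("status: {}", resp.status());
    Ok(())
}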
By enabling the ureq = { version = "*", features = ["charset"] } feature, the library supports sending/receiving other character sets than utf-8.

For response.into_string() we read the header Content-Type: text/plain; charset=iso-8859-1 and if it contains a charset specification, we try to decode the body using that encoding. In the absence of, or failing to interpret the charset, we fall back on utf-8.
Similarly, when using request.send_string(), we first check if the user has set a ; charset=<whatwg charset> and attempt to encode the request body using that.
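So, with the charset feature enabled, a request like the following would be re-encoded to ISO-8859-1 before being sent (a sketch with a placeholder URL):

fn charset_example() -> Result<(), ureq::Error> {
    // Requires the `charset` feature; the body is encoded to match the header.
    let resp = ureq::post("http://my-server.com/ingest")
        .set("Content-Type", "text/plain; charset=iso-8859-1")
        .send_string("Hällo Wörld")?;
    println!("status: {}", resp.status());
    Ok(())
}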
Ureq supports HTTP CONNECT proxies as well as SOCKS4 and SOCKS5 proxies. HTTP CONNECT is always available, while SOCKS support must be enabled using the feature ureq = { version = "*", features = ["socks-proxy"] }.
Proxy settings are configured on an Agent (using AgentBuilder). All requests sent through the agent will be proxied.
fn proxy_example_1() -> std::result::Result<(), ureq::Error> {
    // Configure an http connect proxy. Notice we could have used
    // the http:// prefix here (it's optional).
    let proxy = ureq::Proxy::new("user:password@cool.proxy:9090")?;
    let agent = ureq::AgentBuilder::new()
        .proxy(proxy)
        .build();

    // This is proxied.
    let resp = agent.get("http://cool.server").call()?;
    Ok(())
}
fn proxy_example_2() -> std::result::Result<(), ureq::Error> {
    // Configure a SOCKS proxy.
    let proxy = ureq::Proxy::new("socks5://user:password@cool.proxy:9090")?;
    let agent = ureq::AgentBuilder::new()
        .proxy(proxy)
        .build();

    // This is proxied.
    let resp = agent.get("http://cool.server").call()?;
    Ok(())
}
On platforms that support rustls, ureq uses rustls. On other platforms, native-tls can be manually configured using AgentBuilder::tls_connector.
You might want to use native-tls if you need to interoperate with servers that only support less-secure TLS configurations (rustls doesn't support TLS 1.0 and 1.1, for instance). You might also want to use it if you need to validate certificates for IP addresses, which are not currently supported in rustls.
Here's an example of constructing an Agent that uses native-tls. It requires the "native-tls" feature to be enabled.
use std::sync::Arc;
use ureq::Agent;

let agent = ureq::AgentBuilder::new()
    .tls_connector(Arc::new(native_tls::TlsConnector::new().unwrap()))
    .build();
When you use rustls (tls feature), ureq defaults to trusting webpki-roots, a copy of the Mozilla Root program that is bundled into your program (and so won't update if your program isn't updated). You can alternately configure rustls-native-certs, which extracts the roots from your OS' trust store. That means it will update when your OS is updated, and also that it will include locally installed roots.
When you use native-tls, ureq will use your OS' certificate verifier and root store.
Ureq uses blocking I/O rather than Rust's newer asynchronous (async) I/O. Async I/O allows serving many concurrent requests without high costs in memory and OS threads. But it comes at a cost in complexity. Async programs need to pull in a runtime (usually async-std or tokio). They also need async variants of any method that might block, and of any method that might call another method that might block. That means async programs usually have a lot of dependencies - which adds to compile times, and increases risk.
The costs of async are worth paying if you're writing an HTTP server that must serve many, many clients with minimal overhead. However, for HTTP clients, we believe that the cost is usually not worth paying. The low-cost alternative to async I/O is blocking I/O, which has a different price: it requires an OS thread per concurrent request. However, that price is usually not high: most HTTP clients make requests sequentially, or with low concurrency.
That's why ureq uses blocking I/O and plans to stay that way. Other HTTP clients offer both an async API and a blocking API, but we want to offer a blocking API without pulling in all the dependencies required by an async API.
Ureq is inspired by other great HTTP clients like superagent and the fetch API.
If ureq is not what you're looking for, check out these other Rust HTTP clients: surf, reqwest, isahc, attohttpc, actix-web, and hyper.