Tesla

The flexible HTTP client library for Elixir, with support for middleware and multiple adapters.

Tesla is an HTTP client loosely based on Faraday. It embraces the concept of middleware when processing the request/response cycle.

Note that this README refers to the master branch of Tesla, not the latest released version on Hex. See Hexdocs for the documentation of the version you're using.

For the list of changes, check out the latest release notes.


HTTP Client example

Define a module with use Tesla and choose from a variety of middleware.

defmodule GitHub do
  use Tesla

  plug Tesla.Middleware.BaseUrl, "https://api.github.com"
  plug Tesla.Middleware.Headers, [{"authorization", "token xyz"}]
  plug Tesla.Middleware.JSON

  def user_repos(login) do
    get("/users/" <> login <> "/repos")
  end
end

Then use it like this:

{:ok, response} = GitHub.user_repos("teamon")

response.status
# => 200

response.body
# => [%{…}, …]

response.headers
# => [{"content-type", "application/json"}, ...]

See below for documentation.

Installation

Add :tesla as a dependency in mix.exs:

defp deps do
  [
    {:tesla, "~> 1.4"},

    # optional, but recommended adapter
    {:hackney, "~> 1.17"},

    # optional, required by JSON middleware
    {:jason, ">= 1.0.0"}
  ]
end

Tesla uses Semantic Versioning 2.0.

Configure default adapter in config/config.exs (optional).

# config/config.exs

config :tesla, adapter: Tesla.Adapter.Hackney

The default adapter is Erlang's built-in httpc. However, it is not recommended for production use because, among other issues, it does not validate SSL certificates.

Documentation

Middleware

Tesla is built around the concept of composable middlewares. This is very similar to how Plug Router works.

Built-in middleware is grouped into the following categories:

  • Basic
  • Formats
  • Auth
  • Error handling
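
For instance, error-handling middleware such as Tesla.Middleware.Retry can be plugged like any other. Below is a minimal sketch; the module name and option values are illustrative, so check the documentation of your installed version for the exact options and defaults:

defmodule MyResilientClient do
  use Tesla

  plug Tesla.Middleware.BaseUrl, "https://api.example.com"
  plug Tesla.Middleware.JSON

  # retry failed requests with backoff
  plug Tesla.Middleware.Retry,
    delay: 500,
    max_retries: 5,
    max_delay: 4_000,
    should_retry: fn
      {:ok, %{status: status}} when status in [429, 500] -> true
      {:ok, _} -> false
      {:error, _} -> true
    end
end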

Runtime middleware

All HTTP functions, such as Tesla.get/3 and Tesla.post/4, can take a dynamic client as the first argument. This makes it possible to modify a client's behaviour at runtime.

Consider the following case: the GitHub API can be accessed using OAuth token authorization.

We can't use plug Tesla.Middleware.Headers, [{"authorization", "token here"}], since this would be compiled only once and there would be no way to insert a dynamic user token.

Instead, we can use Tesla.client to create a client with dynamic middleware:

defmodule GitHub do
  # notice there is no `use Tesla`

  def user_repos(client, login) do
    # pass `client` argument to `Tesla.get` function
    Tesla.get(client, "/users/" <> login <> "/repos")
  end

  def issues(client) do
    Tesla.get(client, "/issues")
  end

  # build dynamic client based on runtime arguments
  def client(token) do
    middleware = [
      {Tesla.Middleware.BaseUrl, "https://api.github.com"},
      Tesla.Middleware.JSON,
      {Tesla.Middleware.Headers, [{"authorization", "token " <> token}]}
    ]

    Tesla.client(middleware)
  end
end

and then:

client = GitHub.client(user_token)
client |> GitHub.user_repos("teamon")
client |> GitHub.issues()

Adapters

Tesla supports multiple HTTP adapters that do the actual HTTP request processing.

When using an adapter other than :httpc, remember to add it to the dependency list in mix.exs:

defp deps do
  [
    {:tesla, "~> 1.4.0"},
    {:hackney, "~> 1.10"} # when using hackney adapter
  ]
end

Adapter options

If you need to pass adapter-specific options, you can do it in one of four ways:

Supplying them as a keyword list in a tuple via config:

config :tesla, adapter: {Tesla.Adapter.Hackney, [recv_timeout: 30_000]}

Using adapter macro:

defmodule GitHub do
  use Tesla

  adapter Tesla.Adapter.Hackney, recv_timeout: 30_000, ssl_options: [certfile: "certs/client.crt"]
end

Using Tesla.client/2:

def new(...) do
  middleware = [...]
  adapter = {Tesla.Adapter.Hackney, [recv_timeout: 30_000]}
  Tesla.client(middleware, adapter)
end

Passing them directly to request functions such as MyClient.get/3 or Tesla.get/3:

MyClient.get("/", opts: [adapter: [recv_timeout: 30_000]])
Tesla.get(client, "/", opts: [adapter: [recv_timeout: 30_000]])

Streaming

If the adapter supports it, you can pass a Stream as the request body, e.g.:

defmodule ElasticSearch do
  use Tesla

  plug Tesla.Middleware.BaseUrl, "http://localhost:9200"
  plug Tesla.Middleware.JSON

  def index(records_stream) do
    # map each record to a bulk action entry (placeholder transformation)
    stream = records_stream |> Stream.map(fn record -> %{index: record} end)
    post("/_bulk", stream)
  end
end

Each element of the stream will be encoded as JSON and sent as a new line (conforming to the JSON streaming format).
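
For example, assuming the ElasticSearch module above and an adapter that supports streaming request bodies (e.g. hackney), records can be produced lazily from any enumerable; the record shape below is purely illustrative:

# build the records lazily instead of loading them all into memory
records =
  Stream.map(1..1_000, fn id ->
    %{id: id, name: "record #{id}"}
  end)

{:ok, response} = ElasticSearch.index(records)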

Multipart

You can pass a Tesla.Multipart struct as the body:

alias Tesla.Multipart

mp =
  Multipart.new()
  |> Multipart.add_content_type_param("charset=utf-8")
  |> Multipart.add_field("field1", "foo")
  |> Multipart.add_field("field2", "bar",
    headers: [{"content-id", "1"}, {"content-type", "text/plain"}]
  )
  |> Multipart.add_file("test/tesla/multipart_test_file.sh")
  |> Multipart.add_file("test/tesla/multipart_test_file.sh", name: "foobar")
  |> Multipart.add_file_content("sample file content", "sample.txt")

{:ok, response} = MyApiClient.post("https://httpbin.org/post", mp)

Testing

You can set the adapter to Tesla.Mock in tests:

# config/test.exs
# Use mock adapter for all clients
config :tesla, adapter: Tesla.Mock
# or only for one
config :tesla, MyApi, adapter: Tesla.Mock

Then, mock requests before using your client:

defmodule MyAppTest do
  use ExUnit.Case

  import Tesla.Mock

  setup do
    mock(fn
      %{method: :get, url: "https://example.com/hello"} ->
        %Tesla.Env{status: 200, body: "hello"}

      %{method: :post, url: "https://example.com/world"} ->
        json(%{"my" => "data"})
    end)

    :ok
  end

  test "list things" do
    assert {:ok, %Tesla.Env{} = env} = MyApi.get("https://example.com/hello")
    assert env.status == 200
    assert env.body == "hello"
  end
end
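
Besides a %Tesla.Env{} struct or the json/1 helper, the mock function can also return a {status, headers, body} tuple or an {:error, reason} tuple, which is handy for simulating failures; see the Tesla.Mock docs of your version for the full list of supported return values. A minimal sketch, with illustrative URLs and the MyApi client from the example above:

defmodule MyAppErrorTest do
  use ExUnit.Case

  import Tesla.Mock

  setup do
    mock(fn
      # respond with a plain status/headers/body tuple
      %{method: :get, url: "https://example.com/missing"} ->
        {404, [{"content-type", "text/plain"}], "not found"}

      # simulate an adapter-level error
      %{method: :get, url: "https://example.com/slow"} ->
        {:error, :timeout}
    end)

    :ok
  end

  test "handles error responses" do
    assert {:ok, %Tesla.Env{status: 404}} = MyApi.get("https://example.com/missing")
    assert {:error, :timeout} = MyApi.get("https://example.com/slow")
  end
end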

Writing middleware

A Tesla middleware is a module implementing the c:Tesla.Middleware.call/3 callback, which at some point calls Tesla.run/2 with env and next to process the rest of the stack.

defmodule MyMiddleware do
  @behaviour Tesla.Middleware

  def call(env, next, _options) do
    env
    |> do_something_with_request()
    |> Tesla.run(next)
    |> do_something_with_response()
  end
end

The arguments are:

  • env - Tesla.Env instance
  • next - the rest of the middleware stack, to be executed by passing env and next to Tesla.run/2
  • options - arguments passed during middleware configuration (plug MyMiddleware, options)

There is no distinction between request and response middleware; it all comes down to calling Tesla.run/2 at the right moment.

For example, a request logger middleware could be implemented like this:

defmodule Tesla.Middleware.RequestLogger do
  @behaviour Tesla.Middleware

  def call(env, next, _) do
    env
    |> IO.inspect()
    |> Tesla.run(next)
  end
end

and response logger middleware like this:

defmodule Tesla.Middleware.ResponseLogger do
  @behaviour Tesla.Middleware

  def call(env, next, _) do
    env
    |> Tesla.run(next)
    |> IO.inspect()
  end
end

See built-in middlewares for more examples.

Middleware should have documentation following this template:

defmodule Tesla.Middleware.SomeMiddleware do
  @moduledoc """
  Short description of what it does

  Longer description, including e.g. additional dependencies.


  ### Examples

  ```
  defmodule MyClient do
    use Tesla

    plug Tesla.Middleware.SomeMiddleware, most: :common, options: "here"
  end
  ```

  ### Options

  - `:list` - all possible options
  - `:with` - their default values
  """

  @behaviour Tesla.Middleware
end

Direct usage

You can also use Tesla directly, without creating a client module. This, however, won't include any middleware.

# Example get request
{:ok, response} = Tesla.get("https://httpbin.org/ip")

response.status
# => 200

response.body
# => "{\n  "origin": "87.205.72.203"\n}\n"

response.headers
# => [{"content-type", "application/json" ...}]

{:ok, response} = Tesla.get("https://httpbin.org/get", query: [a: 1, b: "foo"])

# Example post request
{:ok, response} =
  Tesla.post("https://httpbin.org/post", "data", headers: [{"content-type", "application/json"}])

Cheatsheet

Making requests 101

# GET /path
get("/path")

# GET /path?a=hi&b[]=1&b[]=2&b[]=3
get("/path", query: [a: "hi", b: [1, 2, 3]])

# GET with dynamic client
get(client, "/path")
get(client, "/path", query: [page: 3])

# arguments are the same for GET, HEAD, OPTIONS & TRACE
head("/path")
options("/path")
trace("/path")

# POST, PUT, PATCH
post("/path", "some-body-i-used-to-know")
put("/path", "some-body-i-used-to-know", query: [a: "0"])
patch("/path", multipart)

Configuring HTTP functions visibility

# generate only the get and post functions
use Tesla, only: ~w(get post)a

# generate only the delete function
use Tesla, only: [:delete]

# generate all functions except delete and options
use Tesla, except: [:delete, :options]

Disable docs for HTTP functions

use Tesla, docs: false

Encode only JSON request (do not decode response)

plug Tesla.Middleware.EncodeJson

Decode only JSON response (do not encode request)

plug Tesla.Middleware.DecodeJson

Use other JSON library

# use JSX
plug Tesla.Middleware.JSON, engine: JSX, engine_opts: [strict: [:comments]]

# use custom functions
plug Tesla.Middleware.JSON, decode: &JSX.decode/1, encode: &JSX.encode/1

Custom middleware

defmodule Tesla.Middleware.MyCustomMiddleware do
  @behaviour Tesla.Middleware

  def call(env, next, _options) do
    env
    |> do_something_with_request()
    |> Tesla.run(next)
    |> do_something_with_response()
  end
end

Documentation for 0.x branch

Contributing

  1. Fork it (https://github.com/teamon/tesla/fork)
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Copyright (c) 2015-2021 Tymon Tobolski


Sponsors

This project is sponsored by ubots - Useful bots for Slack