
Microsoft Azure SDK for Rust


| Crate | Docs | Crates.io |
| ----- | ---- | --------- |
| `azure_sdk_auth_aad` | [docs](https://docs.rs/azure_sdk_auth_aad) | [crate](https://crates.io/crates/azure_sdk_auth_aad) |
| `azure_sdk_core` | [docs](https://docs.rs/azure_sdk_core) | [crate](https://crates.io/crates/azure_sdk_core) |
| `azure_sdk_cosmos` | [docs](https://docs.rs/azure_sdk_cosmos) | [crate](https://crates.io/crates/azure_sdk_cosmos) |
| `azure_sdk_service_bus` | [docs](https://docs.rs/azure_sdk_service_bus) | [crate](https://crates.io/crates/azure_sdk_service_bus) |
| `azure_sdk_storage_account` | [docs](https://docs.rs/azure_sdk_storage_account) | [crate](https://crates.io/crates/azure_sdk_storage_account) |
| `azure_sdk_storage_blob` | [docs](https://docs.rs/azure_sdk_storage_blob) | [crate](https://crates.io/crates/azure_sdk_storage_blob) |
| `azure_sdk_storage_core` | [docs](https://docs.rs/azure_sdk_storage_core) | [crate](https://crates.io/crates/azure_sdk_storage_core) |
| `azure_sdk_storage_table` | [docs](https://docs.rs/azure_sdk_storage_table) | [crate](https://crates.io/crates/azure_sdk_storage_table) |

Ancillary crates

Ancillary crates are maintained in separate GitHub repos by other members of the community. If you have a crate that you want listed here, do not hesitate to drop a line or open a PR.

| Crate | Maintainer | Docs | Crates.io |
| ----- | ---------- | ---- | --------- |
| `azure-sdk-keyvault` | Guy Waldman | [docs](https://docs.rs/azure-sdk-keyvault) | [crate](https://crates.io/crates/azure-sdk-keyvault) |

Introduction

Microsoft Azure exposes its technologies via REST APIs. These APIs are easily consumable from any language (good) but are weakly typed (bad). With this library and its related crates you can exploit the power of Microsoft Azure from Rust in an idiomatic way.
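To illustrate the difference, here is a hand-rolled sketch of turning a weakly typed response payload into a typed Rust struct. The payload format and names are purely illustrative (the real SDK deserializes the actual REST responses with serde):

```rust
// A raw REST response is just text; a thin typed wrapper turns it into
// something the compiler can check. This is a toy "name=...;size=..."
// format, not an actual Azure payload.
#[derive(Debug, PartialEq)]
struct BlobProperties {
    name: String,
    size_bytes: u64,
}

// Parse the toy payload into a typed struct, returning None on malformed input.
fn parse_blob(raw: &str) -> Option<BlobProperties> {
    let mut name = None;
    let mut size = None;
    for field in raw.split(';') {
        let mut parts = field.splitn(2, '=');
        match (parts.next()?, parts.next()?) {
            ("name", v) => name = Some(v.to_owned()),
            ("size", v) => size = v.parse().ok(),
            _ => {}
        }
    }
    Some(BlobProperties { name: name?, size_bytes: size? })
}

fn main() {
    let typed = parse_blob("name=report.csv;size=1024").unwrap();
    // size_bytes is a u64 now, not a stringly-typed value.
    assert_eq!(typed.size_bytes, 1024);
    println!("{:?}", typed);
}
```

Once the response lives in a typed struct, field typos and unit mix-ups become compile-time errors instead of runtime surprises.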

This crate relies heavily on the excellent Hyper crate. As of version 0.30.0 of this library, all methods are async/await compliant (futures 0.3).

From version 0.8.0 for Cosmos and 0.9.0 for Storage, the repo is embracing the builder pattern. As of 0.10.0, most of the storage APIs have been migrated to the builder pattern, but some methods are still missing; please check the relevant issues to follow the progress. This is still an in-progress transition, but the resulting API is much easier to use, and most checks have been moved to compile time. Unfortunately, the changes are not backward-compatible. I have blogged about my approach here: https://dev.to/mindflavor/rust-builder-pattern-with-types-3chf.
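The idea behind compile-time checks can be sketched with a minimal typestate builder. All names here are illustrative, not the SDK's actual API:

```rust
// A minimal typestate builder: required parameters are tracked in the
// type, so calling `execute` before setting them is a compile-time error.
use std::marker::PhantomData;

struct Yes;
struct No;

struct RequestBuilder<HasDatabase> {
    database: Option<String>,
    max_item_count: Option<u32>,
    _marker: PhantomData<HasDatabase>,
}

impl RequestBuilder<No> {
    fn new() -> Self {
        RequestBuilder { database: None, max_item_count: None, _marker: PhantomData }
    }

    // Setting the mandatory field transitions the builder to the `Yes` state.
    fn with_database(self, db: &str) -> RequestBuilder<Yes> {
        RequestBuilder {
            database: Some(db.to_owned()),
            max_item_count: self.max_item_count,
            _marker: PhantomData,
        }
    }
}

impl<S> RequestBuilder<S> {
    // Optional fields are available in any state.
    fn with_max_item_count(mut self, count: u32) -> Self {
        self.max_item_count = Some(count);
        self
    }
}

impl RequestBuilder<Yes> {
    // `execute` exists only once the database has been set.
    fn execute(self) -> String {
        format!("GET /dbs/{} (max items: {:?})", self.database.unwrap(), self.max_item_count)
    }
}

fn main() {
    let request = RequestBuilder::new()
        .with_database("mydb")
        .with_max_item_count(3)
        .execute();
    println!("{}", request);
    // RequestBuilder::new().execute(); // would not compile: database not set
}
```

Because the "database is set" fact lives in the type parameter, forgetting a mandatory parameter is caught by `rustc`, not by a runtime error from Azure.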

From version 0.12.0 the library switched from hyper-tls to hyper-rustls, as suggested by bmc-msft in issue #120. This should allow the library to be 100% Rust.
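For downstream users, such a switch amounts to a dependency change. A hypothetical `Cargo.toml` excerpt (version numbers are illustrative, not the SDK's pinned versions):

```toml
[dependencies]
hyper = "0.13"
# rustls-based TLS instead of the native-tls/OpenSSL-backed hyper-tls:
hyper-rustls = "0.19"
```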

NOTE: This repository is under heavy development and is likely to break over time. The current releases will probably contain bugs. As usual, open issues if you find any.

Disclaimer

Although I am a Microsoft employee, this is not a Microsoft-endorsed project. It's simply a pet project of mine: I love Rust (who doesn't? 😏) and Microsoft Azure technologies, so I thought I would close the gap between them. It's also a good project for learning Rust. This library relies heavily on Hyper; we use the latest Hyper code, so this library is fully async with Futures and Tokio.

Example

You can find examples in the examples folder of each sub-crate. Here is a glimpse:

main.rs

#[macro_use]
extern crate serde_derive;
// Using the prelude module of the Cosmos crate makes it easier to use the Rust Azure SDK for
// Cosmos DB.
use azure_sdk_core::prelude::*;
use azure_sdk_cosmos::prelude::*;
use futures::stream::StreamExt;
use std::borrow::Cow;
use std::error::Error;

// This is the struct we want to use in our sample.
// Make sure to have a collection with partition key "a_number" for this example to
// work (you can create with this SDK too, check the examples folder for that task).
#[derive(Serialize, Deserialize, Debug)]
struct MySampleStruct<'a> {
    id: Cow<'a, str>,
    a_string: Cow<'a, str>,
    a_number: u64,
    a_timestamp: i64,
}

// This code will perform these tasks:
// 1. Create 10 documents in the collection.
// 2. Stream all the documents.
// 3. Query the documents.
// 4. Delete the documents returned by task 3.
// 5. Check the remaining documents.
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Let's get Cosmos account and master key from env variables.
    // This helps automated testing.
    let master_key =
        std::env::var("COSMOS_MASTER_KEY").expect("Set env variable COSMOS_MASTER_KEY first!");
    let account = std::env::var("COSMOS_ACCOUNT").expect("Set env variable COSMOS_ACCOUNT first!");

    let database_name = std::env::args()
        .nth(1)
        .expect("please specify the database name as first command line parameter");
    let collection_name = std::env::args()
        .nth(2)
        .expect("please specify the collection name as the second command line parameter");

    // First, we create an authorization token. There are two types of tokens, master and resource
    // constrained. This SDK supports both.
    // Please check the Azure documentation for details or the examples folder
    // on how to create and use token-based permissions.
    let authorization_token = AuthorizationToken::new_master(&master_key)?;

    // Next we will create a Cosmos client.
    let client = ClientBuilder::new(account, authorization_token)?;
    // We know the database so we can obtain a database client.
    let database_client = client.with_database_client(database_name);
    // We know the collection so we can obtain a collection client.
    let collection_client = database_client.with_collection_client(collection_name);

    // TASK 1 - Insert 10 documents
    println!("Inserting 10 documents...");
    let mut session_token = None;
    for i in 0..10 {
        // define the document.
        let document_to_insert = Document::new(MySampleStruct {
            id: Cow::Owned(format!("unique_id{}", i)),
            a_string: Cow::Borrowed("Something here"),
            a_number: i * 100, // this is the partition key
            a_timestamp: chrono::Utc::now().timestamp(),
        });

        // insert it and store the returned session token for later use!
        session_token = Some(
            collection_client
                .create_document()
                .with_partition_keys(
                    PartitionKeys::new().push(&document_to_insert.document.a_number)?,
                )
                .with_is_upsert(true) // this option will overwrite a preexisting document (if any)
                .execute_with_document(&document_to_insert)
                .await?
                .session_token, // get only the session token, if everything else was ok!
        );
    }
    // wow, that was easy and fast, wasn't it? :)
    println!("Done!");

    let session_token = ConsistencyLevel::from(session_token.unwrap());

    // TASK 2
    {
        println!("\nStreaming documents");
        // we limit the number of documents to 3 for each batch as a demonstration. In practice
        // you will use a more sensible number (or accept the Azure default).
        let stream = collection_client
            .list_documents()
            .with_consistency_level(session_token.clone())
            .with_max_item_count(3);
        let mut stream = Box::pin(stream.stream::<MySampleStruct>());
        // TODO: As soon as the streaming functionality is stabilized
        // in Rust we can substitute this while let Some... into
        // for each (or whatever the Rust team picks).
        while let Some(res) = stream.next().await {
            let res = res?;
            println!("Received {} documents in one batch!", res.documents.len());
            res.documents.iter().for_each(|doc| println!("{:#?}", doc));
        }
    }

    // TASK 3
    println!("\nQuerying documents");
    let query_documents_response = collection_client
        .query_documents()
        .with_query(&("SELECT * FROM A WHERE A.a_number < 600".into())) // there are other ways to construct a query, this is the simplest.
        .with_query_cross_partition(true) // this will perform a cross partition query! notice how simple it is!
        .with_consistency_level(session_token)
        .execute::<MySampleStruct>() // This will make sure the result is our custom struct!
        .await?
        .into_documents() // queries can return documents or raw JSON (i.e. without etag, _rid, etc.). Since our query returns documents we convert with this function.
        .unwrap(); // we know in advance that the conversion to Document will not fail since we SELECT'ed * FROM table

    println!(
        "Received {} documents!",
        query_documents_response.results.len()
    );

    query_documents_response
        .results
        .iter()
        .for_each(|document| {
            println!("number ==> {}", document.result.a_number);
        });

    // TASK 4
    let session_token = ConsistencyLevel::from(query_documents_response.session_token.clone());
    for ref document in query_documents_response.results {
        // From our query above we are sure to receive a Document.
        println!(
            "deleting id == {}, a_number == {}.",
            document.result.id, document.result.a_number
        );

        // to spice up the delete a little we use optimistic concurrency
        collection_client
            .with_document_client(&document.result.id as &str, document.result.a_number.into())
            .delete_document()
            .with_consistency_level(session_token.clone())
            .with_if_match_condition((&document.document_attributes).into())
            .execute()
            .await?;
    }

    // TASK 5
    // Now list_documents should return 4 documents: we inserted 10 and
    // deleted the 6 whose a_number is less than 600.
    let list_documents_response = collection_client
        .list_documents()
        .with_consistency_level(session_token)
        .execute::<serde_json::Value>() // you can use this if you don't know/care about the return type!
        .await?;
    assert_eq!(list_documents_response.documents.len(), 4);

    Ok(())
}

State of the art

Right now the key framework is in place (authentication, enumerations, parsing and so on). If you want to contribute, please do! Methods are added daily, so please check the release page for updates on the progress. Also note that the project is in its early stages, so the APIs are bound to change at any moment. I will strive to keep things steady, but since I'm new to Rust I'm sure I'll have to correct some serious mistakes before too long 😄. I generally build against the latest nightly and leave it to Travis to check backward compatibility.

Contributing

If you want to contribute, please do! No formality required! 😉. Please note that by opening a pull request you agree to license your code under the Apache license, version 2.0.

Run E2E test

Linux

export STORAGE_ACCOUNT=<account>
export STORAGE_MASTER_KEY=<key>

export AZURE_SERVICE_BUS_NAMESPACE=<azure_service_bus_namespace>
export AZURE_EVENT_HUB_NAME=<azure_event_hub_name>
export AZURE_POLICY_NAME=<azure_policy_name>
export AZURE_POLICY_KEY=<azure policy key>

export COSMOS_ACCOUNT=<cosmos_account>
export COSMOS_MASTER_KEY=<cosmos_master_key>

cd azure_sdk_service_bus
cargo test --features=test_e2e

cd ../azure_sdk_storage_blob
cargo test --features=test_e2e

cd ../azure_sdk_storage_account
cargo test --features=test_e2e

cd ../azure_sdk_cosmos
cargo test --features=test_e2e

Windows

set STORAGE_ACCOUNT=<account>
set STORAGE_MASTER_KEY=<key>

set AZURE_SERVICE_BUS_NAMESPACE=<azure_service_bus_namespace>
set AZURE_EVENT_HUB_NAME=<azure_event_hub_name>
set AZURE_POLICY_NAME=<azure_policy_name>
set AZURE_POLICY_KEY=<azure policy key>

set COSMOS_ACCOUNT=<cosmos_account>
set COSMOS_MASTER_KEY=<cosmos_master_key>

cd azure_sdk_service_bus
cargo test --features=test_e2e

cd ../azure_sdk_storage_blob
cargo test --features=test_e2e

cd ../azure_sdk_storage_account
cargo test --features=test_e2e

cd ../azure_sdk_cosmos
cargo test --features=test_e2e

License

This project is published under Apache license, version 2.0.