Amazon Kinesis Video Streams Media Interface provides abstract interfaces for board-specific media APIs. This repository also contains board sensor/encoder implementations for Amazon Kinesis Video Streams Producer and WebRTC, along with out-of-the-box samples. The following boards are supported:
- FILE (dummy board that can capture from sample frames)
- x86/x64 (by selecting board V4L2 or FILE)
- RPi (by selecting board V4L2 or FILE)
- T31 (by selecting board T31 or FILE)
- Generate AWS credentials by using this script
- Clone the code:
git clone https://github.com/aws-samples/amazon-kinesis-video-streams-media-interface.git
- Copy SoC SDK dependencies into 3rdparty/$BOARD/. See 3rdparty/README.md.
- Build the code:
export CC=${YOUR_C_TOOLCHAIN}
export CXX=${YOUR_CXX_TOOLCHAIN}
cd amazon-kinesis-video-streams-media-interface
mkdir build; cd build; cmake .. -DBOARD=${YOUR_BOARD_NAME}
make
- Upload the built artifacts (i.e., kvswebrtcmaster-static) from amazon-kinesis-video-streams-media-interface/build/samples/webrtc/ to your board.
- Set up AWS credentials in environment variables. The sample uses AWS IoT certificates by default:
export AWS_KVS_LOG_LEVEL=2
export AWS_DEFAULT_REGION=us-east-1
export AWS_KVS_CACERT_PATH=rootca.pem
export AWS_IOT_CORE_THING_NAME=your_camera_name
export AWS_IOT_CORE_CREDENTIAL_ENDPOINT=xxxxxxxxxxxxxx.credentials.iot.us-east-1.amazonaws.com
export AWS_IOT_CORE_CERT=your_camera_certificate.pem
export AWS_IOT_CORE_PRIVATE_KEY=your_camera_private.key
export AWS_IOT_CORE_ROLE_ALIAS=your_camera_role_alias
You can use the root CA in resources/certs/rootca.pem, or you can download it from Amazon Trust Services.
Alternatively, you can use an access key ID/secret access key by turning off IOT_CORE_ENABLE_CREDENTIALS
in Samples.h and setting up these credentials:
export AWS_KVS_LOG_LEVEL=2
export AWS_DEFAULT_REGION=us-east-1
export AWS_KVS_CACERT_PATH=cacert.pem
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- Execute the sample on your board:
./kvswebrtcmaster-static
If you are using an access key ID/secret access key, execute the sample with the channel name as a parameter: ./kvswebrtcmaster-static your_channel_name
- Check the WebRTC live stream via the AWS console or the AWS WebRTC test page.
- Clone the code:
git clone https://github.com/aws-samples/amazon-kinesis-video-streams-media-interface.git
- Copy SoC SDK dependencies into 3rdparty/$BOARD/. See 3rdparty/README.md.
- Build the code:
export CC=${YOUR_C_TOOLCHAIN}
export CXX=${YOUR_CXX_TOOLCHAIN}
cd amazon-kinesis-video-streams-media-interface
mkdir build; cd build; cmake .. -DBUILD_WEBRTC_SAMPLES=OFF -DBUILD_KVS_SAMPLES=ON -DBOARD=${YOUR_BOARD_NAME}
make
- Upload the built artifacts (i.e., kvsproducer-static) from amazon-kinesis-video-streams-media-interface/build/samples/kvs/ to your board.
- Set up AWS credentials in environment variables. The sample uses an AWS access key/secret key by default:
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-east-1
export AWS_KVS_HOST=kinesisvideo.us-east-1.amazonaws.com
- Execute the sample on your board:
./kvsproducer-static $YOUR_STREAM_NAME
- Check the DASH live stream via the AWS KVS test page
- Download video clips via the AWS console.
Currently, browsers do not support G.711 over HLS/DASH, so to verify audio content in G.711 format you must download the video clips.
- Clone the code:
git clone https://github.com/aws-samples/amazon-kinesis-video-streams-media-interface.git
- Copy SoC SDK dependencies into 3rdparty/$BOARD/. See 3rdparty/README.md.
- Build the code:
export CC=${YOUR_C_TOOLCHAIN}
export CXX=${YOUR_CXX_TOOLCHAIN}
cd amazon-kinesis-video-streams-media-interface
mkdir build; cd build; cmake .. -DBUILD_WEBRTC_SAMPLES=OFF -DBUILD_SAVE_FRAME_SAMPLES=ON -DBOARD=${YOUR_BOARD_NAME}
make
- Execute the sample on your board:
./saveframe-static $FILE_NAME
To adapt other platforms' SDKs to the Amazon Kinesis Video Streams Media Interface, you need to implement the interfaces in include/com/amazonaws/kinesis/video/capturer/VideoCapturer.h, include/com/amazonaws/kinesis/video/capturer/AudioCapturer.h, and include/com/amazonaws/kinesis/video/player/AudioPlayer.h:
- VideoCapturer.h: abstract interfaces defined for the video capture sensor/encoder.
- AudioCapturer.h: abstract interfaces defined for the audio capture sensor/encoder.
- AudioPlayer.h: abstract interfaces defined for audio playback.
The implementations of those interfaces should be placed in source/${BOARD_NAME} and follow these naming rules (an illustrative sketch follows the list):
${BOARD_NAME}VideoCapturer.c
${BOARD_NAME}AudioCapturer.c
${BOARD_NAME}AudioPlayer.c
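For example, a hypothetical board named MYBOARD would add source/MYBOARD/MYBOARDVideoCapturer.c (plus the matching AudioCapturer/AudioPlayer files). The skeleton below is only a sketch of that shape: the struct, function names, and signatures are placeholders invented for this illustration, and the authoritative declarations you must implement are the ones in include/com/amazonaws/kinesis/video/capturer/VideoCapturer.h.

```c
/*
 * source/MYBOARD/MYBOARDVideoCapturer.c -- illustrative sketch only.
 * "MYBOARD" and every type/function name below are placeholders made up for
 * this example; the real interface to implement is declared in
 * include/com/amazonaws/kinesis/video/capturer/VideoCapturer.h.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Placeholder handle wrapping whatever state the board SDK needs. */
typedef struct {
    int encoderChannel; /* hypothetical SoC encoder channel id */
    int streaming;      /* non-zero while the stream is acquired */
} MyBoardVideoCapturer;

/* Create the capturer: power up the sensor and configure the encoder here. */
MyBoardVideoCapturer* myBoardVideoCapturerCreate(void)
{
    MyBoardVideoCapturer* capturer = calloc(1, sizeof(*capturer));
    if (capturer != NULL) {
        capturer->encoderChannel = 0; /* board SDK: create/configure channel */
    }
    return capturer;
}

/* Start the encoder so frames can be pulled. */
int myBoardVideoCapturerAcquireStream(MyBoardVideoCapturer* capturer)
{
    capturer->streaming = 1; /* board SDK: start streaming on the channel */
    return 0;
}

/* Copy one encoded frame into the caller's buffer. */
int myBoardVideoCapturerGetFrame(MyBoardVideoCapturer* capturer, void* frameBuffer,
                                 size_t bufferSize, uint64_t* timestampUs,
                                 size_t* frameSize)
{
    if (capturer == NULL || !capturer->streaming) {
        return -1;
    }
    /* Board SDK: block until the encoder produces a frame, copy at most
     * bufferSize bytes into frameBuffer, then report the actual size and a
     * monotonic timestamp. */
    (void) frameBuffer;
    (void) bufferSize;
    *timestampUs = 0;
    *frameSize = 0;
    return 0;
}

/* Stop streaming and free the capturer. */
void myBoardVideoCapturerDestroy(MyBoardVideoCapturer* capturer)
{
    if (capturer != NULL) {
        capturer->streaming = 0; /* board SDK: stop channel, release sensor */
        free(capturer);
    }
}
```

The in-tree boards listed at the top of this document (for example FILE and V4L2) follow the same layout and are useful references for how these hooks are expected to behave.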
After implementing them, you also need to create a platform-specific CMake file in the CMake directory, named ${BOARD_NAME}.cmake:
if(BOARD STREQUAL "BOARD_NAME")
    set(BOARD_SDK_DIR ${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/${BOARD})
    # Add board-related SDK sources here
    set(BOARD_SRCS
    )
    # Add board-related SDK headers here
    set(BOARD_INCS_DIR
        ${BOARD_SDK_DIR}/include/
        ${BOARD_SDK_DIR}/samples/libimp-samples/
    )
    # Add board-related SDK libs here
    set(BOARD_LIBS_SHARED
    )
    set(BOARD_LIBS_STATIC
    )
endif()